2306.11879
Open-Domain Text Evaluation via Contrastive Distribution Methods
Recent advancements in open-domain text generation, driven by the power of large pre-trained language models (LLMs), have demonstrated remarkable performance. However, assessing these models' generation quality remains a challenge. In this paper, we introduce a novel method for evaluating open-domain text generation called Contrastive Distribution Methods (CDM). Leveraging the connection between increasing model parameters and enhanced LLM performance, CDM creates a mapping from the _contrast_ of two probabilistic distributions -- one known to be superior to the other -- to quality measures. We investigate CDM for open-domain text generation evaluation under two paradigms: 1) _Generative_ CDM, which harnesses the contrast of two language models' distributions to generate synthetic examples for training discriminator-based metrics; 2) _Discriminative_ CDM, which directly uses distribution disparities between two language models for evaluation. Our experiments on coherence evaluation for multi-turn dialogue and commonsense evaluation for controllable generation demonstrate CDM's superior correlation with human judgment compared to existing automatic evaluation metrics, highlighting the strong performance and generalizability of our approach.
Sidi Lu, Hongyi Liu, Asli Celikyilmaz, Tianlu Wang, Nanyun Peng
2023-06-20T20:37:54Z
http://arxiv.org/abs/2306.11879v4
# Open-Domain Text Evaluation via Meta Distribution Modeling

###### Abstract

Recent advances in open-domain text generation models powered by large pre-trained language models (LLMs) have achieved remarkable performance. However, evaluating and controlling these models for desired attributes remains a challenge, as traditional reference-based metrics such as BLEU, ROUGE, and METEOR are insufficient for open-ended generation tasks. Similarly, while trainable discriminator-based evaluation metrics show promise, obtaining high-quality training data is a non-trivial task. In this paper, we introduce a novel approach to evaluating open-domain generation - the Meta-Distribution Methods (MDM). Drawing on the correlation between rising parameter counts and the improving performance of LLMs, MDM creates a mapping from the contrast of two probabilistic distributions - one known to be superior to the other - to quality measures, which can be viewed as a distribution of distributions, _i.e._, a Meta-Distribution. We investigate MDM for open-domain text generation evaluation under two paradigms: 1) _Generative_ MDM, which leverages the Meta-Distribution Methods to generate in-domain negative samples for training discriminator-based metrics; 2) _Discriminative_ MDM, which directly uses distribution discrepancies between two language models for evaluation. Our experiments on multi-turn dialogue and factuality in abstractive summarization demonstrate that MDMs correlate better with human judgment than existing automatic evaluation metrics on both tasks, highlighting the strong performance and generalizability of such methods.

## 1 Introduction

Over the past few years, open-domain text generation powered by large pretrained generative language models (LLMs) has seen significant strides, attracting substantial interest (Radford et al., 2018, 2019; Brown et al., 2020; OpenAI, 2022, 2023). These systems exhibit remarkable abilities, including generating human-like responses, aiding in natural language understanding, and accomplishing complex tasks such as programming and article writing. Despite ongoing debates regarding the theoretical indistinguishability of near-optimal LLMs (Sadasivan et al., 2023), creating a reliable and scalable automatic evaluation for these models is imperative but remains an open challenge.

Existing metrics from the pre-LLM era have limitations in evaluating the quality of generated responses. Specifically, reference-based statistical metrics (e.g., BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005)) do not work well for open-ended generation problems with high content diversity, such as storytelling (Yao et al., 2019) and dialogue systems (Mesgar et al., 2019; Li et al., 2017; Wen et al., 2016), as evidenced by their low correlation with human judgments (Liu et al., 2016; Hu et al., 2020). For these tasks, it is challenging, if not impossible, to collect a sufficiently large number of reference examples that represent the distribution of all feasible outputs, thus limiting the effectiveness of reference-based metrics. With recent progress in pretrained models, model-based reference metrics like BERTScore (Zhang et al., 2019) and BLEURT (Sellam et al., 2020) have been proposed to facilitate automatic evaluation for text generation. They alleviate the sample-efficiency issue of statistical reference-based methods by using pretrained models to compute the similarities between texts based on higher-level semantics.
However, the effectiveness of such methods still relies on the representativeness of the reference set, and thus falls short when the output semantic space is also highly diverse. Reference-free evaluation metrics, which take in the text under evaluation and directly output a score, have therefore been proposed to provide a more reliable solution for automatically evaluating open-domain generation. There are generally two paradigms for training a model to evaluate text without references: 1) _prediction-based_ approaches like ADEM (Lowe et al., 2017) and DEAM (Ghazarian et al., 2022), which treat the problem as a prediction task and train a classifier or regressor to produce a score. These works either require extensive human annotations or dedicated manual designs of the manipulation process in order to automatically generate negative samples for training the classifier. 2) _distribution/divergence-based_ approaches (Pillutla et al., 2021; Pimentel et al., 2022), which cast evaluation as obtaining a continuous divergence score between distributions, have shown promising results. However, they generally lack the ability to accurately assign credit to each data point for instance-level evaluation. Note that the boundary between the two paradigms can sometimes be subtle, and several existing model-based metrics (Zhong et al., 2022; Liu et al., 2023) are built from modules under both philosophical paradigms.

In this paper, we propose Meta-Distribution Methods (MDM), a general and reference-free framework for evaluating open-domain text generation. MDM is built on a simple but reasonable assumption: among models with similar architectures but different model sizes, the larger models generally obtain better generation quality. Therefore, MDM learns an energy function \(E(p)\) to represent how model performance scales with increasing parameter counts. With this energy function \(E(p)\), which can also be viewed as a distribution over distributions (thus a meta-distribution), we can perform inference in both generative and discriminative fashion to build automatic evaluation metrics. MDM improves over previous reference-free evaluation metrics in two aspects: 1) The generative MDM, as illustrated in Figure 1(a), produces effective negative samples without the necessity of additional human annotations or a sophisticated design of the manipulation process, and thus facilitates the learning of prediction-based evaluation metrics. 2) The discriminative MDM, as illustrated in Figure 1(b), provides a distribution-level measurement without sacrificing the capability to assign credit to individual sentences, and thus results in more reliable divergence-based evaluation metrics. Results on dialogue and abstractive summarization evaluation show that MDMs achieve strong performance and outperform existing methods on all datasets without much task-specific design.

## 2 Background and Related Works

**Open-Domain Text Evaluation** There has been a growing interest in developing robust evaluation methods for open-domain text generation models. Traditional evaluation metrics, such as BLEU and ROUGE, have been shown to be inadequate for assessing the quality of complex, multi-sentence responses generated by these models. As a result, researchers have explored alternative evaluation methods, including human evaluation, adversarial evaluation, and unsupervised metrics.

Figure 1: Conceptual illustration of the Meta-Distribution Methods.
There are two major paradigms of Meta-Distribution Methods: _generative MDM_ and _discriminative MDM_. (a) _Generative MDM_ constructs fake negative samples from positive ones for training a classifier-based metric. (b) _Discriminative MDM_ directly evaluates the distribution/sequence in a Naive Bayes framework.

Human evaluation remains the gold standard, but it is time-consuming and costly. Adversarial evaluation, which involves testing models against a set of challenging examples, has shown promise in identifying weaknesses in current models. Unsupervised metrics, such as BERTScore and Perplexity, provide quick and automated evaluation, but their correlation with human judgments remains a topic of debate. The field of open-domain text evaluation continues to evolve, and developing reliable evaluation methods will be essential for advancing the state of the art in this exciting area of research.

**Prediction-Based Metrics** ADEM (Lowe et al., 2017) is one of the first attempts at training a model to evaluate machine-generated text. It deals with the single-turn dialogue evaluation problem, and uses the contextualized representation of the context in interaction with that of the responses to train the model. DEAM (Ghazarian et al., 2022) is a novel evaluation metric that aims to assess the coherence and quality of multi-turn dialogue systems. Unlike traditional evaluation metrics, DEAM uses manipulation techniques to construct negative samples from positive samples, allowing for a more nuanced assessment of model performance. DEAM operates by first parsing the sequence into an abstract meaning representation (AMR), and then manipulating the AMR to introduce inconsistencies and irrelevancies that undermine the coherence of the dialogue. The manipulated AMR is then transformed back into text form for evaluation. This method supports multi-turn dialogue evaluation and has achieved state-of-the-art performance on various benchmark datasets. By using AMR-based semantic manipulations, DEAM provides a promising approach for evaluating the quality of dialogue systems in a more comprehensive and accurate manner. _Generative_ MDM shares a similar process, as it manipulates the positive true samples to generate negative samples, serving the purpose of training a classifier.

**Distribution/Divergence-based Metrics** MAUVE and follow-up works (Pillutla et al., 2021; Pimentel et al., 2022) analyze the quality gap between human-generated text and machine-generated text by studying the divergence frontier of human-generated samples in contrast to the learned model. While their setup is not directly relevant to our approach, it provides an insightful perspective on using the likelihood predictions of LMs for evaluation purposes. Zhong et al. (2022) propose a multi-dimensional evaluation system for more robust automatic evaluation. It ensembles the scores from a set of _classifier-based_ metrics, each of which is trained to evaluate a specific intuitive aspect of text quality. GPTEval (Liu et al., 2023) tries to quantitatively exploit large language models that are trained with strong human alignment. It uses the score prediction from GPT-4 (OpenAI, 2023) to evaluate how well the given text adheres to human opinion. _Discriminative_ MDM falls under this paradigm, since it serves as a metric with more continuously distributed scores for the evaluated text.
**Contrastive Decoding and Contrastive Momentum** Contrastive decoding is a decoding algorithm that leverages the strengths of two language models: a stronger expert model and a weaker amateur model. The algorithm decodes towards the objective of maximizing the difference between the log-probabilities of the expert and amateur models, resulting in high-quality generated samples. Specifically, the algorithm tries to decode sequences that maximize the _contrastive momentum_:

\[\log p_{\text{expert}}(x)-\log p_{\text{amateur}}(x), \tag{1}\]

where \(x\) is the generated sample and \(p_{\text{expert}}\), \(p_{\text{amateur}}\) are the expert and amateur models, respectively. The original paper (Li et al., 2022) demonstrates that this approach results in even higher quality samples than decoding from the expert model alone. Contrastive decoding provides an insightful way to study the dynamics of how models' capabilities scale up with larger parameter counts. Under the proposed framework of MDM, contrastive decoding can be viewed as leveraging the two-point momentum for an approximate optimization towards the maximization of \(E(p)\).
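For concreteness, the following is a minimal sketch of scoring a fixed sequence with the contrastive momentum of Eq. (1). The `gpt2-large`/`gpt2` pair is an illustrative stand-in for any expert/amateur pair sharing a tokenizer, not necessarily the models used in the cited work:

```python
# Sketch: contrastive momentum of Eq. (1) for one sequence, assuming two
# causal LMs that share a tokenizer (model names are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
expert = AutoModelForCausalLM.from_pretrained("gpt2-large").eval()
amateur = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sequence_logprob(model, ids):
    """Sum of log p(token_t | tokens_<t) over the sequence."""
    with torch.no_grad():
        logits = model(ids).logits[:, :-1]  # predictions for positions 1..T-1
    logp = torch.log_softmax(logits, dim=-1)
    return logp.gather(-1, ids[:, 1:, None]).squeeze(-1).sum().item()

ids = tok("The study of distributions over distributions", return_tensors="pt").input_ids
momentum = sequence_logprob(expert, ids) - sequence_logprob(amateur, ids)
print(f"contrastive momentum: {momentum:.3f}")  # higher = favored by the expert
```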
## 3 Methodology

### 3.1 Notations and Problem Formulation

We use \(\mathbf{s}\) to denote a sequence and \(s_{i}\) to denote the \(i\)-th token in \(\mathbf{s}\). \(p(\mathbf{s})\) denotes the probability of sequence \(\mathbf{s}\) under a model \(p\). We assume model \(p\) is a probabilistic distribution defined on \(\Sigma^{*}\), where \(\Sigma\) is the set of valid tokens and \(\Sigma^{*}\) is the universal set of all sequences consisting of such tokens.

Consider an energy function \(E(p)\) which projects a model's likelihood prediction \(p(\mathbf{s})\) to "a measure of model performance" - a scalar. This function does not necessarily have an analytical form; however, we assume that we have access to some partial order relations it defines. If we normalize this energy function using some activation function such as the softmax function, it becomes a "_distribution of distributions_", hence a Meta-Distribution. Intuitively, any evaluation metric that aims to correlate better with model performance, as evaluated through human judgments, can be viewed as one version of \(E(p)\). In the ideal case, \(E(p)\) can be used to perform:

* _Discriminative_ inference: to evaluate any existing distributions by ranking them according to \(E(p)\), and/or quantify the contribution of a specific sequence's likelihood \(p(\mathbf{s})\) _w.r.t._ \(E(p)\);
* _Generative_ inference: to improve/degenerate the model performance on the fly by altering \(p\) towards maximization/minimization of \(E(p)\).

In this paper, we adapt both discriminative and generative inference of MDM to either directly serve as the metric (_discriminative_) or produce negative training samples (_generative_) as needed by the training of the metric model.

### 3.2 Connection with Contrastive Decoding

Contrastive decoding (Li et al., 2022) can be regarded as a simplified concrete approach to _generative_ inference with \(E(p)\). The _contrastive momentum_ can be viewed as a finite difference approximation of \(m=\frac{\partial E(p)}{\partial p}\), where it makes the following two simplifications:

* \(m=\frac{\partial E(p)}{\partial p}\) is approximated by the finite difference between **two** models with a **pre-assumed performance superiority** of one over the other (namely the _expert_ and _amateur_ models in the original paper (Li et al., 2022)).
* It treats the joint probability of each sequence as the product of step-wise probabilities, _i.e._, the auto-regressive formulation. Considering the 1-normalization constraint of probabilities, one feasible solution is the following approximation in contrastive decoding: \[\log m(x|\mathbf{s})\approx\log p_{1}(x|\mathbf{s})-\log p_{0}(x|\mathbf{s})\]

Searching towards this objective is equivalent to using \(m(x|\mathbf{s})\) to optimize \(p\) towards the maximization of \(E(p)\).

### 3.3 The Partial Order Assumption

We hereby discuss in detail how we can conduct contrastive methods for evaluation purposes. While it is nontrivial, if not impossible, to come up with analytical forms for \(E(p)\), we can make some assumptions to obtain partial orders from \(E(p)\). Consider a _series_ of models that share similar architectures and other pretraining/finetuning setups, but differ in model size (e.g., T5-small/base/large, etc.). It is usually reasonable to assume that the model variant with a larger number of parameters would perform better under the evaluation of most metrics. While a more general superiority assumption across different model classes would still be challenging, we can assume the following partial order (a linear order within one concerned model class) for a distribution measure as induced by the energy function \(E(p)\):

\[E(p_{\text{small}})<E(p_{\text{base}})<E(p_{\text{large}})\]

See Figure 2(a).

### 3.4 Meta-Distribution Methods

#### 3.4.1 First Order Approximation of \(E(p)\)

As previously mentioned in Sec 3.1, it could be intractable to estimate the accurate version of \(\frac{\partial E(p)}{\partial p}\). Following similar approximations as in previous methods (Li et al., 2022), we approximate it using a secant hyperplane between two distribution/model points. We denote them as the _origin_ distribution \(p_{o}\) and the _guide_ distribution \(p_{g}\). In a sense, this is equivalent to a first-order approximation of \(E(p)\) in the space \(\{p(\mathbf{s})|\mathbf{s}\in\Sigma^{*}\}\). Different choices of \(p_{o}\) and \(p_{g}\) result in different qualities of the first-order approximation of \(E(p)\), hence different performance of the evaluation metric. We need to investigate the general principle for choosing the origin and guide distributions.

#### 3.4.2 Generative MDM: Synthetic Data Generation with MDM

Generative MDM follows prior works such as ADEM (Lowe et al., 2017) and DEAM (Ghazarian et al., 2022) in formulating reference-free evaluation metrics as prediction tasks. In order to evaluate generated texts, a discriminator can be trained on positive and negative examples to serve as the evaluation metric.1 However, we usually only have human-written texts, which are considered positive examples, while negative examples are non-trivial to obtain. Randomly generating negative examples using a uniform distribution over all possible sequences of tokens is not efficient, as the discriminator would easily degenerate into a poor local minimum. On the other hand, simply generating negative examples from pretrained large language models (LLMs) may not result in truly low-quality texts, which would confuse the discriminator.

Footnote 1: The discriminator does not necessarily need to provide binary decisions; it can also produce scores. But we use binary examples for simplicity.

Generative MDM provides a controllable approach to reduce the performance of existing pretrained models to generate "sufficiently deceptive and targeted negative examples".
Following our previous analysis framework, this is effectively equivalent to descending along the direction of \(-\frac{\partial E(p)}{\partial p}\) from an initial point \(p_{o}\) -- usually, a small (thus worse-performing) model. Although we do not have a tractable explicit estimation for \(E(p)\), we can still utilize the previous approximation, which replaces the differential operation with the difference between \(p_{o}\) and a selected guide distribution \(p_{g}\). Specifically, we now consider an anchored _origin_ model \(p_{o}\) to compute the contrastive momentum towards several different superior _guide_ models \(p_{g}\). To control irrelevant factors, we obtain a guide model \(p_{g}\) from a similar architecture but with more layers/parameters. By leveraging the contrastive momentum \(\log m_{o\to g}=\log p_{g}-\log p_{o}\) in the reversed direction, we controllably degenerate from the _origin_ model. Consequently, we obtain a probability distribution \(\log p_{o}^{-}\propto\log p_{o}-\gamma\log m_{o\to g}\) that disproportionately amplifies the likelihood of "machine artifacts" at a controllable (by setting \(\gamma\)) scale. Sampling from \(p_{o}^{-}\) allows us to obtain suitable negative examples.

We hereby discuss how to generate negative examples in a more targeted manner. The whole process can be viewed as a controllable generation problem. We start from existing positive examples and selectively decrease the overall text quality without altering the majority of the context and meaning. In this sense, the generated negative examples would be more targeted and misleading than those sampled from \(p_{o}^{-}\) from scratch. For differentiable data such as images this is simple, as one can directly use a gradient-based approach. However, this does not apply to non-differentiable data like text, so we hereby discuss how to construct a similar process. One feasible approach is to train a model to address a _segment insertion/reconstruction_ problem. Given the context and the position at which a segment is removed (randomly or strategically), we model a conditional distribution that reconstructs the original segment. Once we have obtained such a model and its degenerated form, we can use it to repeatedly operate on existing positive examples, allowing us to construct negative examples in a further well-controlled manner. This process enables us to generate negative examples that are specifically targeted and deceptive. Although text data is non-differentiable, this approach leverages the model's ability to learn conditional distributions and eventually constructs the desired ideal negative examples.

Figure 2: (a) It is difficult, if ever possible, to assume a total order under the evaluation of \(E(p)\) for models from different model classes. It is still possible to assume partial orders for models from the same model class. (b) Generative MDM uses the degraded distribution \(p_{o}^{-}\) to synthesize fake samples for training a discriminator as the metric. The warm/cold region indicates the decision boundary of the resulting trainable metric induced by fake samples from \(p_{o}^{-}\). (c) Discriminative MDM directly determines the decision boundary by pooling the values of the step-wise contrastive momentum.
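As a minimal sketch of the degradation step, assuming two same-family causal LM checkpoints (`origin` smaller, `guide` larger) that share one vocabulary; the function name and the default \(\gamma\) are illustrative, not the paper's exact setup:

```python
# Sketch: one decoding step from the degraded distribution
#   log p_o^-  ∝  log p_o − γ · (log p_g − log p_o),
# i.e. the contrastive momentum applied in the reversed direction.
import torch

def degraded_next_token(origin, guide, ids, gamma=1.0, temperature=1.0):
    """Sample a next token from p_o^-: amplifies tokens the small model
    over-predicts relative to the large one ("machine artifacts")."""
    with torch.no_grad():
        lp_o = torch.log_softmax(origin(ids).logits[:, -1], dim=-1)
        lp_g = torch.log_softmax(guide(ids).logits[:, -1], dim=-1)
    neg_logits = lp_o - gamma * (lp_g - lp_o)  # reversed contrastive momentum
    probs = torch.softmax(neg_logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```

Repeatedly calling such a step while refilling a removed segment of a positive example would yield the targeted negative samples described above.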
The full process can be summarized as follows: 1) train a segment insertion/reconstruction model as the _origin_ \(p_{o}\) and obtain a larger _guide_ \(p_{g}\) of the same architecture family; 2) derive the degraded distribution \(p_{o}^{-}\) from the reversed contrastive momentum; 3) remove segments from existing positive examples and refill them by sampling from \(p_{o}^{-}\), repeating as needed; 4) train a discriminator on the positive examples and the synthesized negative examples to serve as the metric.

#### 3.4.3 Discriminative MDM: Directly Using MDM as the Metric

Although Generative MDM is a reasonably flexible and scalable framework, there are still many variable factors in the generation process (e.g., how to choose which segment to remove, the randomness in the process of segment reconstruction, the degradation strength factor \(\gamma\), etc.) that may affect the performance of the resulting metric. Therefore, it is desirable to remove the generation subroutine completely. Consider why we need to train a discriminator as a metric: it is because we usually do not have a tractable model for the positive or negative distribution. However, under the MDM framework, we do have a tractable model \(p_{o}^{-}\) for the negative distribution, which is composed of the origin model \(p_{o}\) and the contrastive momentum \(m\). In light of this, we can consider directly deploying \(m\) as a divergence-based metric for evaluation.

For each sequence, we consider the step-wise likelihood predictions from the _origin_ model \(\log p_{o}(s_{t}|\mathbf{s}_{<t})\) and the _guide_ model \(\log p_{g}(s_{t}|\mathbf{s}_{<t})\). If a sequence is considered good, both models' predicted likelihoods will be relatively high, but \(\sum_{t}\log p_{g}(s_{t}|\mathbf{s}_{<t})\) should be larger than \(\sum_{t}\log p_{o}(s_{t}|\mathbf{s}_{<t})\). If we strictly follow the definition, summing up the step-wise contrastive momentum over the entire sequence (_i.e._, sum-pooling) would be the metric to evaluate the generation quality. See Figure 2(c). However, we argue that there is a subtle discrepancy between theory and practice. First, the sum-pooled score would still be numerically influenced by the sequence length (even though not as severely as directly using the log-likelihood). Moreover, sum-pooling overemphasizes the impact of extremely low-probability steps, because the discrepancy between the origin and guide model predictions in those regions could be further amplified on the logarithmic scale. In the experiments, we compare different strategies to _pool_ the sequence of step-wise contrastive momentum values into a sentence-level evaluation score. We call this paradigm of using MDM Discriminative MDM. There is no explicit generation process in Discriminative MDM, as we only treat the two models and their contrastive momentum as likelihood value predictors.
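A minimal sketch of the scoring procedure, under the assumption of two same-vocabulary causal LM checkpoints; the pooling options mirror the strategies compared in Sec. 4.1.1 (the Classifier-Pooled variant would instead feed the momentum sequence to a small trained pooler):

```python
# Sketch: pool the step-wise contrastive momentum
#   log p_g(s_t | s_<t) − log p_o(s_t | s_<t)
# into one sequence-level score. Tensor shapes: ids is [B, T].
import torch

def stepwise_momentum(origin, guide, ids):
    with torch.no_grad():
        lp_o = torch.log_softmax(origin(ids).logits[:, :-1], dim=-1)
        lp_g = torch.log_softmax(guide(ids).logits[:, :-1], dim=-1)
    tgt = ids[:, 1:, None]  # gold next tokens
    return (lp_g.gather(-1, tgt) - lp_o.gather(-1, tgt)).squeeze(-1)  # [B, T-1]

def pooled_score(m, how="avg"):
    if how == "sum":  return m.sum(-1)   # faithful to the definition, length-sensitive
    if how == "avg":  return m.mean(-1)
    if how == "max":  return m.amax(-1)
    if how == "min":  return m.amin(-1)
    raise ValueError(how)
```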
## 4 Experiments

### 4.1 Dialogue Evaluation with MDM

The first part of our experiments is primarily focused on dialogue evaluation. Given a set of annotated dialogues, each with human-annotated quality scores ranging from \(0.0\) to \(1.0\), our objective is to assign scores to each evaluated sequence that maximize the correlation with human annotations. Additionally, for dialogue evaluation, we assume we are not permitted to perform any training on data within the same domain. Our training/fine-tuning exercises are conducted on dialogues from both the TopicalChat (Gopalakrishnan et al., 2019) and PersonaChat (Zhang et al., 2018) datasets. Subsequently, we evaluate our methods on annotated dialogues from the FED (Mehri and Eskenazi, 2020) and DSTC9 (Gunasekara et al., 2020) datasets.

**Dataset and Experiment Setup** We adopt most experimental settings from DEAM (Ghazarian et al., 2022) to verify the effectiveness of our method. The statistics of the datasets involved in our experiments are shown in Table 1.

| **Dataset** | **Size** | **Avg. len** |
| --- | --- | --- |
| TopicalChat + PersonaChat (Gopalakrishnan et al., 2019; Zhang et al., 2018) | 17567/2078 | 377 |
| FED (test, w/ human annotation) (Mehri and Eskenazi, 2020) | 125 | 168 |
| DSTC9 (test, w/ human annotation) (Gunasekara et al., 2020) | 2200 | 318 |

Table 1: Data usage in our experiments.

The origin and guide models in each contrastive pair are instantiated from T5 (Raffel et al., 2019; Wei et al., 2021) checkpoints of different sizes. For an ablation study of how different choices of such model pairs and other factors impact the performance of MDM, please refer to Table 3.

#### 4.1.1 Model Specification

There are multiple strategies to construct the context-prediction pairs. We study the following cases for Generative MDM:

* Segment-Single: The manipulation is applied only once, to a random segment no longer than 20 tokens in a real dialogue.
* Utterance-Single: The manipulation is applied only once, to a random utterance in a real dialogue.
* Mixed-Single: The manipulation is applied only once, to a random utterance or a random segment no longer than 20 tokens in a real dialogue.
* Mixed-Multi: The manipulation is applied repeatedly, a uniformly random number of times between 1 and 4.
* AMR-Multi: The location of the manipulation is guided by an approach similar to DEAM (Ghazarian et al., 2022).

Similarly, we study the following aggregation strategies for Discriminative MDM:

* Pooling along the timestep axis (Avg-Pooled/Max-Pooled/Min-Pooled).
* Classifier-Pooled: We train a small linear classifier as a trainable pooler that converts the sequence of step-wise contrastive momentum values into a score, using annotated training data (from the original dataset or as synthesized by DEAM (Ghazarian et al., 2022)).

#### 4.1.2 Results and Analysis

We present quantitative results in Table 2. Given that MDM is designed to model discrete comparison relations, we align with the methodology established by previous research (Mesgar et al., 2019; Vakulenko et al., 2018; Zhang et al., 2021; Ghazarian et al., 2022) and report the Spearman correlation to better evaluate MDM against these baselines.

| Model | FED Coherence | FED Overall | DSTC9 Coherence | DSTC9 Overall |
| --- | --- | --- | --- | --- |
| Mesgar et al. (2019) | 0.10 | -0.01 | 0.02 | 0.05 |
| Vakulenko et al. (2018) | 0.13 | 0.10 | 0.00 | 0.00 |
| DynaEval (Zhang et al., 2021) | -0.36 | -0.40 | -0.03 | -0.01 |
| DEAM (Ghazarian et al., 2022) | 0.47 | 0.55 | 0.19 | 0.20 |
| Generative MDM (Ours) | 0.51 | 0.52 | 0.19 | 0.23 |
| Discriminative MDM (Ours) | **0.59** | **0.61** | **0.27** | **0.25** |

Table 2: Spearman correlation of different approaches for dialogue evaluation. All reported results from our approach have \(p\)-value \(<\) 1e-4. We show the best-performing results in **bold** and the second-best underlined.

We find that DSTC9 favors segment-level manipulation while FED favors utterance-level manipulation. Additionally, our findings indicate that a larger performance gap between the origin/guide models in general induces better performance.
Finally, Discriminative MDM methods present less bias across datasets and offer more efficiency during training, as they eliminate the necessity of training an additional classifier model.

| Model | FED Coherence | FED Overall | DSTC9 Coherence | DSTC9 Overall |
| --- | --- | --- | --- | --- |
| _Generative MDM_ | | | | |
| - Segment-Single (small-large) | 0.12 | 0.07 | 0.11 | 0.10 |
| - Utterance-Single (small-large) | 0.29 | 0.36 | 0.05 | 0.08 |
| - Mixed-Single (small-large) | 0.32 | 0.35 | 0.14 | 0.12 |
| - Mixed-Multi (small-large) | 0.42 | 0.40 | 0.17 | 0.18 |
| - AMR-Multi (small-large) | 0.49 | 0.53 | 0.20 | 0.22 |
| - AMR-Multi (small-base) | 0.48 | 0.51 | 0.19 | 0.20 |
| - AMR-Multi (small-large) | 0.49 | 0.53 | 0.20 | 0.22 |
| - AMR-Multi (small-xl) | 0.51 | 0.52 | 0.19 | 0.23 |
| _Discriminative MDM_ | | | | |
| - Avg-Pooled (small-large) | 0.31 | 0.32 | 0.12 | 0.13 |
| - Min-Pooled (small-large) | 0.27 | 0.28 | 0.07 | 0.04 |
| - Max-Pooled (small-large) | 0.46 | 0.43 | 0.16 | 0.15 |
| - Classifier-Pooled (small-large) | 0.53 | 0.56 | 0.24 | 0.22 |
| - Classifier-Pooled (small-base) | 0.42 | 0.44 | 0.13 | 0.10 |
| - Classifier-Pooled (base-large) | 0.39 | 0.40 | 0.09 | 0.11 |
| - Classifier-Pooled (small-3b) | **0.59** | **0.61** | **0.27** | **0.25** |

Table 3: Ablation study of how different variants and hyperparameters of the proposed MDM impact the final performance of the methods.

### 4.2 Factuality Evaluation for Abstractive Summarization Models

We now consider a slightly different setup, where we use MDM to evaluate the factuality of abstractive summarization models. Previous methods usually treat the problem as an NLI problem with only the "_entailment_" case of annotations. We evaluate our model in comparison to existing works, such as Falsesum (Utama et al., 2022), QAGS (Wang et al., 2020), CoCo (Xie et al., 2021), and FactCC (Kryscinski et al., 2019).

#### 4.2.1 Dataset and Experiment Setup

We adopt most experimental settings from existing works on factuality evaluation of abstractive summarization. Table 4 lists some key statistics of the datasets we use in this part of the experiments.

| **Dataset** | **Size** |
| --- | --- |
| CNN/DailyMail (train) (Nallapati et al., 2016) | 1,003,355 |
| QAGS-CNN/DailyMail (test, w/ human annotation) (Wang et al., 2020) | 235 |
| QAGS-XSum (test, w/ human annotation) (Wang et al., 2020) | 239 |
| SummEval (test, w/ human annotation) (Fabbri et al., 2021) | 1200/3600 |

Table 4: Data usage in our experiments.

We train our likelihood functions under both supervised and unsupervised setups:

* **Unsupervised** For annotated data, we only train the likelihood function to _maximize_ the likelihood of positive samples. In this sense, the model is more similar to the one we used in dialogue evaluation.
* **Supervised** We train the likelihood function as factual-counterfactual label-conditioned probabilities using synthesized pseudo labels, following the methods described in Falsesum (Utama et al., 2022). When performing Discriminative MDM, we use the factual-conditioned larger model as the _guide_ model and the counterfactual-conditioned smaller model as the _origin_.

#### 4.2.2 Results and Analysis

We show results in Table 5.
Previous works report their performance inconsistently, either as Spearman/Pearson correlation or as an accuracy score with 0/1 quantization of the annotations. We adopt 0/1 quantization and report the accuracy of each baseline/model.

| | SummEval | QAGS Overall | QAGS CNN/DM | QAGS XSum |
| --- | --- | --- | --- | --- |
| Falsesum (Utama et al., 2022) | 65.18 | 75.05 | 94.89* | 55.65* |
| QAGS (Wang et al., 2020) | 59.82* | 72.15 | 88.08* | 56.48* |
| CoCo (Xie et al., 2021) | 66.71* | 77.00* | 93.62* | 60.66* |
| FactCC (Kryscinski et al., 2019) | 60.04 | 73.42 | 85.96* | 61.09* |
| _Discriminative MDM (Ours)_ | | | | |
| - Max-Pooled (small-large, Unsupervised) | 64.17 | 74.05 | 85.11 | 63.18 |
| - Classifier-Pooled (small-large, Unsupervised) | 64.74 | 74.47 | 90.21 | 59.00 |
| - Classifier-Pooled (small-large, Supervised) | 67.17 | 76.79 | 93.19 | 60.67 |
| - Classifier-Pooled (small-xl, Supervised) | **68.17** | **78.27** | 93.62 | 63.18 |

Table 5: Evaluation results for abstractive text summarization. For all models and datasets, we show the quantized accuracy scores. Results marked with * are from our re-implementation or re-evaluation, so as to unify the metric.

* Learned pooling does not show improvements as significant as in dialogue evaluation. This is probably because the majority of the output semantics is already contained within the condition, and the statistical properties of the step-wise contrastive momentum score are more consistently distributed.
* With the data explicitly annotated with pseudo labels, the supervised setting in general performs better. This is probably because conditioning the likelihood function on the pseudo labels implicitly creates an ensemble of MDM and the existing method Falsesum (Utama et al., 2022).

## 5 Conclusion and Future Work

This paper presents the Meta-Distribution Methods (MDM) as a general framework for evaluating open-domain text generation models. MDM is constructed around analyzing the correlation between model scale and the respective distribution predictions, and how it can be exploited to alter the performance of a given model on the fly at inference time. We demonstrate how MDM can be used for evaluation purposes in two general paradigms: Generative MDM, which manipulates existing positive samples to generate in-domain negative samples and subsequently trains a classifier, and Discriminative MDM, which employs the contrastive momentum as a direct metric for evaluation. Our experimental results on multi-turn dialogue evaluation and factuality evaluation for abstractive summarization illustrate that MDM correlates better with human intuition than traditional metrics. In summary, the MDM method emerges as a promising and scalable approach for evaluating open-domain text generation systems, among others.

For future work, it is interesting to investigate higher-order approximations of the meta-distribution that account for the dynamics of an extended series of models across different scales. Furthermore, it presents an interesting avenue to explore whether the Generative MDM approach can be extended to a more effective ensemble of heterogeneous models that differ in scale, architecture, or even training data but operate under a reasonable assumption of a partially ordered performance level.
2305.08878
Learning to Learn Unlearned Feature for Brain Tumor Segmentation
We propose a fine-tuning algorithm for brain tumor segmentation that needs only a few data samples and helps networks not to forget the original tasks. Our approach is based on active learning and meta-learning. One of the difficulties in medical image segmentation is the lack of datasets with proper annotations, because it requires doctors to provide reliable annotations and there are many variants of a disease, such as glioma and brain metastasis, which are different types of brain tumor with different structural features in MR images. Therefore, it is impossible to produce large-scale medical image datasets for all types of diseases. In this paper, we show a transfer learning method from high-grade glioma to brain metastasis, and demonstrate that the proposed algorithm achieves balanced parameters for both the glioma and brain metastasis domains within a few steps.
Seungyub Han, Yeongmo Kim, Seokhyeon Ha, Jungwoo Lee, Seunghong Choi
2023-05-13T05:26:25Z
http://arxiv.org/abs/2305.08878v1
# Learning to Learn Unlearned Feature for Brain Tumor Segmentation

###### Abstract

We propose a fine-tuning algorithm for brain tumor segmentation that needs only a few data samples and helps networks not to forget the original tasks. Our approach is based on active learning and meta-learning. One of the difficulties in medical image segmentation is the lack of datasets with proper annotations, because it requires doctors to provide reliable annotations and there are many variants of a disease, such as glioma and brain metastasis, which are different types of brain tumor with different structural features in MR images. Therefore, it is impossible to produce large-scale medical image datasets for all types of diseases. In this paper, we show a transfer learning method from high-grade glioma to brain metastasis, and demonstrate that the proposed algorithm achieves balanced parameters for both the glioma and brain metastasis domains within a few steps.

## 1 Introduction

The performance of semantic segmentation using deep neural networks has improved recently. These segmentation networks are applied to medical image analysis to help doctors save time in diagnosis. The state-of-the-art networks still require large amounts of training data for pre-training and fine-tuning. Gathering medical image datasets, however, is expensive and time-consuming, so there are fewer such datasets than datasets for common objects [1, 3, 7]. In particular, in brain tumor segmentation, there is a proper dataset called BraTS for High Grade Glioma (HGG) and Low Grade Glioma (LGG) [5], but no well-annotated dataset for brain metastasis.

HGG and brain metastasis have different structural features but similar contrast features. In a contrast-enhanced MR image, the region of a brain tumor is highlighted because of the contrast media; in both cases, tumors have similar contrast. These brain tumors have different pathological properties, so they have different structural characteristics. Therefore, a network pre-trained on the HGG dataset cannot generate perfect segmentations for brain metastasis. In this paper, we learn the unlearned features of brain metastasis without forgetting the pre-trained features, in order to optimize balanced parameters between HGG and metastasis.

We first pre-train a fully convolutional network (FCN) [4] with the HGG data in the dataset [5]. Gradient-descent-based fine-tuning, which we call naive tune, uses many selected data points with balanced instances per class to produce optimal fine-tuning results [3, 7]. We propose two novel fine-tuning methods, passive meta-tune and active meta-tune, to optimize the pre-trained network. These two methods decide which training data points are learned first and update the network with [2]. The order of the training data is determined with two active-learning-based rules: passive learning and active learning [6]. In this work, we produce annotated brain metastasis data samples from 30 patients. Similar to the BraTS dataset, for each patient there are 4 MR sequences (FLAIR, T1, T1 contrast-enhanced, T2), with 25 slices per sequence.

## 2 Proposed methodology

We propose two active-learning-based meta-tune methods, one based on random sampling and the other on a variant of uncertainty sampling [6]. We define meta-tune as a fine-tuning method using the MAML algorithm [2].
Meta-tune generalizes to the unlearned training examples more quickly than the gradient-based fine-tune method (naive tune). As shown in Figure 1, we continuously train the model on the learned features as well as the unlearned features, so as not to forget the learned features.

```
1: Set learning rate hyperparameters \(\alpha,\beta\) and pre-trained parameters \(\theta\)
2: Divide inputs into inner-loop data and outer-loop data; order inputs by the passive or active method
3: for the meta batch size of one patient do
4:     Decompose tasks into good tasks \(\mathcal{T}_{g}\) and bad tasks \(\mathcal{T}_{b}\)
5:     for \(\mathcal{T}_{i}\in\mathcal{T}_{g}\cup\mathcal{T}_{b}\) do
6:         Sample one data point \(D_{i}=(\mathbf{x},\mathbf{y})\) by each sampling method
7:         Compute \(\theta^{\prime}_{i}=\theta-\alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta})\)
8:         Sample one data point \(D^{\prime}_{i}=(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) by each sampling method
9:     end for
10:    Meta update \(\theta=\theta-\beta\nabla_{\theta}\sum_{\mathcal{T}_{i}}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta^{\prime}_{i}})\) using \(D^{\prime}_{i}\)
11: end for
```
**Algorithm 1** Active learning based meta-tune algorithms

## 3 Experimental Results

We use clinically-acquired multimodal MRI scans, with all ground-truth annotations produced by neuroradiologists. We use a VGG-16-based FCN as the pre-trained network. We then test our algorithms (yellow: edema, green: necrosis, brown: enhancing tumor, red: high probability).

## 4 Conclusion

We proposed an active meta-tune method which learns unlearned features without forgetting the original task. We show that our method has a generalization effect within the target-domain segmentation (brain metastasis). We expect our method can be extended to other medical lesion applications.

Figure 1: Two active-learning-based meta-tune methods: divide the results of the segmentation task into good-result tasks (learned) and bad-result tasks (unlearned). At the first meta-tune step, the pre-trained parameter \(\theta_{0}\) usually lies in the sub-optimal region of the pre-training optimum \(\theta^{*}_{g}\). To reach the balanced target parameter \(\theta^{*}_{t}\), sample the order of meta-tune inputs by each method. (a) is passive meta-tune, which trains with randomly ordered inputs. The optimal parameters \(\alpha_{1},\beta_{1}\) for randomly sampled tasks lie in each circle. The update direction depends on the combination of \(\alpha_{i},\beta_{i}\). (b) is the DSC-based input ordering. The first input is the farthest data point across the optima \(\theta^{*}_{g},\theta^{*}_{b}\). The induced update direction flows smoothly into \(\theta^{*}_{t}\).

Figure 3: Segmentation results on the training dataset (a)-(c); the result of passive meta-tune oscillates (b). Segmentation results on the validation dataset (d)-(f).

#### Acknowledgments

This work is in part supported by SNU Eng-Med Collaboration Grant, Basic Science Research Program (NRF-2017R1A2B2007102) through NRF funded by MSIP, Technology Innovation Program (10051928) funded by MOTIE, Bio-Mimetic Robot Research Center funded by DAPA (UD130070ID), INMAC, and BK21-plus.
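For readers implementing Algorithm 1 above, the following is a minimal PyTorch sketch of its inner and outer updates (lines 7 and 10). It assumes a segmentation network callable functionally via `torch.func.functional_call` and a generic `seg_loss`; the task decomposition and the passive/active input ordering are omitted, so this is a sketch under stated assumptions, not the authors' implementation:

```python
import torch
from torch.func import functional_call  # PyTorch >= 2.0

def meta_tune_step(model, seg_loss, tasks, alpha=1e-3, beta=1e-4):
    """One meta-update over `tasks`, a list of ((x, y), (x', y')) pairs
    holding the inner-loop and outer-loop samples of Algorithm 1."""
    theta = {n: p for n, p in model.named_parameters()}
    outer_loss = 0.0
    for (x, y), (xp, yp) in tasks:
        # Line 7: one inner gradient step on the inner-loop sample.
        inner = seg_loss(functional_call(model, theta, (x,)), y)
        grads = torch.autograd.grad(inner, list(theta.values()), create_graph=True)
        theta_i = {n: p - alpha * g for (n, p), g in zip(theta.items(), grads)}
        # Accumulate the outer loss on the outer-loop sample.
        outer_loss = outer_loss + seg_loss(functional_call(model, theta_i, (xp,)), yp)
    # Line 10: meta update of the shared parameters.
    meta_grads = torch.autograd.grad(outer_loss, list(theta.values()))
    with torch.no_grad():
        for p, g in zip(theta.values(), meta_grads):
            p -= beta * g
```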
2302.05323
On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence
Deep Neural Network (DNN) Inference in Edge Computing, often called Edge Intelligence, requires solutions to ensure that sensitive data confidentiality and intellectual property are not compromised in the process. Privacy-preserving Edge Intelligence is only emerging, despite the growing prevalence of Edge Computing as a context of Machine-Learning-as-a-Service. Solutions are yet to be applied, and possibly adapted, to state-of-the-art DNNs. This position paper provides an original assessment of the compatibility of existing techniques for privacy-preserving DNN Inference with the characteristics of an Edge Computing setup, highlighting the appropriateness of secret sharing in this context. We then address the future role of model compression methods in the research towards secret sharing on DNNs with state-of-the-art performance.
Daphnee Chabal, Dolly Sapra, Zoltán Ádám Mann
2023-02-10T15:34:42Z
http://arxiv.org/abs/2302.05323v2
# On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence

###### Abstract

Deep Neural Network (DNN) Inference in Edge Computing, often called Edge Intelligence, requires solutions to ensure that sensitive data confidentiality and intellectual property are not compromised in the process. Privacy-preserving Edge Intelligence is only emerging, despite the growing prevalence of Edge Computing as a context of Machine-Learning-as-a-Service. Solutions are yet to be applied, and possibly adapted, to state-of-the-art DNNs. This position paper provides an original assessment of the compatibility of existing techniques for privacy-preserving DNN Inference with the characteristics of an Edge Computing setup, highlighting the appropriateness of secret sharing in this context. We then address the future role of model compression methods in the research towards secret sharing on DNNs with state-of-the-art performance.

## Introduction

Deep Neural Networks (DNNs), the prominent tools used in the field of Artificial Intelligence, are sought after in many sectors of activity to optimize decision-making and improve the quality of services [11]. Specifically, the number of DNN deployments for commercial purposes during a customer's interaction with everyday objects is proliferating [13].

Privacy-preserving Inference aims to protect the privacy and security of data belonging to the multiple parties involved in Neural Network Inference. There is a global rise of smart services offered by internet-connected devices (sometimes called the Internet of Things or IoT), which are increasingly immersed in daily life (e.g., smartwatches, smartphones, personal digital assistants) and record confidential facts about our lives. The International Data Corporation estimates that in 2025 there will be more than 55 billion IoT devices in the world [14], compared to 12.5 billion in 2010 [21]. The data these devices collect at the edge of the edge-cloud computing continuum will, in many use cases, be processed locally, through an emerging decentralized computing paradigm called Edge Computing [22, 23, 24].

The privacy risk in processing the data through DNNs is two-fold. On one hand, Inference data is produced by individuals, institutions, and businesses, is held by the devices they own or use, and may be shared with the businesses that make those devices available. The data however needs to be shared in full with parties that facilitate the "intelligence", as is the case for the Machine-Learning-as-a-Service (MLaaS) business model. On the other hand, DNNs are costly for companies to develop. The DNN's architecture, inner parameters, as well as the sensitive features contained in the data used during training, are thus deemed valuable confidential proprietary data for the companies. The model however still needs to be made available to third parties to generate meaningful (i.e., accurate) Inference outputs.

In recent years, many techniques have been put forward to solve the predicament of functional-yet-privacy-preserving DNN Inference [15, 16, 17]. Most works however do not consider the global context of secure and private AI deployment for MLaaS, in terms of (1) the characteristics of distributed systems that execute DNN Inference and (2) the computational requirements of actual state-of-the-art DNNs underlying commercial smart services.
Edge Computing offers several advantages over cloud computing, including reduced latency for a better user experience and increased agency over the data's life cycle, as data is redirected through fewer nodes, is less attainable to unknown third parties, and the risk of bottlenecks in gateways decreases [25, 26]. However, a major drawback is that the devices involved, with Inference clients such as IoT objects or sensors, and DNN holders such as Edge servers or small data centers, have less computational capacity than that offered on demand by cloud platforms [22, 14, 15]. Moreover, methods of privacy-preserving AI are assumed to be applied on top of the ubiquitous security procedures existing in distributed systems globally (e.g., access control, anomaly detection, encrypted communication), which themselves add load (Aqeel-ur Rehman et al., 2016; Lachner, Mann, and Dustdar, 2021).

Some work (Huang et al., 2019; Baccour et al., 2020; Yan, Pei, and Li, 2019) has been proposed to bring privacy-preserving DNN Inference to decentralized Edge Computing. However, these methods were evaluated with outdated DNNs that are less computationally complex than the state-of-the-art models we see in present-day AI applications. These simpler models have little to no real-world application in commercial MLaaS setups.

The aim of this paper is to present informed recommendations for upcoming research to achieve privacy-preserving DNN Inference in modern and commercially relevant Edge Computing settings. While promising, works emerging in this domain are still isolated efforts. The research avenues we formulate, which we coin here as **Privacy-Preserving Edge Intelligence**, are at the emerging intersection of two very active research fields: privacy-preserving DNN Inference and DNN Inference in Edge Computing. It is important to note that the training phase is out of scope for this paper, as training is impractical in Edge Computing settings, especially in the context of MLaaS. In particular, Federated Learning is a promising solution already put forward for computationally-sensitive privacy-preserving training in a collaborative setting and is actively researched (Yin, Zhu, and Hu, 2021; Mothukuri et al., 2021; El Ouadthriri and Abdelhadi, 2022), but is not in our scope as we focus on inference.

## Privacy Requirements for Edge Intelligence

Devices and Edge servers have a high risk of malicious tampering and intervention due to their ease of access (Aqeel-ur Rehman et al., 2016). Solutions for privacy-preserving Edge Intelligence must therefore be effective in providing information security (Mann, 2022). There are 4 main privacy requirements during the Inference phase in MLaaS: the client may not learn 1) the model's architecture and 2) the model's trained parameters, while the party holding the model, typically the server, must not learn 3) the Inference input data nor 4) the Inference output. Here, we assume that standard system security methods (e.g., limiting the number of Inference requests) are in place to protect a fifth piece of potentially sensitive data, the training dataset (see model inversion attacks) (Boulemtafes, Derhab, and Challal, 2020).

General characteristics of Edge Intelligence are described extensively in (Yu et al., 2017; Deng et al., 2020; Xu et al., 2021; Chen and Ran, 2019), providing criteria for privacy-preserving solutions applied in an Edge Intelligence context.
Solutions should allow practical implementations in a commercial setting, and must provide accurate and timely Inference. Edge Intelligence setups may involve more than two parties during DNN Inference. For example, in a smart home sensor-actuator setup, computations offloaded to several servers (operated by different companies) may receive Inference input from some sources, while sending Inference output to other devices. Information should remain private, even if parties are secretly colluding.

In an Edge setup, clients sending Inference inputs are often low-capacity devices with minimal compute capacity for data collection, temporary storage, and transmission tasks, while the model is held and evaluated by a nearby Edge server. Servers are nowadays capable of receiving and sending more than a Gbps using LAN or other fast intranet networks. A potential communication bottleneck arises, however, when network transmission is of type PAN (e.g., Bluetooth), WAN, MAN, or LPWAN, all common in Edge Computing setups.

Client drop-out occurs mostly with mobile devices such as smartphones, which can also easily be turned off. In most cases, client drop-out is inconsequential: even if the client is assigned chunks of Inference computations, DNN Inference can be paused until the client is within reach again. This criterion is however important for cases where device drop-out would disrupt task distribution (e.g., swarm intelligence with drones, and mobile computing). Lastly, to make IoT objects available to consumers at affordable costs, they may lack state-of-the-art hardware specialized for DNN Inference. Additionally, when a device drops out, Edge Computing algorithms aim to dynamically re-assign tasks to the next available device regardless of hardware. Solutions should therefore be applicable to most types of hardware, without assuming that specialized hardware is available.

### Assessment of Privacy-Preserving Techniques

The main techniques that constitute the field of privacy-preserving DNN Inference are reviewed in several comprehensive surveys (Boulemtafes, Derhab, and Challal, 2020; Zhang, Xin, and Wu, 2021; Pulido-Gaytan et al., 2021; Ball et al., 2019). In this section, we assess the compatibility of each category of techniques with the requirements of Edge Intelligence (summarized in Table 1).

**Fully Homomorphic Encryption (FHE)**: a form of encryption \(E\) performed by the client on input data \(x\). \(E(x)\) is sent to the server, which returns, after Inference, the ciphertext \(E(y)\) to the client. \(E(y)\) is the encrypted equivalent of the correct Inference output \(y\). Optionally, some parameters of the DNN can also be encrypted. FHE meets all four privacy requirements, and the Inference task itself can be completed in case of client drop-out. However, low-end client devices cannot carry out the heavy cryptographic operations required. Despite promising recent advances (Brutzkus, Gilad-Bachrach, and Elisha, 2019; Reagen et al., 2021; Lee et al., 2022) since its inception (Gentry, 2009), FHE schemes are still too computationally demanding for application to Edge Intelligence. Additionally, FHE only supports additions and multiplications, and thus requires additional processing for other, non-linear operations (e.g., polynomial approximation of some activation functions). Despite this, little to no loss in accuracy has been reported, especially via re-training modified DNNs.
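To illustrate the non-linearity issue: since an FHE scheme evaluates only additions and multiplications, activations such as ReLU are typically replaced with low-degree polynomials before (re-)training. A plaintext sketch of the idea, with an illustrative degree-2 least-squares fit rather than any specific published scheme:

```python
# Plaintext sketch of why FHE-friendly networks replace ReLU with a
# low-degree polynomial: only + and × are available under the encryption.
import numpy as np

x = np.linspace(-4, 4, 401)
relu = np.maximum(x, 0.0)
coeffs = np.polyfit(x, relu, deg=2)  # fit a*x^2 + b*x + c on [-4, 4]
poly = np.polyval(coeffs, x)         # evaluable with additions/multiplications only

print("max |ReLU - poly| on [-4, 4]:", np.abs(relu - poly).max())
# The gap is the accuracy cost; as noted above, re-training the network with
# the polynomial activation typically recovers most of it.
```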
**Garbled Circuits**: a 2-party protocol based on converting a neural network to a Boolean circuit made of AND, XOR, and XNOR gates, where each gate corresponds to an operation. The architecture of the DNN is known to both parties, but the input data and the weights of the DNN are kept secret. As a sub-protocol, oblivious transfer is used: a public-key-cryptography-based scheme that enables one party to send one of two inputs to a second party, such that the second party only learns one of the inputs and the first party does not learn which input the second party learned. Garbled Circuits support both linear and non-linear operations but are computationally costly (especially for AND gates [1]). Moreover, creating the garbled circuit includes the creation and permutation of a truth table per gate, to further encrypt it (e.g., using AES-based cryptography). As with FHE, this method is thus ill-adapted to low-end clients.

**Secret Sharing**: \(n\)-party secure Multi-Party Computation protocols in which each value involved in DNN Inference (i.e., input and model parameters) is divided into \(n\) shares, such that individual shares do not reveal anything about the secret values. Since it was originally introduced [1], several different secret sharing schemes have been proposed. Additive [14, 15] and replicated [16, 17] secret sharing seem especially appropriate for DNN Inference. For a complete Inference, the evaluation of the layers of the DNN may be performed in several ways, depending on various factors. In particular, different protocols can be used for addition, multiplication (e.g., masking inputs with Beaver triples), and non-linear operations (e.g., polynomial approximation, garbled circuits). The protocols also depend on the number of parties involved, as well as whether collusion is accounted for or not. Servers can also send the client shares to compute. Secret Sharing is not encryption-based and is therefore relatively cheap to add to DNN Inference tasks. DNN Inference however fails if devices holding information on how shares are created (e.g., the random number generator) drop out. Secret Sharing is still a communication-intensive privacy-preserving method, necessitating a high number of communication rounds.
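A toy plaintext sketch of the arithmetic behind additive secret sharing: individual shares reveal nothing, additions are local, and a multiplication consumes one pre-distributed Beaver triple plus a round of communication. The field size and the in-process "dealer" are simplifications for illustration:

```python
# Toy additive secret sharing over a prime field.
import secrets

P = 2**61 - 1  # prime field modulus (illustrative choice)

def share(x, n=2):
    """Split x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Addition is local: parties add their shares component-wise.
x_sh, y_sh = share(12), share(30)
assert reconstruct([(a + b) % P for a, b in zip(x_sh, y_sh)]) == 42

# Multiplication uses a Beaver triple (a, b, c) with c = a*b,
# pre-distributed here by an in-process "dealer".
a, b = secrets.randbelow(P), secrets.randbelow(P)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % P)

x, y = 6, 7
x_sh, y_sh = share(x), share(y)
# Each party opens (broadcasts) its share of x-a and y-b.
d = reconstruct([(xs - as_) % P for xs, as_ in zip(x_sh, a_sh)])  # x - a
e = reconstruct([(ys - bs) % P for ys, bs in zip(y_sh, b_sh)])    # y - b
# Each party i computes z_i = c_i + d*b_i + e*a_i locally;
# one party additionally adds the public term d*e.
z_sh = [(cs + d * bs + e * as_) % P for cs, as_, bs in zip(c_sh, a_sh, b_sh)]
z_sh[0] = (z_sh[0] + d * e) % P
assert reconstruct(z_sh) == (x * y) % P
```

Linear layers of a DNN reduce to many such additions and multiplications over shares, which is why the round count, not the per-value cost, dominates.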
**Model Splitting without noise**: partitioning a DNN so that each party receives unprocessed chunks of calculations, including raw weights and inputs. Accuracy is therefore preserved. The higher the number of devices recruited, the higher the privacy as well as the speed (with the possibility of parallel computing), as no party may reconstruct the neural network, nor infer the training or input data from the parts it receives [1]. This method requires the client to perform the initial and last computations, but does not need a powerful server. In case of collusion, model architecture and parameter privacy are largely lost.

**Model Splitting with noise**: noise is added by the client to the input data, intermediary results, and/or weights when the client receives partial computations from a DNN. The noise added must fulfill the requirements of Differential Privacy, which mathematically guarantee that data is obfuscated sufficiently to conceal individual records (e.g., a person's identity) it may contain. This is necessary because raw input data can be reconstructed from intermediary results even after 6 layers [13]. There is a privacy/accuracy trade-off based on the amount of noise added. The client is responsible for noising and de-noising, which can be computationally expensive depending on the scheme used (e.g., auto-encoders and decoders for obfuscation).

**Secure Enclaves**: dedicated portions of memory which are designated by the CPU as inaccessible to the operating system or any other application, and within which data can be secretly processed and encrypted/decrypted if necessary. A popular example of Secure Enclaves is Intel's SGX [12]. For DNN Inference, two parties may send an encrypted model and input data, respectively, which can then be decrypted and processed within Secure Enclaves, finally returning the Inference output to the appropriate party, thus providing full privacy. Secure Enclaves are however costly and more memory-limited than traditional hardware.

This assessment indicates a particular suitability of Secret Sharing for Edge Intelligence, as it meets the most criteria (7 out of 9), in particular all information privacy and performance-related requirements. Therefore, we dedicate the rest of the paper to discussing secret sharing and its applicability for Edge Intelligence.

\begin{table}
\begin{tabular}{l l c c c c c c}
\hline \hline
 & & Fully Homomorphic Encryption & Garbled Circuit & Secret Sharing & Model Splitting w/o noise & Model Splitting w/ noise & Secure Enclave \\
\hline
Edge Intelligence Requirements & fulfills the 4 privacy requirements & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\
 & can involve \(>\)2 parties & ✗ & ✗ & ✓ & ✓ & ✗ & ✗ \\
 & Inference accuracy & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ \\
 & low latency expected & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\
 & minimal compute capacity (client) & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\
 & limited compute capacity (server) & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\
 & limited communication & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ \\
 & high drop-out rate (client) & ✓ & ✗ & ✓ & ✗ & ✗ \\
 & hardware independence & ✓ & ✓ & ✓ & ✓ & ✗ \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Summary of our assessment of Privacy-Preserving techniques for Edge Intelligence.

## Implications of State-of-the-Art Performance

Recent solutions for Secret Sharing in DNN Inference [15], not only for Edge Computing, all still use outdated Convolutional Neural Networks (CNNs) as evaluative benchmarks (e.g., AlexNet [14] trained on MNIST [4]). The performance of these CNNs is humble compared to that of Transformers (e.g., answer generation from multi-modal inputs [11]), a state-of-the-art category of DNNs now ubiquitous in the field of AI [15]. The question then arises: would these solutions, as they are, be applicable for fast Edge Intelligence in the context of a real and state-of-the-art MLaaS task? The answer is probably 'no'. Primarily, larger DNNs have higher complexity (i.e., more parameters to compute, more nodes in layers due to larger inputs), leading to an increase in the number of secret shares to produce and re-combine. The number of non-linear operations during Secret Sharing, while manageable in smaller DNNs, can become problematic as it increases, which may necessitate more Garbled Circuits and/or Beaver Triplets; both methods are especially intensive in 2-party settings, despite possibilities of some offline processing [13]. Furthermore, state-of-the-art DNNs have a higher diversity in the types of layers (e.g., Self-Attention, Recurrent) than benchmark CNNs do, which current Secret Sharing schemes are not yet designed to handle [13].
New types of layers (e.g., Self-Attention) have more data processing, such as parallel encodings of subsets of inputs (e.g., a single word), as well as more operations to perform per layer than classic layers (e.g., ReLU, pooling), requiring new protocols beyond current ones, which consider a layer as a unit only taking in simultaneous inputs [12].

## Model Compression and Secret Sharing

A first step towards bringing large Transformers to Secret Sharing, particularly for Edge Intelligence, is to tackle the computation and communication bottleneck. Three categories of solutions exist: 1) Hardware Acceleration [15], consisting of a set of instructions to parallelize computational tasks into specialized hardware components (e.g., Neural network Processing Units - NPUs [16]) - similarly to Secure Enclaves, they may be too expensive to integrate in commercial objects; 2) Software Orchestration [4, 1], consisting of developing data pipelines or algorithms to optimize resource management to reduce the latency of DNN Inference - it is assumed to be applied to some extent in any distributed system, and cannot reduce the rounds of communication required for Secret Sharing; and 3) Model Compression [16, 1], reducing the amount and complexity of the computations - typically requiring re-training to retain accuracy.

Model Compression techniques, namely quantization, pruning [10], knowledge distillation [13], and low-rank approximation [12], while increasingly customary in Edge Intelligence, have yet to be compared in the context of Secret Sharing. Table 2 provides a qualitative comparison. Quantization reduces the byte-size of each value, and consequently the size of the secret shares communicated between parties, but leaves the total number of communication rounds largely unaffected. In particular, solutions applying binary or ternary quantization to weights, and a limited fixed-point size to activation functions, preserve the granularity of input data while significantly reducing communication [10]. Quantization can be combined with other model compression techniques as well. Pruning removes inconsequential computations in a DNN. It offers no guarantee of effectiveness in addressing the computational complexity specific to Secret Sharing. In the best cases, however, Pruning may remove a significant number of connections between the nodes of a DNN, thus reducing computation and communication. Knowledge Distillation (i.e., training a smaller network from a larger one) and Low-rank Approximation (i.e., reducing the dimensionality of each layer via matrix decomposition) are more promising candidates for secret sharing. Firstly, they remove extra features from the input data sooner, thus reducing the amount of input to propagate throughout the network. Secondly, they both reduce the amount of computation systematically throughout the DNN (i.e., most layers are reduced in size). Consequently, fewer rounds of Secret Sharing communication are necessary per layer. Knowledge Distillation also reduces the number of layers [15]. Lastly, both Knowledge Distillation and Low-rank Approximation are actively researched and have recently been successfully applied to state-of-the-art Transformers (e.g., BERT [14] became DistilBERT [13] and LadaBERT [15]). The accuracy of compressed versions of those models is also improving [15].
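As a rough illustration of the low-rank idea (a generic truncated-SVD factorization of our own, not a specific published scheme), the sketch below replaces a dense layer's weight matrix with two thin factors, shrinking the number of multiplications a Secret Sharing protocol would have to evaluate; the sizes and rank are arbitrary.

```python
import numpy as np

# Illustrative only: factor a dense layer W (d_out x d_in) into two thin
# matrices via truncated SVD. Trained weight matrices are often close to
# low-rank; we mimic that by construction plus a little noise.
rng = np.random.default_rng(0)
d_out, d_in, rank = 512, 768, 64
W = (rng.standard_normal((d_out, rank)) @ rng.standard_normal((rank, d_in))
     / np.sqrt(rank) + 0.01 * rng.standard_normal((d_out, d_in)))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]      # d_out x rank
B = Vt[:rank, :]                # rank  x d_in

# y = W x becomes y = A (B x): two small matmuls instead of one large one,
# so fewer secret-shared multiplications and smaller intermediate shares.
x = rng.standard_normal(d_in)
y_full, y_lr = W @ x, A @ (B @ x)

params_full = d_out * d_in
params_lr = rank * (d_out + d_in)
print(f"parameters: {params_full} -> {params_lr} "
      f"({params_lr / params_full:.1%} of original)")
print("relative error:", np.linalg.norm(y_full - y_lr) / np.linalg.norm(y_full))
```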
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & Less operations in total & Less non-linear operations & Reduced message sizes & Less communication rounds \\
\hline
Quantizing & yes & yes & yes & no \\
Pruning & yes & not purposefully & yes & not purposefully \\
Knowledge Distillation & yes & yes & no & yes \\
Low-rank approximation & yes & yes & no & yes \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Summary of the impact of Model Compression techniques on Secret Sharing for DNN Inference

## Conclusion

Secret Sharing was deemed the most promising privacy-preserving technique given Edge Intelligence characteristics, but is not yet applicable to state-of-the-art Deep Neural Networks. Future research should address the new types of DNN layers and computations that current Secret Sharing schemes do not yet account for, while optimizing performance for MLaaS in Edge Computing. We put forward, pending experiments, Knowledge Distillation and Low-Rank Approximation as promising means to further accommodate new Secret Sharing protocols, for practical Edge Intelligence.
2302.13541
Solar Energetic Particle Events with Short and Long Onset Times
Gradual solar energetic particle (SEP) events, usually attributed to shock waves driven by coronal mass ejections (CMEs), show a wide variety of temporal behaviors. For example, TO, the >10 MeV proton onset time with respect to the launch of the CME, has a distribution of at least an order of magnitude, even when the source region is not far from the so-called well-connected longitudes. It is important to understand what controls TO, especially in the context of space weather prediction. Here we study two SEP events from the western hemisphere that are different in TO on the basis of >10 MeV proton data from the Geostationary Operations Environmental Satellite, despite being similar in CME speed and source region longitude. We try to find the reasons for different TO, or proton release times, in how the CME-driven shock develops and the Alfv\'en Mach number of the shock wave reaches some threshold, by combining the CME height-time profiles with radio dynamic spectra. We also discuss how CME-CME interactions and active region properties may affect proton release times.
Kosuke Kihara, Ayumi Asai, Seiji Yashiro, Nariaki V. Nitta
2023-02-27T06:40:10Z
http://arxiv.org/abs/2302.13541v1
# Solar Energetic Particle Events with Short and Long Onset Times ###### Abstract Gradual solar energetic particle (SEP) events, usually attributed to shock waves driven by coronal mass ejections (CMEs), show a wide variety of temporal behaviors. For example, TO, the \(>\)10 MeV proton onset time with respect to the launch of the CME, has a distribution of at least an order of magnitude, even when the source region is not far from the so-called well-connected longitudes. It is important to understand what controls TO, especially in the context of space weather prediction. Here we study two SEP events from the western hemisphere that are different in TO on the basis of \(>\)10 MeV proton data from the Geostationary Operations Environmental Satellite, despite being similar in CME speed and source region longitude. We try to find the reasons for different TO, or proton release times, in how the CME-driven shock develops and the Alfven Mach number of the shock wave reaches some threshold, by combining the CME height-time profiles with radio dynamic spectra. We also discuss how CME-CME interactions and active region properties may affect proton release times. Solar energetic particles (1491); Solar coronal mass ejections (310); Solar coronal mass ejection shocks (1997); Space weather (2037)

## 1 Introduction

Gradual solar energetic particle (SEP) events are almost always accompanied by fast and extended coronal mass ejections (CMEs) that drive shock waves. These SEP events can be extremely intense, posing various space weather impacts, for example, on human bodies, satellite operations, high-frequency communications, etc. Their temporal variations as well as magnitudes are among the most important items of space weather prediction. There is a general trend of timescales with respect to the locations of the associated flares such that SEP events originating in regions in the western hemisphere start earlier and reach the peak fluxes in shorter times than those occurring elsewhere (Cane et al., 1988). Kahler (2005) and Kahler (2013) introduced the following three timescales. TO: the SEP onset time with respect to the CME launch; TR: the rise time from the SEP onset time to the SEP half-peak during the rising phase; TD: the duration between the SEP half-peaks during the rising and declining phases. These papers revealed that TR and TD were positively correlated with CME speed, and interpreted this as fast CMEs continuing to drive the shock wave and inject SEPs for a long time. They also revealed that TO was related to CME speed and peak proton flux, but no correlation with the acceleration of CME was found. TO is particularly challenging to understand, as we know of some events with short TO from likely far-side regions that are almost certainly ill-connected (e.g., Cliver et al., 2005; Gomez-Herrero et al., 2015; Kahler, 2016). TO, as determined by first-arriving particles, may contain more information on acceleration processes close to the Sun than TR and TD, which may be more susceptible to transport processes. In a recent statistical study of the association of fast CMEs with SEP events mostly during solar cycle 24 (Kihara et al., 2020), the three timescales were measured and compared with the source locations and CME speeds.
In particular, TO was found to be short if the source region was within 60\({}^{\circ}\) in longitude from the footpoint of the Parker spiral (median: 86 minutes but 308 minutes in other longitudinal ranges), and negatively correlated with the CME speed for better-connected events. But the scatter was quite large even for events with small longitudinal separations from the footpoint of the Parker spiral. In this paper, we further investigate two events from Kihara et al. (2020) that apparently had different TO, despite their similar source locations in the western hemisphere and similar CME speeds of \(\sim\)1200 km s\({}^{-1}\). We explore the possibility that the event with longer TO may reflect a slow growth of the CME-driven shock wave that becomes strong enough for particle acceleration only at later times. Combining CME height-time profiles with radio dynamic spectra that contain type II radio bursts, we follow the temporal evolution of the Alfven Mach number of the shock wave above the two active regions without conducting advanced modeling. In Section 2, we describe the event selection and give an overview of the two events. We revisit in Section 3 the SEP timescales that are used for the subsequent analysis. In Section 4, we study how the shock waves develop in the two events in relation to TO or the SEP release times. In addition, we study other factors that may affect these times. We summarize our findings in Section 5.

## 2 Observations

### Event Selection

Kihara et al. (2020) conducted a statistical study of energetic CMEs that occurred between December 2006 and October 2017 in terms of their associations with SEP events. They also studied the timescales of the associated SEP events with respect to the speeds and source locations of the CMEs as shown in Table 2 of Kihara et al. (2020). They based the SEP analysis on data from the Energetic Particle Sensor (Onsager et al., 1996) on the Geostationary Operations Environmental Satellite (GOES), and the High-Energy Telescope (HET; von Rosenvinge et al., 2008) and the Low-Energy Telescope (LET; Mewaldt et al., 2008), which belong to the suite of instruments for the In Situ Measurements of Particles and CME Transients (IMPACT; Luhmann et al., 2008) on the Solar-Terrestrial Relations Observatory (STEREO; Kaiser et al., 2008). The SEP events were identified when the \(>\)10 MeV proton flux exceeded 1 particle flux unit (pfu; defined as particles s\({}^{-1}\) sr\({}^{-1}\) cm\({}^{-2}\)). The CMEs responsible for the SEP events and the associated flares were found in white-light coronagraph and EUV low-coronal images produced by the instruments on the Solar and Heliospheric Observatory (SOHO; Domingo et al., 1995), Solar Dynamics Observatory (SDO; Pesnell et al., 2012), and STEREO. As expected, Kihara et al. (2020) found that SEP events that occurred in regions not far from the magnetic footpoints of the observer tend to have shorter timescales (in both TO and TR, see Figure 6 of the paper). However, TO mostly (77/82) ranges from 0.5 to 4 hours even when the longitudinal separation of the region from the Parker spiral footpoint is less than 60\({}^{\circ}\). TO also appears to depend on the speed of the associated CME. In this paper, we selected two events that have widely different TO (i.e., 62 and 158 minutes) even though they came from regions in similar longitudes and were associated with halo CMEs with similar speeds. They occurred on 2014 April 18 and 2017 July 14.
We hereafter refer to these SEP events as Event 1 and Event 2, respectively. Their basic parameters are shown in Table 1. The primary purpose of this work is to explain this wide difference in TO. We also revise the SEP onset times in Section 3, which we will use in the subsequent analyses.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
 & \multicolumn{4}{c}{CME} & \multicolumn{2}{c}{type II radio burst\({}^{e}\)} & \multicolumn{2}{c}{SEP event} \\ \cline{2-9}
ID & launch\({}^{a}\) & speed\({}^{b}\) & width\({}^{c}\) & source\({}^{d}\) & frequency & time & \(I_{p}\)\({}^{f}\) & TO\({}^{g}\) \\
 & date and time & (km s\({}^{-1}\)) & (deg) & location & (MHz) & & (pfu) & (min) \\
\hline
Event 1 & 2014-04-18 12:43 & 1203 & 360 & S20W34 & 60 & 12:55 & 58.5 & 62 \\
Event 2 & 2017-07-14 01:12 & 1200 & 360 & S06W29 & 14 & 01:20 & 13.6 & 158 \\
\hline
\end{tabular}
\({}^{a}\) The launch time of CME calculated by extrapolating the height-time relations from the LASCO C2 and C3 data to the solar surface. Cited from the LASCO CME catalog. \({}^{b}\) The linear speed obtained by fitting whole data points in LASCO C2 and C3. Cited from the LASCO CME catalog. \({}^{c}\) The width in the plane of the sky of CME measured in LASCO C2 FOV. Cited from the LASCO CME catalog. \({}^{d}\) The location of the associated flare analyzed in Kihara et al. (2020). \({}^{e}\) Frequency and time at the onset of the associated type II radio burst in each event. \({}^{f}\) The peak proton flux with energies above 10 MeV observed by the GOES satellite. Defined in Kihara et al. (2020). \({}^{g}\) The \(>\)10 MeV proton onset time with respect to the launch of the CME. Defined in Kihara et al. (2020).
\end{table}
Table 1: Basic Parameters of the Two SEP Events

### Overview of the Events

In Figure 1 we plot the soft X-ray (SXR) and SEP (proton) time profiles of the two events over two-day intervals. The flare associated with Event 1 (in panel (a)) is M7.3 in the GOES classification (the peak 1 – 8 Å flux of 7.3\(\times 10^{-5}\) W m\({}^{-2}\)), whereas the one associated with Event 2 (in panel (c)) is M2.4. The latter flare is of much longer duration, staying above the pre-event level in the GOES 1 – 8 Å channel for more than two days. Both flares are associated with halo CMEs, whose mean linear speed is \(\sim\)1200 km s\({}^{-1}\) across the combined field of view (FOV) of the C2 and C3 telescopes of the Large Angle Spectrometric Coronagraph (LASCO; Brueckner et al., 1995) on board SOHO. The CME launch times in black dashed lines are calculated by extrapolating the height-time relations from the LASCO C2 and C3 data to the unit height (1 solar radius R\({}_{\odot}\)), i.e., the solar surface.

Figure 1: The soft X-ray (SXR) and the integrated flux of \(>\)10 MeV protons observed by the GOES satellite for Event 1 ((a) and (b)) and for Event 2 ((c) and (d)). In panels (a) and (c), red and blue lines correspond to 1–8 Å and 0.5–4 Å. In panels (b) and (d), the launch time of CME and the time of proton onset are indicated by the black and red dashed lines, respectively.

Both Event 1 and Event 2 are accompanied by type II radio bursts, although their appearances are quite different, as seen in Figure 2, where we show radio dynamic spectra between 180 MHz and 0.1 MHz that consist of data from ground-based observatories and the Radio and Plasma Wave Experiment (WAVES; Bougeret et al., 1995) on the Wind spacecraft. In Event 1 (Figure 2(a)), the type II radio burst started at 12:55 UT from about 60 MHz (fundamental),
which is 12 minutes after the CME launch (12:43 UT) and 8 minutes before the SXR peak (13:03 UT). It is preceded by strong type III radio bursts during the flare impulsive phase. In Event 2 (Figure 2(b)), the type II radio burst is weak and intermittent and seen only in Wind/WAVES data below 14 MHz. It started at 01:20 UT, which is 12 minutes after the CME launch (01:12 UT) and 49 minutes before the SXR peak (02:09 UT). Type III radio bursts are also weak in Event 2, mostly at frequencies below the type II radio burst, sometimes categorized as shock-accelerated events (Cane et al., 1981). Although type II radio bursts are widely considered to signify shock waves, accelerating \(\lesssim\)10 keV electrons, the proton onset is delayed in Event 2 much more than expected for \(\sim\)10 MeV protons, as reflected in larger TO. Lastly, note strong emissions starting around 03:00 UT in Figure 2(b). They do not follow the frequency drift of the type II radio burst. These features may indicate an interaction of the CME in Event 2 with a previously-launched CME (Gopalswamy et al., 2002). In Section 4.2 we will briefly discuss the possible effect of this CME-CME interaction on the observed SEPs in Event 2.

Spatially-resolved coronal observations of the two events are given in Figure 3, where panels (a) – (d) and (e) – (h) cover Event 1 and Event 2, respectively. Low coronal images (panels (a), (b), (e), and (f)) come from the 211 Å channel of the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012) on board SDO. The remaining panels consist of coronagraph images that come from LASCO. The origins of the eruptions - NOAA AR 12036 at S17W35 for Event 1 and NOAA AR 12665 at S06W29 for Event 2 - are contained in the yellow boxes in Figures 3(a) and 3(e), in which we note coronal dimmings in pre-event subtracted images (Figures 3(b) and 3(f)). Both events are associated with a halo CME, although asymmetric, as seen in Figures 3(d) and 3(h). Figures 3(c) and 3(g) show the first available LASCO C2 images of the CMEs in the two events. It appears that we miss an early development of the CME in Event 1 due to the data gap of \(\sim\)40 minutes preceding the image in Figure 3(c). The CME in Event 2 was preceded by a narrower CME, which was associated with a C3.0 flare from AR 12667 around N12W71. This region, indicated by a green arrow in Figure 3(e), produced C2.0, C5.9, and C3.0 flares starting, respectively, at 21:27, 21:46, and 23:30 UT on July 13. All of them produced a slow and narrow CME, and an electron event across the 10 keV – 2 MeV range but not a proton event. When protons increased in Event 2, the electron background was still elevated due to the electron event associated with the C3.0 flare, so it is not clear whether Event 2 produced an electron event. In contrast, Event 1 was accompanied by a strong electron event well above the elevated background in Event 2. The CME in Event 2 apparently caught up with the narrow CME and possibly resulted in a CME-CME interaction, as suggested in Figure 3(h). However, this is an hour earlier than the CME-CME interaction indicated in radio data (Figure 2(b)).

Figure 2: Radio dynamic spectra of the two events in the combined metric and DH ranges. The latter data are obtained with the Wind/WAVES instrument, and the metric data are obtained at (a) RSTN/San Vito for Event 1 and (b) Culgoora Observatories for Event 2, respectively.
The black (red) dashed lines indicate the CME launch times (the onset times of \(>\)10 MeV protons as observed by GOES), which replicate those in Figure 1. The cyan dashed lines indicate the start times of the type II radio bursts. The purple line in (b) indicates a proton onset time from SOHO/ERNE (see Section 3).

Figure 3: Low coronal and coronagraph images for Event 1 ((a) – (d)) and Event 2 ((e) – (h)). (a) and (e): AIA 211 Å images prior to the eruptions that led to the CMEs. The active regions that hosted the eruptions are included in the yellow boxes, in which coronal dimmings are noted in difference images with the pre-eruption image subtracted ((b) and (f)). (c) and (g): first available LASCO images of the CMEs in Event 1 and Event 2. (d) and (h): later LASCO images. The green arrow in (e) points to AR 12667, which produced narrower and slower CMEs than that in Event 2.

## 3 Further analysis of SEP events

Here, we re-evaluate TO of the two events. The GOES energetic particle data suffer from high background, which may prevent the SEP onset from being properly captured if the particle flux rises slowly from a low level. Another problem may be a possibly inadequate energy discrimination because the detector is only passively shielded (Posner, 2007; Kuhl and Heber, 2019). These issues drive us to study similar data from other instruments. Here we analyze data from the High Energy Detector (HED) of the Energetic and Relativistic Nuclei and Electron (ERNE; Torsti et al., 1995) experiment on board SOHO, which measures protons in the energy range of 13 – 130 MeV divided into 10 channels and has much lower background. In Event 1, protons were detected above the background up to the 64 – 80 MeV channel, where the onset time is found to be the same (\(\pm\)5 minutes) as that of the GOES \(>\)10 MeV integral channel; the onset time of the 13 – 16 MeV channel comes \(\sim\)30 minutes later. We keep the same TO that was calculated by Kihara et al. (2020) for Event 1, noting that it refers to the ERNE 64 – 80 MeV channel. In Event 2, on the other hand, the onset times of all the ERNE/HED channels that detected protons above the background (up to the 50 – 64 MeV channel) are much earlier than that of the GOES \(>\)10 MeV integral channel, suggestive of an effect of the high background of the latter. The "revised" onset time, coming from the 50 – 64 MeV channel of ERNE/HED, is used to redefine TO as indicated by the purple line in Figure 2(b). This gives TO=85 minutes (down from 158 minutes). The updated TO for Event 2 is still \(\sim\)25 minutes longer than TO for Event 1. We also conduct the Velocity Dispersion Analysis (VDA; see, e.g., Vainio et al., 2013), using all the ERNE/HED channels in which protons were detected above the background (up to the 64 – 80 MeV channel for Event 1 and the 50 – 64 MeV channel for Event 2). We plot the onset times (per visual inspection) against the inverse of speed \((v/c)^{-1}\), which corresponds to the effective energy of the channel (Figure 4). A least-squares fit yields the proton release time of 2014 April 18 13:13 UT\(\pm\)4.7 minutes and 2017 July 14 02:00 UT\(\pm\)5.3 minutes for Event 1 and Event 2, respectively. The associated path lengths come out as 1.32\(\pm\)0.14 AU and 1.44\(\pm\)0.15 AU, which are somewhat longer than the lengths of the nominal Parker Spiral for the observed solar wind speeds but within a range that suggests no major effect of scattering in the interplanetary space. Figure 5 shows the summary of the timeline of each event.
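As a schematic of the VDA described above (with synthetic channel onsets, not the measured ERNE values), onset times are regressed linearly on the inverse speed \((v/c)^{-1}\); the intercept estimates the release time at the Sun and the slope the path length in units of the 1 AU light travel time.

```python
import numpy as np

# Schematic VDA with synthetic onsets (not the measured ERNE values).
# Model: t_onset = t_release + (L / c) * (v/c)^{-1}, so a straight-line
# fit in (v/c)^{-1} gives the release time (intercept) and the path
# length (slope, in units of the 8.3-minute 1 AU light travel time).
AU_LIGHT_MIN = 8.3

inv_beta = np.array([2.9, 3.3, 3.9, 4.7, 5.8])       # (v/c)^{-1} per channel
t_onset = np.array([70.0, 74.5, 82.0, 91.0, 105.5])  # minutes after some t0

slope, intercept = np.polyfit(inv_beta, t_onset, 1)  # least-squares line
path_length_au = slope / AU_LIGHT_MIN
print(f"release time: t0 + {intercept:.1f} min")
print(f"path length : {path_length_au:.2f} AU")
```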
We indicate the proton release times with 8.3 minutes added to account for the 1 AU travel time of light so that we can compare them with other electromagnetic-wave-based observations. From now on, instead of TO, we shall investigate the proton release time corrected for the 1 AU travel time of light, even though we originally aimed at explaining different TO. Moreover, we consider the proton release time with respect to the start time of the type II radio burst rather than the CME launch time. The proton release is delayed by 26\(\pm\)4.7 minutes for Event 1 and 48\(\pm\)5.3 minutes for Event 2. So the difference still exists between Event 1 and Event 2, although not as large as in the original TO. Lastly, the multi-channel data from ERNE let us obtain fluence energy spectra of the two events. We integrate the background-subtracted proton flux in each channel while it is above the background. The spectral fitting gives the power-law index of 3.65 for Event 1 and 4.18 for Event 2. These are close to the value of 3.83, which is the average index of fluence spectra of well-connected SEP events reported by Gopalswamy et al. (2016). The slightly softer index in Event 2 may be an indication of a weaker shock, possibly related to a longer delay of the SEP onset time, but the difference may not be large enough to be conclusive.

## 4 Factors that may control the particle release time

As shown in Section 3, protons are not released immediately after the formation of the shock wave as manifested in type II radio bursts. However, the time difference is longer for Event 2. What is the reason for varying proton release times? In the following, we consider the evolution of the CME-driven shock wave, the CME-CME interaction, and the properties of the active region that produces the CME.

### Evolution of Shock Waves with Height

In this section, we investigate the possibility that particles (protons) are accelerated and released only when the shock wave becomes strong enough. Specifically, we study how the Alfven Mach number (\(M_{A}\)) of the shock wave changes with time. The Alfven Mach number is expressed as \(M_{A}=(v_{s}-v_{sw})/v_{A}\), where \(v_{s}\) is the shock speed, \(v_{sw}\) is the solar wind speed, and \(v_{A}\) is the Alfven speed.

Figure 4: VDA analysis based on ERNE/HED data. The observed onset times are plotted against the inverse velocities \((v/c)^{-1}\) calculated from the effective energies of the individual channels. The vertical axis is the elapsed time since (a) 2014 Apr 18 13:00 and (b) 2017 July 14 2:00, respectively.

For the shock speed, we could simply use the linear or quadratic fits to the height of the leading edge of the CME, as published in the CDAW LASCO CME catalog\({}^{1}\) (Yashiro et al., 2004). However, these fits are made on the height measurements in the whole (C2 and C3) FOV and may be too coarse to discuss the CME kinematics in the height range that likely corresponds to the SEP onset (e.g., below 10 R\({}_{\odot}\)). Here we instead model the height-time profiles of CMEs such that they undergo constant acceleration from the onset to the peak of the SXR flux. This may be justified by the general tendency of CMEs to accelerate in the flare impulsive phase (e.g., Zhang et al., 2004; Temmer et al., 2010). We further assume that CMEs move with a constant speed in the LASCO FOV after the SXR peak.
This modeled CME height-time profile is meant to better reproduce the behavior of the shock speed near the proton release time, and does not necessarily match the information from the CME catalog, including the estimated time of CME launch. The dashed curves in Figures 5(a) and 5(b) show the modeled CME height-time profiles of Event 1 (blue) and Event 2 (green), respectively. For each event the shock speed \(v_{s}\) is calculated using the modeled CME height-time profile (Figure 6(a)). Even though the average speeds in the LASCO FOV are similar in both CMEs, their height-speed profiles are very different. The Event 1 CME (blue) accelerates quickly with a large acceleration of 627 m s\({}^{-2}\) and reaches \(\sim\)1200 km s\({}^{-1}\) before 3 \(R_{\odot}\), while the Event 2 CME (green) accelerates slowly (188 m s\({}^{-2}\)) and reaches \(\sim\)1300 km s\({}^{-1}\) at 6.6 \(R_{\odot}\).

Figure 5: Summary of the timeline for (a) Event 1 and (b) Event 2. SXR (1 – 8 Å) flux observed by the GOES satellite is shown as solid curves (black). The height of the leading edge of the CME is shown as crosses (measurements) and dashed curves (models). The vertical dashed lines in black and purple indicate the onset times of the flare and type II radio burst, respectively. The shaded areas in red indicate the proton release times with uncertainties (described in Section 3). The purple arrow indicates the interval between the type II onset and the estimated proton release from the VDA analysis.

The Alfven speed, \(v_{A}\), depends on the density and magnetic field, neither of which is directly observed, so we must rely on models. In order to address the inherently model-dependent nature of our attempt to calculate the Alfven Mach number of the shock waves, we use the frequency drift of the type II radio burst to constrain the density profile with height. We choose the density model that places the shock wave of the type II radio burst at heights closest to the modeled CME heights at overlapping times. Taking the frequency (fundamental) of the type II radio burst to be the local plasma frequency, we can obtain the density. For both events, the 3-fold (multiplied by 3) Saito model (Saito et al., 1977) yields the heights of the shock wave of the type II burst that best match the modeled CME heights. For the magnetic field, we consider the following three models: \[B_{1}(r) = 2.2r^{-2}, \tag{1}\] \[B_{2}(r) = 6r^{-3}+1.18r^{-2}, \tag{2}\] \[B_{3}(r) = 0.5(r-1)^{-1.5}. \tag{3}\] These are (1) the model assuming magnetic flux conservation (Mann et al., 1999), (2) the model based on measurement of Faraday rotation (Patzold et al., 1987), and (3) the empirical model by Dulk & McLean (1978). Three Alfven speed profiles derived from these three magnetic field models (\(B_{1}\), \(B_{2}\), and \(B_{3}\)) are shown in the red solid (\(v_{A,1}\)), dash-dotted (\(v_{A,2}\)), and dotted lines (\(v_{A,3}\)) in Figure 6(a), respectively. For the solar wind speed, \(v_{sw}\), the model by Sheeley et al. (1997) has been widely used, but it starts only at 4.5 R\({}_{\odot}\). Recently, \(v_{sw}\) closer to the Sun (down to 1.53 R\({}_{\odot}\)) has been obtained by Bemporad et al. (2021). We use the latter model up to the height of 5.07 R\({}_{\odot}\) (where \(v_{sw}\) from the former model becomes larger), and the former model at greater heights. The solar wind speed profile is shown as the black line in Figure 6(a).
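The sketch below illustrates this \(M_{A}\) calculation. The 3-fold Saito density profile and the three field models follow the text, with caveats: the Saito coefficients are quoted from the commonly used equatorial form of Saito et al. (1977), the density-to-mass conversion assumes pure hydrogen, and the shock and solar wind speed profiles are crude placeholders rather than the fitted CME kinematics or the Sheeley/Bemporad models.

```python
import numpy as np

# Illustrative M_A(r) computation with the models quoted in the text.
# r is heliocentric distance in solar radii; the speed profiles are
# placeholders, not the fitted CME kinematics or wind models.
r = np.linspace(1.5, 10.0, 200)

# 3-fold Saito density model [cm^-3] (equatorial form, multiplied by 3).
n_e = 3.0 * (1.36e6 * r**-2.14 + 1.68e8 * r**-6.13)

# Magnetic field models [G], Eqs. (1)-(3).
B1 = 2.2 * r**-2
B2 = 6.0 * r**-3 + 1.18 * r**-2
B3 = 0.5 * (r - 1.0)**-1.5

# Alfven speed v_A = B / sqrt(4 pi rho), converted to km/s; pure-hydrogen
# mass density is a simplification.
m_p = 1.6726e-24                       # proton mass [g]
rho = n_e * m_p                        # mass density [g cm^-3]
v_A = {name: B / np.sqrt(4.0 * np.pi * rho) * 1e-5
       for name, B in [("B1", B1), ("B2", B2), ("B3", B3)]}

v_shock = np.minimum(1200.0, 400.0 + 200.0 * (r - 1.5))  # placeholder [km/s]
v_sw = 100.0 + 30.0 * (r - 1.5)                          # placeholder [km/s]

for name, va in v_A.items():
    M_A = (v_shock - v_sw) / va
    print(name, "M_A at r = 3:", np.interp(3.0, r, M_A).round(2))
```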
We finally calculate three versions of \(M_{A}\), based on the three magnetic field models \(B_{1}\), \(B_{2}\), and \(B_{3}\) that were used to calculate \(v_{A}\). Figure 6(b) shows the evolution of \(M_{A}\) with time for Event 1. The different line types for \(M_{A,1}\), \(M_{A,2}\), and \(M_{A,3}\) distinguish the corresponding magnetic field models, \(B_{1}\), \(B_{2}\), and \(B_{3}\). The vertical lines and the shaded area are identical to those in Figure 5. Figure 6(c) is the same as Figure 6(b) but for Event 2. Note that the reliability of the magnetic field models may be somewhat compromised near the solar surface. For example, the model of \(B_{2}\) was originally calculated only in the range of 2 – 15 \(R_{\odot}\). Accordingly, Figure 6 shows the results only at \(>\) 1.5 \(R_{\odot}\). Despite an apparent dependence of \(M_{A}\) on the assumed magnetic field models, we may understand the proton release times in relation to \(M_{A}\). In both cases, all three magnetic field models yield \(M_{A}\) that increase toward the proton release times. Including errors, \(M_{A}\) reaches 1.6 – 2.6 for Event 1 and 2.0 – 3.0 for Event 2 during the estimated proton release time. Although it is beyond the scope of our work to discuss the critical Mach number (e.g., Bemporad & Mancuso, 2011; Rouillard et al., 2016), \(M_{A}\) in the above ranges may serve as thresholds, above which protons can be accelerated. In Event 1, when the CME ceases to accelerate at \(\sim\)3 \(R_{\odot}\) and \(\sim\)10 minutes after the onset of the type II radio burst, the shock speed already reaches \(\sim\)1200 km s\({}^{-1}\). However, the Alfven Mach number remains low, because of the high Alfven speed due to strong magnetic field at a low altitude. \(M_{A}\) reaches the critical value only \(\sim\)20 minutes after the CME stops accelerating as \(v_{A}\) decreases. In Event 2, on the other hand, the CME continues to accelerate for \(\sim\)50 minutes after the onset of the type II radio burst until it travels to the height of \(\sim\)6.6 \(R_{\odot}\). During the acceleration phase, \(M_{A}\) only slowly increases, until it reaches the threshold as the CME attains the speed of \(\sim\)1200 km s\({}^{-1}\). This may explain why the proton release is delayed more in Event 2 than in Event 1. It also aligns with a longer duration of the flare in Event 2.

### CME-CME Interaction

As an alternative explanation for a later particle release in Event 2, let us assume that the shock is in fact too weak for particle acceleration, irrespective of the analysis given in Section 4.1. Then what distinguishes Event 2 is the CME-CME interaction, which may compensate for the weak shock. It is proposed that when a fast CME catches up with a preceding CME, preconditioning by the preceding CME results in efficient particle acceleration (e.g., Gopalswamy et al., 2002; Li & Zank, 2005; Li et al., 2012). The calculated proton release time is around 02:00 UT (Section 3), which is close to the time the CME in Event 2 caught up with the preceding narrow CME associated with a C3.0 flare in AR 12667 (Figure 3(h)). Note that the radio signatures that may indicate a CME-CME interaction start only around 03:00 UT (Figure 2(b)). However, it is not clearly understood at what time during a CME-CME interaction such signatures appear in radio spectra. The possibility that Event 2 originally produced a weak shock wave with poor acceleration efficiency may be supported by the bandwidth of the type II radio burst. Iwai et al.
(2020) reported a positive correlation between the bandwidth of the type II radio burst and the peak proton flux, and proposed that the bandwidth represents the strength of the shock wave. In our examples, the time-averaged bandwidth for Event 1 was \(>1000\) kHz, wider than that for Event 2 (\(<500\) kHz), suggesting that the shock wave in Event 2 was weaker.

### Properties of the Active Regions

We discuss how different proton release times may be traced back to the properties of the active regions that produced the CMEs. Figure 7 shows H\(\alpha\) images and magnetograms of the active regions that produced the two events (AR 12036 and AR 12665). For each of the regions, the top three rows display H\(\alpha\) images taken at three times (before and around the flare peak and during the decay phase) that are indicated by black dashed lines on the GOES 1 – 8 Å light curves in the bottom panels. The fourth row gives a line-of-sight magnetogram from the Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012) on board SDO, taken in the early phase of the flare (see the black solid line in the bottom row). The H\(\alpha\) data come from the Global Oscillation Network Group (GONG\({}^{2}\); Harvey et al., 1996) for Event 1 and the Solar Dynamics Doppler Imager (SDDI; Ichimoto et al., 2017) installed on the Solar Magnetic Activity Research Telescope (SMART; Ueno et al., 2004) at Hida Observatory for Event 2. Footnote 2: [https://gong.nso.edu](https://gong.nso.edu)

Figure 6: (a): Shock speed for Event 1 (blue, solid) and Event 2 (green, solid) calculated from the modeled height-time profiles. The red lines represent the Alfvén speeds based on the three models of magnetic field. \(v_{sw}\) is shown as the black solid line. (b) and (c): \(M_{A}\) for Event 1 and Event 2, respectively, calculated with all the information presented in (a). As in Figure 5, the dashed lines in black and purple indicate the onset times of the flare and type II radio burst, respectively, and the shaded areas in red the proton release times with uncertainties.

Figure 7: The observations of the solar surface for each event. The top three panels are H\(\alpha\) ground-based observations, and the fourth is the HMI line-of-sight magnetogram. The bottom panel is a 1 – 8 Å light curve of SXR, and the black dashed and solid lines correspond to the observation times of H\(\alpha\) and the magnetic field, respectively. The cyan cross markers indicate the location of the top of the post-flare loop with reference to the flare ribbon and polarity inversion line.

We readily note from the magnetograms that the region for Event 1 is more magnetically complex than the one for Event 2, which is dominated by essentially a simple bipolar topology. This difference is also noted in the pre-flare H\(\alpha\) images (the top row of Figure 7). The complex magnetic field configuration of the region for Event 1 may be reflected also in the complex evolution of the flare ribbons in H\(\alpha\) images as shown in the second and third rows of Figure 7. However, the apparent difference of the complexity of the two regions may not be reflected in basic magnetic parameters from the Space-Weather HMI Active Region Patches (SHARP; Bobra et al., 2014) over several (e.g., 6-hour, 12-hour, 24-hour) intervals preceding the flare onsets. None of them seem to distinguish the two regions in a significant way. Flare ribbons contain additional information about flares.
Concerning our examples, the initial distance of the flare ribbons for Event 1 is shorter (\(\sim\)20 Mm) than that for Event 2 (\(\sim\)50 Mm). This is consistent with the result that flares with widely separated ribbons in the beginning tend to be of long duration (Toriumi et al., 2017). Accordingly, the flare loops are longer in the region for Event 2 than in the region for Event 1, which may translate to a higher initial reconnection point in Event 2. The magnetic field strength near the reconnection point is, therefore, expected to be weaker in Event 2, suggesting that it could not drive the faster ejection near the solar surface. Additional information we can get from the area of flare ribbons is the reconnection flux, which may be related to the photospheric magnetic flux traversed by the flare ribbons (e.g., Forbes and Priest, 1984; Kazachenko et al., 2017). Analyzing the flare ribbons in AIA 1600 Å images, Kazachenko et al. (2017) created a database of the reconnected flux of 3137 flares of class \(\gtrsim\)C1 up to April 2016. The reconnection flux of the flare for Event 1 is 9.44\(\times\)10\({}^{21}\) Mx, according to the database. A new calculation gives 2.92\(\times\)10\({}^{21}\) Mx for Event 2 (M. Kazachenko, 2021, private communication). However, the reconnection rate normalized by the duration of the flare is 3.21\(\times\)10\({}^{18}\) Mx s\({}^{-1}\) in Event 1 and 3.53\(\times\)10\({}^{17}\) Mx s\({}^{-1}\) in Event 2, which is smaller by one order of magnitude.

Lastly, to address the possible difference of the overlying magnetic structure in the regions responsible for Event 1 and Event 2, we calculate the decay index, \(n=-d\ln B/d\ln h\), which shows how quickly the magnetic field weakens with height over the polarity inversion lines that align with the tops of the post-flare loops (cyan marks in the third and fourth rows of Figure 7). The PFSS model\({}^{3}\) is used to calculate the coronal magnetic field. The bottom boundary is the downsized (720\(\times\)360 pixels) standard HMI Carrington synoptic map, embedded with the original HMI magnetogram of an area of (200\({}^{\prime\prime}\))\({}^{2}\) around the core of the active region, which is taken just before the flare. The decay index gives a criterion for torus instability to trigger an eruption (Kliem & Torok, 2006). In Figure 8, the decay index over the height up to 200 Mm is shown for the two regions. We find almost no difference in the so-called critical height (\(h_{crit}\)) at which \(n_{crit}\)=1.5. However, the decay index above 50 Mm tends to be larger in the region for Event 2, meaning the magnetic field decreases more quickly with height. Indeed, \(|B_{r}|\) at the source surface of 2.5 R\({}_{\odot}\), which is the upper boundary of the calculation domain, is smaller in the Event 2 region than in the Event 1 region. We note that even the value in Event 1 is smaller than calculated with any of the magnetic field models that are used in Section 4.1. A similar finding was reported for example by Rouillard et al. (2016), consistent with the smaller open flux at 1 AU as predicted with photospheric magnetograms than actually observed (e.g., Linker et al., 2017).

Figure 8: The decay index vs height for the regions responsible for (a) Event 1 and (b) Event 2. The red line corresponds to the critical height at which the decay index becomes 1.5. The error bars show the standard deviation at each height.
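As a compact numerical version of this diagnostic (with an arbitrary analytic field profile standing in for the PFSS solution), the decay index is evaluated on a height grid and the critical height is read off where \(n\) first reaches 1.5.

```python
import numpy as np

# Decay index n = -dlnB/dlnh on a height grid. The field profile below is
# an arbitrary analytic stand-in for the PFSS external field above the
# polarity inversion line, purely to illustrate the procedure.
h = np.linspace(10.0, 200.0, 400)        # height [Mm]
B = 100.0 / (h + 20.0)**1.8              # placeholder horizontal field [G]

n = -np.gradient(np.log(B), np.log(h))   # decay index

crit = h[np.argmax(n >= 1.5)]            # first height with n >= 1.5
print(f"critical height (n = 1.5): ~{crit:.0f} Mm")
```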
## 5 Summary and Conclusions

In this work, we choose two events that had widely different TO, the time of the SEP onset (found in GOES \(>\)10 MeV proton data) from the onset of the CME, although they occurred at similar longitudes and were associated with CMEs that had similar speeds (Kihara et al., 2020). After reviewing SOHO/ERNE data, we decided to use these data with much lower background and better energy discrimination. The revised TO, or particle (proton) release time (as obtained with a VDA) from the onset of the type II radio burst, shows a smaller difference between the two events. However, the difference of 20 – 25 minutes is still significant, unaccounted for by different path lengths (1.34 AU vs 1.44 AU, see Figure 4) from the VDA. In order to understand the longer delay of the proton release time in Event 2, we focus on how the shock wave grows close to the Sun as characterized by the Alfven Mach number \(M_{A}\) of the shock waves, by more closely examining the height-time profiles of the CMEs than with single fits over the entire FOV covered by LASCO data. Despite strong model dependency of the Alfven speed \(v_{A}\) especially on the magnetic field, \(M_{A}\) keeps rising and reaches certain thresholds around the proton release times. It has been shown with more sophisticated tools (e.g., Rouillard et al., 2016; Kouloumvakos et al., 2019) that protons are released when \(M_{A}\) reaches a critical value. We note that slow acceleration of the CME over a long time while the soft X-ray flux was on the rise is a key to the delayed acceleration/release of protons in Event 2. Another possibility is that the shock wave driven by the CME in Event 2 was intrinsically weak, as suggested by small bandwidths of the type II radio burst (cf., Iwai et al., 2020), not being capable of accelerating protons on its own, but that interaction with the previous CME may have been instrumental in the production of energetic protons (e.g., Gopalswamy et al., 2002; Li & Zank, 2005; Li et al., 2012). The timing of the possible CME-CME interaction is consistent with the proton release as far as LASCO imagery is concerned, but radio signatures come an hour later. This may not be a problem until we better understand at what point during CME-CME interactions we expect to observe the radio signatures. In either case, we look for differences in the properties of the active regions that hosted the CMEs. The region for Event 2 had much simpler magnetic field configurations, consistent with the way flare ribbons developed. The eruption involved a larger volume, producing a flare that lasted for more than a day. Even though it is not straightforward to extract the possible differences of active regions in the forms of the routinely calculated magnetic field properties with HMI data (SHARP; Bobra et al., 2014), reconnection flux from flare ribbons (Kazachenko et al., 2017), or the decay index (Kliem & Torok, 2006), the overall simple magnetic configurations allowed slow but steady acceleration of the CME in Event 2. They should also be conducive to the long-duration flare. Although the energy release was not intense at first, the injection of magnetic energy lasted for a long time. Eventually, a shock wave strong enough to generate SEPs was formed, or a CME fast enough to catch up with the former one was formed. In this paper, we try to show that the acceleration of CMEs and the growth of the Alfven Mach number below \(\sim\)10 R\({}_{\odot}\) may play a significant role in the onset of SEPs.
This needs to be verified in a larger sample of events. To make such an attempt meaningful, it would be vital to characterize the distribution of magnetic field and density in this height range beyond the utilization of simple models, which may be helped by MHD simulations. Transport processes such as cross-field diffusion, which may be at work even in the height range of interest, could affect the onset behaviors of SEP events. To evaluate the effect of such processes, we would need more detailed modeling of the heliospheric magnetic field. In addition, we speculate on the basis of our findings that there may be a connection between the complexity of active regions and the timescales of flares and CMEs, which may more directly affect the temporal characteristics of SEPs. This presents one interesting possibility for a comprehensive explanation of these solar active phenomena. We close with a cautionary note that GOES EPS data may not be suitable for scientific analyses of onset times, in particular for events like our Event 2, where protons increase slowly from a low level. The smaller difference in the proton release time as found using ERNE data may explain only marginal differences in active region properties. We suggest that the past and ongoing results based on GOES EPS data should be calibrated with other data.

This study is based on the discussion at the Coordinated Data Analysis Workshops held in August 2018 and 2019 under the auspices of the Project for Solar-Terrestrial Environment Prediction (PSTEP; Kusano et al., 2021). We thank the reviewer for their helpful comments on the manuscript. This work was supported by JSPS KAKENHI grant No. JP22J11442 (K.K.) and JP21H01131 (A.A.) and also by the joint research project of the Unit of Synergetic Studies for Space, Kyoto University and BroadBand Tower, Inc. (BBT). The work of N.V.N. was supported by NASA grants 80NSSC18K1126 and 80NSSC20K0287.
2308.06671
Law of Balance and Stationary Distribution of Stochastic Gradient Descent
The stochastic gradient descent (SGD) algorithm is the algorithm we use to train neural networks. However, it remains poorly understood how the SGD navigates the highly nonlinear and degenerate loss landscape of a neural network. In this work, we prove that the minibatch noise of SGD regularizes the solution towards a balanced solution whenever the loss function contains a rescaling symmetry. Because the difference between a simple diffusion process and SGD dynamics is the most significant when symmetries are present, our theory implies that the loss function symmetries constitute an essential probe of how SGD works. We then apply this result to derive the stationary distribution of stochastic gradient flow for a diagonal linear network with arbitrary depth and width. The stationary distribution exhibits complicated nonlinear phenomena such as phase transitions, broken ergodicity, and fluctuation inversion. These phenomena are shown to exist uniquely in deep networks, implying a fundamental difference between deep and shallow models.
Liu Ziyin, Hongchao Li, Masahito Ueda
2023-08-13T03:13:03Z
http://arxiv.org/abs/2308.06671v1
# Law of Balance and Stationary Distribution of Stochastic Gradient Descent ###### Abstract The stochastic gradient descent (SGD) algorithm is the algorithm we use to train neural networks. However, it remains poorly understood how the SGD navigates the highly nonlinear and degenerate loss landscape of a neural network. In this work, we prove that the minibatch noise of SGD regularizes the solution towards a balanced solution whenever the loss function contains a rescaling symmetry. Because the difference between a simple diffusion process and SGD dynamics is the most significant when symmetries are present, our theory implies that the loss function symmetries constitute an essential probe of how SGD works. We then apply this result to derive the stationary distribution of stochastic gradient flow for a diagonal linear network with arbitrary depth and width. The stationary distribution exhibits complicated nonlinear phenomena such as phase transitions, broken ergodicity, and fluctuation inversion. These phenomena are shown to exist uniquely in deep networks, implying a fundamental difference between deep and shallow models.

## 1 Introduction

The stochastic gradient descent (SGD) algorithm is defined as \[\Delta\theta_{t}=-\frac{\eta}{S}\sum_{x\in B}\nabla_{\theta}\ell(\theta,x), \tag{1}\] where \(\theta\) is the model parameter, \(\ell(\theta,x)\) is a per-sample loss whose expectation over \(x\) gives the training loss. \(B\) is a randomly sampled minibatch of data points, each independently sampled from the training set, and \(S\) is the minibatch size. The training-set average of \(\ell\) is the training objective \(L(\theta)=\mathbb{E}_{x}\ell(\theta,x)\), where \(\mathbb{E}_{x}\) denotes averaging over the training set. Two aspects of the algorithm make it difficult to understand: (1) its dynamics is discrete in time, and (2) the randomness is highly nonlinear and parameter-dependent. This work relies on the continuous-time approximation and deals with the second aspect. In natural and social sciences, the most important object of study of a stochastic system is its stationary distribution, which is often found to offer fundamental insights into understanding a given stochastic process [37, 31]. Arguably, a great deal of insight into SGD can be obtained if we have an analytical understanding of the stationary distribution, which remains unknown until today. Predominantly, existing works study the dynamics and stationary properties of SGD in the case of a strongly convex loss function [43, 44, 20, 46, 27, 52, 21]. The works that touch on the nonlinear aspects of the loss function rely heavily on the local approximations of the stationary distribution of SGD close to a local minimum, often with additional unrealistic assumptions about the noise. For example, using a saddle point expansion and assuming that the noise is parameter-independent, Refs. [23, 44, 20] showed that the stationary distribution of SGD is exponential. Taking partial parameter-dependence into account and near an interpolation minimum, Ref. [27] showed that the stationary distribution is power-law like and proportional to \(L(\theta)^{-c_{0}}\) for some constant \(c_{0}\). However, the stationary distribution of SGD is unknown when the loss function is beyond quadratic and high-dimensional.
Since the stationary distribution of SGD is unknown, we will compare our results with the most naive theory one can construct for SGD, a continuous-time Langevin equation with a constant noise level: \[\dot{\theta}(t)=-\eta\nabla_{\theta}L(\theta)+\sqrt{2T_{0}}\epsilon(t), \tag{2}\] where \(\epsilon\) is a random time-dependent noise with zero mean and \(\mathbb{E}[\epsilon(t)\epsilon(t^{\prime})^{T}]=\eta\delta(t-t^{\prime})I\) with \(I\) being the identity operator. Here, the naive theory relies on the assumption that one can find a constant scalar \(T_{0}\) such that Eq. (2) closely models (1), at least after some level of coarse-graining. Let us examine some of the predictions of this model to understand when and why it goes wrong. There are two important predictions of this model. The first is that the stationary distribution of SGD is a Gibbs distribution with temperature \(T_{0}\): \(p(\theta)\propto\exp[-L(\theta)/T_{0}]\). This implies that the maximum likelihood estimator of \(\theta\) under SGD is the same as the global minimizer of \(L(\theta)\): \(\arg\max p(\theta)=\arg\min L(\theta)\). This relation holds for the local minima as well: every local minimum of \(L\) corresponds to a local maximum of \(p\). These properties are often required in the popular argument that SGD approximates Bayesian inference [23, 25]. Another implication is ergodicity [39]: any state with the same energy will have an equal probability of being accessed. The second is the dynamical implication: SGD will _diffuse_. If there is a degenerate direction in the loss function, SGD will diffuse along that direction.1 Footnote 1: Note that this can also be seen as a dynamical interpretation of the ergodicity. However, these predictions of the Langevin model are not difficult to reject. Let us consider a simple two-layer network with the loss function: \(\ell(u,w,x)=(uwx-y(x))^{2}\). Because of the rescaling symmetry, a valley of degenerate solutions exists at \(uw=c_{0}\). Under the simple Langevin model, SGD diverges to infinity due to diffusion. One can also see this from a static perspective. All points on the line \(uw=c_{0}\) must have the same probability at stationarity, but such a distribution does not exist because it is not normalizable. This means that the Langevin model of SGD diverges for this loss function. Does this agree with the empirical observation? Certainly not.2 See Fig. 1. We see that contrary to the prediction of the Langevin model, \(|u^{2}-w^{2}|\) converges to zero under SGD. Under GD, this quantity is conserved during training [7]. Only the Gaussian GD obeys the prediction of the Langevin model, which is expected. This sharp contrast shows that the SGD dynamics is quite special, and a naive theoretical model can be very far from the truth in understanding its behavior. There is one more lesson to be learned. The fact that the Langevin model disagrees the most with the experiments when symmetry conditions are present suggests that the symmetry conditions are crucial tools to probe and understand the nature of the SGD noise, which is the main topic of our theory. Footnote 2: In fact, had it been the case, no linear network or ReLU network could be trained with SGD.

## 2 Law of Balance

Now, we consider the actual continuous-time limit of SGD [16, 17, 19, 33, 9, 13]: \[d\theta=-\nabla_{\theta}Ldt+\sqrt{TC(\theta)}dW_{t}, \tag{3}\]
where \(dW_{t}\) is a stochastic process satisfying \(dW_{t}\sim N(0,Idt)\) and \(\mathbb{E}[dW_{t}dW_{t^{\prime}}^{T}]=\delta(t-t^{\prime})I\), and \(T=\eta/S\). Evidently, \(T\) gives the average noise level in the dynamics. Previous works have suggested that the ratio \(\eta/S:=T\) is the main factor determining the behavior of SGD, and a higher \(T\) often leads to better generalization performance [32, 20, 50]. The crucial difference between Eq. (3) and (2) is that in (3), the noise covariance \(C(\theta)\) is parameter-dependent and, in general, low-rank when symmetries exist.

Figure 1: SGD converges to a balanced solution. **Left**: The quantity \(u^{2}-w^{2}\) is conserved for GD without noise, is divergent for GD with an isotropic Gaussian noise, which simulates the simple Langevin model, and decays to zero for SGD, making a sharp and dramatic contrast. **Right**: illustration of the three types of dynamics. Gradient descent (GD) moves along the conservation line due to the conservation law: \(u^{2}(t)-w^{2}(t)=u^{2}(0)-w^{2}(0)\). GD with an isotropic Gaussian noise expands and diverges along the flat direction of the minimum valley. The actual SGD oscillates along a balanced solution.

Due to standard architecture designs, a type of invariance, the rescaling symmetry, often appears in the loss function and exists for all samplings of minibatches. The per-sample loss \(\ell\) is said to have the rescaling symmetry for all \(x\) if \(\ell(u,w,x)=\ell\left(\lambda u,w/\lambda,x\right)\) for an arbitrary scalar \(\lambda\neq 0\). Note that this implies that the expected loss \(L\) also has the same symmetry. This type of symmetry appears in many scenarios in deep learning. For example, it appears in any neural network with the ReLU activation. It also appears in the self-attention of transformers, often in the form of key and query matrices [38]. When this symmetry exists between \(u\) and \(w\), one can prove the following result, which we refer to as the law of balance.

**Theorem 1**.: (Law of balance.) _Let \(u\) and \(w\) be vectors of arbitrary dimensions. Let \(\ell(u,w,x)\) satisfy \(\ell(u,w,x)=\ell(\lambda u,w/\lambda,x)\) for arbitrary \(x\) and \(\lambda\neq 0\). Then,_ \[\frac{d}{dt}(\|u\|^{2}-\|w\|^{2})=-T(u^{T}C_{1}u-w^{T}C_{2}w), \tag{4}\] _where \(C_{1}=\mathbb{E}[A^{T}A]-\mathbb{E}[A^{T}]\mathbb{E}[A]\), \(C_{2}=\mathbb{E}[AA^{T}]-\mathbb{E}[A]\mathbb{E}[A^{T}]\) and \(A_{ki}=\frac{\partial\tilde{\ell}}{\partial(u_{i}w_{k})}\) with \(\tilde{\ell}(u_{i}w_{k},x)\equiv\ell(u_{i},w_{k},x)\)._

Our result holds in a stronger version if we consider the effect of a finite step size by using the modified loss function (see Appendix A.7) [2, 34]. For common problems, \(C_{1}\) and \(C_{2}\) are positive definite, and this theorem implies that the norms of \(u\) and \(w\) will be approximately balanced. To see this, we can simplify the expression to \[-T(\lambda_{1M}\|u\|^{2}-\lambda_{2m}\|w\|^{2})\leq\frac{d}{dt}(\|u\|^{2}-\|w\|^{2})\leq-T(\lambda_{1m}\|u\|^{2}-\lambda_{2M}\|w\|^{2}), \tag{5}\] where \(\lambda_{1m(2m)}\) and \(\lambda_{1M(2M)}\) denote the minimal and maximal eigenvalues of the matrix \(C_{1(2)}\), respectively. In the long-time limit, the value of \(\|u\|^{2}/\|w\|^{2}\) is restricted by \[\frac{\lambda_{2m}}{\lambda_{1M}}\leq\frac{\|u\|^{2}}{\|w\|^{2}}\leq\frac{\lambda_{2M}}{\lambda_{1m}}, \tag{6}\] which implies that the stationary dynamics of the parameters \(u,w\) is constrained to a bounded subspace of the unbounded degenerate local minimum valley.
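The contrast of Figure 1 is easy to reproduce numerically. The sketch below runs the three dynamics on the per-sample loss \(\ell(u,w,x)=(uwx-y)^{2}\); the synthetic dataset and all hyperparameter choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
x = rng.normal(size=n)
y = 2.0 * x + 1.0 * rng.normal(size=n)       # targets: uw should approach 2

def batch_grads(u, w, idx):
    # Gradients of l = (u*w*x - y)^2 averaged over the batch idx.
    r = u * w * x[idx] - y[idx]
    return 2 * np.mean(r * w * x[idx]), 2 * np.mean(r * u * x[idx])

def run(mode, eta=0.02, S=2, T0=0.005, steps=20000):
    u, w = 1.5, 0.2                          # imbalanced start: u^2 - w^2 = 2.21
    for _ in range(steps):
        idx = rng.integers(0, n, size=S) if mode == "SGD" else np.arange(n)
        gu, gw = batch_grads(u, w, idx)
        u, w = u - eta * gu, w - eta * gw
        if mode == "Langevin":               # GD plus constant isotropic noise, Eq. (2)
            u += np.sqrt(2 * T0 * eta) * rng.normal()
            w += np.sqrt(2 * T0 * eta) * rng.normal()
    return u * u - w * w

for mode in ("GD", "Langevin", "SGD"):
    print(f"{mode:9s} u^2 - w^2 = {run(mode):+.4f}")
```

Under these assumptions, GD leaves \(u^{2}-w^{2}\) approximately at its initial value, the constant-noise (Langevin) run lets it wander along the valley, and SGD drives it towards zero, exactly the behavior that the law of balance predicts.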
Conventional analysis shows that the difference between SGD and GD is of order \(T^{2}\) per unit time, and it is thus often believed that SGD can be understood perturbatively through GD [13]. However, the law of balance implies that the difference between GD and SGD is not perturbative. As long as there is any level of noise, the difference between GD and SGD at stationarity is \(O(1)\). This theorem has an important implication: the noise in SGD creates a qualitative difference between SGD and GD, and we must study SGD noise in its own right. This theorem also implies the loss of ergodicity, an important phenomenon in nonequilibrium physics [29, 35, 24, 36], because not all solutions with the same training loss will be accessed by SGD with equal probability.

The theorem greatly simplifies when both \(u\) and \(w\) are one-dimensional.

**Corollary 1**.: _If \(u,w\in\mathbb{R}\), then, \(\frac{d}{dt}|u^{2}-w^{2}|=-TC_{0}|u^{2}-w^{2}|\), where \(C_{0}=\mathrm{Var}[\frac{\partial\ell}{\partial(uw)}]\)._

Before we apply the theorem to study the stationary distributions, we stress the importance of this balance condition. This relation is closely related to Noether's theorem [26, 1, 22]. If there is no weight decay or stochasticity in training, the quantity \(\|u\|^{2}-\|w\|^{2}\) is a conserved quantity under gradient flow [7, 15], as is evident by taking the infinite-\(S\) limit. The fact that it monotonically decays to zero at a finite \(T\) may be a manifestation of some underlying fundamental mechanism. A more recent result in Ref. [40] showed that for a two-layer linear network, the norms of the two layers stay within a distance of order \(O(\eta^{-1})\), suggesting that the norms of the two layers are balanced. Our result agrees with Ref. [40] in this case, but it is far stronger: it is nonperturbative, relies only on the rescaling symmetry, and is independent of the loss function or the architecture of the model.

Example: two-layer linear network.It is instructive to illustrate the application of the law to a two-layer linear network, the simplest model that obeys the law. Let \(\theta=(w,u)\) denote the set of trainable parameters; the per-sample loss is \(\ell(\theta,x)=\left(\sum_{i}^{d}u_{i}w_{i}x-y\right)^{2}+\gamma\|\theta\|^{2}\). Here, \(d\) is the width of the model, \(\|\theta\|^{2}\) is the common \(L_{2}\) regularization term that encourages the learned model to have a small norm, \(\gamma\geq 0\) is the strength of regularization, and \(\mathbb{E}_{x}\) denotes averaging over the training set, which could be a continuous distribution or a discrete sum of delta distributions. It will also be convenient to define the shorthand \(v:=\sum_{i}^{d}u_{i}w_{i}\). The distribution of \(v\) is said to be the distribution of the "model." Applying the law of balance, we obtain that \[\frac{d}{dt}(u_{i}^{2}-w_{i}^{2})=-4[T(\alpha_{1}v^{2}-2\alpha_{2}v+\alpha_{3})+\gamma](u_{i}^{2}-w_{i}^{2}), \tag{7}\] where we have introduced the parameters \[\begin{cases}\alpha_{1}\coloneqq\mathrm{Var}[x^{2}],\\ \alpha_{2}\coloneqq\mathbb{E}[x^{3}y]-\mathbb{E}[x^{2}]\mathbb{E}[xy],\\ \alpha_{3}\coloneqq\mathrm{Var}[xy].\end{cases} \tag{8}\] When \(\alpha_{1}\alpha_{3}-\alpha_{2}^{2}>0\) or \(\gamma>0\), the time evolution of \(|u_{i}^{2}-w_{i}^{2}|\) can be upper-bounded by an exponentially decreasing function in time: \(|u_{i}^{2}-w_{i}^{2}|(t)<|u_{i}^{2}-w_{i}^{2}|(0)\exp\left(-4T(\alpha_{1}\alpha_{3}-\alpha_{2}^{2})t/\alpha_{1}-4\gamma t\right)\to 0\). Namely, the quantity \((u_{i}^{2}-w_{i}^{2})\) decays to \(0\) with probability \(1\). We thus have \(u_{i}^{2}=w_{i}^{2}\) for all \(i\in\{1,\cdots,d\}\) at stationarity, in agreement with what we see in Figure 1.
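Corollary 1 can be checked numerically in a rough way: for scalar \(u,w\), the measured exponential decay rate of \(|u^{2}-w^{2}|\) should be close to \(TC_{0}\) with \(C_{0}=\mathrm{Var}[\partial\ell/\partial(uw)]\) estimated on the training set. The dataset and constants below are again illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = x + rng.normal(size=n)                   # y = kx + eps with k = 1, sigma = 1

eta, S = 0.02, 4
T = eta / S

u, w, trace = 1.2, 0.4, []
for _ in range(5000):
    idx = rng.integers(0, n, size=S)
    r = u * w * x[idx] - y[idx]
    gu, gw = 2 * np.mean(r * w * x[idx]), 2 * np.mean(r * u * x[idx])
    u, w = u - eta * gu, w - eta * gw
    trace.append(abs(u * u - w * w))

# Corollary 1: d/dt log|u^2 - w^2| = -T * C0, with C0 evaluated at the fitted v = u*w.
v = u * w
C0 = np.var(2 * (v * x**2 - x * y))          # Var of dl/d(uw) over the training set
t = eta * np.arange(1, len(trace) + 1)       # continuous time = steps * eta
slope = np.polyfit(t, np.log(np.array(trace) + 1e-12), 1)[0]
print(f"measured decay rate {-slope:.3f} vs predicted T*C0 {T * C0:.3f}")
```

The two numbers agree only up to transient and sampling effects, but the exponential decay itself, which is absent for GD, is unmistakable.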
## 3 Stationary Distribution of SGD

As an important application of the law of balance, we solve the stationary distribution of SGD for a deep diagonal linear network. While linear networks are limited in expressivity, their loss landscape and dynamics are highly nonlinear, and they are regarded as a minimal model of nonlinear neural networks [14, 18, 48, 41].

### Depth-\(0\) Case

Let us first derive the stationary distribution of a one-dimensional linear regressor, which will be a basis for comparison to help us understand what is unique about having a "depth" in deep learning. The per-sample loss is \(\ell(x,v)=(vx-y)^{2}+\gamma v^{2}\), for which the SGD dynamics is \(dv=-2(\beta_{1}v-\beta_{2}+\gamma v)dt+\sqrt{TC(v)}dW(t)\), where we have defined \[\begin{cases}\beta_{1}:=\mathbb{E}[x^{2}],\\ \beta_{2}:=\mathbb{E}[xy].\end{cases} \tag{9}\] Note that the closed-form solution of linear regression gives the global minimizer of the loss function: \(v^{*}=\beta_{2}/\beta_{1}\). The gradient variance is also not trivial: \(C(v)\coloneqq\mathrm{Var}[\partial\ell(v,x)/\partial v]=4(\alpha_{1}v^{2}-2\alpha_{2}v+\alpha_{3})\). Note that the loss landscape \(L\) only depends on \(\beta_{1}\) and \(\beta_{2}\), and the gradient noise only depends on \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\). These relations imply that \(C\) can be quite independent of \(L\), contrary to popular beliefs in the literature [27, 23]. It is thus reasonable to call \(\beta\) the landscape parameters and \(\alpha\) the noise parameters. We will see that both \(\beta\) and \(\alpha\) are important parameters appearing in all stationary distributions we derive, implying that the stationary distributions of SGD are strongly dependent on the data.

Another important quantity is \(\Delta\geq 0\), which is proportional to the minimal level of noise on the landscape, \(\min_{v}C(v)=4\Delta/\alpha_{1}\). For all the examples in this work, \[\Delta=\mathrm{Var}[x^{2}]\,\mathrm{Var}[xy]-\mathrm{cov}^{2}(x^{2},xy)=\alpha_{1}\alpha_{3}-\alpha_{2}^{2}. \tag{10}\] When is \(\Delta\) zero? It happens when, for all samples of \((x,y)\), \(xy+c=kx^{2}\) for some constants \(k\) and \(c\). We focus on the case \(\Delta>0\) in the main text, which is most likely the case for practical situations. The other cases are dealt with in Section A. For \(\Delta>0\), the stationary distribution for linear regression is found to be \[p(v)\propto\left(\alpha_{1}v^{2}-2\alpha_{2}v+\alpha_{3}\right)^{-1-\frac{\beta_{1}^{\prime}}{2T\alpha_{1}}}\exp\left[-\frac{1}{T}\frac{\alpha_{2}\beta_{1}^{\prime}-\alpha_{1}\beta_{2}}{\alpha_{1}\sqrt{\Delta}}\arctan\left(\frac{\alpha_{1}v-\alpha_{2}}{\sqrt{\Delta}}\right)\right], \tag{11}\] where \(\beta_{1}^{\prime}\coloneqq\beta_{1}+\gamma\) absorbs the weight decay. This is roughly in agreement with the result in Ref. [27]. Two notable features exist for this distribution: (1) the power exponent of the tail of the distribution depends on the learning rate and batch size, and (2) the integral of \(p(v)\) converges for an arbitrary learning rate. On the one hand, this implies that increasing the learning rate alone cannot introduce new phases of learning to a linear regression; on the other hand, it implies that the expected error diverges as one increases the learning rate (or the feature variation), which happens at \(T=\beta_{1}^{\prime}/\alpha_{1}\).
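As a sanity check on Eq. (11), one can run SGD on the depth-0 loss with \(\gamma=0\) (so that \(\beta_{1}^{\prime}=\beta_{1}\)) and compare the empirical distribution of a long run against the analytic density. The sketch below, with an illustrative synthetic dataset, compares the two means:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

b1, b2 = np.mean(x**2), np.mean(x * y)                  # landscape parameters
a1 = np.var(x**2)                                       # noise parameters
a2 = np.mean(x**3 * y) - np.mean(x**2) * np.mean(x * y)
a3 = np.var(x * y)
Delta = a1 * a3 - a2**2

eta, S = 0.05, 4
T = eta / S

v, samples = 0.0, []                                    # long SGD run, gamma = 0
for step in range(300000):
    idx = rng.integers(0, n, size=S)
    v -= eta * 2 * np.mean((v * x[idx] - y[idx]) * x[idx])
    if step > 30000:
        samples.append(v)
samples = np.array(samples)

# Unnormalized log-density of Eq. (11) with beta1' = b1 (no weight decay).
grid = np.linspace(samples.min(), samples.max(), 4000)
quad = a1 * grid**2 - 2 * a2 * grid + a3
logp = (-1 - b1 / (2 * T * a1)) * np.log(quad) \
       - (a2 * b1 - a1 * b2) / (T * a1 * np.sqrt(Delta)) \
       * np.arctan((a1 * grid - a2) / np.sqrt(Delta))
p = np.exp(logp - logp.max())
p /= p.sum() * (grid[1] - grid[0])                      # normalize on the grid
print("empirical mean:", samples.mean())
print("analytic mean: ", np.sum(grid * p) * (grid[1] - grid[0]))
```

The two means should agree closely under the stated assumptions; the match is a direct, if crude, verification of the depth-0 stationary distribution.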
We will see that deeper models differ from the single-layer model in these two crucial aspects.

### Deep Diagonal Networks

Now, we consider a diagonal deep linear network, whose loss function can be written as \[\ell=\left[\sum_{i}^{d_{0}}\left(\prod_{k=0}^{D}u_{i}^{(k)}\right)x-y\right]^{2}, \tag{12}\] where \(D\) is the depth and \(d_{0}\) is the width. When the width \(d_{0}=1\), the law of balance is sufficient to solve the model. When \(d_{0}>1\), we need to eliminate additional degrees of freedom. Many recent works study the properties of diagonal linear networks, which have been found to approximate the dynamics of real networks well [30, 28, 3, 8]. We introduce \(v_{i}\coloneqq\prod_{k=0}^{D}u_{i}^{(k)}\), so that \(v=\sum_{i}v_{i}\); we call \(v_{i}\) a "subnetwork" and \(v\) the "model." The following theorem shows that the dynamics of this model can be reduced to a one-dimensional form.

**Theorem 2**.: _For all \(i\neq j\), one (or more) of the following conditions holds for all trajectories at stationarity:_

1. _\(v_{i}=0\), or \(v_{j}=0\), or \(L(\theta)=0\);_
2. _\(\operatorname{sgn}(v_{i})=\operatorname{sgn}(v_{j})\). In addition,_
   1. _if \(D=1\), \(\log|v_{i}|-\log|v_{j}|=c_{0}\) for a constant \(c_{0}\);_
   2. _if \(D>1\), \(|v_{i}|^{2}-|v_{j}|^{2}=0\)._

This theorem contains many interesting aspects. First of all, the three situations in item 1 directly tell us the distribution of \(v\), which is the quantity we ultimately care about.3 This result implies that if we want to understand the stationary distribution of SGD, we only need to solve the case of item 2. Once the parameters enter the condition of item 2, item 2 will continue to hold with probability 1 for the rest of the trajectory. The second aspect is that item 2 of the theorem implies that all the \(v_{i}\) of the model must be of the same sign for any network with \(D\geq 1\). Namely, no subnetwork of the original network can learn an incorrect sign. This is dramatically different from the case of \(D=0\). We will discuss this point in more detail below. The third interesting aspect of the theorem is that it implies that the dynamics of SGD is qualitatively different for different depths of the model. In particular, \(D=1\) and \(D>1\) have entirely different dynamics. For \(D=1\), the ratio between every pair of \(v_{i}\) and \(v_{j}\) is a conserved quantity. In sharp contrast, for \(D>1\), the distance between different \(v_{i}\) is no longer conserved but decays to zero. Therefore, a new balancing condition emerges as we increase the depth. This qualitative distinction also corroborates the discovery in Refs. [48] and [51], where \(D=1\) models are found to be qualitatively different from models with \(D>1\).

Footnote 3: \(L\to 0\) is only possible when \(\Delta=0\) _and_ \(v=\beta_{2}/\beta_{1}\).

With this theorem, we are now ready to solve for the stationary distribution. It suffices to condition on the event that \(v_{i}\) does not converge to zero. Let us suppose that there are \(d\) nonzero \(v_{i}\) that obey item 2 of Theorem 2; \(d\) can be seen as an effective width of the model.
We stress that the effective width depends on the initialization and can be arbitrary.4 Therefore, we condition on a fixed value of \(d\) to solve for the stationary distribution of \(v\) (Appendix A):

Footnote 4: One can systematically initialize the parameters in a way that \(d\) takes any desired value between \(1\) and \(d_{0}\); for example, one way to achieve this is to initialize on the stationary conditions at the desired value of \(d\).

\[p_{\pm}\big{(}|v|\big{)}\propto\frac{1}{|v|^{3(1-1/(D+1))}(\alpha_{1}|v|^{2}\mp 2\alpha_{2}|v|+\alpha_{3})}\exp\left(-\frac{1}{T}\int_{0}^{|v|}d|v|\frac{d^{1-2/(D+1)}(\beta_{1}|v|\mp\beta_{2})}{(D+1)|v|^{2D/(D+1)}(\alpha_{1}|v|^{2}\mp 2\alpha_{2}|v|+\alpha_{3})}\right), \tag{13}\] where \(p_{-}\) is the distribution of \(v\) on \((-\infty,0)\) and \(p_{+}\) is that on \((0,\infty)\). Next, we analyze this distribution in detail. Since the result is symmetric in the sign of \(\beta_{2}=\mathbb{E}[xy]\), we assume that \(\mathbb{E}[xy]>0\) from now on.

#### 3.2.1 Depth-\(1\) Nets

We focus on the case \(\gamma=0\).5 The distribution of \(v\) is \[p_{\pm}\big{(}|v|\big{)}\propto\frac{|v|^{\pm\beta_{2}/2\alpha_{3}T-3/2}}{(\alpha_{1}|v|^{2}\mp 2\alpha_{2}|v|+\alpha_{3})^{1\pm\beta_{2}/4T\alpha_{3}}}\exp\left(-\frac{1}{2T}\frac{\alpha_{3}\beta_{1}-\alpha_{2}\beta_{2}}{\alpha_{3}\sqrt{\Delta}}\arctan\frac{\alpha_{1}|v|\mp\alpha_{2}}{\sqrt{\Delta}}\right). \tag{14}\]

Footnote 5: When weight decay is present, the stationary distribution is the same, except that one needs to replace \(\beta_{2}\) with \(\beta_{2}-\gamma\). Other cases are also studied in detail in Appendix A and listed in Table 1.

This measure is worth a close examination. First, the exponential term is upper and lower bounded and well-behaved in all situations. In contrast, the polynomial term becomes dominant both at infinity and close to zero. When \(v<0\), the distribution is a delta function at zero: \(p(v)=\delta(v)\). To see this, note that the term \(v^{-\beta_{2}/2\alpha_{3}T-3/2}\) integrates to give \(v^{-\beta_{2}/2\alpha_{3}T-1/2}\) close to the origin, which is infinite. Outside the origin, the integral is finite. This signals that the only possible stationary distribution has zero measure for \(v\neq 0\). The stationary distribution is thus a delta distribution, meaning that if \(x\) and \(y\) are positively correlated, the learned subnets \(v_{i}\) can never be negative, no matter the initial configuration.

For \(v>0\), the distribution is nontrivial. Close to \(v=0\), the distribution is dominated by \(v^{\beta_{2}/2\alpha_{3}T-3/2}\), which integrates to \(v^{\beta_{2}/2\alpha_{3}T-1/2}\). It is only finite below a critical \(T_{c}=\beta_{2}/\alpha_{3}\). This is a phase-transition-like behavior. As \(T\to(\beta_{2}/\alpha_{3})_{-}\), the normalization integral diverges at the origin, and the distribution tends to a delta function at zero. Namely, if \(T>T_{c}\), we have \(u_{i}=w_{i}=0\) for all \(i\) with probability \(1\), and no learning can happen. If \(T<T_{c}\), the stationary distribution has a finite variance, and learning may happen. In the more general setting, where weight decay is present, this critical \(T\) shifts to \[T_{c}=\frac{\beta_{2}-\gamma}{\alpha_{3}}. \tag{15}\] When \(T=0\), the phase transition occurs at \(\beta_{2}=\gamma\), in agreement with the critical point identified in [51]. This critical learning rate also agrees with the discrete-time analysis performed in Refs. [49, 47] and with the approximate continuous-time analysis in Ref. [4].
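Since \(\beta_{2}\) and \(\alpha_{3}\) are plain moments of the data, the critical temperature of Eq. (15) can be estimated directly from a training set. The helper below is a hypothetical illustration (the function name and the synthetic data are assumptions, not from the paper):

```python
import numpy as np

def critical_T(x, y, gamma=0.0):
    """T_c = (beta2 - gamma) / alpha3 of Eq. (15): for eta/S above this value,
    the depth-1 stationary distribution collapses onto u = w = 0."""
    beta2 = np.mean(x * y)        # E[xy]
    alpha3 = np.var(x * y)        # Var[xy]
    return (beta2 - gamma) / alpha3

rng = np.random.default_rng(4)
x = rng.normal(size=10_000)
for sigma in (0.2, 1.0, 3.0):     # y = x + sigma * eps
    y = x + sigma * rng.normal(size=10_000)
    print(f"sigma = {sigma}: T_c ~ {critical_T(x, y):.3f}")
```

Consistent with Eq. (15), noisier targets increase \(\alpha_{3}\) and therefore lower \(T_{c}\): the collapse to the sparse solution sets in at a smaller \(\eta/S\).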
See Figure 2 for illustrations of the distribution across different values of \(T\). We also compare with the stationary distribution of a depth-0 model.

Figure 2: Stationary distributions of SGD for a single linear regression (\(D=0\)) and a two-layer network (\(D=1\)) across different \(T=\eta/S\): \(T=0.05\) (**Left**) and \(T=0.5\) (**Mid**). We see that for \(D=1\), the stationary distribution is strongly affected by the choice of the learning rate. In contrast, for \(D=0\), the stationary distribution is always centered at the global minimizer of the loss function, and the choice of the learning rate only affects the thickness of the tail. **Right**: the stationary distribution of a one-layer tanh model, \(f(x)=\tanh(vx)\) (\(D=0\)), and a two-layer tanh model, \(f(x)=w\tanh(ux)\) (\(D=1\)). For \(D=1\), we define \(v:=wu\). The vertical line shows the ground truth. The deeper model never learns the wrong sign of \(wu\), whereas the shallow model can learn the wrong one.

Two characteristics of the two-layer model appear rather striking: (1) the solution becomes a delta distribution at the sparse solution \(u=w=0\) at a large learning rate; (2) the two-layer model never learns the incorrect sign (\(v\) is always non-negative). See Figure 2. Therefore, training with SGD on deeper models simultaneously has two advantages: (1) a generalization advantage, such that a sparse solution is favored when the underlying data correlation is weak; (2) an optimization advantage, such that the training loss interpolates between that of the global minimizer and the sparse saddle and is well-bounded (whereas a depth-0 model can have an arbitrarily bad objective value at a large learning rate).

Another exotic phenomenon implied by the result is what we call the "fluctuation inversion." Naively, the variance of the model parameters should increase as we increase \(T\), the noise level in SGD. However, for the distribution we derived, the variances of \(v\) and \(u\) both decrease to zero as we increase \(T\): injecting noise makes the model fluctuation vanish. We discuss more about this "fluctuation inversion" in the next section.

Also, while there is no other phase-transition behavior below \(T_{c}\), there is still an interesting and practically relevant crossover behavior in the distribution of the parameters as we change the learning rate. When we train a model, we often run SGD only once or a few times. When we do this, the most likely parameter we obtain is given by the maximum likelihood estimator of the distribution, \(\hat{v}:=\arg\max p(v)\). Understanding how \(\hat{v}(T)\) changes as a function of \(T\) is therefore crucial. This quantity also exhibits nontrivial crossover behaviors at critical values of \(T\). When \(T<T_{c}\), a nonzero maximizer of \(p(v)\) must satisfy \[v^{*}=-\frac{\beta_{1}-10\alpha_{2}T-\sqrt{(\beta_{1}-10\alpha_{2}T)^{2}+28\alpha_{1}T(\beta_{2}-3\alpha_{3}T)}}{14\alpha_{1}T}. \tag{16}\] The existence of this solution is nontrivial, and we analyze it in Appendix A.5. When \(T\to 0\), a solution always exists and is given by \(v=\beta_{2}/\beta_{1}\), which does not depend on the learning rate or the noise \(C\). Note that \(\beta_{2}/\beta_{1}\) is also the minimum point of \(L(u_{i},w_{i})\). This means that SGD is a consistent estimator of the local minima in deep learning only in the vanishing learning rate limit. How biased is SGD at a finite learning rate? Two limits can be computed.
For a small learning rate, the leading-order correction to the solution is \(v=\frac{\beta_{2}}{\beta_{1}}+\left(\frac{10\alpha_{2}\beta_{2}}{\beta_{1}^{2}}-\frac{7\alpha_{1}\beta_{2}^{2}}{\beta_{1}^{3}}-\frac{3\alpha_{3}}{\beta_{1}}\right)T\). This implies that the common Bayesian analysis that relies on a Laplace expansion of the loss fluctuation around a local minimum is improper. The fact that the stationary distribution of SGD is very far away from the Bayesian posterior also implies that SGD is a good Bayesian sampler only at a small learning rate.

It is instructive to consider an example of a structured dataset: \(y=kx+\epsilon\), where \(x\sim\mathcal{N}(0,1)\) and the noise \(\epsilon\) obeys \(\epsilon\sim\mathcal{N}(0,\sigma^{2})\). We let \(\gamma=0\) for simplicity. If \(\sigma^{2}>\frac{8}{21}k^{2}\), there always exists a transitional learning rate: \(T^{*}=\frac{4k+\sqrt{42}\sigma}{4(21\sigma^{2}-8k^{2})}\).6 Obviously, \(T_{c}/3<T^{*}\). One can characterize the learning of SGD by comparing \(T\) with \(T_{c}\) and \(T^{*}\). For this simple example, SGD can be classified into roughly five different regimes. See Figure 3.

Footnote 6: We say "transitional" to indicate that it is different from the critical learning rate.

### Power-Law Tail of Deeper Models

An interesting aspect of the depth-1 model is that its distribution is independent of the width \(d\) of the model. This is not true for a deep model, as seen from Eq. (13). The \(d\)-dependent term vanishes only if \(D=1\). Another intriguing aspect of the depth-1 distribution is that its tail is independent of any hyperparameter of the problem, dramatically different from the linear regression case. This is true for deeper models as well. Since \(d\) only affects the non-polynomial part of the distribution, the stationary distribution scales as \(p(v)\propto\frac{1}{v^{3(1-1/(D+1))}(\alpha_{1}v^{2}-2\alpha_{2}v+\alpha_{3})}\). Hence, when \(v\to\infty\), the scaling behaviour is \(v^{-5+3/(D+1)}\). The tail gets monotonically thinner as one increases the depth. For \(D=1\), the exponent is \(7/2\); an infinite-depth network has an exponent of \(5\). Therefore, the tail of the model distribution only depends on the depth and is independent of the data or details of training, unlike the depth-0 model.

In addition, due to the decay \(p(v)\propto v^{-5+3/(D+1)}\) for \(v\to\infty\), we can see that \(\mathbb{E}[v^{2}]\) will never diverge, no matter how large \(T\) is. See Figure 4-mid. One implication is that neural networks with at least one hidden layer will never have a divergent training loss. This directly explains the puzzling observation of the edge-of-stability phenomenon in deep learning: SGD training often brings a neural network to a solution where a slight increment of the learning rate will cause discrete-time instability and divergence [43, 5]. These solutions, quite surprisingly, exhibit low training and testing loss values even when the learning rate is right at the critical learning rate of instability. This observation contradicts naive theoretical expectations. Let \(\eta_{\text{sta}}\) denote the largest stable learning rate. Close to a local minimum, one can expand the loss function up to the second order to show that the value of the loss function \(L\) is proportional to \(\text{Tr}[\Sigma]\), where \(\Sigma\) is the stationary covariance of the parameter fluctuations. However, \(\Sigma\propto 1/(\eta_{\text{sta}}-\eta)\) should be a very large value close to \(\eta_{\text{sta}}\) [45, 50, 20], and therefore \(L\) should diverge.
Thus, the edge of stability phenomenon is incompatible with the naive expectation up to the second order, as pointed out in Ref. [6]. Our theory offers a direct explanation of why the divergence of loss does not happen: for deeper models, SGD always has a finite loss because of the power-law tail and fluctuation inversion. See Figure 4-right.

Figure 4: SGD on deep networks leads to a well-controlled distribution and training loss. **Left**: power law of the tail of the parameter distribution of deep linear nets. The dashed lines show the upper (\(-7/2\)) and lower (\(-5\)) bounds of the exponent of the tail. The predicted power-law scaling agrees with the experiment, and the exponent decreases as the theory predicts. **Mid**: training loss of a tanh network. \(D=0\) is the case where only the input weight is trained, and \(D=1\) is the case where both input and output layers are trained. For \(D=0\), the model norm increases as the model loses stability. For \(D=1\), a "fluctuation inversion" effect appears: the fluctuation of the model vanishes before it loses stability. **Right**: performance of fully connected tanh nets on MNIST. Scaling the learning rate as \(1/D\) keeps the model performance relatively unchanged.

Figure 3: Regimes of learning for SGD as a function of \(T=\eta/S\) and the noise in the dataset \(\sigma\). According to (1) whether the sparse transition has happened, (2) whether a nontrivial maximum probability estimator exists, and (3) whether the sparse solution is a maximum probability estimator, the learning of SGD can be characterized into five regimes. Regime **I** is where SGD converges to a sparse solution with zero variance. In regime **II**, the stationary distribution has a finite spread, and the probability density of the sparse solution diverges. Hence, the probability of being close to the sparse solution is very high. In regime **III**, the probability density of the sparse solution is zero, and therefore the model will learn without much problem. In regime **b**, a local nontrivial probability maximum exists, and hence SGD has some probability of successful learning. The only maximum probability estimator in regime **a** is the sparse solution.

### Role of Width

As discussed, for \(D>1\), the model width \(d\) directly affects the stationary distribution of SGD. However, the integral in the exponent of Eq. (13) cannot be calculated analytically for a generic \(D\). An analytical solution exists in only two cases: \(D=1\) and \(D\to\infty\). We thus consider the case \(D\to\infty\) to study the effect of \(d\). As \(D\) tends to infinity, the distribution becomes \[p(v)\propto\frac{1}{v^{3+k_{1}}(\alpha_{1}v^{2}-2\alpha_{2}v+\alpha_{3})^{1-k_{1}/2}}\exp\left(-\frac{d}{DT}\left(\frac{\beta_{2}}{\alpha_{3}v}+\frac{\alpha_{2}\alpha_{3}\beta_{1}-2\alpha_{2}^{2}\beta_{2}+\alpha_{1}\alpha_{3}\beta_{2}}{\alpha_{3}^{2}\sqrt{\Delta}}\arctan(\frac{\alpha_{1}v-\alpha_{2}}{\sqrt{\Delta}})\right)\right), \tag{17}\] where \(k_{1}=d(\alpha_{3}\beta_{1}-2\alpha_{2}\beta_{2})/(TD\alpha_{3}^{2})\). The first striking feature is that the architecture ratio \(d/D\) always appears together with \(1/T\). This implies that for a sufficiently deep neural network, the ratio \(D/d\) also becomes proportional to the strength of the noise. Since we know that \(T=\eta/S\) determines the performance of SGD,7 our result thus shows an extended scaling law of training: \(\frac{d}{D}\,\frac{S}{\eta}=\mathrm{const}\). For example, if we want to scale up the depth without changing the width, we should decrease the learning rate proportionally or increase the batch size. This scaling law thus links the learning rate, the batch size, and the model width and depth. The architecture aspect of the scaling law also agrees with the suggestion in Refs. [10, 11], where the optimal architecture is found to have a constant ratio of \(d/D\). See Figure 4.

Footnote 7: Therefore, scaling \(\eta\) linearly with \(S\) is known as the learning-rate-batch-size scaling law [12].
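The bookkeeping implied by this scaling law fits in a few lines. The helper below is purely illustrative (not from the paper); it simply holds \((d/D)\,(S/\eta)\) fixed while the architecture changes:

```python
def rescaled_lr(eta, S, d, D, new_d, new_D, new_S=None):
    """Return the learning rate that keeps (d/D) * (S/eta) invariant
    when (d, D, S) change. Hypothetical helper for illustration only."""
    new_S = S if new_S is None else new_S
    invariant = (d / D) * (S / eta)
    return (new_d / new_D) * new_S / invariant

# Doubling the depth at fixed width and batch size halves the learning rate,
# matching the eta ~ 1/D prescription of Figure 4:
print(rescaled_lr(eta=0.1, S=128, d=256, D=4, new_d=256, new_D=8))  # -> 0.05
```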
Now, let us fix \(T\) and understand the different limits of the stationary distribution, which are decided by how \(d\) scales as we scale up \(D\). There are three situations: (1) \(d=o(D)\), (2) \(d=c_{0}D\) for a constant \(c_{0}\), (3) \(d=\omega(D)\). If \(d=o(D)\), \(k_{1}\to 0\) and the distribution converges to \(p(v)\propto v^{-3}(\alpha_{1}v^{2}-2\alpha_{2}v+\alpha_{3})^{-1}\), which corresponds to a delta distribution at \(0\). Namely, if the width is far smaller than the depth, the model will collapse, and no learning will happen under SGD. Therefore, we should increase the model width as we increase the depth. In the second case, \(d/D\) is a constant and can thus be absorbed into the definition of \(T\); this is the only limit where we obtain a nontrivial distribution with a finite spread. If \(d=\omega(D)\), one can perform a saddle point approximation to see that the distribution becomes a delta distribution at the global minimum of the loss landscape, \(p(v)=\delta(v-\beta_{2}/\beta_{1})\). Therefore, the learned model sits deterministically at the global minimum.

## 4 Discussion

The most important implication of our theory is that the behavior of SGD cannot be understood through gradient flow or a simple Langevin approximation. Having even a perturbative amount of noise in SGD leads to an order-1 change in the stationary distribution of the solution. Our result suggests that one promising way to understand SGD is to study its behavior on a landscape from the viewpoint of symmetries. We showed that SGD systematically moves towards a balanced solution when the rescaling symmetry exists. Likewise, it is not difficult to imagine that for other types of symmetries, SGD will also have interesting systematic tendencies to deviate from gradient flow and Brownian motion. An important future direction is thus to understand and characterize the dynamics of SGD on a loss function with different types of symmetries.

Figure 5: Loss landscape and noise covariance of a two-layer linear network with one hidden neuron and \(\gamma=0.005\). The orange dashed curve shows the noise covariance \(C(w,u)\) where \(w=u\). We see that the shape of the gradient noise is, in general, a more complicated function than the landscape itself.

By utilizing the symmetry conditions in the loss landscape, we are able to characterize the stationary distribution of SGD analytically. To the best of our knowledge, this is the first analytical expression for the stationary distribution of SGD obtained for a globally nonconvex and highly nonlinear loss function without the need for any approximation. With this solution, we have demonstrated many phenomena of deep learning that were previously unknown. For example, we showed the qualitative difference between networks with different depths, the finiteness of the training loss, the fluctuation inversion effect, the loss of ergodicity, and the incapability of learning a wrong sign for a deep model. Lastly, let us return to the original question we raised in the Introduction.
Why is the Gibbs measure a bad model of SGD? When the number of data points \(N\gg S\), a standard computation shows that the noise covariance of SGD takes the following form: \(C(\theta)=T(\mathbb{E}_{x}[(\nabla_{\theta}\ell)(\nabla_{\theta}\ell)^{T}]-(\nabla_{\theta}L)(\nabla_{\theta}L)^{T})\), which is nothing but the covariance of the per-sample gradients. A key feature of this noise is that it depends on the dynamical variable \(\theta\) in a highly nontrivial manner. See Figure 5 for an illustration of the landscape against \(C\). We see that the shape of \(C(\theta)\) generally changes faster than the loss landscape. For the Gibbs distribution to hold (at least locally), we need \(C(\theta)\) to change much more slowly than \(L(\theta)\). A good criterion is thus to compare the relative magnitudes of \(\left\|\nabla L\right\|\) and \(\left\|\nabla\text{Tr}[C]\right\|\), which tell us which term changes faster. When \(\left\|\nabla\text{Tr}[C]\right\|\) is larger, unexpected phenomena will happen, and one must consider the parameter dependence of \(C\) to understand SGD.

## Acknowledgement

We thank Prof. Tsunetsugu for the discussion on ergodicity. We also thank Shi Chen for valuable discussions about symmetry. This work is financially supported by a research grant from JSPS (Grant No. JP22H01152).
2303.04314
Analogies between hadron-in-jet and dihadron fragmentation
We describe the formal analogies in the description of the inclusive production in hard processes of hadron pairs (based on dihadron fragmentation functions) and of a single hadron inside a jet (based on hadron-in-jet fragmentation functions). Since several observables involving dihadron fragmentation functions have been proposed in the past, we are able to suggest new interesting observables involving hadron-in-jet fragmentation functions, in lepton-hadron deep-inelastic scattering and hadronic collisions.
Alessandro Bacchetta, Marco Radici, Lorenzo Rossi
2023-03-08T01:28:59Z
http://arxiv.org/abs/2303.04314v1
# Analogies between hadron-in-jet and dihadron fragmentation

###### Abstract

We describe the formal analogies in the description of the inclusive production in hard processes of hadron pairs (based on dihadron fragmentation functions) and of a single hadron inside a jet (based on hadron-in-jet fragmentation functions). Since several observables involving dihadron fragmentation functions have been proposed in the past, we are able to suggest new interesting observables involving hadron-in-jet fragmentation functions, in lepton-hadron deep-inelastic scattering and hadronic collisions.

## I Introduction

The investigation of the partonic structure of hadrons is based on the crucial method of factorization, which makes it possible to split the cross section of a given process into a perturbatively calculable hard cross section (describing the underlying elementary process at the partonic level) and one or more nonperturbative functions (describing the distribution of partons inside hadrons and/or their fragmentation into detected hadronic final states). Although factorization has been established for many hard processes in the collinear framework, where the transverse momenta of all partons are integrated over, this is not the case for transverse-momentum-dependent partonic functions (TMDs). For certain processes involving two hadrons in the initial state and with observed hadronic final states, e.g., the inclusive production of hadrons in hadronic collisions \(A+B\to C+D+X\), TMD factorization can be explicitly broken because the strongly interacting particles are entangled by a complicated color flow [1; 2]. Because of this, it is not possible to describe these processes in terms of the TMDs that appear in other processes, like the inclusive production of a hadron \(C\) in Deep-Inelastic Scattering (Semi-Inclusive DIS - SIDIS - denoted as \(\ell+A\to\ell^{\prime}+C+X\)) [3; 4], or the inclusive production of two hadrons \(C\), \(D\) in electron-positron annihilations (\(e^{+}+e^{-}\to C+D+X\)) [5], or the Drell-Yan process [6]. Even neglecting factorization-breaking contributions, the TMDs involved in hadron-hadron collisions would be different from the ones in the other processes, an effect that has been referred to as generalized universality [7; 8; 9; 10].

The most familiar example where this problem occurs is the study of the Collins effect [11]. The so-called "Collins function" can be used as an analyzer of the transverse polarization of the fragmenting quark. It can appear in SIDIS in combination with the chiral-odd TMD parton distribution function (TMD PDF) \(h_{1}\), called "transversity", and in the \(e^{+}+e^{-}\to C+D+X\) process [12; 13; 14]. However, because of TMD factorization breaking, it is not possible to rigorously study the Collins function in hadronic collisions.

An alternative option to the Collins effect is represented by the inclusive production of two hadrons coming from the fragmentation of a single parton. In this case, the analyzer of the transverse polarization of the fragmenting quark is represented by the transverse component of the relative momentum of the hadron pair [15]. The advantage is that this correlation survives the integration over parton transverse momenta and can be analyzed in the collinear framework.
Hence, in the SIDIS process \(\ell+A^{\uparrow}\to\ell^{\prime}+(C_{1}\,C_{2})+X\) the transversity \(h_{1}\) can be extracted as a collinear PDF through its chiral-odd partner \(H_{1}^{\sphericalangle}\), the dihadron fragmentation function (DiFF) that describes the fragmentation of a transversely polarized quark into the hadron pair [16; 17; 18], also called the Interference Fragmentation Function (IFF) [16; 17]. As for the Collins function, \(H_{1}^{\sphericalangle}\) can be independently extracted from azimuthal asymmetries in the production of two back-to-back dihadron pairs in \(e^{+}e^{-}\) annihilations, in the collinear framework [19; 20; 21; 22]. This last remark makes the crucial difference. First of all, it allows one to cross-check the universality of both \(h_{1}\) and \(H_{1}^{\sphericalangle}\) in hadronic collisions of the type \(A+B^{\uparrow}\to(C_{1}\,C_{2})+X\) [23]. Secondly, it makes it possible to extract the chiral-odd PDF \(h_{1}\) from a global fit of SIDIS, \(e^{+}e^{-}\), and hadronic collision data in the same theoretically rigorous way as is usually done for the unpolarized \(f_{1}\) and helicity \(g_{1}\) PDFs [24].

Another intriguing option is represented by the inclusive production of a hadron inside a jet. In fact, for a collision process like \(A+B\to(\mathrm{Jet}\,C)+X\) the cross section can be factorized in a hybrid form [25]: it involves collinear PDFs in the initial collision, but the final state is represented by a new function, the jet TMDFF (jTMDFF),1 that depends on the jet kinematics. The jTMDFF can be matched onto the same TMD FF of hadron \(C\) which appears in SIDIS and \(e^{+}e^{-}\) cross sections in the TMD framework [26]. It is then possible to access TMD FFs even for that class of processes where factorization in the TMD framework is not available. When one of the two colliding hadrons is transversely polarized, say \(B^{\uparrow}\), the fragmentation of the transversely polarized quark is described by the polarized jTMDFF \(\mathcal{H}_{1}^{\perp}\) that can be matched onto the Collins function \(H_{1}^{\perp}\) [27]: this "Collins-in-jet" effect makes it possible to check the universality of the Collins function and gives an alternative option to access the transversity \(h_{1}\) in a rigorously factorized framework.

Footnote 1: In Ref. [26], the jTMDFF is called semi-inclusive TMD fragmenting jet function (siTMDFJF).

The hybrid factorization for the hadron-in-jet inclusive production has been shown to work also for the SIDIS cross section [28; 29; 30]. Hence, it is natural to consider the formal similarities between the inclusive production of dihadrons and of hadrons inside a jet, _i.e._, between DiFFs and jTMDFFs. In this way, we are able to transfer the knowledge acquired on one mechanism to the other, and to suggest new channels to investigate the partonic structure of hadrons.

The paper is organized as follows. In Sec. II, we recall the formalism for describing inclusive dihadron production in unpolarized proton-proton collisions. In Sec. III, we illustrate the formulae for inclusive hadron-in-jet production in the same process. In Sec. IV, we generalize the formalism to the case of collisions with one transversely polarized hadron. In Sec. V, by comparing the cross sections for the two mechanisms we establish a general set of correspondence rules. In Sec.
VI, we use these rules to extend the study of two processes: a) the inclusive production of two back-to-back hadrons-in-jet in unpolarized proton-proton collisions, which could give access to jets initiated by linearly polarized gluons; b) the inclusive production of a hadron-in-jet in SIDIS up to subleading twist, which could give access to the chiral-odd PDF \(e(x)\) related to the nucleon scalar charge. Finally, in Sec. VII we conclude and give some future perspectives.

## II Fragmentation into a pair of hadrons

We consider the fragmentation of an unpolarized quark \(q\), with 4-momentum \(p\) and mass \(m\), into two unpolarized hadrons inside the same jet, with 4-momenta \(P_{1},\ P_{2}\) and masses \(M_{1},\ M_{2}\), respectively. We define the total 4-momentum \(P=P_{1}+P_{2}\) and the relative 4-momentum \(R=(P_{1}-P_{2})/2\) of the pair, where \(P^{2}=M_{hh}^{2}\) is its invariant mass. We choose the \(\hat{z}\) axis along the direction of the jet axis. At leading order in the strong coupling constant (LO), we identify the jet axis with the direction of the 3-momentum \(\mathbf{p}\). We choose the so-called "collinear" kinematics where the 3-momentum \(\mathbf{P}\) points along \(\mathbf{p}\). The transverse component of \(R\) with respect to \(\hat{z}\) is denoted by \(R_{\perp}\), with \(R_{\perp}^{2}=-\mathbf{R}_{\perp}^{2}\) (see Fig. 1).2

Footnote 2: In the following, we adhere to the conventions adopted in Ref. [31]: the fragmenting quark momentum is denoted by \(p\), the transverse component of hadron and quark vectors with respect to each other is denoted by the \({}_{\perp}\) subscript, and in all other cases by the \({}_{T}\) subscript.

The hadron pair is inclusively produced from a hard process in the deep-inelastic regime. When specifying the kinematics on the light cone, the dominant components \(P_{1}^{-},\ P_{2}^{-},\ p^{-}\) can be used to define the following invariants [18; 21; 32; 33]3

Footnote 3: In Refs. [17; 19; 20; 34; 35], the less symmetric definition \(\xi=(\zeta+1)/2=z_{1}/z=1-z_{2}/z\) was adopted.

\[z_{hh} = \frac{P^{-}}{p^{-}}=\frac{P_{1}^{-}+P_{2}^{-}}{p^{-}}=z_{1}+z_{2}\] \[\zeta = \frac{2R^{-}}{P^{-}}=\frac{z_{1}-z_{2}}{z_{hh}}\, \tag{1}\] which represent the fraction of the fragmenting quark momentum carried by the hadron pair and how this fraction is split inside the pair, respectively.

The fragmentation is described starting from the quark-quark correlator [18; 32] \[\Delta(p,P,R)=\sum_{X}\int\frac{dx}{(2\pi)^{4}}\,e^{ip\cdot x}\,\langle 0|\psi(x)|X,P,R\rangle\langle X,P,R|\bar{\psi}(0)|0\rangle\, \tag{2}\] where \(\psi\) is the quark field operator and the sum runs over all possible final states \(|X,P,R\rangle\) containing a hadron pair with total and relative momenta \(P,R\), respectively. At leading twist, the fragmentation of an unpolarized quark into two unpolarized hadrons can be parametrized in terms of a single DiFF according to [32] \[D_{1}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})=4\pi\ \mbox{Tr}\left[\Delta(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})\,\gamma^{-}\right]\, \tag{3}\] where \[\Delta(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})=\frac{z_{hh}}{32}\int dp^{+}\int d\mathbf{p}_{\perp}\,\Delta(p,P,R)\Big{|}_{p^{-}=P^{-}/z_{hh}}. \tag{4}\] In fact, the full dependence of the correlator in Eq. (2) is reduced to the one in Eq.
(4) by considering that [34]:

* in Eq. (4) we integrate over the light-cone suppressed variable \(p^{+}\) and over \(\mathbf{p}_{\perp}\) with the condition \(p^{-}=P^{-}/z_{hh}\);
* our choice of frame and kinematics implies no dependence on \(\mathbf{P}_{\perp}\);
* the following kinematical relations hold [18]: \[P^{2} = M_{hh}^{2}\,\quad R^{2}=\frac{M_{1}^{2}+M_{2}^{2}}{2}-\frac{M_{hh}^{2}}{4}-\frac{(M_{1}^{2}-M_{2}^{2})^{2}}{4M_{hh}^{2}}\,\] \[\mathbf{R}_{\perp}^{2} = \frac{1}{2}\left[\frac{(1-\zeta)\,(1+\zeta)}{2}\,M_{hh}^{2}-(1-\zeta)M_{1}^{2}-(1+\zeta)M_{2}^{2}\right]\.\] (5)

It is useful to recall also that [18] \[p\cdot R=\frac{M_{1}^{2}-M_{2}^{2}-\frac{\zeta}{2}M_{hh}^{2}}{2z_{hh}}+z_{hh}\zeta\frac{p^{2}+\mathbf{p}_{\perp}^{2}}{2}-\mathbf{p}_{\perp}\cdot\mathbf{R}_{\perp}\, \tag{6}\] from which we deduce that, in general, DiFFs depend only on the relative angle between \(\mathbf{p}_{\perp}\) and \(\mathbf{R}_{\perp}\).

### Cross section for dihadron production in proton-proton collisions

If the hadron pair is inclusively produced from the collision of two unpolarized protons with momenta \(P_{A}\) and \(P_{B}\), we can identify the reaction plane as the plane formed by \(\mathbf{P}_{A}\) and \(\mathbf{P}\). The azimuthal orientation around \(\mathbf{P}\) of the plane formed by \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) with respect to the reaction plane is described by the azimuthal angle \(\phi_{R}\) (see Fig. 2 and Ref. [36] for a formal definition). The transverse component of \(\mathbf{P}\) with respect to \(\mathbf{P}_{A}\) is denoted by \(\mathbf{P}_{T}\). Its modulus represents the hard scale of the process, namely we assume that \(|\mathbf{P}_{T}|\gg M_{hh}\), \(M_{1}\), \(M_{2}\). For simplicity, in the following the dependence of DiFFs on \(|\mathbf{P}_{T}|\) is understood.

Figure 2: Kinematics for the collision of a proton with 3-momentum \(\mathbf{P}_{A}\) and a (transversely polarized) proton with momentum \(\mathbf{P}_{B}\) (and polarization \(\mathbf{S}_{BT}\)), inclusively producing two unpolarized hadrons with total and relative momenta \(\mathbf{P}=\mathbf{P}_{1}+\mathbf{P}_{2}\) and \(\mathbf{R}=(\mathbf{P}_{1}-\mathbf{P}_{2})/2\). The plane formed by \(\mathbf{P}\) and \(\mathbf{R}\) is oriented by the azimuthal angle \(\phi_{R}\) around \(\mathbf{P}\) with respect to the reaction plane formed by \(\mathbf{P}_{A}\) and \(\mathbf{P}\).

At leading order in \(1/|\mathbf{P}_{T}|\), the differential cross section for the process \(A+B\to(C_{1}\,C_{2})+X\) reads (see App. A and Eq. (15) of Ref. [36]) \[\frac{d\sigma_{UU}}{d\eta\,d|\mathbf{P}_{T}|\,d\zeta\,d\mathbf{R}_{\perp}}=\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{hhC}}{x_{A}x_{B}z_{hhC}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\,\frac{|\mathbf{P}_{T}|\,\hat{s}}{2\pi s}\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\,\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,D_{1}^{c}(z_{hhC},\zeta,\mathbf{R}_{\perp}^{2})\;, \tag{7}\] where \(f_{1}^{a}\) and \(f_{1}^{b}\) are the usual parton distribution functions (PDFs) in the proton for partons \(a,\,b\) with fractional momenta \(x_{A}\), \(x_{B}\), respectively, and \(\eta\) is the pseudorapidity of the hadron pair with respect to \(\mathbf{P}_{A}\): \[\eta=\frac{1}{2}\,\log\frac{P^{0}+P_{z}}{P^{0}-P_{z}}\;. \tag{8}\] The elementary cross section \(d\hat{\sigma}\) describes the scattering of partons \(a\) and \(b\) into partons \(c\) (with momentum \(P/z_{hhC}\)) and \(d\), which is not detected. The partonic Mandelstam variables \(\hat{s}\), \(\hat{t}\), \(\hat{u}\) are related to the external ones by \[\hat{s}=x_{A}\,x_{B}\,s\;,\quad\hat{t}=\frac{x_{A}}{z_{hhC}}\,t\;,\quad\hat{u}=\frac{x_{B}}{z_{hhC}}\,u\;. \tag{9}\] The \(\delta\) function in Eq. (7) expresses momentum conservation in the partonic scattering, and it can be rewritten as [36] \[\hat{s}\,\delta(\hat{s}+\hat{t}+\hat{u})=z_{hhC}\,\delta(z_{hhC}-\bar{z}_{hh})\;, \tag{10}\] where \[\bar{z}_{hh}=\frac{|\mathbf{P}_{T}|}{\sqrt{s}}\;\frac{x_{A}\,e^{-\eta}+x_{B}\,e^{\eta}}{x_{A}\,x_{B}}\;. \tag{11}\] In Eq. (7), the sum runs over all possible combinations of parton flavors. The elementary cross sections \(d\hat{\sigma}_{ab\to cd}\) for the independent combinations are listed in the Appendix of Ref. [36].
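The kinematic content of Eqs. (1) and (5) is straightforward to evaluate numerically. The following sketch is an illustrative helper (function name and example values are assumptions, not from the paper) that maps the light-cone fractions \((z_{1},z_{2})\) and the pair masses to \((z_{hh},\zeta,\mathbf{R}_{\perp}^{2})\):

```python
import numpy as np

def dihadron_kinematics(z1, z2, M1, M2, Mhh):
    """Invariants of Eq. (1) and the transverse relative momentum of Eq. (5)."""
    z_hh = z1 + z2                    # fraction of quark momentum carried by the pair
    zeta = (z1 - z2) / z_hh           # how that fraction is split inside the pair
    R_perp_sq = 0.5 * ((1 - zeta) * (1 + zeta) / 2 * Mhh**2
                       - (1 - zeta) * M1**2 - (1 + zeta) * M2**2)
    return z_hh, zeta, R_perp_sq

# A pi+ pi- pair with invariant mass at the rho meson (GeV units):
m_pi, M_rho = 0.1396, 0.770
z_hh, zeta, R2 = dihadron_kinematics(z1=0.3, z2=0.2, M1=m_pi, M2=m_pi, Mhh=M_rho)
print(z_hh, zeta, np.sqrt(R2))        # physical configurations require R2 >= 0
```

Requiring \(\mathbf{R}_{\perp}^{2}\geq 0\) in Eq. (5) bounds the allowed range of \(\zeta\) at fixed \(M_{hh}\), which is the practical constraint one meets when binning dihadron data.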
## III Hadron-in-jet fragmentation

We now consider the distribution of a hadron with 4-momentum \(P_{h}\) and mass \(M_{h}\) inside a jet with radius \(r\), initiated by an unpolarized quark \(q\) with 4-momentum \(p\) and mass \(m\). Following Ref. [26], we denote by \(\mathbf{j}_{\perp}\) the transverse momentum of the hadron inside the jet (see Fig. 3). The latter is defined with respect to the standard jet axis (rather than using a recoil-free algorithm) because only in this case can a direct connection to the TMD FF be made [26]. As in the dihadron case, the \(\hat{z}\) axis is chosen along the standard jet axis and at LO it is identified with the direction of \(\mathbf{p}\).

When the jet is produced in a hard process in the deep-inelastic kinematical regime, the large light-cone components of the quark, jet, and hadron vectors are denoted by \(p^{-}\), \(J^{-}\), and \(P_{h}^{-}\), respectively. They are used to define the following invariants \[z_{J}=\frac{J^{-}}{p^{-}}\,\qquad z_{h}=\frac{P_{h}^{-}}{J^{-}}\, \tag{12}\] which represent the fraction of the fragmenting quark momentum carried by the jet and the fraction of the jet momentum carried by the hadron inside the jet, respectively. The \(J^{-}\) is related to the transverse momentum of the reconstructed jet in the hard process, whose size is denoted as \(|\mathbf{P}_{T}|\) and represents the hard scale of the process itself.

The fragmentation is described starting from the quark-quark correlator \[\Delta(p,J,P_{h})=\sum_{X}\int\frac{dx}{(2\pi)^{4}}\,e^{ip\cdot x}\,\langle 0|\psi(x)|X,J,P_{h}\rangle\langle X,J,P_{h}|\bar{\psi}(0)|0\rangle\, \tag{13}\] where, as before, \(\psi\) is the quark field operator and the sum runs over all possible final states \(|X,J,P_{h}\rangle\) containing a hadron \(P_{h}\) inside a jet \(J\). At leading twist, the object describing the observed hadron inside the produced jet can be parametrized in terms of a jTMDFF according to [26] \[\mathcal{D}_{1}(z_{J},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)=\frac{z_{J}}{4N_{c}}\,\operatorname{Tr}\left[\Delta(z_{J},z_{h},\mathbf{j}_{\perp},|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\,\gamma^{-}\right]\, \tag{14}\] where \(N_{c}\) is the number of quark colors, \(|\mathbf{P}_{T}|r\) is the typical momentum scale of the jet [26], and \[\Delta(z_{J},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)=\int dp^{+}\,\Delta(p,J,P_{h})\Bigg{|}_{\begin{subarray}{c}p^{-}=J^{-}/z_{J}\\ |\mathbf{p}_{T}|=|\mathbf{P}_{T}|/z_{J}\\ J^{-}=P_{h}^{-}/z_{h}\end{subarray}}. \tag{15}\] Depending on the relative size of \(|\mathbf{j}_{\perp}|\), \(|\mathbf{P}_{T}|r\), and the QCD nonperturbative scale \(\Lambda_{\text{QCD}}\), the jTMDFF of Eq. (14) can be expressed in different factorized forms.
Here, we are interested in the kinematical region \(\Lambda_{\text{QCD}}\lesssim|\mathbf{j}_{\perp}|\ll|\mathbf{P}_{T}|r\), where collinear radiation within the jet and soft radiation of order \(|\mathbf{j}_{\perp}|\) are relevant, while harder radiation is allowed only outside the jet and does not affect the distribution of the hadron transverse momentum \(|\mathbf{j}_{\perp}|\). In this regime, a factorized form for \(\mathcal{D}_{1}\) is given in Ref. [26] in terms of a hard matching function (related to the hard out-of-jet radiation) and a convolution of a usual TMD FF and a soft function (accounting for the soft radiation inside the jet). It is obtained by initially evolving the TMD FF in the usual Collins-Soper-Sterman (CSS) scheme up to the jet scale \(|\mathbf{P}_{T}|r\), then matching to the calculable hard function describing the out-of-jet radiation, and finally evolving to the hard scale by using the standard time-like DGLAP equations. All calculations in Ref. [26] are performed at NLO.

At LO, the direction of the quark momentum \(\mathbf{p}\) coincides with the standard jet axis and its transverse component is equal to the transverse momentum of the reconstructed jet in the hard process, \(|\mathbf{p}_{T}|\approx|\mathbf{P}_{T}|\). In this approximation, the jTMDFF \(\mathcal{D}_{1}^{q}\) for the fragmentation of a quark \(q\) into a hadron inside the jet reduces to \[\mathcal{D}_{1}^{q}(z_{J},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\Big{|}_{\rm LO} =\sum_{i}\,\delta(1-z_{J})\,\delta_{qi}\,D_{1}^{i}(z_{h},\mathbf{j}_{\perp}^{2};|\mathbf{P}_{T}|)\] \[=\delta(1-z_{J})\,D_{1}^{q}(z_{h},\mathbf{j}_{\perp}^{2};|\mathbf{P}_{T}|)\;, \tag{16}\] where \(D_{1}^{q}\) is the standard single-hadron TMD FF that can be isolated also in \(e^{+}e^{-}\) annihilations or in semi-inclusive deep-inelastic scattering.

Figure 3: Fragmentation into an unpolarized hadron with 3-momentum \(\mathbf{P}_{h}\) inside a jet of radius \(r\) initiated by a quark with 3-momentum \(\mathbf{p}\) (and transverse polarization \(\mathbf{s}_{\perp}\)). The transverse component of \(\mathbf{P}_{h}\) with respect to the standard jet axis is denoted by \(\mathbf{j}_{\perp}\) [26]. For the sake of simplicity, \(\mathbf{p}\) and the jet axis are approximately taken along the same direction.

### Cross section for hadron-in-jet fragmentation in proton-proton collisions

We consider the same situation as in Sec. II.1, namely the collision of two unpolarized protons with momenta \(P_{A}\) and \(P_{B}\). The final state is now described by the inclusive production of a jet where a hadron is identified inside it with transverse momentum \(\mathbf{j}_{\perp}\) with respect to the standard jet axis. Following Ref. [27], the factorization theorem for the process \(A+B\to({\rm Jet}\,C)+X\) can be written as \[\frac{d\sigma_{UU}}{d\eta\,d|\mathbf{P}_{T}|\,dz_{h}\,d\mathbf{j}_{\perp}} =\frac{2\pi|\mathbf{P}_{T}|}{s}\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{JC}}{x_{A}x_{B}z_{JC}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\,H_{ab\to cd}^{U}\,\mathcal{D}_{1}^{c}(z_{JC},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\,z_{JC}^{2}\delta(z_{JC}-\bar{z}_{J})\;, \tag{17}\] where \(\bar{z}_{J}\) is given as in Eq. (11) and \(H_{ab\to cd}^{U}\) describes the elementary hard process \(a+b\to c+d\) from which the parton \(c\) initiates the reconstructed jet. As detailed in App. B, the \(H_{ab\to cd}^{U}\) of Ref. [27] can be reconnected to the \(d\hat{\sigma}_{ab\to cd}\) of Eq.
(7) by \[H_{ab\to cd}^{U}=\frac{\hat{s}}{\pi z_{JC}}\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\;. \tag{18}\] The cross section of Eq. (17) can then be cast in the form \[\frac{d\sigma}{d\eta\,d|\mathbf{P}_{T}|\,dz_{h}\,d\mathbf{j}_{\perp}} =\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{JC}}{x_{A}x_{B}z_{JC}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\,2\frac{|\mathbf{P}_{T}|\,\hat{s}}{s}\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\,z_{JC}\,\delta(z_{JC}-\bar{z}_{J})\,\mathcal{D}_{1}^{c}(z_{JC},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\] \[=\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{JC}}{x_{A}x_{B}z_{JC}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\,2\frac{|\mathbf{P}_{T}|\,\hat{s}}{s}\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\,\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,\mathcal{D}_{1}^{c}(z_{JC},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\;, \tag{19}\] where we used Eq. (10) adapted to the case of hadron-in-jet fragmentation, _i.e._, by replacing \(z_{hhC}\) with \(z_{JC}\) for the fragmenting parton \(c\) and using the pseudorapidity of the jet with respect to \(\mathbf{P}_{A}\).

## IV Fragmentation of transversely polarized quarks

We extend our study to the case of a fragmenting quark with transverse polarization \(\mathbf{s}_{\perp}\). We first consider the fragmentation into a pair of unpolarized hadrons (see Fig. 1). In the kinematic conditions described in Sec. II, the leading-twist correlator of Eq. (4) can be expanded as [32] \[\Delta(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})=\frac{1}{16\pi}\,\left\{D_{1}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})\,\not{u}_{-}+\,H_{1}^{\sphericalangle}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})\,\frac{i}{2M_{hh}}\,[\not{R}_{\perp},\not{u}_{-}]\right\}\;, \tag{20}\] where \(H_{1}^{\sphericalangle}\) describes the probability density for a transversely polarized quark to fragment into a pair of unpolarized hadrons with total momentum collinear with the quark momentum. The \(H_{1}^{\sphericalangle}\) can be extracted by the following projection \[\frac{(\mathbf{s}_{\perp}\times\mathbf{R}_{\perp})\cdot\mathbf{P}}{M_{hh}}\,\,H_{1}^{\sphericalangle}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})=4\pi\,\,{\rm Tr}\left[\Delta(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})\,i\,\sigma^{i-}\,\gamma_{5}\right]\;, \tag{21}\] where \(\sigma^{\mu\nu}=i[\gamma^{\mu},\,\gamma^{\nu}]/2\) and its spatial index \(i\) points in the direction of \(\mathbf{s}_{\perp}\). Similarly, if the transversely polarized quark fragments into a hadron inside a jet in the kinematical conditions described in Sec. III (see Fig. 3), we can project out the "Collins-in-jet" function \({\cal H}_{1}^{\perp}\) from the correlator in Eq. (15) as \[\frac{(\mathbf{s}_{\perp}\times\mathbf{j}_{\perp})\cdot\mathbf{p}}{z_{h}\,M_{h}}\;{\cal H}_{1}^{\perp}(z_{J},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)=\frac{z_{J}}{4N_{c}}\;\operatorname{Tr}\left[\Delta(z_{J},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\,i\,\sigma^{i-}\,\gamma_{5}\right]\;, \tag{22}\] where again the spatial index \(i\) of \(\sigma^{\mu\nu}\) points in the direction of \(\mathbf{s}_{\perp}\). In the following section, for the two fragmentation scenarios we analyze the contributions that arise in the cross section for proton-proton collisions when one of the two protons is transversely polarized.

### Transversely polarized proton-proton collisions

For the process \(A+B^{\uparrow}\to(C_{1}\,C_{2})+X\) depicted in Fig. 2, the polarized part of the cross section reads (see App. A and Eq. (16) of Ref.
[36]) \[\frac{d\sigma_{UT}}{d\eta\,d|\mathbf{P}_{T}|\,d\zeta\,d\mathbf{R}_{\perp}\,d\phi_{S_{B}}}=\frac{|\mathbf{S}_{BT}|}{4\pi^{2}}\,\sin(\phi_{S_{B}}-\phi_{R})\] \[\times\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{hhC}}{x_{A}x_{B}z_{hhC}^{2}}\,f_{1}^{a}(x_{A})\,h_{1}^{b}(x_{B})\,\frac{|\mathbf{P}_{T}|\,\hat{s}}{s}\,\frac{d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{\uparrow}d}}{d\hat{t}}\,\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,\frac{|\mathbf{R}_{\perp}|}{M_{hh}}\,H_{1}^{\spherical\,c}(z_{hhC},\zeta,\mathbf{R}_{\perp}^{2})\;, \tag{23}\] where \(\mathbf{S}_{BT}\) is the transverse polarization of the colliding proton with orientation \(\phi_{S_{B}}\) with respect to the reaction plane, and \(h_{1}^{b}\) is the transversity distribution for the transversely polarized parton \(b\) with fractional momentum \(x_{B}\). The elementary cross sections \(d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{\uparrow}d}\) describe the scattering of partons \(a\) and \(b\) with transfer of the transverse polarization of the latter to parton \(c\), while summing over the undetected fragments from parton \(d\). All the possible independent flavor combinations are listed in the Appendix of Ref. [36]. The corresponding process \(A+B^{\uparrow}\to(\operatorname{Jet}C)+X\) is displayed in Fig. 4. A hadron with 3-momentum \(\mathbf{P}_{h}\) is inclusively produced inside a jet with standard axis \(\hat{\mathbf{J}}\) from the collision of a proton with 3-momentum \(\mathbf{P}_{A}\) and a transversely polarized proton with 3-momentum \(\mathbf{P}_{B}\) and polarization \(\mathbf{S}_{BT}\). The azimuthal angles \(\phi_{S_{B}}\) and \(\phi_{h}\) describe the orientation of \(\mathbf{S}_{BT}\) and of the plane formed by \(\mathbf{P}_{h}\) and \(\hat{\mathbf{J}}\), respectively, with respect to the reaction plane formed by \(\mathbf{P}_{A}\) and \(\hat{\mathbf{J}}\).

Figure 4: Kinematics for the collision of a proton with 3-momentum \(\mathbf{P}_{A}\) and a transversely polarized proton with momentum \(\mathbf{P}_{B}\) (and polarization \(\mathbf{S}_{BT}\)), inclusively producing inside a jet a hadron with 3-momentum \(\mathbf{P}_{h}\) and transverse component \(\mathbf{j}_{\perp}\) with respect to the jet axis \(\hat{\mathbf{J}}\). The plane formed by \(\hat{\mathbf{J}}\) and \(\mathbf{P}_{h}\) is oriented by the azimuthal angle \(\phi_{h}\) around \(\hat{\mathbf{J}}\) with respect to the reaction plane formed by \(\mathbf{P}_{A}\) and \(\hat{\mathbf{J}}\).

The polarized part of the cross section reads [27]

Footnote 4: The expression in Eq. (24) differs from Ref. [27] by a \(1/z_{h}\) term, because of a definition of the Collins function inherited from Ref. [25] which does not adhere to the Trento conventions [37].
\[\frac{d\sigma_{UT}}{d\eta\,d|\mathbf{P}_{T}|\,dz_{h}\,d\mathbf{j}_{\perp}\,d\phi_{S_{B}}}=|\mathbf{S}_{BT}|\,\sin(\phi_{S_{B}}-\phi_{h})\,\frac{|\mathbf{P}_{T}|}{s}\\ \times\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{JC}}{x_{A}x_{B}z_{JC}^{2}}\,f_{1}^{a}(x_{A})\,h_{1}^{b}(x_{B})\,H_{ab^{\uparrow}\to c^{\uparrow}d}^{\rm Collins}\,\frac{|\mathbf{j}_{\perp}|}{M_{h}}\,\mathcal{H}_{1}^{\perp\,c}(z_{JC},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\,z_{JC}^{2}\delta(z_{JC}-\bar{z}_{J})\;, \tag{24}\] where \(H_{ab^{\uparrow}\to c^{\uparrow}d}^{\rm Collins}\) is the cross section for the transfer of transverse polarization in the elementary hard process \(a+b^{\uparrow}\to c^{\uparrow}+d\), and \(\mathcal{H}_{1}^{\perp\,c}\) is the polarized jTMDFF describing the hadron inside the jet produced by the transversely polarized fragmenting parton \(c\). By extending the relation (18) to the polarized case involving \(H_{ab^{\uparrow}\to c^{\uparrow}d}^{\rm Collins}\) of Ref. [27] and \(d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{\uparrow}d}\) of Ref. [36] (and exchanging \(\hat{t}\leftrightarrow\hat{u}\) to account for the fact that the transversely polarized parton in Ref. [27] is \(a^{\uparrow}\) while in Ref. [36] it is \(b^{\uparrow}\)), we finally get \[\frac{d\sigma_{UT}}{d\eta\,d|\mathbf{P}_{T}|\,dz_{h}\,d\mathbf{j}_{\perp}\,d\phi_{S_{B}}}=|\mathbf{S}_{BT}|\,\sin(\phi_{S_{B}}-\phi_{h})\\ \times\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{JC}}{x_{A}x_{B}z_{JC}^{2}}\,f_{1}^{a}(x_{A})\,h_{1}^{b}(x_{B})\,\frac{|\mathbf{P}_{T}|\hat{s}}{\pi s}\,z_{JC}\delta(z_{JC}-\bar{z}_{J})\,\frac{d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{\uparrow}d}}{d\hat{t}}\,\frac{|\mathbf{j}_{\perp}|}{M_{h}}\,\mathcal{H}_{1}^{\perp\,c}(z_{JC},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\\ =|\mathbf{S}_{BT}|\,\sin(\phi_{S_{B}}-\phi_{h})\\ \times\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{JC}}{x_{A}x_{B}z_{JC}^{2}}\,f_{1}^{a}(x_{A})\,h_{1}^{b}(x_{B})\,\frac{|\mathbf{P}_{T}|\hat{s}}{\pi s}\,\frac{d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{\uparrow}d}}{d\hat{t}}\,\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,\frac{|\mathbf{j}_{\perp}|}{M_{h}}\,\mathcal{H}_{1}^{\perp\,c}(z_{JC},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r)\;. \tag{25}\]

## V Correspondence between Dihadron and Hadron-in-jet fragmentation

We are now in a position to compare the cross sections for the \(A+B^{(\uparrow)}\to(C_{1}\,C_{2})+X\) and \(A+B^{(\uparrow)}\to(\mathrm{Jet}\,C)+X\) processes. We deduce that:

* from Eq. (12), the combination \(z_{J}z_{h}=P_{h}^{-}/p^{-}\) describes the fraction of fragmenting quark momentum carried by the hadron inside the jet; hence, it can be mapped onto \(z_{hh}\) of Eq.
(1);

* by comparing the same two equations, we can map \(z_{h}\) onto \(\zeta z_{hh}=z_{1}-z_{2}\), the relative fractional momentum carried by the hadron pair; thus, for both hadronic final states (dihadron and hadron inside jet) the light-cone kinematics can be described by a pair of invariants and we can establish a correspondence between these pairs, namely \((z_{hh},\zeta)\leftrightarrow(z_{J},z_{h})\);

* along the same lines, we can map the transverse momentum \(\mathbf{j}_{\perp}\) of the hadron inside the jet with respect to the standard jet axis onto the transverse component \(\mathbf{R}_{\perp}\) of the hadron pair relative momentum with respect to the direction of the pair total momentum, which in collinear kinematics coincides with the standard jet axis; obviously, the same mapping holds for their azimuthal angles, _i.e._, \(\phi_{h}\leftrightarrow\phi_{R}\);

* by directly comparing Eqs. (7) with (19) and Eqs. (23) with (25), the jTMDFF can be mapped onto the corresponding DiFF according to \[4\pi\,\mathcal{D}_{1}^{q}(z_{J},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r) \longleftrightarrow\,D_{1}^{q}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2};|\mathbf{P}_{T}|)\] (26) \[4\pi\,\mathcal{H}_{1}^{\perp\,q}(z_{J},z_{h},\mathbf{j}_{\perp};|\mathbf{P}_{T}|,|\mathbf{P}_{T}|r) \longleftrightarrow\,H_{1}^{\spherical\,q}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2};|\mathbf{P}_{T}|)\;.\] (27)

As a final remark, DiFFs have been extracted so far only through an LO analysis of inclusive dihadron production in \(e^{+}e^{-}\) annihilation [21], in combined \(e^{+}e^{-}\) and SIDIS processes [22], and in a global fit of \(e^{+}e^{-}\), SIDIS and hadron-hadron collision data [24]. The formalism of jTMDFFs is instead available up to NLO, but only in the unpolarized case [26; 27]. However, NLO corrections separately affect the elementary hard cross section for a \(2\to 2\) partonic process [38] and the OPE-expanded expression of \(\mathcal{D}_{1}\) [26]. Hence, we can argue that the same structure of the leading-twist cross sections for inclusive dihadron production in Eqs. (7), (23) holds also at NLO. The correspondence expressed in Eqs. (26) and (27) represents the main result of this paper. It has been derived by considering the case of proton-proton collisions, but it can be extended to all hard processes where collinear factorization holds, like inclusive dihadron production in \(e^{+}e^{-}\) annihilations [19; 20; 21] and SIDIS. In the following, we outline some interesting applications involving proton-proton collisions and the SIDIS process.

## VI Opportunities with hadron-in-jet fragmentation

In this section, we mention two possible applications of the above correspondence, where results known for inclusive dihadron production can be formally translated into the cross section for hadron-in-jet fragmentation, opening up new channels for investigating the partonic structure of hadrons.

### Inclusive production of two dihadrons and back-to-back hadron-in-jets in proton-proton collisions

For the process \(A+B\to(C_{1}C_{2})_{C}+(D_{1}D_{2})_{D}+X\) depicted in the left panel of Fig. 5, after summing over the polarizations of initial hadrons the leading-twist cross section reads (see App. A and Eqs. (20-22) in Ref.
[36]) \[\frac{d\sigma_{UU}}{d\eta_{C}\,d|\mathbf{P}_{CT}|\,d\zeta_{C}\,d\mathbf{R}_{C\perp}\,d\eta_{D}\,d|\mathbf{P}_{DT}|\,d\zeta_{D}\,d\mathbf{R}_{D\perp}}=\frac{|\mathbf{P}_{CT}||\mathbf{P}_{DT}|}{8\pi^{2}}\,\sum_{a,b}\int\frac{dx_{A}dx_{B}dz_{hhC}dz_{hhD}}{z_{hhC}^{2}z_{hhD}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\] \[\times\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,\delta\left(\frac{|\mathbf{P}_{CT}|}{z_{hhC}}-\frac{|\mathbf{P}_{DT}|}{z_{hhD}}\right)\,\delta\left(x_{A}P_{Az}-x_{B}P_{Bz}-\frac{P_{Cz}}{z_{hhC}}-\frac{P_{Dz}}{z_{hhD}}\right)\] \[\times\Bigg{\{}\sum_{c,d}\left[\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\,D_{1}^{c}(z_{hhC},\zeta_{C},\mathbf{R}_{C\perp}^{2})\,D_{1}^{d}(z_{hhD},\zeta_{D},\mathbf{R}_{D\perp}^{2})\right.\] \[\left.\qquad+\cos(\phi_{R_{C}}-\phi_{R_{D}})\,\frac{d\Delta\hat{\sigma}_{ab\to c^{\uparrow}d^{\uparrow}}}{d\hat{t}}\,\frac{|\mathbf{R}_{C\perp}|}{M_{C}}\,H_{1}^{\spherical\,c}(z_{hhC},\zeta_{C},\mathbf{R}_{C\perp}^{2})\,\frac{|\mathbf{R}_{D\perp}|}{M_{D}}\,H_{1}^{\spherical\,d}(z_{hhD},\zeta_{D},\mathbf{R}_{D\perp}^{2})\,\right]\] \[+\cos(2\phi_{R_{C}}-2\phi_{R_{D}})\,\frac{d\Delta\hat{\sigma}_{ab\to g^{\prime}g^{\prime}}}{d\hat{t}}\,\frac{|\mathbf{R}_{C\perp}|^{2}}{M_{C}^{2}}\,H_{1}^{\spherical\,g}(z_{hhC},\zeta_{C},\mathbf{R}_{C\perp}^{2})\,\frac{|\mathbf{R}_{D\perp}|^{2}}{M_{D}^{2}}\,H_{1}^{\spherical\,g}(z_{hhD},\zeta_{D},\mathbf{R}_{D\perp}^{2})\Bigg{\}}\,, \tag{28}\] where the momenta and the angles of the second hadron pair are defined in complete analogy with the first pair by replacing the labels \(c,C\) with \(d,D\). The delta functions describe the conservation of energy and momentum in the elementary process, both along the longitudinal direction of the \(\hat{z}\) axis (identified with \(\mathbf{P}_{A}\), see Fig. 5) and in the transverse plane.

Figure 5: Kinematics for the collision of two unpolarized protons with 3-momenta \(\mathbf{P}_{A}\) and \(\mathbf{P}_{B}\) along the \(\hat{z}\) axis. Left panel: inclusive production of two back-to-back hadron pairs with total momenta \(\mathbf{P}_{C}=\mathbf{P}_{C1}+\mathbf{P}_{C2}\) and \(\mathbf{P}_{D}=\mathbf{P}_{D1}+\mathbf{P}_{D2}\), and back-to-back projections \(\mathbf{P}_{CT}\) and \(\mathbf{P}_{DT}\) on the transverse plane (\(\phi_{C}=\phi_{D}+\pi\)), respectively; the planes containing the momenta of each pair form the azimuthal angles \(\phi_{R_{C}}\) and \(\phi_{R_{D}}\) with the reaction plane containing \(\mathbf{P}_{A}\) and \(\mathbf{P}_{C}\). Right panel: inclusive production of two back-to-back jets with axis \(\hat{\mathbf{J}}_{C}\), \(\hat{\mathbf{J}}_{D}\), and back-to-back projected momenta \(\mathbf{P}_{CT}\) and \(\mathbf{P}_{DT}\) on the transverse plane (\(\phi_{C}=\phi_{D}+\pi\)), respectively; in each jet a hadron is detected with 3-momentum \(\mathbf{P}_{h_{C}}\) (\(\mathbf{P}_{h_{D}}\)) and transverse component \(\mathbf{j}_{C\perp}\) (\(\mathbf{j}_{D\perp}\)) with respect to the jet axis \(\hat{\mathbf{J}}_{C}\) (\(\hat{\mathbf{J}}_{D}\)); the planes containing \(\hat{\mathbf{J}}_{C}\), \(\mathbf{P}_{h_{C}}\) and \(\hat{\mathbf{J}}_{D}\), \(\mathbf{P}_{h_{D}}\) form the azimuthal angles \(\phi_{j_{C}}\), \(\phi_{j_{D}}\) with the reaction plane containing \(\mathbf{P}_{A}\) and \(\hat{\mathbf{J}}_{C}\).

In Eq.
(28), the elementary cross sections \(d\Delta\hat{\sigma}_{ab\to c^{\uparrow}d^{\uparrow}}\) involve only quarks for the final partons \(c,d\), while \(d\Delta\hat{\sigma}_{ab\to g^{\prime}g^{\prime}}\) contain only final gluons linearly polarized in the transverse plane. Hence, the \(H_{1}^{\spherical\,g}\) function describes the fragmentation of such linearly polarized gluons into pairs of unpolarized hadrons.5 For both cases of final polarized quarks and gluons, all nonvanishing combinations are listed in the Appendix of Ref. [36].

Footnote 5: The \(H_{1}^{\spherical\,g}\) corresponds to the notation \(\delta\hat{G}^{\spherical}\) of Ref. [36].

Therefore, by disentangling specific asymmetries in the azimuthal orientation of the planes containing the momenta of the two dihadrons, one can access the DiFFs \(H_{1}^{\spherical\,q}\) and \(H_{1}^{\spherical\,g}\) for the fragmentation of transversely polarized quarks and linearly polarized gluons, respectively, without considering any polarization in the initial hadronic collision [36]. Because of the correspondence described in Sec. V, it is interesting to explore the same possibility for the process \(A+B\to(\mathrm{Jet}\,C)+(\mathrm{Jet}\,D)+X\), as depicted in the right panel of Fig. 5. Using the same rules of correspondence as in the single hadron-pair production, we get \[\frac{d\sigma_{UU}}{d\eta_{C}\,d|\mathbf{P}_{CT}|\,dz_{hC}\,d\mathbf{j}_{C\perp}\,d\eta_{D}\,d|\mathbf{P}_{DT}|\,dz_{hD}\,d\mathbf{j}_{D\perp}}=2|\mathbf{P}_{CT}||\mathbf{P}_{DT}|\,\sum_{a,b}\int\frac{dx_{A}dx_{B}dz_{JC}dz_{JD}}{z_{JC}^{2}z_{JD}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\] \[\times\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,\delta\left(\frac{|\mathbf{P}_{CT}|}{z_{JC}}-\frac{|\mathbf{P}_{DT}|}{z_{JD}}\right)\,\delta\left(x_{A}P_{Az}-x_{B}P_{Bz}-\frac{P_{Cz}}{z_{JC}}-\frac{P_{Dz}}{z_{JD}}\right)\] \[\times\Bigg{\{}\sum_{c,d}\Bigg{[}\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\,\mathcal{D}_{1}^{c}(z_{JC},z_{hC},\mathbf{j}_{C\perp}^{2};|\mathbf{P}_{CT}|,|\mathbf{P}_{CT}|r_{C})\,\mathcal{D}_{1}^{d}(z_{JD},z_{hD},\mathbf{j}_{D\perp}^{2};|\mathbf{P}_{DT}|,|\mathbf{P}_{DT}|r_{D})\] \[+\cos(\phi_{j_{C}}-\phi_{j_{D}})\,\frac{d\Delta\hat{\sigma}_{ab\to c^{\uparrow}d^{\uparrow}}}{d\hat{t}}\,\frac{|\mathbf{j}_{C\perp}|}{M_{h_{C}}}\,\mathcal{H}_{1}^{\perp\,c}(z_{JC},z_{hC},\mathbf{j}_{C\perp}^{2};|\mathbf{P}_{CT}|,|\mathbf{P}_{CT}|r_{C})\] \[\times\frac{|\mathbf{j}_{D\perp}|}{M_{h_{D}}}\,\mathcal{H}_{1}^{\perp\,d}(z_{JD},z_{hD},\mathbf{j}_{D\perp}^{2};|\mathbf{P}_{DT}|,|\mathbf{P}_{DT}|r_{D})\,\Bigg{]}\] \[+\cos(2\phi_{j_{C}}-2\phi_{j_{D}})\,\frac{d\Delta\hat{\sigma}_{ab\to g^{\prime}g^{\prime}}}{d\hat{t}}\,\frac{|\mathbf{j}_{C\perp}|^{2}}{M_{h_{C}}^{2}}\,\mathcal{H}_{1}^{\perp\,g}(z_{JC},z_{hC},\mathbf{j}_{C\perp}^{2};|\mathbf{P}_{CT}|,|\mathbf{P}_{CT}|r_{C})\] \[\times\frac{|\mathbf{j}_{D\perp}|^{2}}{M_{h_{D}}^{2}}\,\mathcal{H}_{1}^{\perp\,g}(z_{JD},z_{hD},\mathbf{j}_{D\perp}^{2};|\mathbf{P}_{DT}|,|\mathbf{P}_{DT}|r_{D})\Bigg{\}}\,, \tag{29}\] where \(\phi_{j_{C}}\) and \(\phi_{j_{D}}\) are the azimuthal angles with respect to the reaction plane of the transverse momenta \(\mathbf{j}_{C\perp},\mathbf{j}_{D\perp}\) of the hadrons inside the jets with radius \(r_{C}\) and \(r_{D}\), respectively. All other variables are defined in complete analogy with the single hadron-in-jet case, identifying each corresponding jet by using the labels \(c,C\) and \(d,D\). From Eq.
(29), we deduce that the \(\cos(\phi_{j_{C}}-\phi_{j_{D}})\) asymmetry in the azimuthal distribution of the two hadrons inside the two back-to-back jets is generated by two "Collins-in-jet" effects, one for each jet. This asymmetry allows one to isolate back-to-back jets produced by the fragmentation of back-to-back transversely polarized quarks, providing an alternative way to access the Collins function of each hadron inside the corresponding jet. Similarly, and even more interestingly, extracting the \(\cos(2\phi_{j_{C}}-2\phi_{j_{D}})\) Fourier component in the azimuthal distribution allows one to isolate back-to-back jets produced by the fragmentation of back-to-back linearly polarized gluons, giving access to a new class of fragmentation functions: the \(\mathcal{H}_{1}^{\perp\,g}\) describe the inclusive production of hadrons inside jets by the fragmentation of linearly polarized gluons, where the hadron transverse momentum \(\mathbf{j}_{\perp}\) with respect to the jet axis becomes the spin analyzer of the gluon linear polarization in the transverse plane.

### Semi-inclusive deep-inelastic scattering up to subleading twist

The left panel of Fig. 6 describes the kinematics for the \(\ell+A\to\ell^{\prime}+(C_{1}C_{2})+X\) process, namely for the inclusive production of a hadron pair with momenta \(P_{1}\) and \(P_{2}\) by the scattering of a lepton with 4-momentum \(\ell\) off a hadronic target with momentum \(P_{A}\), mass \(M\) and polarization \(S\), leading to a final lepton with 4-momentum \(\ell^{\prime}\). The azimuthal orientations of the transverse polarization \(\mathbf{S}_{T}\) and of the hadron pair plane (represented by \(\mathbf{R}_{\perp}=(\mathbf{P}_{1\perp}-\mathbf{P}_{2\perp})/2\)) are given by \(\phi_{S}\) and \(\phi_{R}\), respectively, and they are all measured with respect to the scattering plane identified by \(\mathbf{\ell}\) and \(\mathbf{\ell}^{\prime}\). The hard scale of the process is given by \(Q^{2}=-q^{2}=-(\ell-\ell^{\prime})^{2}\gg M^{2}\geq 0\). The collinear kinematics is realized by integrating over the transverse components of the hadron-pair total 3-momentum \(\mathbf{P}=\mathbf{P}_{1}+\mathbf{P}_{2}\) or, equivalently, by taking \(\mathbf{P}\) collinear with \(\hat{\mathbf{q}}\), which identifies the \(\hat{z}\) axis. The expressions of the LO cross section for various combinations of polarization of the lepton probe and proton target are listed in Eqs. (44-49) of Ref. [32] up to subleading twist. A slightly different notation was employed in Ref. [39], where the cross section was described in terms of structure functions, based on the analogous expression in Ref. [40].
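Experimentally, modulations of this kind, whether the \(\cos(\phi_{j_{C}}-\phi_{j_{D}})\) and \(\cos(2\phi_{j_{C}}-2\phi_{j_{D}})\) terms of Eq. (29) or the \(\sin\phi\) terms appearing below, are isolated as Fourier moments of the measured azimuthal distributions. A minimal illustration of the estimator (the code and the asymmetry value are a hypothetical example of ours, not taken from any of the cited analyses):

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = 0.08        # hypothetical modulation amplitude
n_events = 500_000

# Sample dphi in [-pi, pi) from pdf(dphi) proportional to 1 + A_true*cos(dphi),
# using accept-reject against a flat envelope.
cand = rng.uniform(-np.pi, np.pi, size=4 * n_events)
keep = rng.uniform(0.0, 1.0 + abs(A_true), size=cand.size) < 1.0 + A_true * np.cos(cand)
dphi = cand[keep][:n_events]

# For pdf ~ 1 + A*cos(dphi), one has <cos(dphi)> = A/2,
# so 2*<cos(dphi)> is an estimator of the amplitude A.
A_est = 2.0 * np.cos(dphi).mean()
print(f"true A = {A_true:.3f}, estimated A = {A_est:.3f}")
```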
Here, we limit ourselves to reproducing the terms that are most interesting for our discussion: \[\frac{d\sigma}{dx\,dy\,dz_{hh}\,d\phi_{S}\,d\mathbf{R}_{\perp}d\zeta}=\frac{2\alpha^{2}}{xyQ^{2}}\,\frac{y^{2}}{2\,(1-\varepsilon)}\,\bigg{\{}F_{UU,T}+\ldots+|\mathbf{S}_{T}|\,\bigg{[}\varepsilon\,\sin(\phi_{R}+\phi_{S})\,F_{UT}^{\sin(\phi_{R}+\phi_{S})}+\ldots\bigg{]}\] \[+S_{L}\bigg{[}\sqrt{2\,\varepsilon(1+\varepsilon)}\,\sin\phi_{R}\,F_{UL}^{\sin\phi_{R}}+\ldots\bigg{]} \tag{30}\] \[+\lambda\,\sqrt{2\,\varepsilon(1-\varepsilon)}\,\sin\phi_{R}\,F_{LU}^{\sin\phi_{R}}+\ldots+|\mathbf{S}_{T}|\,\lambda\,\bigg{[}\ldots\bigg{]}\bigg{\}}\,,\] where \(\alpha\) is the fine structure constant, \(x=Q^{2}/2P_{A}\cdot q\approx k^{+}/P_{A}^{+}\) is the fraction of target momentum carried by a parton with 4-momentum \(k\) and fractional charge \(e_{q}\), \(y=P_{A}\cdot q/P_{A}\cdot\ell\approx(E_{\ell}-E_{\ell}^{\prime})/E_{\ell}\) is the fraction of beam energy transferred to the hadronic system, \(\lambda\) is the beam helicity, \(S_{L}\) is the target longitudinal polarization, \(\varepsilon\) is the ratio of longitudinal and transverse photon fluxes, and \[\frac{y^{2}}{2\,(1-\varepsilon)}\approx\left(1-y+\frac{1}{2}y^{2}\right),\qquad\frac{y^{2}}{2\,(1-\varepsilon)}\,\varepsilon\approx(1-y) \tag{31}\] \[\frac{y^{2}}{2\,(1-\varepsilon)}\,\sqrt{2\,\varepsilon(1+\varepsilon)}\approx(2-y)\,\sqrt{1-y},\qquad\frac{y^{2}}{2\,(1-\varepsilon)}\,\sqrt{2\,\varepsilon(1-\varepsilon)}\approx y\,\sqrt{1-y}\;. \tag{32}\] The structure functions of interest can be written in terms of PDFs and DiFFs in the following way \[F_{UT}^{\sin(\phi_{R}+\phi_{S})}=\frac{1}{4\pi}\,x\sum_{q}\,e_{q}^{2}\,\frac{|\mathbf{R}_{\perp}|}{M_{hh}}\,h_{1}^{q}(x)\,H_{1}^{\spherical\,q}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})\;, \tag{33}\] \[F_{LU}^{\sin\phi_{R}}=\frac{2M}{Q}\,\frac{1}{4\pi}\,x\sum_{q}\,e_{q}^{2}\,\frac{|\mathbf{R}_{\perp}|}{M_{hh}}\Bigg{[}e^{q}(x)\,H_{1}^{\spherical\,q}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})+\frac{M_{hh}}{M}\,f_{1}^{q}(x)\,\frac{\tilde{G}^{\spherical\,q}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})}{z_{hh}}\Bigg{]}, \tag{34}\] \[F_{UL}^{\sin\phi_{R}}=\frac{2M}{Q}\,\frac{1}{4\pi}\,x\sum_{q}\,e_{q}^{2}\,\frac{|\mathbf{R}_{\perp}|}{M_{hh}}\Bigg{[}h_{L}^{q}(x)\,H_{1}^{\spherical\,q}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})+\frac{M_{hh}}{M}\,g_{1}^{q}(x)\,\frac{\tilde{G}^{\spherical\,q}(z_{hh},\zeta,\mathbf{R}_{\perp}^{2})}{z_{hh}}\Bigg{]}\;. \tag{35}\]

Figure 6: Kinematics for the semi-inclusive deep-inelastic scattering of a lepton with initial momentum \(\ell\) and final momentum \(\ell^{\prime}\) on a hadronic target with transverse polarization \(\mathbf{S}_{T}\) oriented along \(\phi_{S}\) with respect to the scattering plane formed by \(\mathbf{\ell}\) and \(\mathbf{\ell}^{\prime}\). The final state can be either a pair of unpolarized hadrons with momenta \(P_{1}\) and \(P_{2}\) with azimuthal orientation \(\phi_{R}\) and total momentum \(P=P_{1}+P_{2}\) (left panel), or a hadron with momentum \(P_{h}\) and azimuthal orientation \(\phi_{h}\) inside a jet with standard axis \(\hat{\mathbf{J}}\) (right panel). In both cases, the parallel kinematics is considered where \(\mathbf{P}\) and \(\hat{\mathbf{J}}\) are along the momentum transfer \(\mathbf{q}=\mathbf{\ell}-\mathbf{\ell}^{\prime}\), which identifies the \(\hat{z}\) axis.

Equation (33) represents the standard way to address the chiral-odd transversity PDF \(h_{1}(x)\) in a collinear framework.
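Incidentally, the approximations quoted in Eqs. (31)-(32) become exact identities if \(\varepsilon\) takes its standard massless-limit form \(\varepsilon=(1-y)/(1-y+y^{2}/2)\) (this expression is not written explicitly in the text, so we assume it here); a short symbolic check:

```python
import sympy as sp

y = sp.symbols('y', positive=True)
eps = (1 - y) / (1 - y + y**2 / 2)   # assumed massless-limit photon-flux ratio
pref = y**2 / (2 * (1 - eps))        # overall factor in Eq. (30)

checks = {
    "1 - y + y^2/2":        pref - (1 - y + y**2 / 2),
    "1 - y":                pref * eps - (1 - y),
    # compare squares to avoid square-root branch issues:
    "[(2-y) sqrt(1-y)]^2":  pref**2 * 2*eps*(1 + eps) - (2 - y)**2 * (1 - y),
    "[y sqrt(1-y)]^2":      pref**2 * 2*eps*(1 - eps) - y**2 * (1 - y),
}
for name, diff in checks.items():
    print(name, sp.simplify(diff))   # each difference simplifies to 0
```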
The integral of \(h_{1}(x)\) is the tensor charge, which might represent a possible portal to new physics beyond the Standard Model [41], since it is relevant for explorations of new possible CP-violating couplings [42] or effects induced by tensor operators not included in the Standard Model Lagrangian [43]. The tensor charge can be computed in lattice QCD with very high precision [44]. Future facilities will have a large impact on the current uncertainty on the tensor charge extracted from phenomenological studies [45; 46; 47]. Equation (34) is particularly interesting because it contains the contribution of the twist-3 chiral-odd PDF \(e(x)\), which encodes crucial information on quark-gluon-quark correlations (see, e.g., [48]). The integral of \(e(x)\) is the scalar charge of the nucleon and is related to the so-called \(\sigma\) term, which plays an important role in understanding the emergence of the nucleon mass from chiral symmetry breaking [49] and its decomposition in terms of contributions from quarks and gluons [50; 51; 52; 53]. The nucleon scalar charge can be important also in the search for physics beyond the Standard Model, since it probes scalar interactions and can be relevant for dark matter searches (see, e.g., Refs. [43; 54]). The nucleon scalar charge and the \(\sigma\) term have been computed in lattice QCD (for a review, see Ref. [55] and references therein). The \(e(x)\) has been studied in several non-perturbative models of hadron structure [56; 57; 58; 59; 60; 61; 62; 63; 64]. It can be extracted in the TMD framework by considering the beam spin asymmetry that isolates the \(d\sigma_{LU}\) cross section for inclusive single-hadron production, where the chiral-odd partner is represented by the Collins function \(H_{1}^{\spherical}\) [65; 66; 67]. However, this observable contains three other contributions [68; 69; 70]. Moreover, each term is represented by an intricate convolution over transverse momenta. Therefore, it may be more convenient to work in the collinear framework and isolate \(e(x)\) through the simple product with its chiral-odd partner represented by the DiFF \(H_{1}^{\spherical}\), as shown in Eq. (34). In order to reach this goal, we need to deal with the second contribution in Eq. (34), which depends on the unknown twist-3 DiFF \(\tilde{G}^{\spherical}\). Calculations in the spectator model show that \(\tilde{G}^{\spherical}\) turns out to be small, and possibly with opposite sign to \(H_{1}^{\spherical}\) [71]. The extraction of \(e(x)\) from CLAS and CLAS12 data projected onto the \(x\) dependence was performed assuming that the \(M_{hh}\) dependence of \(\tilde{G}^{\spherical}\) is the same as that of \(H_{1}^{\spherical}\), rescaled by a constant factor [72]. A possible strategy to overcome this problem could be to study the ratio \(d\sigma_{LU}/d\sigma_{UL}\) [33]. In fact, if the term proportional to \(\tilde{G}^{\spherical}\) were negligible, using the flavor symmetries of \(H_{1}^{\spherical}\) [21; 22; 23; 24; 25; 73; 74; 75; 76] the ratio should not exhibit any dependence on \((z_{hh},M_{hh})\), since the latter should cancel out between numerator and denominator. On the contrary, any observed dependence would hint at a non-negligible contribution from the twist-3 DiFF, making the extraction of \(e(x)\) more challenging. From this perspective, collecting more information on these observables from other channels should give more insight.
Hence, it might be useful to consider the \(\ell+A\to\ell^{\prime}+(\text{Jet}\,C)+X\) process depicted in the right panel of Fig. 6, namely the SIDIS on a (polarized) hadron target where a hadron with momentum \(P_{h}\) is inclusively produced inside a jet with transverse momentum \(\mathbf{j}_{\perp}\) with respect to the jet axis \(\hat{\mathbf{J}}\), taken parallel to the \(\hat{z}=\hat{\mathbf{q}}\) axis. The cross section of this process has the same structure as Eq. (30). By using the correspondence of Sec. V, the structure functions in Eqs. (33)-(35) become \[F_{UT}^{\sin(\phi_{j}+\phi_{S})} = x\,\sum_{q}\,e_{q}^{2}\,\frac{|\mathbf{j}_{\perp}|}{M_{h}}\,h_{1}^{q}(x)\,\mathcal{H}_{1}^{\perp\,q}(z_{J},z_{h},\mathbf{j}_{\perp}^{2};Q,Qr)\;, \tag{36}\] \[F_{LU}^{\sin\phi_{j}} = \frac{2M}{Q}\,x\,\sum_{q}\,e_{q}^{2}\,\frac{|\mathbf{j}_{\perp}|}{M_{h}}\Bigg{[}e^{q}(x)\,\mathcal{H}_{1}^{\perp\,q}(z_{J},z_{h},\mathbf{j}_{\perp}^{2};Q,Qr)+\frac{M_{h}}{M}\,f_{1}^{q}(x)\,\frac{\tilde{\mathcal{G}}^{\perp\,q}(z_{J},z_{h},\mathbf{j}_{\perp}^{2};Q,Qr)}{z_{h}}\Bigg{]}\;, \tag{37}\] \[F_{UL}^{\sin\phi_{j}} = \frac{2M}{Q}\,x\,\sum_{q}\,e_{q}^{2}\,\frac{|\mathbf{j}_{\perp}|}{M_{h}}\left[h_{L}^{q}(x)\,\mathcal{H}_{1}^{\perp\,q}(z_{J},z_{h},\mathbf{j}_{\perp}^{2};Q,Qr)+\frac{M_{h}}{M}\,g_{1}^{q}(x)\,\frac{\tilde{\mathcal{G}}^{\perp\,q}(z_{J},z_{h},\mathbf{j}_{\perp}^{2};Q,Qr)}{z_{h}}\right]\;, \tag{38}\] where \(Qr\) is the typical scale of the jet with radius \(r\). Equation (36) indicates a new way to address the collinear transversity PDF \(h_{1}(x)\), namely through the "Collins-in-jet" effect in the SIDIS process. Equations (37),(38) show that in the same framework of the SIDIS "Collins-in-jet" effect one can also address the twist-3 collinear PDFs \(e(x)\) and \(h_{L}(x)\), provided that the remaining contribution given by \(\tilde{\mathcal{G}}^{\perp}\) is small. The \(\tilde{\mathcal{G}}^{\perp}\) is a new twist-3 jTMDFF that corresponds to the above twist-3 DiFF \(\tilde{G}^{\spherical}\). As in the dihadron case, by using the current knowledge on the "Collins-in-jet" effect one could predict the dependence of the ratio \(d\sigma_{LU}/d\sigma_{UL}\) on the kinematic variables of the final state. The analysis of any possible deviation of data from these predictions would indicate whether the contribution of the twist-3 \(\tilde{\mathcal{G}}^{\perp}\) is negligible or not.

## VII Conclusions

Transverse-Momentum-Dependent (TMD) factorization gives the possibility of measuring many interesting signals and of accessing many intriguing features of the structure of hadrons. However, one of its shortcomings is that it cannot be applied to hadronic collisions with observed hadronic final states, such as the process \(A+B\to C+D+X\). Two alternative mechanisms have been proposed to recover part of the versatility of TMDs while preserving the applicability to hadronic processes: the inclusive production of dihadrons, or of a hadron inside a jet. The inclusive production of dihadrons, namely of two hadrons originating from the fragmentation of the same parton, can be usefully studied in the collinear framework, where the transverse momenta of all partons are integrated over; it involves universal collinear Dihadron Fragmentation Functions (DiFFs).
The inclusive production of a hadron inside a jet, namely the inclusive production of a jet with a detected substructure, can be studied in a hybrid factorization approach involving collinear partonic functions in the initial state and TMD hadron-in-jet Fragmentation Functions (jTMDFFs) in the final state. In this paper, we have explored similarities between the two formalisms of dihadron and hadron-in-jet production, and we have established a set of correspondence rules between DiFFs and jTMDFFs. We have used this correspondence to transfer to the jTMDFF case some interesting results obtained with DiFFs, in particular for the inclusive production of two back-to-back dihadrons in unpolarized proton-proton collisions, and for the inclusive production of a dihadron in semi-inclusive deep-inelastic scattering. In unpolarized proton-proton collisions with the inclusive production of two back-to-back jets where one hadron is detected inside each jet, the cross section contains specific modulations that can distinguish whether the hadron is detected inside a jet generated by a quark or by a gluon. Moreover, one of the two modulations is sensitive to a polarized jTMDFF directly linked to the TMD fragmentation function of a linearly polarized gluon. In semi-inclusive deep-inelastic scattering, the cross section for an unpolarized lepton probe and a transversely polarized proton target offers a new channel to extract the chiral-odd transversity collinear parton distribution function \(h_{1}(x)\), which is connected to the puzzling proton tensor charge. The cross section for a longitudinally polarized lepton and an unpolarized proton contains a term proportional to the chiral-odd subleading-twist collinear parton distribution function \(e(x)\), which is connected to the well-known nucleon \(\sigma\) term and to the physics of QCD chiral symmetry breaking. The above examples illustrate how useful the formal comparison between DiFFs and jTMDFFs can be. The inclusive production of dihadrons has already been measured in hadronic colliders [77; 78; 79], \(e^{+}e^{-}\) colliders [80] and fixed-target experiments [81; 82; 83; 84]. The inclusive production of hadrons-in-jet has been measured only in hadronic colliders [85; 86]. Both channels will be (abundantly) available at the future Electron-Ion Collider [46; 47]. Therefore, we think it is worth exploring the above-mentioned possibilities and pushing further the analysis of the consequences of the correspondence rules set in this paper.

## Appendix A Cross section for inclusive dihadron production

In Ref. [36], Eq. (15) shows the unpolarized cross section for the process \(A+B\to(C_{1}\,C_{2})+X\). After integrating over the azimuthal orientation \(\phi_{S_{B}}\) of the polarization of hadron \(B\), it reads \[\frac{d\sigma_{UU}}{d\eta\,d|\mathbf{P}_{T}|\,d\cos\theta_{C}\,dM_{hh}^{2}\,d\phi_{R}}=\] \[=\frac{|\mathbf{P}_{T}|}{2\pi}\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{hhC}}{z_{hhC}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\,z_{hhC}\delta(z_{hhC}-\bar{z}_{hh})\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\,D_{1}^{c}(z_{hhC},\cos\theta_{C},M_{hh}^{2})\] \[=\frac{|\mathbf{P}_{T}|}{2\pi s}\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{hhC}}{x_{A}x_{B}z_{hhC}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\,\hat{s}^{2}\delta(\hat{s}+\hat{t}+\hat{u})\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\,D_{1}^{c}(z_{hhC},\cos\theta_{C},M_{hh}^{2})\, \tag{16}\] where \(\eta,|\mathbf{P}_{T}|,M_{hh}^{2}\) and \(\phi_{R}\) are defined in Sec.
II.1, and \(\theta_{C}\) is the polar angle between \(\mathbf{P}\) and the direction of the back-to-back emission of the two hadrons in their center-of-mass (c.m.) frame (see Fig. 3 of Ref. [18]). It turns out that \(\zeta=a+b\cos\theta_{C}\), with \(a,b\) functions of only the invariant mass \(M_{hh}\)[18]. Therefore, the Jacobian of the transformation is \(d\zeta=2|\mathbf{R}|/M_{hh}\,d\cos\theta_{C}\) with [36] \[|\mathbf{R}|=\frac{1}{2}\sqrt{M_{hh}^{2}-2(M_{1}^{2}+M_{2}^{2})+(M_{1}^{2}-M_{2}^{2} )^{2}/M_{hh}^{2}}\;. \tag{10}\] Using the kinematic relations in Eq. (5) and the obvious definition \(\mathbf{R}_{\perp}=(|\mathbf{R}_{\perp}|\cos\phi_{R},\,|\mathbf{R}_{\perp}|\sin\phi_{R})\), we can compute the Jacobian of the transformation \(dM_{hh}^{2}\,d\phi_{R}=d\mathbf{R}_{\perp}\,8/(1-\zeta^{2})\). The cross section in Eq. (10) can be conveniently rewritten as \[\frac{d\sigma_{UU}}{d\eta\,d|\mathbf{P}_{T}|\,d\zeta\,d\mathbf{R}_{\perp}}=\sum_{a,b,c, d}\int\frac{dx_{A}dx_{B}dz_{hhC}}{x_{A}x_{B}z_{hhC}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{ b}(x_{B})\,\frac{|\mathbf{P}_{T}|\,\hat{s}}{2\pi s}\,\frac{d\hat{\sigma}_{ab \to cd}}{d\hat{t}}\,\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,D_{1}^{c}(z_{hhC },\zeta,\mathbf{R}_{\perp}^{2})\;, \tag{11}\] where \[D_{1}^{c}(z_{hhC},\cos\theta_{C},M_{hh}^{2})=2\frac{|\mathbf{R}|}{M_{hh}}\frac{1- \zeta^{2}}{8}\,D_{1}^{c}(z_{hhC},\zeta,\mathbf{R}_{\perp}^{2}) \tag{12}\] takes into account the above Jacobians. In a similar way, Eq. (16) of Ref. [36] describes the polarized cross section for the process \(A+B^{\uparrow}\to(C_{1}\,C_{2})+X\): \[\frac{d\sigma_{UT}}{d\eta\,d|\mathbf{P}_{T}|\,d\cos\theta_{C}\,dM_{hh }^{2}\,d\phi_{R}d\phi_{S_{B}}}=\frac{|\mathbf{P}_{T}|}{4\pi^{2}}\,|\mathbf{S}_{BT}|\, \sin(\phi_{S_{B}}-\phi_{R})\] \[\qquad\times\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{hhC}}{z_{hhC} ^{2}}\,f_{1}^{a}(x_{A})\,h_{1}^{b}(x_{B})\,z_{hhC}\delta(z_{hhC}-\bar{z}_{hh}) \,\frac{d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{\uparrow}d}}{d\hat{t}}\, \frac{|\mathbf{R}|}{M_{hh}}\,\sin\theta_{C}\,H_{1}^{\spherical}(z_{hhC},\cos\theta_{ C},M_{hh}^{2})\] \[\qquad=\frac{|\mathbf{P}_{T}|}{4\pi^{2}s}\,|\mathbf{S}_{BT}|\,\sin(\phi_{ S_{B}}-\phi_{R})\] \[\qquad\times\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{hhC}}{x_{A}x_ {B}z_{hhC}^{2}}\,f_{1}^{a}(x_{A})\,h_{1}^{b}(x_{B})\,\hat{s}^{2}\delta(\hat{s} +\hat{t}+\hat{u})\,\frac{d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{ \uparrow}d}}{d\hat{t}}\,\frac{|\mathbf{R}_{\perp}|}{M_{hh}}\,H_{1}^{\spherical}(z_{ hhc},\cos\theta_{C},M_{hh}^{2})\;, \tag{13}\] where \(\mathbf{S}_{BT}\) is the transverse polarization of the colliding proton with orientation \(\phi_{S_{B}}\) with respect to the reaction plane, and \(h_{1}^{b}\) is the transversity distribution for the transversely polarized parton \(b\) with fractional momentum \(x_{B}\). The elementary cross sections \(d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{\uparrow}d}\) describe the annihilation of parton \(a\) and \(b\) with transfer of the transverse polarization of the latter to parton \(c\) while summing on the undetected fragments from parton \(d\). All the possible independent flavor combinations are listed in the Appendix of Ref. [36]. 
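The expression for \(|\mathbf{R}|\) quoted above is just the two-body break-up momentum of the pair in its c.m. frame, and is easy to sanity-check numerically (a minimal sketch; the masses below are merely example values):

```python
import numpy as np

def R_mod(M_hh, M1, M2):
    """|R|: relative momentum of the two hadrons in their c.m. frame,
    as quoted in the text for the Jacobian d(zeta) = 2|R|/M_hh d(cos theta_C)."""
    arg = M_hh**2 - 2 * (M1**2 + M2**2) + (M1**2 - M2**2) ** 2 / M_hh**2
    return 0.5 * np.sqrt(np.maximum(arg, 0.0))  # guard against rounding at threshold

M_pi, M_K = 0.140, 0.494  # GeV, approximate

# Equal masses: |R| reduces to sqrt(M_hh^2/4 - M1^2) (back-to-back momenta).
print(R_mod(0.8, M_pi, M_pi), np.sqrt(0.8**2 / 4 - M_pi**2))
# At threshold M_hh = M1 + M2 the pair is at rest: |R| = 0.
print(R_mod(M_pi + M_K, M_pi, M_K))
```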
By applying the same transformation of variables from \(d\cos\theta_{C}dM_{hh}^{2}d\phi_{R}\) to \(d\zeta d\mathbf{R}_{\perp}\), we get \[\frac{d\sigma_{UT}}{d\eta\,d|\mathbf{P}_{T}|\,d\zeta\,d\mathbf{R}_{\perp }d\phi_{S_{B}}}=\frac{|\mathbf{S}_{BT}|}{4\pi^{2}}\,\sin(\phi_{S_{B}}-\phi_{R})\] \[\qquad\times\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{hhC}}{x_{A}x_ {B}z_{hhC}^{2}}\,f_{1}^{a}(x_{A})\,h_{1}^{b}(x_{B})\,\frac{|\mathbf{P}_{T}|\,\hat{ s}}{s}\,\frac{d\Delta\hat{\sigma}_{ab^{\uparrow}\to c^{\uparrow}d}}{d\hat{t}}\, \hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,\frac{|\mathbf{R}_{\perp}|}{M_{hh}}\,H_{1} ^{\spherical}(z_{hhC},\zeta,\mathbf{R}_{\perp}^{2})\;, \tag{14}\] where \[H_{1}^{\spherical}(z_{hhC},\cos\theta_{C},M_{hh}^{2})=2\frac{|\mathbf{R}|}{M_{hh}} \frac{1-\zeta^{2}}{8}\,H_{1}^{\spherical}(z_{hhC},\zeta,\mathbf{R}_{\perp}^{2})\;. \tag{15}\] We generalize the above formulae to the case of the inclusive production of two dihadrons. In Ref. [36], from Eqs. (20-22) the unpolarized cross section for the process \(A+B\to(C_{1}C_{2})_{C}+(D_{1}D_{2})_{D}+X\) reads (after integrating on the polarizations of initial hadrons) \[\frac{d\sigma_{UU}}{d\eta_{C}\,d|\mathbf{P}_{CT}|\,d\cos\theta_{C}\,dM_{ C}^{2}\,d\phi_{R_{C}}\,d\eta_{D}\,d|\mathbf{P}_{DT}|\,d\cos\theta_{D}\,dM_{D}^{2}\,d \phi_{R_{D}}}=\] \[\quad=\frac{|\mathbf{P}_{CT}||\mathbf{P}_{DT}|}{8\pi^{2}}\,\sum_{a,b}\int \frac{dx_{A}dx_{B}dz_{hhC}dz_{hhD}}{z_{hhC}^{2}z_{hhD}^{2}}\,f_{1}^{a}(x_{A}) \,f_{1}^{b}(x_{B})\,x_{B}\delta(x_{B}-\bar{x}_{B})\] \[\quad\times z_{hhC}\delta(z_{hhC}-\bar{z}_{hhC})\,\frac{z_{hhC}z_ {hhD}^{2}}{|\mathbf{P}_{CT}||\mathbf{P}_{DT}|}\,\delta(z_{hhD}-\bar{z}_{hhD})\] \[\quad\times\Bigg{\{}\sum_{c,d}\Bigg{[}\,\frac{d\hat{\sigma}_{ab \to cd}}{d\hat{t}}\,D_{1}^{c}(z_{hhC},\cos\theta_{C},M_{C}^{2})\,D_{1}^{d}(z_ {hhD},\cos\theta_{D},M_{D}^{2})\] \[\qquad\qquad+\cos(\phi_{R_{C}}-\phi_{R_{D}})\,\frac{d\Delta\hat{ \sigma}_{ab\to c^{\dagger}d^{\ast}}}{d\hat{t}}\,\frac{|\mathbf{R}_{C}|}{M_{C}}\, \sin\theta_{C}\,H_{1}^{\spherical c}(z_{hhC},\cos\theta_{C},M_{C}^{2})\,\frac{| \mathbf{R}_{D}|}{M_{D}}\,\sin\theta_{D}\,H_{1}^{\spherical d}(z_{hhD},\cos\theta_{D}, M_{D}^{2})\,\Bigg{]}\] \[\quad+\cos(2\phi_{R_{C}}-2\phi_{R_{D}})\,\frac{d\Delta\hat{ \sigma}_{ab\to g^{\dagger}g^{\dagger}}}{d\hat{t}}\,\frac{|\mathbf{R}_{C}|^{2}}{M_{ C}^{2}}\,\sin^{2}\theta_{C}\,H_{1}^{\spherical g}(z_{hhC},\cos\theta_{C},M_{C}^{2})\, \frac{|\mathbf{R}_{D}|^{2}}{M_{D}^{2}}\,\sin^{2}\theta_{D}\,\,H_{1}^{\spherical g}(z _{hhD},\cos\theta_{D},M_{D}^{2})\Bigg{\}}\;, \tag{10}\] where the momenta and the angles of the second hadron pair are defined in complete analogy with the first pair by replacing the labels \(c,C\) with \(d,D\). The additional delta functions are due to momentum conservation in the elementary \(ab\to cd\) process both in the longitudinal direction of the \(\hat{z}\) axis, identified with \(\mathbf{P}_{A}\), and in the transverse plane. In collinear kinematics, the conservation in the transverse plane is trivially \(\mathbf{P}_{CT}/z_{hhC}=-\mathbf{P}_{DT}/z_{hhD}\). 
This implies that the above cross section is integrated in the azimuthal angles of \(\mathbf{P}_{CT}\) and \(\mathbf{P}_{DT}\) with the condition \(\phi_{C}=\phi_{D}+\pi\) and that the moduli are constrained by [8] \[\delta\left(\frac{|\mathbf{P}_{CT}|}{z_{hhC}}-\frac{|\mathbf{P}_{DT}|}{z_{hhD}}\right) =\frac{z_{hhD}^{2}z_{hhC}}{|\mathbf{P}_{CT}||\mathbf{P}_{DT}|}\,\delta(z_{ hhD}-\bar{z}_{hhD})\;,\qquad\bar{z}_{hhD}=\frac{|\mathbf{P}_{DT}|}{\sqrt{s}}\, \frac{e^{\eta_{C}}+e^{\eta_{D}}}{x_{A}}\;. \tag{11}\] The conservation along the \(\hat{z}\) axis in the c.m. frame of the annihilation reads \(x_{A}P_{Az}-x_{B}P_{Bz}=P_{Cz}/z_{hhC}+P_{Dz}/z_{hhD}\). Using the previous delta function, after some manipulation it can be rewritten as \[\delta\left(\eta_{C}+\eta_{D}+\log\frac{x_{B}}{x_{A}}\right)=x_{ B}\delta(x_{B}-\bar{x}_{B})\;,\qquad\bar{x}_{B}=x_{A}e^{-\eta_{C}}e^{-\eta_{D}}\;. \tag{12}\] Finally, the third delta function is the analogue of Eq. (10). Because of Eq. (12), it can be rewritten as \[\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})=z_{hhC}\delta(z_{hhC}-\bar{z}_{hhC})\;, \qquad\bar{z}_{hhC}=\frac{|\mathbf{P}_{CT}|}{\sqrt{s}}\,\frac{x_{A}\,e^{-\eta_{C}} +x_{B}\,e^{\eta_{C}}}{x_{A}\,x_{B}}=\frac{|\mathbf{P}_{CT}|}{\sqrt{s}}\,\frac{e^{ \eta_{C}}+e^{\eta_{D}}}{x_{A}}\;. \tag{13}\] In Eq. (10), the elementary cross sections \(d\Delta\hat{\sigma}_{ab\to c^{\dagger}d^{\dagger}}\) involve only quarks for the final partons \(c,d\), while \(d\Delta\hat{\sigma}_{ab\to g^{\dagger}g^{\dagger}}\) contain only final gluons linearly polarized in the transverse plane. Hence, the \(H_{1}^{\spherical g}\) function describes the fragmentation of such linearly polarized gluons into pairs of unpolarized hadrons. For both cases of final polarized quarks and gluons, all nonvanishing combinations are listed in the Appendix of Ref. [36]. By introducing the same transformation of variables used for the inclusive production of a single hadron pair, the cross section of Eq. (10) can be rewritten as \[\frac{d\sigma_{UU}}{d\eta_{C}\,d|\mathbf{P}_{CT}|\,d\zeta_{C}\,d\mathbf{ R}_{C\perp}\,d\eta_{D}\,d|\mathbf{P}_{DT}|\,d\zeta_{D}\,d\mathbf{R}_{D\perp}}=\frac{|\mathbf{P}_{CT}|| \mathbf{P}_{DT}|}{8\pi^{2}}\,\sum_{a,b}\int\frac{dx_{A}dx_{B}dz_{hhC}dz_{hhD}}{z_{ hhC}^{2}z_{hhD}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\] \[\quad\times\hat{s}\delta(\hat{s}+\hat{t}+\hat{u})\,\delta\left( \frac{|\mathbf{P}_{CT}|}{z_{hhC}}-\frac{|\mathbf{P}_{DT}|}{z_{hhD}}\right)\,\delta \left(x_{A}P_{Az}-x_{B}P_{Bz}-\frac{P_{Cz}}{z_{hhC}}-\frac{P_{Dz}}{z_{hhD}}\right)\] \[\quad\times\Bigg{\{}\sum_{c,d}\Bigg{[}\,\frac{d\hat{\sigma}_{ab \to cd}}{d\hat{t}}\,D_{1}^{c}(z_{hhC},\zeta_{C},\mathbf{R}_{C\perp}^{2})\,D_{1}^{d}(z _{hhD},\zeta_{D},\mathbf{R}_{D\perp}^{2})\] \[+\cos(\phi_{R_{C}}-\phi_{R_{D}})\,\frac{d\Delta\hat{\sigma}_{ab\to c^{ \dagger}d^{\ast}}}{d\hat{t}}\,\frac{|\mathbf{R}_{C\perp}|}{M_{C}}\,H_{1}^{\spherical} \,c(z_{hhC},\zeta_{C},\mathbf{R}_{C\perp}^{2})\,\frac{|\mathbf{R}_{D\perp}|}{M_{D}}\,H_{ 1}^{\spherical}\,d(z_{hhD},\zeta_{D},\mathbf{R}_{D\perp}^{2})\Bigg{]}\] \[+\cos(2\phi_{R_{C}}-2\phi_{R_{D}})\,\frac{d\Delta\hat{\sigma}_{ab \to g^{\dagger}g^{\dagger}}}{d\hat{t}}\,\frac{|\mathbf{R}_{C\perp}|^{2}}{M_{C}^{2} }\,\,H_{1}^{\spherical}\,(z_{hhC},\zeta_{C},\mathbf{R}_{C\perp}^{2})\,\frac{|\mathbf{R}_{D \perp}|^{2}}{M_{D}^{2}}\,\,H_{1}^{\spherical}g\,(z_{hhD},\zeta_{D},\mathbf{R}_{D\perp} ^{2})\Bigg{\}}\;. \tag{55}\] ## Appendix B Elementary hard cross section for \(a+b\to c+d\) We compare the elementary cross section used in Refs. 
[36; 27] and in the relevant literature for the inclusive production of a hadronic final state in hadron-hadron collisions. In Refs. [87; 88], the cross section for the process \(A+B\to C+X\) reads \[\frac{E_{C}d\sigma}{dP_{C}}=\sum_{abcd}\int\frac{dx_{A}dx_{B}dz_{C}}{\pi z_{C}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\,\frac{d\hat{\sigma}_{ab\to cd}}{d\hat{t}}\,\hat{s}\,\delta(\hat{s}+\hat{t}+\hat{u})\,D_{1}(z_{C})\;. \tag{56}\] The same structure of the cross section can be obtained in Ref. [38] after making the transformation of variables \(\hat{v}=1+\hat{t}/\hat{s}\) and \(\hat{w}=-\hat{u}/(\hat{s}+\hat{t})\), but obtaining the above elementary cross section multiplied by \(\hat{s}\). By labeling the \(d\hat{\sigma}\) of Refs. [87; 88] as \(d\hat{\sigma}^{\rm ToCa}\) and the one of Ref. [38] as \(d\hat{\sigma}^{\rm ACGG}\), we get \[\frac{d\hat{\sigma}^{\rm ACGG}_{ab\to cd}}{d\hat{t}}=\hat{s}\,\frac{d\hat{\sigma}^{\rm ToCa}_{ab\to cd}}{d\hat{t}}\;. \tag{57}\] Equation (56) can be further integrated over the angle of \(\mathbf{P}_{T}\) and made differential in the pseudorapidity \(\eta\) of the final hadron through the transformation \(2\pi|\mathbf{P}_{T}|d|\mathbf{P}_{T}|dP_{z}/E\to 2\pi|\mathbf{P}_{T}|d|\mathbf{P}_{T}|d\eta\): \[\frac{d\sigma}{d\eta d|\mathbf{P}_{T}|}=2|\mathbf{P}_{T}|\sum_{a,b,c,d}\int\frac{dx_{A}dx_{B}dz_{C}}{z_{C}^{2}}\,f_{1}^{a}(x_{A})\,f_{1}^{b}(x_{B})\,\frac{d\hat{\sigma}^{\rm ToCa}_{ab\to cd}}{d\hat{t}}\,\hat{s}\,\delta(\hat{s}+\hat{t}+\hat{u})\,D_{1}(z_{C})\;. \tag{58}\] This expression is formally identical to the cross section for the \(A+B\to(C_{1}\,C_{2})+X\) process of Eq. (16) after integrating over \(d\cos\theta_{C}\), \(d\phi_{R}\) and using Eq. (10) (apart from the dependence on the dihadron invariant mass \(M_{hh}\) through the unpolarized DiFF \(D_{1}\), which does not affect the argument about the elementary cross section). By labeling the \(d\hat{\sigma}\) of Eq. (16) as \(d\hat{\sigma}^{\rm DiFF}\), we get \[\frac{d\hat{\sigma}^{\rm ACGG}_{ab\to cd}}{d\hat{t}}=\hat{s}\,\frac{d\hat{\sigma}^{\rm ToCa}_{ab\to cd}}{d\hat{t}}=\hat{s}\,\frac{d\hat{\sigma}^{\rm DiFF}_{ab\to cd}}{d\hat{t}}\;. \tag{59}\] The above relation can be cross-checked by inspecting the elementary cross sections of various partonic channels in the Appendix of Refs. [38] and [36], respectively. By recalling the relations (9) and (10), Eq. (17) has the same structure as Eq. (58) (apart from the dependence on the variables \((z_{h},\mathbf{j}_{\perp})\) describing the jet substructure through the unpolarized jTMDFF \(\mathcal{D}_{1}\), which does not affect the argument about the elementary cross section) provided that \[\frac{\pi z_{JC}}{\hat{s}}\,H_{ab\to cd}^{U}=\frac{d\hat{\sigma}^{\rm ToCa}_{ab\to cd}}{d\hat{t}}\;. \tag{60}\] Because of Eq. (59), the above relation leads to Eq. (18).
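These normalization manipulations can also be checked numerically. A minimal sketch, assuming the standard massless parton kinematics \(\hat{s}=x_{A}x_{B}s\), \(\hat{t}=-x_{A}\sqrt{s}\,|\mathbf{P}_{T}|\,e^{-\eta}/z\), \(\hat{u}=-x_{B}\sqrt{s}\,|\mathbf{P}_{T}|\,e^{\eta}/z\) (the relations referred to as Eqs. (9)-(10) in the main text, not reproduced here), which verifies that momentum conservation is saturated at \(z=\bar{z}\) and that \(d(\hat{s}+\hat{t}+\hat{u})/dz=\hat{s}/z\) there, i.e., \(\hat{s}\,\delta(\hat{s}+\hat{t}+\hat{u})=z\,\delta(z-\bar{z})\):

```python
import numpy as np

def mandelstam(xA, xB, s, pT, eta, z):
    """Assumed standard massless 2->2 parton kinematics (see lead-in)."""
    shat = xA * xB * s
    that = -xA * np.sqrt(s) * pT * np.exp(-eta) / z
    uhat = -xB * np.sqrt(s) * pT * np.exp(eta) / z
    return shat, that, uhat

s, pT, eta = 200.0**2, 10.0, 0.5   # example values in GeV units
xA, xB = 0.2, 0.15

# zbar as in App. A: zbar = (pT/sqrt(s)) (xA e^{-eta} + xB e^{eta})/(xA xB)
zbar = pT / np.sqrt(s) * (xA * np.exp(-eta) + xB * np.exp(eta)) / (xA * xB)
shat, that, uhat = mandelstam(xA, xB, s, pT, eta, zbar)
print(shat + that + uhat)          # ~0: the delta function is saturated

f = lambda z: sum(mandelstam(xA, xB, s, pT, eta, z))
h = 1e-6
print((f(zbar + h) - f(zbar - h)) / (2 * h), shat / zbar)  # the two agree
```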
2305.06855
Entropy Constraints for Ground Energy Optimization
We study the use of von Neumann entropy constraints for obtaining lower bounds on the ground energy of quantum many-body systems. Known methods for obtaining certificates on the ground energy typically use consistency of local observables and are expressed as semidefinite programming relaxations. The local marginals defined by such a relaxation do not necessarily satisfy entropy inequalities that follow from the existence of a global state. Here, we propose to add such entropy constraints that lead to tighter convex relaxations for the ground energy problem. We give analytical and numerical results illustrating the advantages of such entropy constraints. We also show limitations of the entropy constraints we construct: they are implied by doubling the number of sites in the relaxation and as a result they can at best lead to a quadratic improvement in terms of the matrix sizes of the variables. We explain the relation to a method for approximating the free energy known as the Markov Entropy Decomposition method.
Hamza Fawzi, Omar Fawzi, Samuel O. Scalet
2023-05-11T14:51:21Z
http://arxiv.org/abs/2305.06855v2
# Entropy Constraints for Ground Energy Optimization

###### Abstract

We study the use of von Neumann entropy constraints for obtaining lower bounds on the ground energy of quantum many-body systems. Known methods for obtaining certificates on the ground energy typically use consistency of local observables and are expressed as semidefinite programming relaxations. The local marginals defined by such a relaxation do not necessarily satisfy entropy inequalities that follow from the existence of a global state. Here, we propose to add such entropy constraints that lead to tighter convex relaxations for the ground energy problem. We give analytical and numerical results illustrating the advantages of such entropy constraints. We also show limitations of the entropy constraints we construct: they are implied by doubling the number of sites in the relaxation and as a result they can at best lead to a quadratic improvement in terms of the matrix sizes of the variables. We explain the relation to a method for approximating the free energy known as the Markov Entropy Decomposition method.

## 1 Introduction

A fundamental computational problem in quantum many-body theory is to compute the ground energy of local Hamiltonians. Consider a multipartite Hilbert space \(\mathcal{H}=\otimes_{v\in V}\mathbb{C}^{d}\) with local dimension \(d\), on a finite set of sites \(V\). A \(k\)-local Hamiltonian is a Hermitian operator on \(\mathcal{H}\) defined as \[H=\sum_{A\in\binom{V}{k}}h_{A}, \tag{1}\] where each of the \(h_{A}\) is a Hermitian operator acting nontrivially only on the set \(A\subset V\), and \(\binom{V}{k}\) is the set of subsets of \(V\) of size \(k\). In this paper, we will be mostly interested in 2-local Hamiltonians, where the interaction can be modeled by a graph \(G=(V,E)\) on the set of sites \(V\), and where a Hamiltonian term \(h_{ij}\) is attached to each edge \(ij\in E\): \[H=\sum_{ij\in E}h_{ij}. \tag{2}\] The _ground energy_ of \(H\) is defined as its smallest eigenvalue. Due to the special structure of \(H\), its matrix representation is generally sparse and thus one can apply standard methods such as Lanczos iterations [10] to compute its minimal eigenvalue. However, since the dimension of \(H\) grows exponentially with \(|V|\), this is only feasible for moderate values of \(|V|\). It is of considerable theoretical and practical interest to find efficient algorithms that scale polynomially in \(|V|\) to approximate the ground energy of local Hamiltonians [14, 15, 16]. The smallest eigenvalue of \(H\) admits the following variational formulation: \[\lambda_{\min}(H)=\min_{\psi\in\mathcal{H}}\frac{\langle\psi,H\psi\rangle}{\langle\psi,\psi\rangle}. \tag{3}\] Variational methods posit a certain form for the state \(\psi=\psi_{\theta}\), and find the value of the parameters \(\theta\) that minimize the objective function of (3). As such, these methods provide upper bounds on \(\lambda_{\min}(H)\). A prominent example is given by tensor network states [1, 10], which have been extremely successful and in particular give provably efficient algorithms for gapped systems in one dimension [14, 15, 16]. Another class of methods that has been studied in the literature is based on convex relaxations and provides lower bounds on \(\lambda_{\min}(H)\). For 2-local Hamiltonians \(H\) of the form (2), computing the energy \(\langle\psi,H\psi\rangle\) only requires knowledge of the two-body marginals \(\rho_{ij}\) of \(|\psi\rangle\langle\psi|\) for \(ij\in E\).
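As a concrete baseline for the exact approach mentioned above, the sparse matrix of a 2-local Hamiltonian can be assembled explicitly and its ground energy computed with a Lanczos-type eigensolver. A minimal sketch (the Heisenberg chain below is our own illustrative choice of Hamiltonian, not an example from the text):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(n, site_ops):
    """Tensor product over n qubits, inserting the given {site: 2x2 op}."""
    out = sp.identity(1, format="csr", dtype=complex)
    for v in range(n):
        op = site_ops.get(v, np.eye(2))
        out = sp.kron(out, sp.csr_matrix(op), format="csr")
    return out

def heisenberg_chain(n):
    """2-local H = sum_i (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}) on a path."""
    H = sp.csr_matrix((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for op in (sx, sy, sz):
            H = H + embed(n, {i: op, i + 1: op})
    return H

n = 12                                   # 4096-dimensional Hilbert space
H = heisenberg_chain(n)
E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
print(f"lambda_min(H) for n={n}: {E0.real:.6f}")
```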
If we denote by \(\mathcal{C}\) the set of two-body marginals that are consistent with a global state on \(V\), i.e., \[\mathcal{C}=\mathcal{C}_{d,V,E}=\Big{\{}(\rho_{ij})_{ij\in E}:\exists\rho\in \mathcal{D}(\otimes_{v\in V}\mathbb{C}^{d}),\text{ s.t. }\rho_{ij}=\operatorname{Tr}_{V\setminus\{i,j \}}\rho\Big{\}} \tag{4}\] where \(\mathcal{D}(\mathcal{H})\) denotes the set of density operators on a Hilbert space \(\mathcal{H}\), then one can write the ground energy problem for a 2-local Hamiltonian (2) as a linear optimization problem over \(\mathcal{C}\): \[\lambda_{\min}(H)=\min_{\langle\rho_{ij}\rangle\in\mathcal{C}}\ \sum_{ij\in E} \operatorname{Tr}[h_{ij}\rho_{ij}]. \tag{5}\] To make this approach tractable, it is required to have a computationally efficient representation of the convex set \(\mathcal{C}\). Unfortunately, it is highly likely that \(\mathcal{C}\) does not have any simple representation, e.g., it is known that the problem of checking membership in \(\mathcal{C}_{2,[n],\binom{[n]}{2}}\) is QMA-hard [17]. Rather than aiming to describe \(\mathcal{C}\), we are interested in constructing efficient outer relaxations of \(\mathcal{C}\), i.e., tractable convex sets \(\widehat{\mathcal{C}}\) such that \(\mathcal{C}\subset\widehat{\mathcal{C}}\). Replacing \(\mathcal{C}\) by \(\widehat{\mathcal{C}}\) in (5) would then yield a lower bound on \(\lambda_{\min}(H)\). Such relaxations \(\widehat{\mathcal{C}}\) can be constructed by identifying _necessary conditions_ that any set of marginals \((\rho_{ij})\) which are globally consistent must satisfy. Most relaxations that have been constructed in the literature are based on semidefinite programming. We describe here the most popular approaches: * A simple relaxation can be obtained by simply imposing that the two-body marginals are consistent on the intersection of their supports, i.e., one can take \[\begin{split}\widehat{\mathcal{C}}^{\text{loc}}_{E}=\Big{\{}( \rho_{ij})_{ij\in E}&:\rho_{ij}\geq 0,\operatorname{Tr}\rho_{ij}=1 \ \ \forall ij\in E\\ &\text{ and }\operatorname{Tr}_{j}\rho_{ij}=\operatorname{Tr}_{j^{ \prime}}\rho_{ij^{\prime}}\ \ \forall ij,ij^{\prime}\in E\Big{\}}.\end{split}\] (6) This relaxation can be made tighter by introducing higher-order marginals of \(\rho\), namely one can consider \[\begin{split}\widehat{\mathcal{C}}^{\text{loc}}_{l}=\Big{\{}( \rho_{ij})_{ij\in E}&:\exists(\rho_{S})_{|S|\leq l},\ \ \rho_{S}\geq 0, \operatorname{Tr}\rho_{S}=1\ \forall S\in\binom{V}{l}\\ &\text{ and }\operatorname{Tr}_{S\setminus S^{\prime}}\rho_{S}= \operatorname{Tr}_{S^{\prime}\setminus S}\rho_{S^{\prime}}\ \ \forall S,S^{\prime}\in\binom{V}{l}\Big{\}}.\end{split}\] (7) It is clear that \(\mathcal{C}=\widehat{\mathcal{C}}^{\text{loc}}_{N}\subset\widehat{\mathcal{C }}^{\text{loc}}_{N-1}\subset\cdots\subset\widehat{\mathcal{C}}^{\text{loc}}_{2} \subset\widehat{\mathcal{C}}^{\text{loc}}_{E}\), where \(N=|V|\). * The Lasserre/sum-of-squares relaxation [10, 14, 15, 16] stems from the observation that if \(|\psi\rangle\) is a global state on \(\mathcal{H}\), then \(\left\langle\psi,O^{\dagger}O\psi\right\rangle\geq 0\) for any observable \(O\) acting on \(\mathcal{H}\). In particular if \(O\) is a \(l\)-local operator, then \(O^{\dagger}O\) is at most \(2l\)-local, and \(\left\langle\psi,O^{\dagger}O\psi\right\rangle\) is linear in the expectation values \(m_{F}=\left\langle\psi,F\psi\right\rangle\) of \(2l\)-local observables \(F\). 
It turns out that the infinite family of constraints \[\left\langle\psi,O^{\dagger}O\psi\right\rangle\geq 0\ \ \forall O\ l\text{-local observable on }\mathcal{H}\] (8) can be encoded as a single positive semidefinite (psd) constraint on a matrix whose entries are linear in the expectation values \((m_{F})\). The corresponding relaxation \(\widehat{\mathcal{C}}_{l}^{\text{sos}}\) can then be expressed as \[\widehat{\mathcal{C}}_{l}^{\text{sos}}=\left\{(\rho_{ij})_{ij\in E}:\operatorname{Tr}[\rho_{ij}]=1,\ \rho_{ij}=\sum_{\alpha,\beta}m_{\sigma_{i}^{\alpha}\sigma_{j}^{\beta}}\,\sigma^{\alpha}\otimes\sigma^{\beta},\ \text{and the psd constraint (8) holds}\right\}.\]

In this paper, we propose to strengthen such relaxations by adding von Neumann entropy constraints that the marginals of any global state must satisfy; the basic inequality we use, weak monotonicity, is introduced in Section 2. We give analytical and numerical results illustrating the advantages of these entropy constraints, and we explain the relation to the Markov Entropy Decomposition (MED) method for approximating the free energy. We also show a limitation of the weak monotonicity constraints (and also MED in many settings): entropy constraints involving \(l\) sites are implied by consistency constraints on \(2l-1\) sites, see Eq. (12). As a result, as the size of the matrix variables involved in \(\widehat{\mathcal{C}}_{l}^{\mathrm{loc}}\) is exponential in \(l\), entropy constraints can at most lead to a quadratic improvement in terms of the size of the matrix variables. An intriguing open question arising from our work is to construct other entropy constraints that could lead to tighter relaxations. This question is related to obtaining inequalities for the so-called quantum entropy cone (see e.g., [10]), though it differs in several respects: in our case, the dimension of the subsystems is fixed, the number of systems involved is bounded by \(l\), and in order to obtain convex relaxations, we look for expressions that are concave in the state \(\rho\), e.g., conical linear combinations of conditional entropies.

## 2 Weak Monotonicity Constraints

We start by recalling the following well-known entropy inequality, also known as weak monotonicity.

**Lemma 2.1** (Weak monotonicity).: _For any state \(\rho_{ABC}\) on systems \(ABC\), we have_ \[S(A|B)_{\rho}+S(A|C)_{\rho}\geq 0. \tag{9}\]

Proof.: Let \(\rho_{ABCD}\) be a purification of \(\rho_{ABC}\), i.e., \(\rho_{ABCD}\) is rank-one and \(\operatorname{Tr}_{D}(\rho_{ABCD})=\rho_{ABC}\). We have, using the fact that \(\rho_{ABCD}\) is pure, \[S(A|B)_{\rho}=S(AB)_{\rho}-S(B)_{\rho}=S(CD)_{\rho}-S(ACD)_{\rho}=-S(A|CD)_{\rho}\geq-S(A|C)_{\rho},\] where we used strong subadditivity, i.e., the property that \(S(A|C)\geq S(A|CD)\).

There are two important features of this inequality that we want to highlight: the first one is that it does not involve the global state \(\rho\) on \(ABC\), but only the marginals of \(\rho\) on \(AB\) and \(AC\). The second important aspect is that the inequality (9) defines a convex region in the space of marginals \((\rho_{AB},\rho_{AC})\). This is a consequence of the concavity of the conditional entropy function. For these reasons, the inequality can be used to strengthen the semidefinite relaxations \(\widehat{\mathcal{C}}\) defined earlier, as follows.
For example, the set of 2-body locally consistent marginals \(\widehat{\mathcal{C}}_{2}^{\mathrm{loc}}\) can be strengthened by adding the following scalar inequalities: \[S(i|j)_{\rho}+S(i|k)_{\rho}\geq 0\quad\forall 1\leq i,j,k\leq n\text{ distinct}. \tag{10}\] In general, if one considers relaxations involving marginals on \(l\geq 2\) sites, one can include all weak monotonicity inequalities (9) for all disjoint sets \(A,B,C\subset V\) such that \(|AB|\leq l\) and \(|AC|\leq l\). The next lemma shows that it is indeed sufficient to consider only inequalities where \(|A|=1\), and \(|B|=|C|=l-1\), i.e., \[\begin{split}\widehat{\mathcal{C}}_{l}^{\mathrm{WM}}=\Big{\{}( \rho_{ij})_{ij\in E}:&\exists(\rho_{S})_{|S|\leq l},\ \ \rho_{S}\geq 0,\operatorname{Tr}\rho_{S}=1\ \forall S\in \binom{V}{l}\\ &\text{and}\ \operatorname{Tr}_{S\setminus S^{\prime}}\rho_{S}= \operatorname{Tr}_{S^{\prime}\setminus S}\rho_{S^{\prime}}\ \ \forall S,S^{\prime}\in\binom{V}{l}\\ &\text{and}\ S(A|B)+S(A|C)\geq 0\ \forall|A|=1,B,C\in\binom{V \setminus A}{l-1}\ \text{disjoint}\Big{\}}.\end{split} \tag{11}\] **Lemma 2.2**.: _Let \(|V|\geq 2l-1\) and \((\rho_{S})_{S\in\binom{V}{l}}\) be a set of locally consistent marginals satisfying weak monotonicity \(S(A|B)_{\rho}+S(A|C)_{\rho}\geq 0\) for any disjoint \(A\), \(B\) and \(C\) with \(|A|=1\) and \(|B|=|C|=l-1\). Then the same weak monotonicity inequality holds for any disjoint sets \(A\), \(B\) and \(C\) of any size (as long as it is defined, i.e., \(|AB|,|AC|\leq l\))._ Proof.: For \(|A|=1\) and \(B,C\) of any size at most \(l-1\), the proof directly follows from the data-processing inequality for conditional entropy. If \(|A|=m>1\), let the elements of \(A\) be \(A_{1}\ldots A_{m}\) with \(A_{i}\in V\) and use the chain rule to write for \(S(A|B)\) and \(S(A|C)\) \[S(A|B)_{\rho}+S(A|C)_{\rho} =\sum_{i=1}^{m}S(A_{i}|A_{1}\ldots A_{i-1}B)_{\rho}+\sum_{i=m}^{1 }S(A_{i}|A_{i+1}\ldots A_{m}C)_{\rho}\] \[=\sum_{i=1}^{m}S(A_{i}|A_{1}\ldots A_{i-1}B)_{\rho}+S(A_{i}|A_{i+ 1}\ldots A_{m}C)_{\rho}.\] Using the fact that \(|AB|\leq l\), we have that \(|A_{1}\ldots A_{i-1}B|\leq l-1\) for any \(i\in\{1,\ldots,m\}\) and similarly for \(C\), together with the fact that \(A_{1}\ldots A_{i-1}B\) and \(A_{i+1}\ldots A_{m}C\) are disjoint we obtain that \(S(A|B)_{\rho}+S(A|C)_{\rho}\geq 0\). Notice that if \(\rho_{AB}\) and \(\rho_{AC}\) are classical probability distributions, or more generally if \(\rho_{AB}\) and \(\rho_{AC}\) are separable quantum states, then conditional entropies are nonnegative and thereby Eq. (9) is automatically satisfied, even without assuming the existence of a global state \(\rho_{ABC}\). As a result, imposing the weak monotonicity inequality is useless for classical Hamiltonians. Furthermore, we note that the weak monotonicity inequalities at level \(l\) are automatically implied by the level \(2l-1\) local consistency relaxation, i.e., \[\widehat{\mathcal{C}}^{\mathrm{loc}}_{2l-1}\subset\widehat{\mathcal{C}}^{ \mathrm{WM}}_{l}\subset\widehat{\mathcal{C}}^{\mathrm{loc}}_{l}. \tag{12}\] This is simply because the level \(2l-1\) ensures that the marginals are consistent with a valid quantum state on \(ABC\), for which weak monotonicity is known to hold. While this means weak monotonicity constraints cannot help more than doubling the number of sites in the marginal relaxation, it should be noted that this corresponds to squaring the size of each variable in the semidefinite program. 
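As a quick numerical sanity check of inequality (9), and of the kind of violation that the strengthened constraints (10) are designed to exclude, one can evaluate conditional entropies of explicit marginals directly. Below is a minimal Python sketch; the helper functions and their names are ours, not from any released code. It verifies weak monotonicity on marginals of a random global 3-qubit state, and shows that placing the same maximally entangled pair on two overlapping edges gives \(S(2|1)+S(2|3)=-2\), a configuration revisited in the next subsection.

```python
import numpy as np

def random_state(d, seed=0):
    # Random full-rank density matrix rho = G G^dag / Tr[G G^dag].
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def partial_trace(rho, dims, keep):
    # Marginal of rho on the subsystems listed in `keep` (increasing order).
    n = len(dims)
    t = rho.reshape(dims + dims)
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        k = t.ndim // 2                 # number of subsystems still present
        t = np.trace(t, axis1=i, axis2=i + k)
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

def entropy(rho):
    # Von Neumann entropy in bits; drop numerically-zero eigenvalues.
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

dims = (2, 2, 2)
rho = random_state(8)
S = lambda keep: entropy(partial_trace(rho, dims, keep))

# Weak monotonicity (9) with A = {0}, B = {1}, C = {2}:
wm = (S((0, 1)) - S((1,))) + (S((0, 2)) - S((2,)))
assert wm >= -1e-9   # holds for the marginals of any global state

# By contrast, assigning the same Bell pair to two overlapping edges
# violates (9): the pure pair has entropy 0, each single site entropy 1.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
bell = np.outer(psi, psi)            # |psi^-><psi^-|
cond = entropy(bell) - entropy(partial_trace(bell, (2, 2), (0,)))
print(2 * cond)                      # -2.0: no global state has these marginals
```

The same helpers extend directly to checking candidate 2-body marginals against the scalar inequalities (10) before solving any semidefinite program.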
**Graphical illustration.** We end this section by graphically illustrating the effect of the entropy constraints. Consider the problem of characterizing the \(\{1,2\}\) and \(\{2,3\}\)-body marginals of a 3-site qubit state \(\rho=\rho_{123}\), i.e., \[\left\{(\rho_{12},\rho_{23}):\rho_{ij}=\mathrm{Tr}_{\setminus\{i,j\}}\,\rho_{123}\text{ for some }\rho_{123}\in\mathcal{D}((\mathbb{C}^{2})^{\otimes 3})\right\}.\] Figure 1 shows a particular two-dimensional projection of this set on Bell states, namely \[(\rho_{12},\rho_{23})\mapsto\left(\mathrm{Tr}[(\rho_{12}+\rho_{23})/2\,|\psi^{-}\rangle\langle\psi^{-}|],\mathrm{Tr}[(\rho_{12}+\rho_{23})/2\,|\phi^{+}\rangle\langle\phi^{+}|]\right).\]

### Quantitative advantage

In this section we analyze quantitatively the advantage that entropy constraints can provide for estimating the ground energy of certain local Hamiltonians.

**A Hamiltonian on 3 sites.** We start by looking at a Hamiltonian defined on a 3-node graph given by \[H=h_{12}+h_{23},\] where \[h_{12}=-\frac{1}{2}|\psi^{-}\rangle\langle\psi^{-}|_{12}\qquad h_{23}=-\frac{1}{2}|\psi^{-}\rangle\langle\psi^{-}|_{23}\] and \(|\psi^{-}\rangle=(|01\rangle-|10\rangle)/\sqrt{2}\). The ground energy of this Hamiltonian is \(\lambda_{\min}(H)=-3/4\). Using the relaxation \(\widehat{\mathcal{C}}_{2}^{\text{loc}}\), we get the (trivial) lower bound \(-1\): indeed, a valid locally consistent assignment of 2-body marginals is \[\rho_{12}=\rho_{23}=|\psi^{-}\rangle\langle\psi^{-}| \tag{13}\] as all 1-body marginals are consistently equal to the maximally mixed state \(\mathbb{1}/2\). However, this violates the monogamy of entanglement property: in a valid global state, site 2 cannot be maximally entangled with both sites 1 and 3. This violation of monogamy is to some extent captured by entropy constraints coming from weak monotonicity. Indeed, the assignment (13) violates the inequality \(S(2|1)_{\rho}+S(2|3)_{\rho}\geq 0\), since the left-hand side in this case is equal to \(-2\). In fact, by optimizing the Hamiltonian \(H\) over the relaxation \(\widehat{\mathcal{C}}_{2}^{\text{WM}}\) we get a value of approximately \(-0.811>-1\), i.e., \[\min_{(\rho_{ij})\in\widehat{\mathcal{C}}_{2}^{\text{WM}}}\operatorname{Tr}[h_{12}\rho_{12}]+\operatorname{Tr}[h_{23}\rho_{23}]\geq-0.811. \tag{14}\] Graphically, this value can be seen in Figure 1 as the \(x\)-component of the rightmost point of the red convex set, whereas the true value \(-0.75\) corresponds to the blue convex set.

**Remark 2.3** (Exploiting symmetries).: In the above example, one can actually use the symmetries of the Hamiltonian to simplify the entropy constraints. As the Hamiltonian commutes with the unitary operation of exchanging subsystems 1 and 3, the minimizer can be chosen to obey the same symmetry. For that state we have \(S(2|1)=S(2|3)\), and so the weak monotonicity inequality becomes simply \(S(2|1)\geq 0\). We will see more examples of similar arguments when we consider translation-invariant Hamiltonians in Section 4.

Figure 1: We compare the sets of valid states on 3 sites (blue, inner curved line), 2-body marginal relaxations (black triangle), and the entropy constrained marginals fulfilling \(S(2|1)+S(2|3)\geq 0\) (red, outer curved line). We also depict the linear inequality mentioned in Eq. (16) (orange, dashed).

**Larger graphs.** Consider now a Hamiltonian defined on a general connected graph \(G=(V,E)\) of the form \[H=\sum_{ij\in E}h_{ij}\qquad h_{ij}=-|\psi^{-}\rangle\langle\psi^{-}|_{ij}\ \ \forall ij\in E. \tag{15}\]
While the simple relaxation \(\widehat{\mathcal{C}}_{2}^{\text{loc}}\) yields a trivial lower bound of \(-|E|\) on \(\lambda_{\min}(H)\), one can show that the relaxation incorporating the weak monotonicity constraints \(S(i|j)+S(i|k)\geq 0\) will have a value \(\gtrsim-0.811|E|\). More precisely, we prove:

**Theorem 2.4**.: _For the Hamiltonian (15) defined on a connected graph \(G=(V,E)\), we have_ \[-|E|=\min_{(\rho_{e})\in\widehat{\mathcal{C}}_{E}^{\text{loc}}}\sum_{e\in E}\operatorname{Tr}[h_{e}\rho_{e}]\leq-0.811(|E|-1)-1\leq\min_{(\rho_{e})\in\widehat{\mathcal{C}}_{E}^{\text{WM}}}\sum_{e\in E}\operatorname{Tr}[h_{e}\rho_{e}].\]

The key to proving this theorem is the following proposition, which allows us to decompose the edges \(E\) of the graph into disjoint pairs of adjacent edges.

**Proposition 2.5**.: _For any connected graph \(G=(V,E)\), there exists a set \(P=\{\{f_{1},g_{1}\},\ldots,\{f_{m},g_{m}\}\}\) of disjoint pairs of adjacent edges (i.e., \(f_{i}\in E\) and \(g_{i}\in E\) share a node for each \(i\), and \(\{f_{i},g_{i}\}\cap\{f_{j},g_{j}\}=\emptyset\) for all \(i\neq j\)) that covers all edges of the graph for an even number of edges, and all but one in the case of an odd number of edges._

Proof.: See Appendix A.

Proof of Theorem 2.4.: We use the decomposition of the edge set \(P\) from Proposition 2.5 and write \[H=\sum_{\{f,g\}\in P}(h_{f}+h_{g})+\delta_{odd}h_{e}\] where \(\delta_{odd}=1\) if the graph has an odd number of edges, with \(e\) the unmatched edge, and \(\delta_{odd}=0\) otherwise. We estimate \[\min_{(\rho_{e})\in\widehat{\mathcal{C}}_{E}^{\text{WM}}}\sum_{e\in E}\operatorname{Tr}[h_{e}\rho_{e}]\geq\left(\sum_{\{f,g\}\in P}\min_{(\rho_{e})\in\widehat{\mathcal{C}}_{E}^{\text{WM}}}\operatorname{Tr}[h_{f}\rho_{f}]+\operatorname{Tr}[h_{g}\rho_{g}]\right)-\delta_{odd}\geq-0.811(|E|-\delta_{odd})-\delta_{odd}\] where each pair term is bounded below by the optimal entropy-constrained value for the 3-node graph from Eq. (14) (note that the interaction terms there carry a factor \(1/2\), so a pair of edges contributes at least \(2\times(-0.811)\)), and the unmatched edge, if any, contributes at least \(-1\). We remark that for this problem, we chose a smaller set of marginals and entropy constraints motivated by the structure of the problem, rather than simply all possible constraints for the given size of marginals.

These examples are unweighted instances of the quantum version of the Max-Cut problem that has been widely studied in the literature [1, 1, 1, 1, 1, 10]. These works consider semidefinite relaxations of this problem and then round the solutions of the semidefinite program to a valid global quantum state. In order to do this, for example in [1], it is shown that any \((\rho_{12},\rho_{23})\) coming from the relaxation \(\widehat{\mathcal{C}}_{2}^{\text{sos}}\) satisfies the linear inequality \[\frac{1}{2}\left(\operatorname{Tr}[\rho_{12}|\psi^{-}\rangle\langle\psi^{-}|_{12}]+\operatorname{Tr}[\rho_{23}|\psi^{-}\rangle\langle\psi^{-}|_{23}]\right)\leq 0.75. \tag{16}\] This can be interpreted in terms of Figure 1: it shows that Eq. (16) leads to a tight bound when optimizing the linear function defined by \(|\psi^{-}\rangle\langle\psi^{-}|\) on sites \(12\) and \(23\). The entropy constraint performs worse in the directions of the Bell states, as seen in Figure 1 and in Equation (14), but it is better in some other directions.

**Larger marginals.** We extend the above example to show how nontrivial entropy constraints can be easily constructed for any choice \(k\geq 2\) of the number of sites considered by the marginal.
Let us consider a \(k\)-local Hamiltonian where each term is given by a projector on the GHZ state, \(h_{e}=-|\text{GHZ}\rangle\langle\text{GHZ}|\) with \(|\text{GHZ}\rangle=(|0\ldots 0\rangle+|1\ldots 1\rangle)/\sqrt{2}\), and \(e\) is a subset of the sites of size \(k\). Again, for the \(k\)-site relaxation the optimal solution is to assign a projector onto the GHZ state to every marginal, \(\rho_{e}=|\text{GHZ}\rangle\langle\text{GHZ}|\), and these marginals are all locally consistent. However, the conditional entropy for the GHZ state \(S(1\ldots i|i+1\ldots k)=-1\) is negative. As a result, for two distinct subsets \(e\) and \(f\) of size \(k\) that have a nonempty intersection, if we write \(e=AB\) and \(f=AC\), the weak monotonicity constraint \(S(A|B)+S(A|C)\geq 0\) is violated. For concreteness, let \(V=1\ldots(2k-1)\), consider the hyperedges \(e=1\ldots k\), \(f=k\ldots(2k-1)\), and consider the Hamiltonian \(h_{e}+h_{f}\). For the relaxation we specify the marginals \(\rho_{e}\), \(\rho_{f}\) subject to local consistency on site \(k\) and the weak monotonicity constraint \[S(k|1\ldots k-1)+S(k|k+1\ldots 2k-1)\geq 0.\] In fact, due to the symmetry of the problem, this can be reduced to the optimization \[\min_{\rho_{e},\ S(k|1\ldots k-1)\geq 0}\operatorname{Tr}[h_{e}\rho_{e}]. \tag{17}\] For any \(k\), this problem is related to the problem at \(k=2\) by the local isometry \(U_{1\to 1\ldots k-1}=|0\ldots 0\rangle\langle 0|+|1\ldots 1\rangle\langle 1|\). Since this local isometry leaves the conditional entropy invariant, the optimal value of Eq. (17) does not depend on \(k\) and is thereby equal to approximately \(-0.811\), as in the previously encountered example Eq. (14).

## 3 Markov Entropy Decomposition

In this section we derive another family of entropy constraints that one can impose on the set of globally consistent marginals. We call these inequalities _Markov Entropy Decomposition (MED)_ inequalities, because they appeared in [11] in the context of deriving lower bounds on the free energy of a Hamiltonian \(H\). The next lemma briefly describes the family of inequalities; we explain the connection to the free energy later.

**Lemma 3.1** (Markov Entropy Decomposition inequalities).: _Let \(\rho\) be a density operator acting on the Hilbert space corresponding to the sites \(V\). Consider an ordering \(1,\ldots,N=|V|\) of the sites in \(V\), and for each \(i\in\{1,\ldots,N\}\), let \(\mathcal{N}_{i}\subset\{1,\ldots,i-1\}\) be a subset of the sites appearing before \(i\) (in the chosen order) with \(|\mathcal{N}_{i}|\leq l-1\). Then for any \(1\leq k\leq N\)_ \[\sum_{i=1}^{k}S(i|\mathcal{N}_{i})_{\rho}\geq 0. \tag{18}\]

Proof.: We have, using the chain rule and strong subadditivity: \[0\leq S(1\ldots k)_{\rho}=S(1)_{\rho}+S(2|1)_{\rho}+S(3|12)_{\rho}+\cdots+S(k|1\ldots k-1)_{\rho}\leq\sum_{i=1}^{k}S(i|\mathcal{N}_{i})_{\rho}.\]

We see that the inequalities (18) only involve the marginals of \(\rho\) on at most \(l\) sites (since \(|\mathcal{N}_{i}|\leq l-1\)) and that they define a convex region in the space of these marginals. Thus these inequalities can be used to strengthen the relaxations \(\widehat{\mathcal{C}}_{l}^{\text{loc}}\). We note that there is an MED inequality for each choice of ordering of the sites in \(V\), and each choice of _Markov shields_ \(\mathcal{N}_{i}\subset\{1,\ldots,i-1\}\).

### Relationship with weak monotonicity

A natural question is whether the MED inequalities are equivalent to the weak monotonicity inequalities derived in the previous section.
The answer in general is no, and we show in this section that the two relaxations are incomparable. However, we first observe that some simple MED inequalities are implied by the weak monotonicity inequalities. For example, consider the case where the Markov shields \(\mathcal{N}_{i}\) consist of the (at most) \(l-1\) sites immediately preceding the site \(i\) in the chosen order, i.e., \(\mathcal{N}_{i}=\mathcal{N}_{i}^{-}=\{\max(1,i-l+1),\ldots,i-1\}\). Then one can show that the MED inequality is implied by the weak monotonicity inequalities. Indeed, let \(\mathcal{N}_{i}^{+}\) be the (at most) \(l-1\) sites immediately _following_ the site \(i\) in the chosen order, i.e., \(\mathcal{N}_{i}^{+}=\{i+1,\ldots,\min(k,i+l-1)\}\). The MED inequality states \[\sum_{i=1}^{k}S(i|\mathcal{N}_{i}^{-})\geq 0. \tag{19}\] One can check that \(\sum_{i=1}^{k}S(i|\mathcal{N}_{i}^{-})=\sum_{i=1}^{k}S(i|\mathcal{N}_{i}^{+})\), and so the inequality above can be equivalently written as \[\sum_{i=1}^{k}S(i|\mathcal{N}_{i}^{-})+S(i|\mathcal{N}_{i}^{+})\geq 0.\] Now it suffices to observe that for each \(i\), \(S(i|\mathcal{N}_{i}^{-})+S(i|\mathcal{N}_{i}^{+})\geq 0\) is implied by weak monotonicity. We now give two examples showing that MED and weak monotonicity constraints are in general inequivalent.

**Weak monotonicity does not imply MED.** We construct a set of 2-body locally consistent states on \(N\) sites (for \(N\) large enough) that violate a certain MED inequality but are consistent with all possible weak monotonicity constraints of the form (10). The 2-body marginals are defined as follows (for \(i,j\geq 2\)): \[\rho_{1i}=\lambda|\psi^{-}\rangle\langle\psi^{-}|+(1-\lambda)(\mathbb{1}/2\otimes|0\rangle\langle 0|)\] \[\rho_{ij}=(\lambda\mathbb{1}/2+(1-\lambda)|0\rangle\langle 0|)^{\otimes 2}.\] Here, \(|\psi^{-}\rangle=(|01\rangle-|10\rangle)/\sqrt{2}\) as before. The entropies of the single-body and two-body marginals are \[S(1)=1\quad\text{and}\quad S(i)=h(1-\tfrac{\lambda}{2})\quad\text{and}\quad S(ij)=2h(1-\tfrac{\lambda}{2})\quad i,j\geq 2\] where \(h\) is the binary entropy function, which is concave, continuous and satisfies \(h(1)=0\) and \(h(1/2)=1\). We omit an explicit formula for the entropy of the two-body marginal \(S(1i)\), but it is also concave, continuous, and equal to \(1\) if \(\lambda=0\), and \(0\) if \(\lambda=1\). This means we can pick \(0<\lambda<1\) such that \(S(1|i)+S(1|j)=2S(1i)-2h(1-\lambda/2)=0\). Furthermore, \[S(i|j)+S(i|1)=h(1-\lambda/2)+S(i1)-1\geq\lambda+(1-\lambda)-1=0,\] where we applied concavity of \(h\) and of the entropy function. Finally, \(S(i|j)+S(i|k)\geq 0\) for \(i,j,k\geq 2\), because both marginals are product states, so all weak monotonicity constraints that can be defined for 2-site marginals are satisfied. However, let us consider the MED inequality \[0\leq S(1)+\sum_{i=2}^{N}S(i|1)=1+(N-1)(S(1i)-1).\] Since \(0<S(1i)=h(1-\lambda/2)<1\) (\(h\) is strictly decreasing on \([1/2,1]\)), we see that the right-hand side becomes negative for sufficiently large \(N\) (\(N=8\) is sufficient).

**MED does not imply weak monotonicity.** We now construct a set of 2-body locally consistent states on \(N=3\) sites that satisfy all the MED inequalities but violate some weak monotonicity inequalities. We choose for all \(ij\) \[\rho_{ij}=\lambda|\psi^{-}\rangle\langle\psi^{-}|+(1-\lambda)\mathbb{1}/4 \tag{20}\] with \(\lambda\) such that \(-1/2\leq S(i|j)<0\).
This is possible due to continuity in \(\lambda\) and the fact that \(S(i|j)\) takes the values \(1\) and \(-1\) for \(\lambda=0\) and \(\lambda=1\), respectively. This choice violates weak monotonicity, as all conditional entropies are negative. Due to the permutation invariance, the only relevant MED constraints to be checked are \[S(1)+S(2|1)+S(3|2)\geq 0\] \[S(1)+S(2|1)+S(3|1)\geq 0.\] These are satisfied since \(S(1)=1\) and each \(S(i|j)\geq-1/2\).

**Remark 3.2**.: We note that on three sites, the 2-body MED constraints are all implied by weak monotonicity. Indeed, the MED constraints are all of the form \[S(i)+S(j|i)+S(k|j)\geq 0\] for all choices of \(i,j,k\) distinct. The only other choice, \(S(i)+S(j|i)+S(k|i)=S(j)+S(i|j)+S(k|i)\), is in fact equivalent. This is precisely of the form (19) where the sites are ordered as \(i<j<k\), which has been shown to be implied by the weak monotonicity constraints. The above example (20) thus shows that the weak monotonicity constraints are strictly stronger than the MED constraints on 3 sites.

### Connection with the free energy

The expression appearing in the MED inequalities was used in [11] to derive lower bounds on the free energy \(F(T)\) of the Hamiltonian \(H\) on sites \(V\) at temperature \(T\). Recall that the free energy is given by the following variational formula \[F(T)=\min_{\rho}\{\operatorname{Tr}[H\rho]-TS(V)_{\rho}\},\] where the minimization is over all density operators \(\rho\) acting on the Hilbert space of all the sites in \(V\). As opposed to the ground energy problem, the objective does not only depend on few-site marginals, as the entropy function involves the global density matrix and its spectrum. The entropy, however, can be decomposed into conditional entropies which can be upper bounded using the strong subadditivity inequality: considering an order of the sites \(1,\ldots,N\), we write \[S(V)_{\rho}=S(1)_{\rho}+S(2|1)_{\rho}+\ldots+S(N|1\ldots N-1)_{\rho}\leq\sum_{i=1}^{N}S(i|\mathcal{N}_{i})_{\rho}\] again using Markov shields \(\mathcal{N}_{i}\subset\{1,\ldots,i-1\}\) defining the sites that are taken into account in the conditioning. The intuition in choosing these is that sites that are far away in the distance induced by the interaction hypergraph have only small correlation as measured by the conditional mutual information (CMI); see for example [10] for a result in 1D. The CMI is also equal to the error made in the approximation using strong subadditivity. This relaxation can now again be computed from just the marginals on \(\{i\}\cup\mathcal{N}_{i}\) and the supports of the Hamiltonian terms, and therefore allows for a further relaxation as explained in the previous section: \[F(T)=\min_{\rho}\{\operatorname{Tr}[H\rho]-TS(V)_{\rho}\}\geq\min_{(\rho_{e})\in\widetilde{\mathcal{C}}^{\operatorname{loc}}_{l}}\left\{\sum_{e}\operatorname{Tr}[h_{e}\rho_{e}]-T\sum_{i=1}^{N}S(i|\mathcal{N}_{i})\right\}:=\operatorname{MED}(T) \tag{21}\] Here, \(l\) is chosen sufficiently large to define all conditional entropies and interaction terms.

**Lower bounds for the ground energy.** It has been pointed out in [14] that while the free energy \(F(T)\) is a decreasing function of \(T\), its MED approximation \(\operatorname{MED}(T)\) need not be. At every temperature, however, we have \(\operatorname{MED}(T)\leq F(T)\leq F(0)\), which is to say that the MED lower bounds the ground energy \(F(0)\). We can maximize this lower bound over \(T\) to obtain the best lower bound on the ground energy at that level.
It is easy to see that maximizing the expression for \(\operatorname{MED}(T)\) in (21) corresponds to adding the MED term as an inequality, i.e., \[F(0)\geq\max_{T}\operatorname{MED}(T)=\min_{\begin{subarray}{c}(\rho_{e})\in\widetilde{\mathcal{C}}^{\operatorname{loc}}_{l}\\ \sum_{i=1}^{N}S(i|\mathcal{N}_{i})\geq 0\end{subarray}}\sum_{e}\operatorname{Tr}[h_{e}\rho_{e}].\]

## 4 Infinite systems

A practically relevant application of the above methods is to lattice systems. We start this section by discussing the application of weak monotonicity to one-dimensional translation-invariant systems.

**One-dimensional systems.** We consider a translation-invariant Hamiltonian \(H\) acting on a chain of length \(m\) of the form \[H_{[1,m]}=\sum_{i=1}^{m}h_{i,i+1}\] where the interaction terms \(h_{i,i+1}\) act on sites \(i\) and \(i+1\), and are translates of the same Hermitian operator \(h\) acting on two sites \(\mathbb{C}^{d}\otimes\mathbb{C}^{d}\). We use periodic boundary conditions, i.e., we identify \(m+1\) with \(1\). We can define the ground energy per site of the infinite system as the limit \[e_{0}(h)=\lim_{m\to\infty}\frac{1}{m}\lambda_{\min}(H_{[1,m]})=\lim_{m\to\infty}\frac{1}{m}\min_{\rho_{m}\in\mathcal{D}(\otimes_{v\in[1,m]}\mathbb{C}^{d})}\operatorname{Tr}[H_{[1,m]}\rho_{m}]. \tag{22}\] The existence of the limit above is shown in Appendix B; in fact we show that in the minimization problems in (22), it is equivalent to restrict to just a single interaction term while taking \(\rho_{m}\) translation-invariant, i.e., satisfying \(\operatorname{Tr}_{1}[\rho_{m}]=\operatorname{Tr}_{m}[\rho_{m}]\), since for such translation-invariant states the two-body marginals \(\rho_{12},\ldots,\rho_{m-1,m}\) are all equal. This allows us to express the ground energy density \(e_{0}(h)\) in the following way \[e_{0}(h)=\min_{\rho\in\mathcal{C}^{\operatorname{TI}}}\operatorname{Tr}[h\rho],\] where \(\mathcal{C}^{\text{TI}}\) is the set of two-body density matrices that are extendible to an infinite translation-invariant system, i.e., \[\mathcal{C}^{\text{TI}}=\{\rho\in\mathcal{D}(\mathbb{C}^{d}\otimes\mathbb{C}^{d}):\forall m\;\exists\rho_{m}\in\mathcal{D}(\otimes_{v\in[1,m]}\mathbb{C}^{d}),\;\rho=\text{Tr}_{[3,m]}[\rho_{m}],\;\text{Tr}_{1}[\rho_{m}]=\text{Tr}_{m}[\rho_{m}]\}. \tag{23}\] A "local" relaxation of this set can be defined in a similar way as discussed previously, and leads to \[\mathcal{\tilde{C}}^{\text{loc,TI}}_{l}=\left\{\rho\in\mathcal{D}(\mathbb{C}^{d}\otimes\mathbb{C}^{d}):\exists\rho_{l}\in\mathcal{D}(\otimes_{v\in[1,l]}\mathbb{C}^{d}),\;\rho=\text{Tr}_{[3,l]}[\rho_{l}],\;\text{Tr}_{1}[\rho_{l}]=\text{Tr}_{l}[\rho_{l}]\right\}. \tag{24}\] We prove in Appendix B that for any \(h\), the value \[\min_{\rho\in\mathcal{\tilde{C}}^{\text{loc,TI}}_{l}}\text{Tr}[h\rho]\] converges to \(e_{0}(h)\) as \(l\to\infty\) at the rate \(O(1/l)\). Using weak monotonicity and translation-invariance, one can strengthen the relaxation (24) by adding the inequality \(S(l|1\ldots l-1)\geq 0\), leading to: \[\begin{split}\mathcal{\tilde{C}}^{\text{WM,TI}}_{l}=\Big\{\rho\in\mathcal{D}(\mathbb{C}^{d}\otimes\mathbb{C}^{d}):&\exists\rho_{l}\in\mathcal{D}(\otimes_{v\in[1,l]}\mathbb{C}^{d}),\\ &\rho=\text{Tr}_{[3,l]}[\rho_{l}],\;\text{Tr}_{1}[\rho_{l}]=\text{Tr}_{l}[\rho_{l}],\;S(l|1\ldots l-1)\geq 0\Big\}.\end{split} \tag{25}\] The next theorem shows that the above is a valid relaxation, and moreover that the resulting relaxation cannot be better than \(\mathcal{\tilde{C}}^{\text{loc,TI}}_{2l-1}\).
**Theorem 4.1**.: \[\mathcal{\tilde{C}}^{\text{loc,TI}}_{2l-1}\subset\mathcal{\tilde{C}}^{\text{WM,TI}}_{l}\subset\mathcal{\tilde{C}}^{\text{loc,TI}}_{l}\]

Proof.: The second inclusion is immediate. To prove the first one, we start from a translation-invariant state on sites \(1,\ldots,2l-1\). We know from weak monotonicity that \(S(l|1\ldots l-1)+S(l|l+1\ldots 2l-1)\geq 0\). However, translation-invariance tells us that \[\begin{split}S(l|l+1\ldots 2l-1)&=S(l\ldots 2l-1)-S(l+1\ldots 2l-1)\\ &=S(1\ldots l)-S(1\ldots l-1)\\ &=S(l|1\ldots l-1),\end{split}\] so \(2S(l|1\ldots l-1)\geq 0\), which concludes the proof.

In fact, the entropy constraint \(S(l|1\ldots l-1)\geq 0\) can as well be derived from the MED inequalities (Lemma 3.1) by using an increasing order and a Markov shield equal to the previous \(l-1\) sites.

**Higher dimensions.** We now consider a lattice system on \(\mathbb{Z}^{D}\) for \(D\geq 1\). Let \(h\) be a Hamiltonian term on the origin and its nearest neighbours in the positive axis directions (i.e., a Hermitian matrix of size \(d^{D+1}\times d^{D+1}\)), and for \(v\in\mathbb{Z}^{D}\), let \(h_{v}\) be its translate to the site \(v\in\mathbb{Z}^{D}\), so that it acts on \(\{v,v+e_{1},\ldots,v+e_{D}\}\). Let us consider Hamiltonians of the form \[H_{[1,m]^{D}}=\sum_{v\in[1,m]^{D}}h_{v},\] with periodic boundary conditions, i.e., identifying \(m+1\) with \(1\). The ground energy density of the system is defined by the limit (see Appendix B for existence) \[e_{0}(h)=\lim_{m\to\infty}\frac{1}{m^{D}}\lambda_{\min}(H_{[1,m]^{D}}).\] Using the same considerations as in the 1D case, we can also write \(e_{0}(h)\) as a linear optimization problem over the set of density matrices defined on \(\{0,e_{1},\ldots,e_{D}\}\) that are extendible to an infinite translation-invariant system, i.e., \[\mathcal{C}^{\mathrm{TI}}=\bigcap_{m\in[2,\infty)}\widehat{\mathcal{C}}^{\mathrm{loc,TI}}_{[1,m]^{D}}\] where for any finite subset \(A\subset\mathbb{Z}^{D}\) such that \(\{0,e_{1},\ldots,e_{D}\}\subset A\) we define \[\widehat{\mathcal{C}}^{\mathrm{loc,TI}}_{A}=\Big\{\rho\in\mathcal{D}((\mathbb{C}^{d})^{D+1}):\exists\rho_{A}\in\mathcal{D}(\otimes_{v\in A}\mathbb{C}^{d})\text{ s.t. }\rho=\mathrm{Tr}_{A\setminus\{0,e_{1},\ldots,e_{D}\}}[\rho_{A}],\ \mathrm{Tr}_{A\setminus A+t}[\rho_{A}]=\mathrm{Tr}_{A+t\setminus A}[\tau_{t}(\rho_{A})]\;\forall t\in\mathbb{Z}^{D}\Big\},\] where \(\tau_{t}\) denotes the translation operator shifting the state by \(t\) sites. We investigate what entropy constraints arising from weak monotonicity and the MED can be imposed to strengthen the relaxation \(\widehat{\mathcal{C}}^{\mathrm{loc,TI}}_{A}\) of \(\mathcal{C}^{\mathrm{TI}}\). The weak monotonicity inequality allows us to strengthen the relaxation above by adding constraints of the form \[S(i|B)+S(i|C)\geq 0 \tag{26}\] for any choice of \(i\in A\) and disjoint \(B,C\subset A\). We now consider MED inequalities. Consider a translation-invariant state defined on \([-m,m]^{D}\) for large \(m\). We fix an order of the sites \(1,\ldots,N=(2m+1)^{D}\) that decreases in each coordinate direction and iterates over the coordinates (see Figure 2). Let \(\mathcal{N}\) be some fixed region of size \(l-1\), independent of \(m\), which is a subset of the sites preceding the origin (a common choice would be the intersection of a ball \(B(0,r)\) of fixed radius \(r\) with the sites preceding the origin, see Figure 2).
We define the Markov shields \(\mathcal{N}_{i}\) relative to the site \(i\) by translating \(\mathcal{N}\) to the site numbered \(i\), i.e., abusing notations: \[\mathcal{N}_{i}=(i+\mathcal{N})\cap[-m,m]^{D}.\] The MED inequality (18) in this case takes the form \[0\leq\sum_{i=1}^{N}S(i|\mathcal{N}_{i}).\]

Figure 2: Markov shield with order of sites in 2D.

For sites \(i\) that are not close to the boundary of the region \([-m,m]^{D}\), the regions \(\{i\}\cup\mathcal{N}_{i}\) are all translates of each other, and so the terms in the equation above are equal, by translation invariance. The number of sites \(i\) that are close to the boundary is \(o(N)\), and so dividing by \(N\) and letting \(N\to\infty\), the inequality above yields \[0\leq S(0|\mathcal{N}) \tag{27}\] for a state in \(\mathcal{C}^{\mathrm{TI}}\). This suggests the relaxation \[\mathcal{C}^{\mathrm{TI}}\subset\widehat{C}^{\mathrm{MED},\mathrm{TI}}_{0\cup\mathcal{N}}=\Big\{\widetilde{\rho}\in\mathcal{D}((\mathbb{C}^{d})^{D+1})\ :\exists\rho\in\mathcal{D}(\otimes_{v\in 0\cup\mathcal{N}}\mathbb{C}^{d})\ \mathrm{s.t.}\ \widetilde{\rho}\text{ is the induced marginal of }\rho\text{ on the support of }h,\] \[\mathrm{Tr}_{(0\cup\mathcal{N})\setminus(0\cup\mathcal{N})+t}[\rho]=\mathrm{Tr}_{(0\cup\mathcal{N})+t\setminus(0\cup\mathcal{N})}[\tau_{t}(\rho)]\ \forall t\in\mathbb{Z}^{D}\] \[S(0|\mathcal{N})\geq 0\Big\}.\]

In general, the MED inequality (27) is different from the weak monotonicity inequality (26): (27) asserts the nonnegativity of a particular conditional entropy \(S(0|\mathcal{N})\), where \(\mathcal{N}\) has to satisfy the conditions of a Markov shield described above (i.e., compatibility with a well-chosen order). Weak monotonicity, on the other hand, asserts the nonnegativity of the sum of two conditional entropies (26), with the only constraint that \(B\) and \(C\) are disjoint. Under some mild conditions, however, one can recover the MED inequality (27) as a consequence of weak monotonicity. This is the case if the state \(\rho\) is reflection symmetric, which can be assumed if the Hamiltonian itself is reflection symmetric (i.e., \(h=\sum_{i=1}^{D}h^{i}_{0,e_{i}}\) where \(h^{i}_{0,e_{i}}\) acts on sites \(0\), \(e_{i}\) and is symmetric under exchange of the two sites). One can check that with the order defined earlier, the reflection of \(\mathcal{N}\) about the origin is disjoint from \(\mathcal{N}\), i.e., \(-\mathcal{N}\cap\mathcal{N}=\emptyset\). Furthermore, by reflection symmetry we have \(S(0|\mathcal{N})=S(0|-\mathcal{N})\), and so the weak monotonicity inequality \(S(0|\mathcal{N})+S(0|-\mathcal{N})\geq 0\) recovers (27). A corollary of this observation is stated in the next theorem.

**Theorem 4.2**.: _Under the reflection symmetry and translation-invariance assumptions explained above, with a Markov shield \(\mathcal{N}\) of size \(l-1\), the MED relaxation is no better than a semidefinite optimization in one \(d^{2l-1}\times d^{2l-1}\) dimensional variable; more precisely_ \[\min_{\rho\in\widehat{C}^{\mathrm{loc},\mathrm{TI}}_{0\cup\mathcal{N}}}\mathrm{Tr}[h\rho]\leq\min_{\rho\in\widehat{C}^{\mathrm{MED},\mathrm{TI}}_{0\cup\mathcal{N}\cup-\mathcal{N}}}\mathrm{Tr}[h\rho]\leq e_{0}(h).\]

Proof.: The first inequality is immediate.
We start with the value \[\mathrm{SDP}=\min_{\rho\in\widehat{C}^{\mathrm{loc},\mathrm{TI}}_{0\cup\mathcal{N}\cup-\mathcal{N}}}\mathrm{Tr}[h\rho].\] Due to the translation-invariance this is the same as \[\min_{\rho\in\widehat{C}^{\mathrm{loc},\mathrm{TI}}_{0\cup\mathcal{N}\cup-\mathcal{N}}}\mathrm{Tr}[\widetilde{h}\rho]\] with \(\widetilde{h}=\sum_{i=1}^{D}(h^{i}_{0,e_{i}}+h^{i}_{-e_{i},0})/2\) and the variable over the same system. This problem is intrinsically reflection symmetric, which means that the optimizer \(\rho^{*}\) can be chosen reflection symmetric. Thereby we also have \(S(0|\mathcal{N})_{\rho^{*}}=S(0|-\mathcal{N})_{\rho^{*}}\), and by using weak monotonicity we deduce \[S(0|\mathcal{N})_{\rho^{*}}=\frac{S(0|\mathcal{N})_{\rho^{*}}+S(0|-\mathcal{N})_{\rho^{*}}}{2}\geq 0,\] which holds because \(\mathcal{N}\) and \(-\mathcal{N}\) are disjoint. This shows that any optimizer of the SDP on \(2l-1\) sites gives a feasible point with the same objective value that satisfies the MED inequality. This concludes the proof.

## 5 Numerical experiments: Heisenberg XXZ-chain

In this section we present numerical experiments for the relaxations with entropy constraints. The model we consider is the XXZ-Hamiltonian with the nearest-neighbour interaction \[h=-\sigma_{x}\otimes\sigma_{x}-\sigma_{y}\otimes\sigma_{y}-\Delta\sigma_{z}\otimes\sigma_{z} \tag{28}\] on an infinite chain. For \(\Delta=0\) this model is also called the XY-Hamiltonian. We consider the values obtained by the relaxation \(\widetilde{\mathcal{C}}^{\text{loc,TI}}_{l}\) and its strengthening \(\widetilde{\mathcal{C}}^{\text{WM,TI}}_{l}\) based on the entropy constraint \(S(l|1\ldots l-1)\geq 0\), see Equations (24) and (25). The resulting convex optimization problems can be solved using different algorithms. One approach is to rely on semidefinite programming solvers, via approximations of the (conditional) entropy function [14, 15]. Another approach is to use interior-point methods that support the quantum relative entropy cone [13, 10, 12]. Finally, another approach is to use custom first-order splitting methods such as [11], see also [13]. We have used the latter for the experiments in this section.

We start by comparing the convergence with \(l\) for the XY-Hamiltonian, see Figure 3. We go up to \(l=8\) for the problem with entropy constraints, which takes about \(900s\) (about \(250s\) at \(l=7\)), and up to \(l=9\) for the standard relaxation \(\widetilde{\mathcal{C}}^{\text{loc,TI}}_{l}\), which takes about \(200s\) (about \(20s\) at \(l=8\)), on a laptop computer. The graph shows a significant improvement in the optimal value from using entropy constraints, and for a fixed runtime the entropy-constrained relaxation outperforms the simpler SDP.

Figure 3: Accuracy of the relaxations with and without entropy constraints for the XY-Hamiltonian on an infinite 1D chain. We verify that the relaxation with entropy constraint gives a value which is not better than the standard SDP relaxation at level \(2l-1\). It is surprising to note however that it is better than the value of the SDP for all levels up to \(2l-2\).

Due to the symmetry of the Hamiltonian, one can show that the two-body marginal of the ground state must be of the form\({}^{1}\)

Footnote 1: More precisely, we use the fact that the Hamiltonian is invariant under the action of the unitaries \(U\in\{\sigma_{x},\sigma_{y},\sigma_{z},S\}\) where \(S=\exp(i\pi(1-\sigma_{z})/4)\), i.e., \(U^{\otimes 2}h(U^{\dagger})^{\otimes 2}=h\).
The set of states (29) is exactly the set left invariant by this action. \[\rho(x,z)=\frac{1}{4}(\mathbb{1}\otimes\mathbb{1}+x(\sigma_{x}\otimes\sigma_{x}+\sigma_{y}\otimes\sigma_{y})+z\,\sigma_{z}\otimes\sigma_{z}). \tag{29}\] In Figure 4 we compare the convex sets \(\mathcal{C}^{\rm TI}\), \(\widehat{\mathcal{C}}^{\rm loc,TI}_{l}\) and \(\widehat{\mathcal{C}}^{\rm WM,TI}_{l}\) on the two-dimensional slice (29), as done in [21], i.e., we plot the set of valid states \[\{(x,z)\in\mathbb{R}^{2}:\rho(x,z)\in\mathcal{C}^{\rm TI}\},\] which we obtain from the analytic solution for infinite systems [19], together with its relaxations \[\{(x,z)\in\mathbb{R}^{2}:\rho(x,z)\in\widehat{\mathcal{C}}\}.\]

Figure 4: Two-dimensional slice (29) of sets of feasible 2-body marginals. The blue lines are for the MED, the red lines for the unconstrained marginal relaxation, and the green line is for the exact analytic solution. As the size of the marginals increases, the outer approximations become better.

## Acknowledgements

We would like to thank Matt Hastings for bringing the paper [11] to our attention. HF acknowledges funding from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee EP/X032051/1. OF acknowledges funding by the European Research Council (ERC Grant AlgoQIP, Agreement No. 851716). SOS acknowledges support from the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/W524141/1.
2308.09542
Decoupled conditional contrastive learning with variable metadata for prostate lesion detection
Early diagnosis of prostate cancer is crucial for efficient treatment. Multi-parametric Magnetic Resonance Images (mp-MRI) are widely used for lesion detection. The Prostate Imaging Reporting and Data System (PI-RADS) has standardized interpretation of prostate MRI by defining a score for lesion malignancy. PI-RADS data is readily available from radiology reports but is subject to high inter-reports variability. We propose a new contrastive loss function that leverages weak metadata with multiple annotators per sample and takes advantage of inter-reports variability by defining metadata confidence. By combining metadata of varying confidence with unannotated data into a single conditional contrastive loss function, we report a 3% AUC increase on lesion detection on the public PI-CAI challenge dataset. Code is available at: https://github.com/camilleruppli/decoupled_ccl
Camille Ruppli, Pietro Gori, Roberto Ardon, Isabelle Bloch
2023-08-18T13:19:26Z
http://arxiv.org/abs/2308.09542v1
# Decoupled conditional contrastive learning with variable metadata for prostate lesion detection

###### Abstract

Early diagnosis of prostate cancer is crucial for efficient treatment. Multi-parametric Magnetic Resonance Images (mp-MRI) are widely used for lesion detection. The Prostate Imaging Reporting and Data System (PI-RADS) has standardized interpretation of prostate MRI by defining a score for lesion malignancy. PI-RADS data is readily available from radiology reports but is subject to high inter-reports variability. We propose a new contrastive loss function that leverages weak metadata with multiple annotators per sample and takes advantage of inter-reports variability by defining metadata confidence. By combining metadata of varying confidence with unannotated data into a single conditional contrastive loss function, we report a 3% AUC increase on lesion detection on the public PI-CAI challenge dataset. Code is available at: [https://github.com/camilleruppli/decoupled_ccl](https://github.com/camilleruppli/decoupled_ccl)

Keywords: Contrastive Learning, Semi-supervised Learning, Prostate cancer segmentation

## 1 Introduction

**Clinical context.** Prostate cancer is the second most common cancer in men worldwide. Its early detection is crucial for efficient treatment. Multi-parametric MRI has proved successful in increasing diagnostic accuracy [21]. Recently, deep learning methods have been developed to automate prostate cancer detection [3, 22, 33]. Most of these methods rely on datasets of thousands of images, where lesions are usually manually annotated and classified by experts. This classification is based on the Prostate Imaging Reporting and Data System (PI-RADS) score, which ranges between 1 and 5, and associates a malignancy level to each lesion or to the whole exam (by considering the highest lesion score) [27]. This score is widely used by clinicians and is readily available from radiology reports. However, it is rather qualitative and subject to low inter-reader reproducibility [25]. Images can also be classified using biopsy results, as in the PI-CAI dataset [23]. This kind of classification is usually considered more precise (hence often taken as ground truth), but is also more costly to obtain and presents a bias, since only patients with high PI-RADS scores undergo a biopsy. A generic and automatic lesion detection method must therefore deal with the diversity of classification sources, radiology or biopsy, and the variability of classifications for a given exam.

**Methodological context.** In the past years, the amount of available medical imaging data has drastically increased. However, images are often either unannotated or weakly-annotated (_e.g.,_ a single PI-RADS score for the entire exam), as annotating each lesion is costly and time consuming. This means that usual supervised models cannot be used, since their performance highly depends on the amount of annotated data, as shown in Table 1. To take advantage of unannotated or weakly-annotated data during a pretraining step, self-supervised contrastive learning methods [7, 15, 16] have been developed. Recent works have proposed to condition contrastive learning with class labels [18] or weak metadata [8, 10, 26] to improve latent representations. Lately, some works have also studied the robustness of supervised contrastive learning against noisy labels [30], proposing a new regularization [32].
While contrastive pretraining has been widely applied to classification problems [7, 13, 15, 16], there have been few works about segmentation [2, 6]. Recent works [1, 6, 35] propose to include pseudo labels at the pixel (decoder) level and not after the encoder, but, due to the high computational burden, they can only consider 2D images and not whole 3D volumes. Furthermore, many datasets contain several weak metadata at the exam level (e.g., a PI-RADS score) obtained by different annotators. These weak metadata may have high inter- and intra-annotator variability, as for the PI-RADS score [25]. This variability is rarely taken into account in self-supervised pretraining. Researchers usually either use all annotations, thus using the same sample several times, or they only use confident samples, based on the number of annotators and their experience or on the learned representations, as in [19].

**Contributions.** Here, we aim to train a model that takes as input a multi-parametric MRI exam and outputs a map where higher values account for higher lesion probability. _Annotations_ are provided by multiple annotators in the form of binary maps: segmentations of observed lesions. Only a small portion of the dataset has annotations. A greater proportion has _metadata_ information available from written reports. These metadata, referring to the whole exam, are either a (binary) biopsy grading (presence or absence of malignant lesion) or a PI-RADS score.

\begin{table} \begin{tabular}{l c c c} \hline \hline & 100\% (\(N_{train}\)=1397) & 10\% (\(N_{train}\)=139) & 1\% (\(N_{train}\)=13) \\ \hline 3D UNet & 0.80 (0.03) & 0.76 (0.02) & 0.71 (0.04) \\ 3D ResUnet & 0.79 (0.01) & 0.73 (0.02) & 0.64 (0.03) \\ \hline \hline \end{tabular} \end{table} Table 1: AUC at exam level (metrics defined in Section 3) on a hold-out test set of the PI-CAI dataset, for models trained from random initialization with 5-fold cross validation on a private dataset.

For each exam, reports with a PI-RADS score are available from several radiologists, but the number of radiologists may differ among exams. In the spirit of [5, 11], we propose to include confidence, measured as a degree of inter-reports variability on metadata, in a contrastive learning framework. Our contributions are the following:

* We propose a new contrastive loss function that leverages weak metadata with multiple annotators per sample and takes advantage of inter-annotators variability by defining metadata confidence.
* We show that our method performs better than training from random initialization and previous pre-training methods on both the public PI-CAI dataset [24] and a private multi-parametric prostate MRI dataset for prostate cancer lesion detection.

## 2 Method

We propose to apply contrastive pretraining to prostate lesion detection. A lesion is considered detected if the overlap between the predicted lesion segmentation and the reference segmentation is above 0.1, as defined in [22]. The predicted lesion masks are generated by a U-Net model [20] (since in our experiments this was the best model, see Table 1) fine-tuned after contrastive pretraining. In this section, we describe our contrastive learning framework defining confidence on metadata.

### Contrastive learning framework

Contrastive learning (CL) methods train an encoder to bring close together the latent representations of images of a positive pair while pushing further apart those of negative pairs.
In unsupervised CL [7], where no annotations or metadata are available, a positive pair is usually defined as two transformations of the same image, and negative pairs as transformed versions of different images. Transformations are usually randomly chosen among a predefined family of transformations. The final estimated latent space is structured to learn invariances with respect to the applied transformations. In most CL methods, latent representations of different images are pushed apart _uniformly_. The alignment/uniformity contrastive loss function proposed in [28] is: \[\mathcal{L}_{NCE}=\underbrace{\frac{1}{N}\sum_{i=1}^{N}d_{ii}}_{\text{Global Alignment}}+\underbrace{\log\big(\frac{1}{N^{2}}\sum_{i,j=1}^{N}e^{-d_{ij}}\big)}_{\text{Global Uniformity}} \tag{1}\] where \(d_{ij}=||x_{1}^{i}-x_{2}^{j}||_{2}\), and \(x_{1}^{i}\) and \(x_{2}^{j}\) are the encoder outputs of the transformed versions of images \(i\) and \(j\), respectively. However, as in many medical applications, our dataset contains discrete clinical features as metadata (PI-RADS scores and biopsy results per exam) that should be used to better define negative and positive samples. To take metadata into account in contrastive pretraining, we follow the work of [9, 10]. The authors introduce a kernel function on metadata \(y\) to _condition_ the selection of positive and negative pairs, defining the following loss function: \[\mathcal{L}_{w}=\underbrace{\frac{1}{N}\sum_{i,j=1}^{N}w(y_{i},y_{j})d_{ij}}_{\text{Conditional Alignment}}+\underbrace{\log\big(\frac{1}{N^{2}}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}(||w||_{\infty}-w(y_{i},y_{j}))e^{-d_{ij}}\big)}_{\text{Conditional Uniformity}} \tag{2}\] where \(w\) is a kernel function measuring the degree of similarity between metadata \(y_{i}\) and \(y_{j}\), with \(0\leq w\leq 1\) and \(||w||_{\infty}=w(y_{i},y_{i})=1\). The conditional alignment term brings close together, in the representation space, only samples that have a metadata similarity greater than 0, while the conditional uniformity term does not repel all samples uniformly but weights the repulsion based on metadata dissimilarity. A schematic view of these two objective functions is shown in the supplementary material. We apply this framework to metadata (PI-RADS scores and biopsy results) that can have high inter-report variability. To simplify the problem and homogenize PI-RADS and biopsy scores, we decide to binarize both scores, following clinical practice and medical knowledge [12, 27]. We set \(y=0\) for PI-RADS 1 and 2, and \(y=1\) for PI-RADS 4 and 5. We do not consider PI-RADS 3, since it has the highest inter-reader variability [14] and low positive predictive value [29]. This means that all exams with a PI-RADS 3 are considered deprived of metadata. For each exam \(i\), a set of \(y\) values is available, denoted \(\mathbf{y}_{i}\); the number of annotations may differ among subjects (see Equation (5) for the definition of \(w\) in such cases). For a biopsy result (defining an ISUP classification [12]), we set \(y=0\) if ISUP \(\leq 1\) and \(y=1\) if ISUP \(\geq 2\). To take advantage of the entire dataset, we also consider unannotated data for which metadata are not provided. When computing the loss function on an exam without metadata (no \(\mathbf{y}\) associated), we use the standard (unsupervised) contrastive loss function, as defined in [7].
This leads to the following contrastive loss function: \[\mathcal{L}_{w}=\underbrace{\frac{1}{|A|}\sum_{i,j\in A}w(\mathbf{y}_{i},\mathbf{y}_{j})\,d_{ij}+\log\Big(\frac{1}{|A|^{2}}\sum_{\begin{subarray}{c}i,j\in A\\ i\neq j\end{subarray}}(||w||_{\infty}-w(\mathbf{y}_{i},\mathbf{y}_{j}))e^{-d_{ij}}\Big)}_{\text{with metadata}}+\underbrace{\frac{1}{|U|}\sum_{i\in U}||x_{1}^{i}-x_{2}^{i}||+\log\Big(\frac{1}{|U|^{2}}\sum_{\begin{subarray}{c}i,j\in U\\ i\neq j\end{subarray}}e^{-||x_{1}^{i}-x_{2}^{j}||}\Big)}_{\text{without metadata}} \tag{3}\] where \(A\) (resp. \(U\)) is the subset with (resp. without) associated \(\mathbf{y}\) metadata. Since the number of annotations may be different between two subjects \(i\) and \(j\), we cannot use a standard kernel, such as the RBF in [10]. We would like to take into account metadata confidence, namely agreement among annotators. In the following, we propose a new kernel \(w\) that takes metadata confidence into account.

**Confidence.** Our measure of confidence is based on the discrepancy between the elements of vector \(\mathbf{y}\) and their most common value (or majority voting). For exam \(i\), if \(y_{i}\) is the most common value in its metadata vector \(\mathbf{y}_{i}=[y_{i0},y_{i1},\ldots,y_{in-1}]\), with \(n\) the number of available scores, confidence \(c\) is defined as: \[c(\mathbf{y}_{i})=\begin{cases}\epsilon&\text{if }n=1\\ 2\times\left(\frac{\sum_{k=0}^{n-1}\delta(y_{ik},y_{i})}{n}-\frac{1}{2}\right)&\text{if }n>1\end{cases} \tag{4}\] where \(\delta\) is the Dirac function and \(\epsilon=0.1\).\({}^{1}\) We have \(c(\mathbf{y}_{i})\in[0,1]\); the value \(0\) is obtained when an even number of opposite scores is given and the majority voting cannot provide a decision. In that case the associated exam is considered as deprived of metadata. The proposed kernel then reads: \[w(\mathbf{y}_{i},\mathbf{y}_{j})=\begin{cases}1&\text{if }i=j\text{ (exam against its own transformed version)}\\ c_{ij}&\text{if }y_{i}=y_{j}\text{ and }i\neq j\text{ (different exams, same majority voting)}\\ 0&\text{if }y_{i}\neq y_{j}\text{ and }i\neq j\text{ (different exams, different majority voting)}\end{cases} \tag{5}\] where \(c_{ij}=\min(c(\mathbf{y}_{i}),c(\mathbf{y}_{j}))\).

Footnote 1: The maximum number of metadata available for an exam is \(n=7\); the minimal achievable confidence value is thus \(c=2(4/7-1/2)>0.14\). We fix \(\epsilon\) so that the confidence for \(n=1\) is higher than \(0\) but less than the minimal confidence when \(n\) is odd.

For two given exams \(i\) and \(j\), the proposed model is interpreted as follows:

* If both metadata confidences are maximal (\(c_{ij}=1\)), \(w(\mathbf{y}_{i},\mathbf{y}_{j})\) will be equal to \(1\) and full alignment will be computed.
* If either metadata confidence is less than \(1\), the value of \(w(\mathbf{y}_{i},\mathbf{y}_{j})\) will be smaller and the exams will not be fully aligned in the latent space. The lower the confidence, the less aligned the representations of exams \(i\) and \(j\) will be.
* If confidence drops to zero for either exam, the exam will only be aligned with its own transformed version.

Similarly to decoupled CL [31], we design \(w\) such that the second term of Equation (3) does not repel samples with identical metadata most common value and maximal confidence (\(c_{ij}=1\)). See Figure 1 for a schematic view.

### Experimental settings

**Datasets.** Experiments were performed on a private dataset of 2415 multi-parametric MRI prostate exams, among which 1397 have annotations (metadata and manual lesion segmentation) provided by multiple radiologists (up to 7). We also use the public PI-CAI dataset [23], composed of 1500 exams and 1295 annotations.
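As an illustration of the confidence measure (4) and kernel (5) defined above, the values can be computed with a few lines of Python. This is a minimal sketch of our own, operating on lists of already-binarized scores; the function names and the default \(\epsilon=0.1\) follow the text, but this is not the authors' released code (see the repository linked in the abstract for that):

```python
from collections import Counter

EPS = 0.1  # confidence assigned when a single score is available (Eq. 4)

def confidence(y):
    """Confidence c(y) of Eq. (4): agreement of scores with the majority vote."""
    n = len(y)
    if n == 1:
        return EPS
    value, count = Counter(y).most_common(1)[0]
    return 2.0 * (count / n - 0.5)  # 0 when the vote is tied

def majority(y):
    return Counter(y).most_common(1)[0][0]

def kernel(y_i, y_j, same_exam=False):
    """Kernel w of Eq. (5) between the binarized score lists of two exams."""
    if same_exam:
        return 1.0  # exam against its own transformed version
    c_ij = min(confidence(y_i), confidence(y_j))
    return c_ij if majority(y_i) == majority(y_j) else 0.0

# Example: unanimous scores vs. a 2-vs-1 disagreement.
y_a = [1, 1, 1]          # c = 1.0
y_b = [1, 1, 0]          # c = 2 * (2/3 - 1/2) = 1/3
y_c = [0, 1]             # c = 0: tied vote, exam treated as unlabeled
print(kernel(y_a, y_b))  # 1/3: same majority, attraction scaled by confidence
print(kernel(y_a, [0]))  # 0.0: different majority votes, exams repelled
```

Note that a tied vote yields \(c=0\) regardless of which value the majority function returns, matching the text's rule that such exams are treated as deprived of metadata.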
In all learning steps, we used T2-weighted (T2w), apparent diffusion coefficient (ADC) and diffusion-weighted (with the highest available b-value in the exam) sequences. As in [22, 34], we use the prostate ternary segmentation (background, peripheral zone and central zone), generated from an independent process on T2w sequences. We thus learn from a total of four volumes considered as registered. Pretraining is performed on data from both datasets, on 3915 exams. Fine-tuning is performed with 1% and 10% of these exams using cross validation (see **Implementation details**).

**Implementation details.** We pretrain the chosen U-Net encoder followed by a projection head. Similarly to nnU-Net [17], the encoder is a fully convolutional network where spatial anisotropy is used (e.g., the axial axis is downsampled at a lower frequency, since MRI volumes often have lower resolution in this direction). It is composed of four convolution blocks with one convolution layer in each block and takes the four sequences as input in the channel dimension. The projection head is a two-layer perceptron as in [7]. We train with a batch size of 16 for 100 epochs and use a learning rate of \(10^{-4}\). Following the work of [13] on contrastive learning for prostate cancer triage, we use a random sampling of rotation, translation and horizontal flip to generate the transformed versions of the images.

Figure 1: Given a set of exams \(x_{i\in[1,10]}\), \(\mathbf{y}_{i}\) is represented as a list of colored points. Confidence (\(c\)) is represented with color saturation: darker means more confident. Exams such that \(c(\mathbf{y}_{i})=0\) (no decision from majority voting) are considered as unlabeled and uncolored. Exams such that \(c(\mathbf{y}_{i,j})=1\) and \(y_{i}=y_{j}\), e.g. \((x_{1},x_{2})\) (resp. \((x_{3},x_{8})\)), will be strongly attracted while less attracted to patients with \(c(\mathbf{y}_{i})<1\), e.g. \(x_{5,6}\) (resp. \(x_{7,9}\)). Groups of exams with different \(y\) scores are repelled.

To evaluate the impact of contrastive pretraining at low data regime, we perform fine-tuning with 10% and 1% of annotated exams. The contrastive pretrained encoder is used to initialize the U-Net encoder; the whole encoder-decoder architecture is then fine-tuned on the supervised task. Fine-tuning is performed with **5-fold cross validation** on both datasets using the pretrained encoder. Using 1% (resp. 10%) of annotated data, each fold has 39 (resp. 269) training data and 12 (resp. 83) validation data. We build a hold-out test set of 500 volumes\({}^{2}\), not used during any training step, with data from both datasets to report our results. We also compared fine-tuning from our pretrained encoder to a model trained from random initialization. Fine-tuning results with 10% of annotated data are reported in the supplementary material.

Footnote 2: Since the 100 validation cases on the PI-CAI challenge website are hidden, we could not compare our method to the leaderboard performances.

**Computing infrastructure.** Optimizations were run on GPU NVIDIA T4 cards.

## 3 Results and discussion

The 3D U-Net network outputs lesion segmentation masks, which are thresholded following the dynamic thresholding proposed in [4], after which connected components are computed. For each connected component, a detection probability is assigned as the maximum value of the network output in this component. The output of this post-processing is a binary mask associated with a detection probability per lesion.
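This post-processing step, together with the 0.1-overlap detection criterion used below, can be sketched in Python with scipy. This is our own illustrative reimplementation, not the released code: the dynamic thresholding of [4] is abstracted into a fixed `threshold` argument, and overlap is taken here as intersection-over-union, whereas the paper follows the definition of [22]:

```python
import numpy as np
from scipy import ndimage

def extract_lesions(prob_map, threshold=0.5):
    """Binarize the network output and return one (mask, probability) per lesion.

    The detection probability of a component is the maximum of the network
    output inside that component.
    """
    binary = prob_map > threshold
    labels, n = ndimage.label(binary)  # 3D connected components
    lesions = []
    for k in range(1, n + 1):
        mask = labels == k
        lesions.append((mask, float(prob_map[mask].max())))
    return lesions

def is_detected(pred_mask, ref_mask, min_overlap=0.1):
    """A lesion counts as detected if its overlap with the reference exceeds 0.1."""
    inter = np.logical_and(pred_mask, ref_mask).sum()
    union = np.logical_or(pred_mask, ref_mask).sum()
    return union > 0 and inter / union > min_overlap

# Toy example on a small 3D volume:
prob = np.zeros((8, 8, 8))
prob[2:4, 2:4, 2:4] = 0.9
ref = np.zeros((8, 8, 8), bool)
ref[2:4, 2:4, 2:4] = True
for mask, p in extract_lesions(prob):
    print(p, is_detected(mask, ref))  # 0.9 True
```

The per-lesion probabilities returned here are what is thresholded at different values for the AUC and mAP computations described next.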
We compute the overlap between each lesion mask and the reference mask. A lesion is considered as a true positive (detection) if the overlap with the reference is above 0.1, as defined in [22]. This threshold is chosen to keep a maximum number of lesions to be analyzed for AUC computation. Different threshold values are then applied for AUC computation. As in [22, 33], lesion detection probability is used to compute AUC values at exam and lesion levels, and average precision (mAP). To compute AUC at exam level we take, as ground truth, the absence or presence of a lesion mask, and, as a detection probability, the maximum probability of the set of detected lesions. At lesion level, all detection probabilities are considered and thresholded with different values, which amounts to limiting the number of predicted lesions. The higher this threshold, the lower the sensitivity and the number of predicted lesions, and the higher the specificity. Results are presented in Table 2. For both datasets, we see that including metadata confidence to condition alignment and uniformity in contrastive pretraining yields better performances than previous state-of-the-art approaches and random initialization. The discrepancy between PI-CAI and private mAP is due to the nature of the datasets: the PI-CAI challenge was designed to detect lesions confirmed by biopsy, while our private dataset contains lesions not necessarily confirmed by biopsy. Our private dataset contains manually segmented lesions that might be discarded if biopsy was performed. The model being fine-tuned on both datasets, PI-CAI exams are overly segmented, which leads to lower mAP values (since our model tends to over-segment on biopsy ground truths). For our clinical application, which aims to reproduce radiologist responses, this is acceptable. We report a significant performance improvement at very low data regime (1% annotated data) compared to existing methods, which is a setting often encountered in clinical practice. To assess the impact of our approach, we perform different ablation studies (shown in the second part of Table 2).

**High confidence** (HC row in Table 2). For pretraining, only exams with confidence equal to 1 are considered but are not perfectly aligned (\(c_{ij}=0.8\,\delta(c_{ij},1)\) in Equation (5)). We can see that considering only confident samples to condition contrastive learning decreased performances.

**Majority Voting** (Majority voting row in Table 2). We removed confidence and used the majority voting output for kernel computation. If two different exams have the same majority vote, we set \(w(y_{i},y_{j})=0.8\) in Equation (5); other \(w\) values are kept unchanged. We can see that using the majority voting output without taking confidence into account leads to decreased performances.

**Biopsy** (Biopsy row in Table 2). We set the confidence of PI-CAI exams to 1 (increasing biopsy confidence), which amounts to setting \(\epsilon=1\) for PI-CAI exams in Equation (4). No particular improvement is observed with this approach.

**Global uniformity.** We remove the conditioning on uniformity: exams are uniformly repelled rather than conditioning repulsion on metadata similarity (which amounts to setting \(w(\mathbf{y}_{i},\mathbf{y}_{j})=0\) in the second term of Equation (3)). Removing uniformity conditioning yields lower performances than the proposed approach (GIU row in Table 2).

Figure 2 shows the impact of our pretraining method on the finetuned U-Net outputs.
Without conditioning, some lesions are missed (cases FN 1, FN 2) and others are falsely detected (cases FP 1, 2 and 3). Adding the conditioned pretraining removed these errors. More examples are provided in the supplementary material. ## 4 Conclusion We presented a new method to take the confidence of metadata, namely the agreement among annotators, into account in contrastive pretraining. We proposed a definition of metadata confidence and a new kernel to condition positive and negative sampling. The proposed method yielded better results for prostate lesion detection than existing contrastive learning approaches on two datasets.
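To make the evaluation protocol of Section 3 concrete, here is a hedged sketch of the lesion matching and exam-level AUC computation. Assumptions: the overlap measure is taken to be intersection-over-union (the exact definition follows [22]), scikit-learn provides the AUC, and `extract_lesions` refers to the illustrative helper above, not the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def lesion_is_true_positive(pred_mask, ref_mask, min_overlap=0.1):
    """A predicted lesion counts as a detection if its overlap with the
    reference mask exceeds 0.1 (overlap taken as IoU in this sketch)."""
    inter = np.logical_and(pred_mask, ref_mask).sum()
    union = np.logical_or(pred_mask, ref_mask).sum()
    return bool(union) and inter / union > min_overlap

def exam_level_auc(exam_lesions, exam_has_lesion):
    """Exam-level AUC: ground truth is lesion presence/absence; the score
    is the maximum detection probability over the exam's lesions.
    `exam_lesions` holds one list of (mask, probability) pairs per exam."""
    scores = [max((p for _, p in lesions), default=0.0) for lesions in exam_lesions]
    return roc_auc_score(exam_has_lesion, scores)
```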
2302.03752
Dynamic Visualization of Gyral and Sulcal Stereoelectroencephalographic contacts in Humans
Stereoelectroencephalography (SEEG) is a neurosurgical method to survey electrophysiological activity within the brain to treat disorders such as Epilepsy. In this stereotactic approach, leads are implanted through straight trajectories to survey both cortical and sub-cortical activity. Visualizing the recorded locations covering sulcal and gyral activity while staying true to the cortical architecture is challenging due to the folded, three-dimensional nature of the human cortex. To overcome this challenge, we developed a novel visualization concept, allowing investigators to dynamically morph between the subjects' cortical reconstruction and an inflated cortex representation. This inflated view, in which gyri and sulci are viewed on a smooth surface, allows better visualization of electrodes buried within the sulcus while staying true to the underlying cortical architecture.
Markus Adamek, Alexander P Rockhill, Peter Brunner, Dora Hermes
2023-02-07T20:57:15Z
http://arxiv.org/abs/2302.03752v1
# Dynamic Visualization of Gyral and Sulcal Stereoelectroencephalographic Contacts in Humans ###### Abstract Stereoelectroencephalography (SEEG) is a neurosurgical method to survey electrophysiological activity within the brain to treat disorders such as Epilepsy. In this stereotactic approach, leads are implanted through straight trajectories to survey both cortical and sub-cortical activity. Visualizing the recorded locations covering sulcal and gyral activity while staying true to the cortical architecture is challenging due to the folded, three-dimensional nature of the human cortex. To overcome this challenge, we developed a novel visualization concept, allowing investigators to dynamically morph between the subjects' cortical reconstruction and an inflated cortex representation. This inflated view, in which gyri and sulci are viewed on a smooth surface, allows better visualization of electrodes buried within the sulcus while staying true to the underlying cortical architecture. _Clinical relevance--_ These visualization techniques might also help guide clinical decision-making when defining seizure onset zones or resections for patients undergoing SEEG monitoring for intractable epilepsy. ## I Introduction Behavior emerges from complex interactions between functionally specialized regions in the brain. Uncovering these regional specializations and the complex networks required to produce behavior has been of scientific and clinical interest for decades. For example, electrode grids placed directly on the surface of the human cortex (Electrocorticography; ECoG) are used to determine the optimal treatment for patients with epilepsy while creating a unique opportunity to investigate human visual, motor, and auditory networks [1, 2, 3]. In recent years, Stereoelectroencephalography (SEEG) has started to replace ECoG in the United States. In the SEEG approach, leads are implanted through straight trajectories, surveying both cortical and sub-cortical areas. Therefore, the electrodes placed along the trajectory will pass through the highly folded human cortex (Figure 1), opening up the possibility of investigating the functional differences between electrodes recording sulcal and gyral activity with high fidelity. This paradigm shift requires us to rethink how we localize, visualize and analyze the activity recorded through this approach, creating a new set of challenges. First, visualizing the location of electrodes within the sulcus is difficult. Investigators and clinicians have to look either at (1) 2-D slice stacks or (2) semi-transparent 3D models, which are both impractical for different reasons. While looking at electrode locations on the anatomical 2-D slices provides an accurate anatomical description for a specific contact, it is impractical to investigate widespread network activity, as there is no conceivable way to visualize all electrodes simultaneously. This issue can be somewhat addressed by viewing the electrode locations on a semi-transparent reconstructed 3D model. However, two electrodes that appear close to each other in the semi-transparent brain might be considered far apart if the folded nature of the cortex is taken into account (Figure 1). Secondly, the recorded activity needs to be viewed in a space that allows accurate association between the recorded data and the anatomical and functional architecture of the brain. There is mounting evidence that sulcal and gyral cortical grey matter are functionally distinct [4, 5].
Understanding the temporal spread of activation across gyri and sulci, therefore, depends on an accurate representation of the folded cortex [6, 7, 8]. To address these issues, we developed a novel approach for localizing, visualizing, and analyzing SEEG data. In this approach, we morph the 3D surface and associated electrode locations from the gyrated 3D model to an inflated model [9, 10] through models created by Freesurfer. In the inflated model, the cortical surface is smooth, allowing visualization of sulci and gyri while maintaining their topological structure. This approach allows visualization of electrode locations in the original anatomically accurate 3D space with the ability to slowly morph the model into a view representing the position of electrodes on the inflated model, enabling a better view from a more functional perspective. ## II Methods ### Surface Reconstruction The surface and the inflated surface were reconstructed from the subjects' T1-weighted MRI using Freesurfer. In short, the intensity of the T1-weighted MRI is normalized, and non-cerebral voxels are removed. The resulting image is then processed further to remove subcortical components, and finally, a surface mesh of the cortex is created. Next, Freesurfer calculates the inflated model by minimizing the number of folds on the surface. The resulting surface meshes have a 1:1 vertex identity, allowing a surface vertex to be identified in the inflated surface. ### Implementation The visualization is available in MATLAB and Python, using Freesurfer's reconstructed surface and inflated model. Both implementations are freely available on GitHub. The MATLAB version was implemented as a standalone package [11] and subsequently integrated into the Versatile Electrode Localization Framework (VERA) [12]. The implementation can also be used independently of VERA, allowing integration into existing workflows. In addition to the 3D model morphing, points (through scatter3) or text can also be morphed. The implementation enables linking these different visualizations to ensure that morphing one object results in correct morphs of all linked objects. Furthermore, the code is easily extendable to create additionally linked visualizations. The Python version was implemented as part of the MNE-Python package [13]. The MNE-Python implementation can be seen here ([https://mne.tools/dev/auto_tutorials/clinical/20_seeg.html](https://mne.tools/dev/auto_tutorials/clinical/20_seeg.html)). ### Electrode Locations Electrode locations were reconstructed from a post-op CT using VERA. The CT was first co-registered with the T1-weighted MRI used for surface reconstruction. Next, we use VERA's fully-automated segmentation algorithm to determine the electrode locations from the CT, followed by manual correction. ### Algorithm We morph between the surface and inflated model using a morphing parameter \(0\leq\sigma\leq 1\). Assuming the \(i^{th}\) vertex \(p_{ci}\) of the cortical surface corresponds to the \(i^{th}\) vertex \(p_{ii}\) on the inflated model, the morphed vertex location \(p_{mi}\) is calculated as \[p_{mi}=(1-\sigma)p_{ci}+\sigma p_{ii}\] Next, we determined which electrodes are within the cortical grey matter versus those in subcortical structures or white matter. This can be achieved through multiple avenues.
The simplest method is to define a distance threshold \(d\) so that only those electrodes \(e_{j}\) remain which satisfy the condition \[d_{j}=\min_{\forall i}\lVert e_{j}-p_{ci}\rVert<d\] In our case, the distance threshold was set to \(d=4mm\). Therefore, any electrode \(e_{j}\) close enough to the cortical surface will be visualized (see the code sketch at the end of this article). However, cortical thickness varies across the cortex as well as between subjects. An alternative method is to determine electrode locations through volume-derived labels. Finally, for each electrode location \(e_{j}\), we determine the closest cortical vertex \(p_{ci_{j}}\) and its associated inflated vertex location \(p_{ii_{j}}\). The inflated vertex location \(p_{ii_{j}}\) is then used to determine the morphed electrode location \(e_{mj}\). \[i_{j}=\arg\min_{\forall i}\lVert p_{ci}-e_{j}\rVert\] \[e_{mj}=(1-\sigma)e_{j}+\sigma p_{ii_{j}}\] ## III Results To illustrate the issues solved by our implementation, we present four scenarios. The first scenario is the classical view of the cortex on a non-transparent cortical reconstruction. Without transparency, only a few of the 138 contacts (the patient was implanted with 231 electrodes, and 138 passed our \(4mm\) distance criterion) are visible in Figure 2 A. However, after morphing the surface and electrode locations, we can determine the anatomical identity of all cortical electrodes (Figure 2 A, \(\sigma\)=1). In the second scenario, we use a semi-transparent surface to illustrate the issue of three-dimensional representations. While in this view (Figure 2 B) all electrodes are visible, without morphing it is hard to determine their anatomical identity. Representing the same subject but color-coding sulci and gyri helps illustrate the issue of identifying recording locations buried within the sulci. As expected, sulci cannot be visualized without morphing, and electrodes recording activity within these sulci cannot be observed (Figure 2 C). Lastly, we overlapped the sulcus map with visual field areas identified via the neuropythy package [14]. Without the inflated model, identifying the electrodes likely to record activity from V1v (neon green) and PHC1 (dark blue) would not be possible (Figure 2 D). ## IV Conclusions Here we present a novel method to visualize the relationship between electrode locations and their anatomical recording location for patients implanted with SEEG electrodes. Fig. 1: **Schematic representation of the electrode location distance issue in Euclidean space.** The left image shows a hypothetical SEEG trajectory passing through the folded cortex. SEEG recording electrodes are equally spaced along the trajectory. However, if the cortical grey matter is stretched out (as it would be in an inflated representation), the distances between the recording locations change. For example, the distance between electrodes 2 and 3 increases since the sulcus separates them. Note that the trajectory crosses a sulcus to demonstrate how sulci and gyri are represented on the inflated brain, not as a realistic trajectory. The shift from ECoG to SEEG has opened up new clinical and research opportunities but also requires us to rethink the methods we use to visualize and analyze the recorded activity. These opportunities span a wide range of active research areas, all of which will benefit from the additional information provided by SEEG. One of these active research areas investigates the temporal dynamics of auditory processing.
While some research suggests a caudal to rostral spread of cortical activity during auditory information processing, others refute this hypothesis [15, 16, 17]. However, evidence for this hypothesis of auditory processing in humans was primarily gathered via ECoG grids, which do not record sulcal activity. SEEG activity, combined with the presented visualization method, might elucidate current debates by taking into account the complex underlying cytoarchitectural differences between sulci and gyri in the auditory cortex [18]. Similarly, visual processing is distributed smoothly across specific gyral and sulcal locations [19, 20, 21]. Therefore, knowledge of the location of a recording electrode is of paramount importance. Lastly, the proposed visualization could help understand the underlying mechanism of traveling waves. These spatially organized electrophysiological patterns have been identified as essential patterns organizing information flow across the cortex [22, 23, 24, 25]. However, the idea of traveling waves is based on a flat cortical surface on which the wave propagates, necessitating an inflated brain model to examine traveling waves faithfully. Fig. 2: **Example of an SEEG subject being morphed from the 3D surface to the inflated model.** Three different visualizations of the same subject at different morphing steps. **(A)** Reconstructed and labeled surfaces, including implanted electrode locations. As can be seen here, without inflation, almost none of the implanted electrodes can be visualized, as the gyrated surface hides them. As we inflate the surface, we can visualize electrodes recording sulcal activity. **(B)** Semi-transparent reconstructed cortical surface including implanted electrode trajectories. While a semi-transparent surface allows visualization of deeper electrodes, identifying the anatomical position of an implanted electrode is challenging. **(C)** In this view, the gyri are labeled in red, and the sulci are labeled in blue. As expected, in the uninflated view, no sulci are visible. **(D)** Visualization of visual fields from a lateral (top) and medial (bottom) view. Locations of electrodes within PHC1 (dark blue, top) and V1v (neon green, bottom) can only be identified after inflating the cortex. ## Acknowledgment Research reported in this publication was supported by the National Institute Of Mental Health of the National Institutes of Health under Award Numbers R01-MH122258 (DH, PB), R01-EB026439 (PB), P41-EB018783 (PB), U24-NS109103 (PB), U01-NS108916 (PB) and U01-NS128612 (PB, DH). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Figure 1 was created with BioRender.com.
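The morphing algorithm of Section II is simple enough to restate in a few lines of Python. The sketch below is a paraphrase under the assumption of \(N\times 3\) NumPy vertex arrays with the 1:1 Freesurfer vertex correspondence; it is not the VERA or MNE-Python implementation.

```python
import numpy as np

def morph_surface(cortical_verts, inflated_verts, sigma):
    """Linear morph p_m = (1 - sigma) * p_c + sigma * p_i between the
    cortical and inflated meshes (1:1 vertex correspondence assumed)."""
    return (1.0 - sigma) * cortical_verts + sigma * inflated_verts

def morph_electrodes(electrodes, cortical_verts, inflated_verts, sigma, d_max=4.0):
    """Keep electrodes within d_max mm of the cortical surface, snap each
    to its nearest cortical vertex, and morph it toward the matching
    inflated vertex location."""
    morphed = []
    for e in electrodes:
        dists = np.linalg.norm(cortical_verts - e, axis=1)
        i_j = int(np.argmin(dists))      # nearest cortical vertex index
        if dists[i_j] < d_max:           # distance criterion d = 4 mm
            morphed.append((1.0 - sigma) * e + sigma * inflated_verts[i_j])
    return np.asarray(morphed)
```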
2310.17536
Classical Liouville Action and Uniformization of Orbifold Riemann Surfaces
We study the classical Liouville field theory on Riemann surfaces of genus $g>1$ in the presence of vertex operators associated with branch points of orders $m_i>1$. In order to do so, we consider the generalized Schottky space $\mathfrak{S}_{g,n}(\boldsymbol{m})$ obtained as a holomorphic fibration over the Schottky space $\mathfrak{S}_g$ of the (compactified) underlying Riemann surface. Those fibers correspond to configuration spaces of $n$ orbifold points of orders $\boldsymbol{m}=(m_1,\dots,m_n)$. Drawing on the previous work of Park, Teo, and Takhtajan \cite{park2015potentials} as well as Takhtajan and Zograf \cite{ZT_2018}, we define Hermitian metrics $\mathsf{h}_i$ for tautological line bundles $\mathscr{L}_i$ over $\mathfrak{S}_{g,n}(\boldsymbol{m})$. These metrics are expressed in terms of the first coefficient of the expansion of covering map $J$ of the Schottky domain. Additionally, we define the regularized classical Liouville action $S_{\boldsymbol{m}}$ using Schottky global coordinates on Riemann orbisurfaces with genus $g>1$. We demonstrate that $\exp{S_{\boldsymbol{m}}/\pi}$ serves as a Hermitian metric on the $\mathbb{Q}$-line bundle $\mathscr{L}=\bigotimes_{i=1}^{n}\mathscr{L}_i^{\otimes (1-1/m_i^2)}$ over $\mathfrak{S}_{g,n}(\boldsymbol{m})$. Furthermore, we explicitly compute the first and second variations of the smooth real-valued function $\mathscr{S}_{\boldsymbol{m}}=S_{\boldsymbol{m}}-\pi\sum_{i=1}^n(m_i-\tfrac{1}{m_i})\log\mathsf{h}_{i}$ on the Schottky deformation space $\mathfrak{S}_{g,n}(\boldsymbol{m})$. We establish two key results: (i) $\mathscr{S}_{\boldsymbol{m}}$ generates a combination of accessory and auxiliary parameters, and (ii) $-\mathscr{S}_{\boldsymbol{m}}$ acts as a K\"{a}hler potential for a specific combination of Weil-Petersson and Takhtajan-Zograf metrics that appear in the local index theorem for orbifold Riemann surfaces \cite{ZT_2018}.
Behrad Taghavi, Ali Naseh, Kuroush Allameh
2023-10-26T16:27:44Z
http://arxiv.org/abs/2310.17536v2
# Classical Liouville Action and Uniformization of Orbifold Riemann Surfaces ###### Abstract We study the classical Liouville field theory on Riemann surfaces of genus \(g>1\) in the presence of vertex operators associated with branch points of orders \(m_{i}>1\). In order to do so, we will consider the generalized Schottky space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) obtained as a holomorphic fibration over the Schottky space \(\mathfrak{S}_{g}\) of the (compactified) underlying Riemann surface. The fibers of \(\mathfrak{S}_{g,n}(\boldsymbol{m})\to\mathfrak{S}_{g}\) correspond to configuration spaces of \(n\) orbifold points of orders \(\boldsymbol{m}=(m_{1},\ldots,m_{n})\). Drawing on the previous work of Park, Takhtajan, and Teo [1] as well as Takhtajan and Zograf [2], we define Hermitian metrics \(\mathsf{h}_{i}\) for tautological line bundles \(\mathscr{L}_{i}\) over \(\mathfrak{S}_{g,n}(\boldsymbol{m})\). These metrics are expressed in terms of the first coefficient of the expansion of the covering map \(J\) near each singular point on the Schottky domain. Additionally, we define the regularized classical Liouville action \(S_{\boldsymbol{m}}\) using Schottky global coordinates on Riemann orbisurfaces with genus \(g>1\). We demonstrate that \(\exp[S_{\boldsymbol{m}}/\pi]\) serves as a Hermitian metric in the holomorphic \(\mathbb{Q}\)-line bundle \(\mathscr{L}=\bigotimes_{i=1}^{n}\mathscr{L}_{i}^{\otimes(1-1/m_{i}^{2})}\) over \(\mathfrak{S}_{g,n}(\boldsymbol{m})\). Furthermore, we explicitly compute the first and second variations of the smooth real-valued function \(\mathscr{S}_{\boldsymbol{m}}=S_{\boldsymbol{m}}-\pi\sum_{i=1}^{n}(m_{i}-\frac {1}{m_{i}})\log\mathsf{h}_{i}\) on the Schottky deformation space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\). We establish two key results: (i) \(\mathscr{S}_{\boldsymbol{m}}\) generates a combination of accessory and auxiliary parameters, and (ii) \(-\mathscr{S}_{\boldsymbol{m}}\) acts as a Kahler potential for a specific combination of Weil-Petersson and Takhtajan-Zograf metrics that appear in the local index theorem for orbifold Riemann surfaces [2]. The obtained results can then be interpreted in terms of the complex geometry of the Hodge line bundle equipped with Quillen's metric over the moduli space \(\mathfrak{M}_{g,n}(\boldsymbol{m})\) of Riemann orbisurfaces and the tree-level approximation of conformal Ward identities associated with quantum Liouville theory.
_Dedicated to Professor Hessamaddin Arfaei on the occasion of his 75th birthday_ ## 1 Introduction * 2 Classical Liouville Action and Uniformization Theory * 2.1 Fuchsian and Schottky Uniformizations * 2.2 Projective Connections and Energy-Momentum Tensor * 3 Geometry of Teichmuller, Moduli, and Schottky Spaces * 3.1 Teichmuller Space \(\mathcal{T}(\Gamma)\) * 3.1.1 Complex Structure on \(\mathcal{T}(\Gamma)\) * 3.1.2 Variational Formulas * 3.1.3 Kahler Metrics on \(\mathcal{T}(\Gamma)\) * 3.2 Moduli Spaces \(\mathcal{M}_{g,n}\) and \(\mathfrak{M}_{g,n}(\boldsymbol{m})\) * 3.2.1 \(\mathfrak{M}_{0,n}(\boldsymbol{m})\) * 3.3 Schottky Space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) * 4 Classical Liouville Action * 4.1 Riemann Orbisurfaces of Genus 0 * 4.2 Riemann Orbisurfaces of Genus >1 * 5 Potentials for Weil-Petersson and Takhtajan-Zograf Metrics * 5.1 Potentials for Cuspidal and Elliptic TZ Metrics on \(\mathcal{M}_{0,n}\) * 5.2 Chern Forms and Potentials on Schottky Space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) * 6 Discussion and some Future Directions * A Introduction to Orbifold Riemann Surfaces * A.1 Analytic Geometry * A.1.1 Complex Analytic Spaces and Analytic Mappings * A.1.2 An Intermezzo on Line Bundles and Divisors * A.1.3 Finite Group Actions, Quotient Singularities, and Galois Coverings * A.1.4 Complex Analytic Orbifolds * A.2 Orbifold Riemann Surfaces * A.3 Universal Orbifold Covering and Orbifold Fundamental Group * A.4 Orbifold Euler Characteristic and Riemann-Hurwitz Formula * A.5 Orbisheaves, Orbibundles, and Orbidivisors * A.6 Orbifold Metrics * A.6.1 Hyperbolic metric on Riemann Orbisurfaces * A.7 Orbifold Differential Forms and Automorphic Forms * B Geometric structures on orbifolds * B.1 Basic definitions and some theorems * B.2 Hierarchy of Geometric Structures * B.3 Orbifold \(\mathbb{CP}^{1}\)-Structure and Projective Connections * C Asymptotics Near Elliptic and Parabolic Fixed Points * C.1 Quadratic and Beltrami Differentials * D List of symbols in the main text * E List of symbols in the appendices ## 1 Introduction Conformal field theory in two dimensions has found a wide range of applications in both physics and mathematics. Perhaps one of the most interesting applications of CFTs in mathematical physics is to the geometry of surfaces: This is most clear in Liouville CFT, introduced by Polyakov [3], which can be viewed as a quantum theory of geometry in two dimensions [4; 5]. This theory admits two-dimensional surfaces of constant negative curvature (possibly with sources) as its classical solutions. It is then natural to consider these classical solutions as critical points of a certain functional defined on the space of all smooth conformal metrics on a given Riemann surface. In the context of string theory, this functional is known as the _Liouville action functional_ while its critical value is usually called the _classical Liouville action_. From a mathematical perspective, the connection between Liouville theory and the complex geometry of moduli spaces of Riemann surfaces was first established by Takhtajan and Zograf [6; 7; 8]. One novelty of their work was the use of (Fuchsian or Schottky) projective structures on Riemann surfaces to construct the Liouville action. Zograf and Takhtajan proved that the classical Liouville action is a Kahler potential for the Weil-Petersson (WP) metric on moduli spaces of punctured Riemann spheres [7], as well as on Schottky spaces of compact Riemann surfaces [8].
In the case of punctured Riemann spheres, the classical action is a generating function for the famous _accessory parameters_ of Klein and Poincare. For compact Riemann surfaces, the classical Liouville action is an anti-derivative of a 1-form on the Schottky space given by the difference of Fuchsian and Schottky projective connections. In turn, this 1-form is an anti-derivative of the Weil-Petersson symplectic 2-form on the Schottky space. See [9; 10] for reviews of these results. Later on, Takhtajan and Zograf introduced a new Kahler metric [11; 12], called the Takhtajan-Zograf (TZ) metric [13; 14; 15], on the moduli space \(\mathfrak{M}_{g,n}\) of punctured Riemann surfaces in the process of deriving a local index theorem (in Quillen's form) for families of Cauchy-Riemann operators (for its precise definition, see Sect. 3.1.3). In 2015, Park, Takhtajan, and Teo [1] found a Kahler potential \(\mathsf{h}_{i}\) for the i-th TZ metric in terms of the first coefficient of the Fourier expansion of a covering map \(J\) near the i-th puncture. The authors of [1] also showed that these Kahler potentials are essential for defining classical Liouville actions that are invariant under certain subgroups of the Teichmuller modular group: An appropriate definition of the classical Liouville action on a punctured Riemann surface needs a regularization procedure that introduces a "modular anomaly" (see [16]). The Kahler potentials \(\mathsf{h}_{i}\) are then essential in the cancellation of these anomalous contributions. More recently, a generalization of the local index theorem [12, Theorem 1] to the case of orbifold Riemann surfaces [2] has led Zograf and Takhtajan to introduce yet another Kahler metric on the moduli space of Riemann orbisurfaces. In order to avoid confusion, this new Kahler metric will be called the _elliptic_ TZ metric while the one introduced in [11, 12] will be called the _cuspidal_ TZ metric.1 Using the results of [1], Zograf and Takhtajan [2] also found a Kahler potential for the i-th elliptic TZ metric in the case of genus zero orbifold Riemann surfaces. Footnote 1: It was demonstrated by the authors of [2] that the elliptic TZ metric converges to the cuspidal TZ metric in the limit that the opening angle of the corresponding elliptic fixed point approaches zero. Motivated by the results of [1, 2], this manuscript explores the classical limit of Liouville field theory (LFT) on orbifold Riemann surfaces with genus \(g>1\) using the Schottky global coordinates.2 Our main result can be viewed as an extension of [1, Theorems 1,2] to the case of orbifold Riemann surfaces (both compact and with punctures): While the authors of [1] considered the classical Liouville action on (generalized) Schottky space of punctured Riemann surfaces, we have to take into account the contributions of orbifold points to the Liouville action as well. Despite the fact that some aspects of this generalization might be familiar to mathematicians and experts, we have still chosen to include them here in order to make this manuscript self-contained and more accessible. In particular, while the methods of proof in this work closely resemble those in [1, 2, 16] (and the references therein), the details of the calculations for the case of orbifolds have, to the best of our knowledge, not appeared explicitly anywhere in the literature. Footnote 2: While our main results are derived for the case of Riemann orbisurfaces with genus \(g>1\), we still study genus zero Riemann orbisurfaces to draw some important lessons.
From a mathematical perspective, our result provides evidence that the connection between the classical Liouville action and Quillen's metric in the Hodge line bundle (see [16]) extends to the orbifold setting.3 From the point of view of physics, the results of this paper have multiple applications, many of which stem from the connection between the partition function of Liouville theory on a Riemann surface with conical singularities and correlation functions of Liouville vertex operators corresponding to conical defects (see, e.g. [17, 18, 19]). Footnote 3: Some other mathematical implications of our result will be highlighted in a forthcoming shorter version of this manuscript. There are additional reasons, from the physics perspective, to study CFTs like LFT on Riemann orbisurfaces with genus \(g\). One notable rationale comes from the fact that many established constructions that relate geometry and entanglement are based on bipartite entanglement. For instance, the emergence of eternal black holes from quantum entanglement between two copies of the CFT in a thermofield double (TFD) state [20], or the Ryu-Takayanagi [21] minimal area surface in AdS anchored on the boundary of a sub-region, which determines spatial entanglement between that sub-region and its complement in the dual theory. Likewise, the MERA ansatz reveals an additional dimension for AdS spacetime in the direction of increasing or decreasing entanglement [22]. Additionally, linearized Einstein equations can be derived from the entanglement of the underlying quantum degrees of freedom [23]. However, it is possible for the degrees of freedom to be entangled in a multipartite manner, much like how many-point correlation functions cannot be deduced from lower correlations. This is evident in tripartite entangled states such as GHZ (Greenberger-Horne-Zeilinger) and W states. These states (and some of their deformations) are similar to the TFD state and can be described by integrating over half of a higher genus surface (with possible singularities). Another reason stems from the Renyi entropies, \(S_{n}\). For a reduced density matrix \(\rho\) of a spatial region A, \[S_{n}=\frac{1}{1-n}\log\mathrm{Tr}(\rho^{n})=\frac{1}{1-n}\left(\log Z_{n}-n \log Z_{1}\right),\] where \(Z_{n}\) is the partition function on an orbifold Riemann surface with non-zero genus and \(\mathbb{Z}_{n}\) symmetry. This orbifold surface can be constructed by gluing together \(n\) copies of the original system across the entangling region \(A\) with some disjoint regions.4 The Schottky uniformization suggests that distinct phases should be considered in studying the \(Z_{n}\) partition function, and, in order to prove the RT formula, the extension of those distinct phases into a quotient of \(\mathbb{H}_{3}\) should be explored. Actually, the dominant contribution, determined by the least action principle based on the values in each phase, is important in determining the wave function and proving the RT formula.5 Accordingly, this implies that studying the CFT on orbifold Riemann surfaces is especially instrumental in determining whether the assumption of replica symmetry holds true in the dual gravitational system. Footnote 4: See e.g.[24]. Footnote 5: For attempts in this direction see [25; 26]. As another reason, it is worth noting that the correlation functions of twist operators in a CFT have a connection to partition functions on orbisurfaces with different genera.
Specifically, this holds for a CFT that arises from the low energy limit of a 2-dimensional sigma model with a target space of \(M^{n}/S_{n}\) (the symmetric orbifold of \(n\) copies of \(M\)). This relationship was highlighted in a paper by Lunin and Mathur [27]. Interestingly, the sigma model with the aforementioned target space can describe the low energy behavior of the \(D1-D5\) system [28; 29; 30], which generates a near horizon structure of AdS\({}_{3}\times S^{3}\times M_{4}\), as presented in [31]. Thus, possessing knowledge about the Liouville action on orbisurfaces can be highly advantageous for gaining a better insight into not only the string theoretical constructions of AdS\({}_{3}\)/CFT\({}_{2}\), but also for addressing important topics related to black hole physics, such as microstate counting [32]. Given that we intend to devote a part of this work to extending our findings from Riemann orbisurfaces to conical Riemann surfaces, it is essential to outline here some motivations behind this choice. One motivation ties into the importance of investigating quantum gravity in three dimensions. Previous research has shown [33; 34]6 that if one only considers smooth saddle-points when calculating the gravitational path integral in 3-dimensional gravity, the resulting regularized partition function is plagued by two issues. Firstly, the range of twists at a constant spin is continuous rather than discrete. Secondly, when dealing with high spins and energies near the edge of the spectrum, the density of states becomes negative. The first difficulty can potentially be resolved by considering recent findings that suggest the dual theory of 2-dimensional AdS gravity is an ensemble of one-dimensional quantum mechanics [37; 38]. To address the issue of non-unitarity, it has been proposed that extra contributions should be added to the path integral over metrics, namely Seifert manifolds, which are off-shell configurations [39].7 It is particularly interesting to note that, through Kaluza-Klein reduction, the solutions of the derived 2-dimensional theory are conical Riemann surfaces. Footnote 7: In another proposal, the 3-dimensional theory is modified by adding some special massive particles, which implies that one should consider 3-dimensional conical manifolds besides the smooth saddles; see [40]. The study of conical Riemann surfaces also plays a crucial role in addressing a significant concern within the realm of 2-dimensional CFTs. Ideally, one aims to resolve the constraints of conformal invariance and unitarity to ascertain the permissible values for conformal dimensions \(\Delta_{i}\) and Operator Product Expansion (OPE) coefficients. This pursuit would lead to a comprehensive classification of 2-dimensional CFTs. But since no such exhaustive classification exists (up to now), one can at least explore the universal aspects of those data, i.e., those that hold true in any conformal field theory. When all scalar operators in three-point functions possess high dimensions (i.e., they are "heavy"), a universal formula for the averaged value of OPE coefficients emerges for any unitary and compact 2-dimensional CFT with a central charge \(c\) greater than 1, as detailed in references [41; 42] (see also [43]). Notably, when \(12\Delta_{i}/c<1\), this formula was found [42] to be connected to the DOZZ (Dorn-Otto-Zamolodchikov-Zamolodchikov) formula for the structure constants of vertex operators in Liouville theory.
Actually, the classical correlation function of Liouville vertex operators on a Riemann surface with genus \(g\) can be linked to the on-shell value of the Liouville action functional on the same Riemann surface, albeit with the insertion of conical points at the positions of those operators, effectively transforming it into a conical Riemann surface. As a result, delving into LFT on conical Riemann surfaces offers a dual benefit. It not only allows us to investigate the universal features of OPEs within 2-dimensional CFTs but also sheds light on certain facets of 3-dimensional gravity in the presence of heavy particles, a realm characterized by 3-dimensional geometry with conical defects.8 Footnote 8: An operator with \(12\Delta_{i}/c>1\) is dual to a black hole state. See [43]. The aforementioned observation presents a different aspect when examined within the context of the bulk dual. Within semiclassical gravity, the wormhole amplitudes can be understood as averaged solutions to the mentioned CFT's bootstrap constraints in the semiclassical limit [44; 45].9 To be more precise, the Euclidean wormhole solutions provide connected contributions to the average of products of CFT correlation functions.10 Moreover, by starting from a two-sphere boundary wormhole with \((n+1)\) massive particles going through the wormhole and then analytically continuing the mass of \((m+1)\) of them to the black hole regime, the two-sphere boundaries are effectively joined at their \((m+1)\) pairs of insertion points. This results in the creation of a genus-\(m\) handlebody with \(2(n-m)\) conical singularities [44].11 Consequently, the LFT on the single conical boundary of the handlebody not only is connected to the analytical continuation of the dimension of defect operators (the mass of massive particles) on two-boundary wormholes but also can shed light on the statistical distribution of CFT data in some regime of scaling dimensions. Footnote 11: The parameters that define operator dimensions change into moduli of the (conical) Riemann surface as the operators are made heavy. Furthermore, it is established that there exist numerous distinct families of black hole microstates, each comprising an infinite number of members [47; 48]. These families are also closely related to geometries featuring Einstein-Rosen bridges of potentially immense volume. Intriguingly, it has been demonstrated that a substantial reduction in the dimension of the Hilbert space can happen by adding the contribution of wormhole saddle points in the gravitational path integral. These wormholes yield minute yet universal contributions to the quantum overlap of candidate black hole microstates and shed light on the truly orthogonal ones [48].12 Accordingly, this concept can also help to resolve the problem [50] of the growth of holographic complexity at exponentially large times. Actually, some of the microstates are created by massive particles with masses below the black hole threshold, which reside behind the horizon without altering the mass. Therefore, by analytically continuing external operators to the black hole regime in two-boundary wormholes, the LFT on conical Riemann surfaces can also offer insights into the minute overlaps between different microstates and, ultimately, provide a deeper understanding of the Bekenstein-Hawking entropy and holographic complexity. Footnote 12: See also [49].
Even more intriguingly, when one integrates out the mentioned (on-shell) wormholes in 3-dimensional gravity coupled to sufficiently massive point particles, it results in the emergence of random bulk 3-point interactions among these point particles. These interactions exhibit the same statistical properties as the boundary OPE coefficients [44]. As a consequence, it becomes apparent that the LFT on conical Riemann surfaces can also be utilized to investigate these random couplings within the bulk Effective Field Theory and to offer a controlled and semiclassical way to realize the mechanism originally proposed by Coleman, Giddings and Strominger [51; 52]. There are also other motivations for writing this paper, which we will mention in the Discussion section, where more possible applications of our results are explored. #### Related Works At this stage, we would like to make some comments about the relation between our work and that of other authors who have studied neighbouring questions: * In addition to the study of the classical Liouville action on generalized Schottky space of punctured Riemann surfaces, reference [1] also studied Liouville theory on punctured Riemann surfaces with quasi-Fuchsian global coordinates. Moreover, the authors of [1] have proved the holographic correspondence for the case of punctured Riemann surfaces with both Schottky and quasi-Fuchsian global coordinates.13 While Park and Teo [56] have already extended the results of [1] to the case of orbifold Riemann surfaces with quasi-Fuchsian global coordinates,14 a rigorous study of the Liouville action and holographic correspondence for the case of Riemann orbisurfaces with Schottky global coordinates has not appeared anywhere in the literature. The present manuscript aims to partially fill this gap: We study the classical Liouville action on the Schottky deformation space of Riemann orbisurfaces but leave a rigorous proof of the holographic correspondence to a future work [57].15 Footnote 13: The holographic correspondence for compact Riemann surfaces was proved a long time ago (see [53; 54; 55]) and asserts that the renormalized volume of a hyperbolic 3-manifold, which is a purely three-dimensional quantity in its definition, is equivalent to the classical Liouville action on its conformal boundary — a purely two-dimensional quantity. * Motivated by the study of quantum Hall states on singular surfaces, reference [59] has studied the modular invariant Liouville action on the Riemann sphere with conical singularities. For the special case of Riemann orbisurfaces with genus \(g=0\), our results are in agreement with those of reference [59]. * As we will further discuss in section 6, the main results of this paper (see Theorems 1 and 2) provide strong evidence for a close connection between the classical Liouville action and an (appropriately defined) determinant of the Laplacian on Riemann orbisurfaces with genus \(g>1\); for the Riemann sphere with conical singularities, this connection has also been studied by Kalvin (see, e.g. [60; 61]).16 In this sense, our results are closely related to the studies of the Laplace-Beltrami operator on Riemann orbisurfaces and its spectral properties [63; 64].17 Footnote 14: In particular, reference [56] also includes the proof of holographic correspondence for the case of quasi-Fuchsian orbifolds. Moreover, from a physics perspective, such orbifolds have been studied by Chandra, Collier, Hartman, and Maloney [44].
Footnote 15: While a rigorous proof of holographic correspondence is still outstanding for the case of handlebody orbifolds, many references have studied the Einstein-Hilbert action on AdS\({}_{3}\) with conical singularities in connection to correlation functions of Liouville vertex operators (see, e.g. [43; 58]). Footnote 16: More generally, see [62] for the derivation of a Polyakov-type anomaly formula in this case. Footnote 17: From a physics perspective, the zeta-regularized determinant of the Laplacian on Riemann orbisurfaces has been recently studied in [65]. #### Structure of the Paper The structure of this work is as follows: In section 2, we will briefly review the relationship between correlation functions of heavy Liouville vertex operators corresponding to branch points and the uniformization theory of orbifold Riemann surfaces. Section 3 will cover various topics related to the deformation theory of Ahlfors and Bers. This will include a discussion of some known facts about the geometry of Teichmuller, Schottky, and moduli spaces of Riemann orbisurfaces as well as some variational formulas which we will need throughout this manuscript. Section 4 contains a detailed study of the regularized Liouville action and its geometric properties: Subsection 4.1 studies the regularized classical action on Riemann orbisurfaces of genus \(g=0\) while subsection 4.2 focuses on the classical Liouville action defined for Riemann orbisurfaces of genus \(g>1\). In Section 5, we will first study Kahler potentials \(\mathsf{h}_{i}\) for cuspidal and elliptic Takhtajan-Zograf metrics on both \(\mathcal{M}_{0,n}\) and \(\mathfrak{S}_{g,n}(\boldsymbol{m})\). In particular, we will demonstrate that for certain special line bundles \(\mathscr{L}_{i}\) equipped with Hermitian metrics \(\mathsf{h}_{i}^{m_{i}}\), the first Chern forms are related to the Kahler form of TZ metrics associated with elliptic and parabolic generators. Moreover, we will show that the first Chern form of the \(\mathbb{Q}\)-line bundle \(\mathscr{L}=\bigotimes_{i=1}^{n}\mathscr{L}_{i}^{(1-1/m_{i}^{2})}\) with Hermitian metric \(\exp[S_{\boldsymbol{m}}/\pi]\) is given by \(\frac{1}{\pi^{2}}\omega_{\text{WP}}\); here, \(S_{\boldsymbol{m}}\) denotes the appropriately regularized classical Liouville action. From these results, it is easy to see that a specific combination \(\omega_{\text{WP}}-\frac{4\pi^{2}}{3}\,\omega_{\text{TZ}}^{\text{cusp}}-\frac{\pi}{2}\,\sum_{j=1}^{n_{e}}(m_{j}-\frac{1}{m_{j}})\,\omega_{\text{TZ},j}^{\text{ell}}\) of Weil-Petersson and Takhtajan-Zograf metrics has a global Kahler potential on \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) given by \(\mathscr{S}_{\boldsymbol{m}}=S_{\boldsymbol{m}}-\pi\sum_{i=1}^{n}(m_{i}-\frac{1}{m_{i}})\log\mathsf{h}_{i}\). Theorems 1 and 2 constitute the main findings of this paper and are related to the first and second variations of \(\mathscr{S}_{\boldsymbol{m}}\). In Section 6, we will provide a brief overview of some implications of our findings and discuss potential pathways for future research. Appendix A offers some mathematical background regarding orbifold Riemann surfaces, while Appendix B delves into geometric structures on such orbisurfaces. Moreover, Appendix C outlines the derivation of various asymptotic behaviors that are used throughout the main body of this manuscript. Finally, for the convenience of the reader, lists of symbols used throughout this text are presented in Appendices D and E.
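Before turning to the main body, the Renyi-entropy formula quoted in the Introduction can be sanity-checked numerically. The toy sketch below uses a normalized spectrum standing in for a reduced density matrix, with no pretense of modeling an orbisurface partition function; it verifies that \(S_{n}\to-\mathrm{Tr}\,\rho\log\rho\) as \(n\to 1\).

```python
import numpy as np

def renyi_entropy(eigs, n):
    """S_n = log(Tr rho^n) / (1 - n) for a normalized spectrum `eigs`;
    equivalently (log Z_n - n log Z_1) / (1 - n) with Z_1 = 1."""
    return float(np.log(np.sum(eigs ** n)) / (1.0 - n))

eigs = np.array([0.5, 0.3, 0.2])          # toy reduced density matrix spectrum
for n in (2, 3, 1.000001):                # n -> 1 recovers the von Neumann entropy
    print(n, renyi_entropy(eigs, n))
print("von Neumann:", float(-np.sum(eigs * np.log(eigs))))
```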
## 2 Classical Liouville Action and Uniformization Theory In this section, we will discuss the semi-classical limit of quantum Liouville field theory on hyperbolic Riemann orbisurfaces and its connection with the uniformization theory. Let \(X\) be a compact Riemann surface of genus \(g>1\). In the so-called geometric approach to quantum Liouville theory, developed by Takhtajan in [17, 66, 4, 18] based on the original proposal by Polyakov [67],18 the un-normalized correlation functions of Liouville vertex operators with "charges" \(\alpha_{i}\) on the Riemann surface \(X\) are defined by (we set \(\hbar=1\)) Footnote 18: See references [68, 69, 70, 71, 72] for details regarding the relation between geometric and standard approaches to Liouville CFT. \[\langle V_{\alpha_{1}}(x_{1})\cdots V_{\alpha_{n}}(x_{n})\rangle=\int\limits_{\mathscr{C}\mathscr{M}_{\boldsymbol{\alpha}}(X)}e^{-\frac{1}{2\pi}S[\varphi]}\,\mathcal{D}\varphi, \tag{2.1}\] where the functional integral goes over the space \(\mathscr{C}\mathscr{M}_{\boldsymbol{\alpha}}(X)\) of singular conformal metrics on \(X\) whose asymptotics near the insertion points \(x_{i}\) are prescribed by the charges \(\boldsymbol{\alpha}=(\alpha_{1},\dots,\alpha_{n})\). In this work, the relevant charges correspond to branch points of orders \(m_{i}>1\), so that the surface \(X\) together with the points \(x_{1},\dots,x_{n}\) defines an orbifold Riemann surface \(O\) with singular set \(\operatorname{Sing}(O):=\{x_{1},\dots,x_{n}\}\) and compactified underlying surface \(\hat{X}_{O}\). Then, the Riemann orbisurface \(O\) can be characterized as the triple \((\hat{X}_{O},\operatorname{Sing}(O),\nu)\) where \(\nu:\operatorname{Sing}(O)\to\hat{\mathbb{N}}^{>1}:=\big{(}\mathbb{N}\backslash\{1\}\big{)}\cup\{\infty\}\) is the so-called _branching function_ that assigns to each singular point \(x_{i}\) its corresponding _branching order_ \(m_{i}\in\hat{\mathbb{N}}^{>1}\) for \(i=1,\dots,n\). Now, let \(X_{O}:=\hat{X}_{O}\backslash\{x_{i}\,|\,m_{i}=\infty\}\) be the underlying Riemann surface of \(O\). A Riemann orbisurface \(O\) can be equivalently characterized as a pair \((X_{O},\mathscr{D})\) where the so-called _branch divisor_ \[\mathscr{D}:=\sum_{\{x_{i}\,|\,m_{i}<\infty\}}\left(1-\frac{1}{\nu(x_{i})}\right)x_{i}, \tag{2.2}\] is a \(\mathbb{Q}\)-divisor on \(X_{O}\) (see Section A.1.4 for more details).19 When \(O\) has cusps, i.e. branch points of order \(m_{i}=\infty\), we will denote its compactification by \(\hat{O}:=(\hat{X}_{O},\hat{\mathscr{D}})\) where Footnote 19: By a \(\mathbb{Q}\)-_divisor_ on a Riemann surface \(X\), we simply mean a formal linear combination of points on \(X\) with rational coefficients. \[\hat{\mathscr{D}}:=\mathscr{D}+\sum_{\{x_{i}\,|\,m_{i}=\infty\}}x_{i}. \tag{2.3}\] An orbifold Riemann surface \(O\) with \(g>1\) handles, \(n_{e}\geq 3\) conical points of orders \(2\leq m_{1}\leq\dots\leq m_{n_{e}}<\infty\), and \(n_{p}\geq 0\) punctures is said to have the signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\).
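The hyperbolicity condition used throughout, \(\chi(O)=2-2g-n_{p}-\sum_{i}(1-1/m_{i})<0\) (recalled in the next paragraph), is easy to check mechanically for a given signature; a small sketch, using exact rational arithmetic:

```python
from fractions import Fraction

def orbifold_euler_characteristic(g, orders, n_p):
    """chi(O) = 2 - 2g - n_p - sum_i (1 - 1/m_i) for a signature
    (g; m_1, ..., m_{n_e}; n_p); O is hyperbolic iff chi(O) < 0."""
    deficit = sum(Fraction(m - 1, m) for m in orders)
    return Fraction(2 - 2 * g - n_p) - deficit

# The (2, 3, 7) triangle orbifold has the hyperbolic Euler characteristic
# closest to zero, namely -1/42.
print(orbifold_euler_characteristic(0, (2, 3, 7), 0))
```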
Next, let \(O\) be an orbifold Riemann surface with signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\) and let \(\big{\{}(U_{a},u_{a})\big{\}}_{a\in A}\) be a complex-analytic atlas on \(X_{O}\) with charts \(U_{a}\), local coordinates \(u_{a}:U_{a}\to\mathbb{C}\), and transition functions \(g_{ab}:u_{b}(U_{a}\cap U_{b})\to u_{a}(U_{a}\cap U_{b})\). Denote by \(\mathscr{C}\mathscr{M}(O)\) the space of singular conformal metrics on \(X_{O}\) representing \(\mathscr{D}\). If \(X_{O}^{\text{reg}}:=\hat{X}_{O}\backslash\operatorname{Sing}(O)\) denotes the so-called _regular locus_ of \(O\), every such metric \(ds^{2}\in\mathscr{C}\mathscr{M}(O)\) is given by a collection \(\big{\{}e^{\psi_{a}}|du_{a}|^{2}\big{\}}_{a\in A}\), where the functions \(\psi_{a}\in\mathcal{C}^{\infty}\big{(}U_{a}\cap X_{O}^{\text{reg}},\mathbb{R}\big{)}\) satisfy \[\psi_{a}\circ g_{ab}+\log|g_{ab}^{\prime}|^{2}=\psi_{b}\quad\text{on}\quad U_{a}\cap U_{b}\cap X_{O}^{\text{reg}}, \tag{2.4}\] and near each singular point \(x_{i}\in U_{a}\), \(e^{\psi_{a}}\) has the form \[e^{\psi_{a}}\simeq\begin{cases}\frac{4\,m_{i}^{-2}|u_{a}-x_{i}|^{\frac{2}{m_{i}}-2}}{\Big{(}1-|u_{a}-x_{i}|^{\frac{2}{m_{i}}}\Big{)}^{2}}&\text{for}\qquad m_{i}<\infty,\\ \frac{1}{|u_{a}-x_{i}|^{2}\big{(}\log|u_{a}-x_{i}|\big{)}^{2}}&\text{for}\qquad m_{i}=\infty,\end{cases} \tag{2.5}\] as \(u_{a}\to x_{i}\). According to the classical results of Picard [73, 74] and the more recent work of McOwen [75] and Troyanov [76], when the Euler characteristic \(\chi(O):=\chi(X_{O})-\deg(\mathscr{D})=2-2g-n_{p}-\sum_{i=1}^{n_{e}}(1-1/m_{i})\) is negative -- i.e. when \(O\) is hyperbolic -- there exists a unique singular conformal metric, called the _hyperbolic metric_, on \(X_{O}\) which represents the branch divisor \(\mathscr{D}\) and has constant Gaussian curvature \(-1\) everywhere on \(X_{O}^{\text{reg}}\). If we denote this unique hyperbolic metric by \(ds^{2}_{\text{hyp}}:=\big{\{}e^{\varphi_{a}}|du_{a}|^{2}\big{\}}_{a\in A}\) and assume that each open subset \(U_{a}\subset X_{O}\) includes at most one singular point \(x_{i}\) of order \(m_{i}\), the corresponding function \(\varphi_{a}\) on \(U_{a}\) satisfies the so-called _Liouville equation_ (see, e.g. [69])20 Footnote 20: If the cone point \(x_{i}\) is fixed to be at infinity, instead of the last term on the right hand side of (6), we have \(\pi(1+\frac{1}{m_{i}})\delta(u_{a}-x_{i})\). \[\partial_{u_{a}}\partial_{\bar{u}_{a}}\varphi_{a}=\frac{1}{2}e^{\varphi_{a}}-\pi\left(1-\frac{1}{m_{i}}\right)\delta(u_{a}-x_{i}), \tag{6}\] which is equivalent to \(ds^{2}_{\rm hyp}\) having constant curvature \(-1\) on \(X^{\rm reg}_{O}\) and satisfying the asymptotics (5) near each singular point. The problem is now to define a (suitably regularized) Liouville action functional on the Riemann orbisurface \(O\) -- a functional \(\mathscr{S}_{\mathbf{m}}:\mathscr{CM}(O)\to\mathbb{R}\) such that its Euler-Lagrange equation is the Liouville equation. However, it is well-known (see the discussion in [54]) that a general mathematical definition of the Liouville action functional on a Riemann orbisurface \(O\) of genus \(g>1\) is a non-trivial problem. This is due to the fact that the classical Liouville field \(\varphi\) is not a globally defined function on \(O\) but rather a logarithm of the conformal factor of the metric.
More concretely, due to transformation law (4), local kinetic terms \(|\partial_{u_{a}}\varphi_{a}|^{2}du_{a}\wedge d\bar{u}_{a}\) do not glue properly on \(U_{a}\cap U_{b}\cap X^{\rm reg}_{O}\) and thus cannot be integrated over \(X^{\rm reg}_{O}\). This means that the "naive" Dirichlet-type functional is not well defined and cannot serve as an action for the Liouville theory (it also diverges at the singular points). In other words, the Liouville action functional cannot be defined in terms of a Riemann orbisurface \(O\) alone and additionally depends on the choice of a _global coordinate_ on \(X^{\rm reg}_{O}\) -- a representation \(X^{\rm reg}_{O}\cong\Omega_{K}/K\), where \(K\) is a Kleinian group with an invariant component \(\Omega\subset\mathbb{C}\) and \(\Omega_{K}:=\Omega\backslash\{\text{fixed points}\}\). As we will see in the next subsection, for our purposes, it will be sufficient to consider the case when \(K\) is either a Fuchsian group \(\Gamma\) or a Schottky group \(\Sigma\). Finally, let us mention that once the action functional is defined, we can define the partition function \(\langle O\rangle\), or rather its _free energy_ \(\mathbb{F}_{O}:=-\log\left\langle O\right\rangle\), using the perturbative expansion21 Footnote 21: In order to write this perturbative expansion, we have to temporarily restore \(\hbar\) in (1). \[\mathbb{F}_{O}=-\log\left\langle O\right\rangle=\frac{1}{2\pi\hbar}\mathscr{S}_{\mathbf{m}}[\varphi]+\frac{1}{2}\log\det\biggl{(}\Delta_{0}+\frac{1}{2}\biggr{)}+\mathcal{O}(\hbar), \tag{7}\] of the r.h.s of (1) around the classical solution \(\varphi\), where \(\mathscr{S}_{\mathbf{m}}[\varphi]:\mathfrak{D}(K)/\sim\to\mathbb{R}\) (see Section 3) is the regularized classical Liouville action and \(\Delta_{0}\) is the _Laplace operator_ of the hyperbolic metric acting on functions on \(O\). In the bulk of this paper, we will only be concerned with the classical contribution (i.e. contributions of order \(\hbar^{-1}\)) to the free energy \(\mathbb{F}_{O}\), which is given by the classical Liouville action \(\mathscr{S}_{\mathbf{m}}\). ### Fuchsian and Schottky Uniformizations Consider a hyperbolic orbifold Riemann surface \(O=(X_{O},\mathscr{D})\) with signature \((g;m_{1},\ldots,m_{n_{e}};n_{p})\) and fix a base point \(x_{*}\in X^{\rm reg}_{O}\). Let us choose a standard system of generators for the orbifold fundamental group \[\begin{split}\pi_{1}(O,x_{*})=\bigg{\langle}\mathsf{A}_{1},\mathsf{B}_{1},\ldots,\mathsf{A}_{g},\mathsf{B}_{g},\mathsf{C}_{1},\ldots,\mathsf{C}_{n_{e}},\mathsf{P}_{1},\ldots,\mathsf{P}_{n_{p}}\,\bigg{|}\\ \mathsf{C}_{1}^{m_{1}}=\cdots=\mathsf{C}_{n_{e}}^{m_{n_{e}}}=\prod_{i=1}^{g}[\mathsf{A}_{i},\mathsf{B}_{i}]\prod_{j=1}^{n_{e}}\mathsf{C}_{j}\prod_{k=1}^{n_{p}}\mathsf{P}_{k}=\mathrm{id}\,\bigg{\rangle},\end{split} \tag{10}\] where the \(\mathsf{A}_{i}\), \(\mathsf{B}_{i}\), \(\mathsf{C}_{j}\), and \(\mathsf{P}_{k}\) are homotopy classes of loops based at \(x_{*}\) and \([\mathsf{A}_{i},\mathsf{B}_{i}]:=\mathsf{A}_{i}\mathsf{B}_{i}\mathsf{A}_{i}^{-1}\mathsf{B}_{i}^{-1}\); see Definition 28 and the discussion following it for more details. The Riemann orbisurface \(O\) with a distinguished system of generators for its fundamental group \(\pi_{1}(O,x_{*})\), up to inner automorphisms of \(\pi_{1}(O,x_{*})\), will be called a _marked Riemann orbisurface_ (see Figure 1).
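As a quick cross-check of the local statements above, one can verify numerically that the conical asymptotics (2.5) solve the Liouville equation (6) away from the singular point; a small finite-difference sketch, illustrative only, with the singular point placed at the origin:

```python
import numpy as np

def psi(u, m):
    """Conformal factor of the hyperbolic cone metric of order m (cf. (2.5))."""
    r = np.abs(u)
    return np.log(4.0 / m**2) + (2.0 / m - 2.0) * np.log(r) - 2.0 * np.log(1.0 - r ** (2.0 / m))

def d_du_dubar(f, u, m, h=1e-4):
    """Numerical d^2 f / (du dubar) = (1/4)(d^2/dx^2 + d^2/dy^2) at u."""
    lap = (f(u + h, m) + f(u - h, m) + f(u + 1j * h, m) + f(u - 1j * h, m) - 4.0 * f(u, m)) / h**2
    return lap / 4.0

u, m = 0.3 + 0.2j, 3
# Both sides of the Liouville equation should agree away from u = 0.
print(d_du_dubar(psi, u, m), 0.5 * np.exp(psi(u, m)))
```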
As a result of the Theorems 26 and 27, all hyperbolic Riemann orbisurfaces are _developable_ (or _good_ in Thurston's language) and hence can be realized as a global quotient \([\mathbb{H}/\Gamma]\) where \(\mathbb{H}:=\left\{z\in\mathbb{C}\,\big{|}\,\operatorname{Im}z>0\right\}\) is the upper half-plane and \(\Gamma\subset\operatorname{PSL}(2,\mathbb{R})\) is a Fuchsian group of the first kind22 with signature \((g;m_{1},\ldots,m_{n_{e}};n_{p})\); this is a direct consequence of the usual uniformization theorem for ordinary Riemann surfaces. The holomorphic orbifold covering map \(\pi_{\Gamma}:\mathbb{H}\to O\) provides a Riemann orbisurface \(O\) with the _Fuchsian global coordinate_, and the hyperbolic metric \(ds_{\mathrm{hyp}}^{2}\in\mathscr{C}\mathscr{M}(O)\) is a push-forward of the Poincare metric \((\operatorname{Im}z)^{-2}|dz|^{2}\) on \(\mathbb{H}\) by the covering map \(\pi_{\Gamma}\). From this point of view, \(\Gamma\) can be thought of as the fundamental group of the Riemann orbisurface \(O\cong[\mathbb{H}/\Gamma]\), and the group isomorphism \(\Gamma\simeq\pi_{1}(O)\) can be viewed as being induced by the holonomy representation \(\operatorname{hol}:\pi_{1}(O)\to\operatorname{PSL}(2,\mathbb{R})\) of the _orbifold hyperbolic structure_.23 Footnote 22: A Fuchsian group is said to be of the _first kind_ if its limit set is the closed real line \(\mathbb{R}\cup\{\infty\}\). Otherwise, a Fuchsian group is said to be of the second kind. Footnote 23: Note that, in the language of orbifold \((G,\mathbb{X})\)-structures introduced in Appx. B, orbifold hyperbolic structures are, in fact, \(\big{(}\operatorname{PSL}(2,\mathbb{R}),\mathbb{H}\big{)}\)-structures. Such a Fuchsian group \(\Gamma\) has a standard presentation, corresponding to the standard generators of \(\pi_{1}(O)\) discussed above, which includes \(2g\) hyperbolic generators \(\alpha_{1},\beta_{1},\ldots,\alpha_{g},\beta_{g}\), \(n_{e}\) elliptic generators \(\tau_{1},\ldots,\tau_{n_{e}}\) of orders \(m_{1},\ldots,m_{n_{e}}\), and \(n_{p}\) parabolic generators \(\kappa_{1},\ldots,\kappa_{n_{p}}\).24 Obviously, the generators of \(\Gamma\) also satisfy Footnote 24: A non-identity element \(\gamma\in\Gamma\) is called _hyperbolic_, _parabolic_, or _elliptic_ if \(\gamma\) is conjugate in \(\mathrm{PSL}(2,\mathbb{R})\) to a _dilation_, _horizontal translation_, or _rotation_ respectively. This is equivalent to \(|\operatorname{tr}(\gamma)|\) being greater than, equal to, or less than \(2\), respectively. \[\prod_{i=1}^{g}\left[\alpha_{i},\beta_{i}\right]\prod_{j=1}^{n_{e}}\tau_{j}\prod_{k=1}^{n_{p}}\kappa_{k}=\mathbb{1}\quad\text{and}\quad\tau_{j}^{m_{j}}=\mathbb{1}\quad(j=1,\ldots,n_{e}), \tag{9}\] where \([\alpha_{i},\beta_{i}]:=\alpha_{i}\beta_{i}\alpha_{i}^{-1}\beta_{i}^{-1}\) and \(\mathbb{1}\) is the identity element of \(\mathrm{PSL}(2,\mathbb{R})\). The Fuchsian group \(\Gamma\), together with a distinguished system of generators \[\big{\{}\alpha_{1},\ldots,\alpha_{g};\beta_{1},\ldots,\beta_{g};\tau_{1},\ldots,\tau_{n_{e}};\kappa_{1},\ldots,\kappa_{n_{p}}\big{\}}, \tag{10}\] is called the _marked Fuchsian group_ corresponding to the Riemann orbisurface \(O\cong[\mathbb{H}/\Gamma]\). The elliptic elements of \(\Gamma\) will have fixed points in \(\mathbb{H}\) and are denoted by \(z_{1}^{e},\ldots,z_{n_{e}}^{e}\) while the fixed points of the parabolic elements lie in \(\partial\mathbb{H}=\mathbb{R}\cup\{\infty\}\) and will be denoted by \(z_{1}^{p},\ldots,z_{n_{p}}^{p}\).
The images of these elliptic and parabolic fixed points under the projection \(\mathbb{H}\to O\cong[\mathbb{H}/\Gamma]\) will be the conical points \(x_{1}^{e},\ldots,x_{n_{e}}^{e}\) and punctures \(x_{1}^{p},\ldots,x_{n_{p}}^{p}\) of \(O\), respectively. For our future convenience, let us also introduce \(\operatorname{Sing}_{m}(O):=\nu^{-1}(m)\) for all \(m\in\hat{\mathbb{N}}^{>1}\). Note that \(\bigsqcup_{m\in\hat{\mathbb{N}}^{>1}}\operatorname{Sing}_{m}(O)\) gives a canonical stratification of the singular set \(\operatorname{Sing}(O)\equiv\operatorname{Supp}(\hat{\mathscr{D}})\), and each \(\operatorname{Sing}_{m}(O)\) for \(m\neq\infty\) represents the stratum of conical points with stabilizer group \(\mathbb{Z}_{m}\). In addition, we will denote by \(\operatorname{Sing}_{\infty}(O):=\nu^{-1}(\infty)\) and \(\operatorname{Sing}_{\lambda}(O):=\bigsqcup_{m\neq\infty}\operatorname{Sing}_{m}(O)\) the subsets of cusps and of conical points in \(\operatorname{Sing}(O)\), respectively. Finally, following [77], we will define the _signature type_ of \(O\) as the unordered set \(\mathsf{s}:=\{\mathsf{s}_{m}\}_{m\in\hat{\mathbb{N}}^{>1}}\) where \(\mathsf{s}_{m}:=\big{|}\operatorname{Sing}_{m}(O)\big{|}\) denotes the cardinality of the stratum of singular points of order \(m\). In particular, we have \(\mathsf{s}_{\infty}=\big{|}\operatorname{Sing}_{\infty}(O)\big{|}=n_{p}\) and \(\sum_{m\in\hat{\mathbb{N}}^{>1}}\mathsf{s}_{m}=\big{|}\operatorname{Sing}(O)\big{|}=n_{e}+n_{p}\). **Remark 2.1**.: Sometimes, when we need to refer to singular points (or fixed points) collectively, we will denote them by \(x_{1},\ldots,x_{n}\) (respectively \(z_{1},\ldots,z_{n}\)) where \(n=n_{e}+n_{p}\) is the total number of singular points (or fixed points) and the indices are ordered such that the corresponding orders of isotropy increase \(2\leq m_{1}\leq m_{2}\leq\cdots\leq m_{n}\leq\infty\). In this situation, the vector of orders \((m_{1},\ldots,m_{n})\) will be denoted by \(\boldsymbol{m}\). Note that with this convention, the first \(n_{e}\) singular points \(x_{1},\ldots,x_{n_{e}}\) will always correspond to conical points \(x_{1}^{e},\ldots,x_{n_{e}}^{e}\) while the remaining \(n_{p}\geq 0\) singular points \(x_{n_{e}+1},\ldots,x_{n}\) will correspond to the punctures \(x_{1}^{p},\ldots,x_{n_{p}}^{p}\) of \(O\). Similar to Fuchsian groups, Schottky groups can also be used to construct Riemann orbisurfaces. We begin with a few definitions: A _Kleinian group_ \(K\) is a discrete subgroup of the Mobius group \(\mathrm{PSL}(2,\mathbb{C})\) that acts properly discontinuously on a subset \(\Omega\subset\hat{\mathbb{C}}\) called the _region of discontinuity_ of \(K\). The complement \(\Lambda=\hat{\mathbb{C}}\backslash\Omega\) is called the _limit set_ of \(K\). In this work, we are particularly interested in Kleinian groups that are free, finitely generated, and strictly loxodromic; such Kleinian groups are called _Schottky groups_ and will be denoted by \(\Sigma\).
It is well-known that for a Schottky group \(\Sigma\) of rank \(g\), the limit set \(\Lambda\) is a Cantor set25 and the region of discontinuity \(\Omega=\hat{\mathbb{C}}\backslash\Lambda\) is a dense connected subset of \(\hat{\mathbb{C}}\) such that the Schottky group \(\Sigma\) acts on \(\Omega\) freely, and the quotient space \(\Omega/\Sigma\) is a closed Riemann surface \(X\) of genus \(g\); this is called a _Schottky uniformization_ of \(X\) and, as a consequence of the retrospection theorem [79] (see also [80; 81]), every closed Riemann surface has such a uniformization. Footnote 25: For more details on the geometry of limit sets see Ref. [78]. Now, let us consider uniformization of the compactified underlying Riemann surface \(\hat{X}_{O}\) by a Schottky group \(\Sigma\). If \(\Omega\) denotes the region of discontinuity of \(\Sigma\), we can subtract from it the pre-images of cusps by the covering map \(\Omega\to\hat{X}_{O}\) to get another planar region \(\Omega_{0}\). The space \(\Omega_{0}\) will uniformize the underlying Riemann surface \(X_{O}\) and we will denote the corresponding covering map \(\Omega_{0}\to X_{O}\cong\Omega_{0}/\Sigma\) by \(\pi_{0}\). Next, we can lift the branch divisor \(\mathscr{D}\) by the covering map \(\pi_{0}:\Omega_{0}\to X_{O}\) to get another branch divisor \[\widetilde{\mathscr{D}}:=\sum_{w_{i}\in\pi_{0}^{-1}(\operatorname{Sing}_{\lambda}(O))}\left(1-\frac{1}{\nu\left(\pi_{\Sigma}(w_{i})\right)}\right)w_{i}, \tag{2.11}\] which lives on the planar region \(\Omega_{0}\). Then, the pair \((\Omega_{0},\widetilde{\mathscr{D}})\) will define a planar Riemann orbisurface \(\dot{\hat{\Omega}}\) such that \(\pi_{\Sigma}:\dot{\hat{\Omega}}\to O\cong\dot{\hat{\Omega}}/\Sigma\) is an orbifold covering map (see [82] for more details). In addition, note that the restriction of \(\pi_{0}\) to \(\Omega^{\text{reg}}:=\Omega_{0}\backslash\operatorname{Supp}\widetilde{\mathscr{D}}\) provides \(X_{O}^{\text{reg}}\) with the _Schottky global coordinate_ such that the space \(\mathscr{C}\mathscr{M}(O)\) is identified with the affine subspace of \(\mathcal{C}^{\infty}(\Omega^{\text{reg}},\mathbb{R})\) consisting of functions \(\psi\) satisfying the condition \[\psi\circ\sigma+\log|\sigma^{\prime}|^{2}=\psi\quad\text{for all}\quad\sigma\in\Sigma, \tag{2.12}\] and representing \(\widetilde{\mathscr{D}}\). Let us now define a _marked Schottky group_ as a Schottky group \(\Sigma\) of rank \(g\) together with a choice of distinguished relation-free system of generators \(L_{1},\dots,L_{g}\) for it. In fact, a choice of marking for the Fuchsian group \(\Gamma\) uniquely determines a marked Schottky group \((\Sigma;L_{1},\dots,L_{g})\). If \(N\) is the smallest normal subgroup of \(\Gamma\) containing \(\{\alpha_{1},\dots,\alpha_{g},\tau_{1},\dots,\tau_{n_{e}},\kappa_{1},\dots,\kappa_{n_{p}}\}\), then \(\Sigma\) is isomorphic to the quotient group \(\Gamma/N\). There is also a notion of equivalence between two marked Schottky groups: \((\Sigma;L_{1},\dots,L_{g})\) is said to be equivalent to \((\Sigma^{\prime};L_{1}^{\prime},\dots,L_{g}^{\prime})\) if and only if there exists a Mobius transformation \(\varsigma\in\operatorname{PSL}(2,\mathbb{C})\) such that \(L_{i}^{\prime}=\varsigma L_{i}\varsigma^{-1}\) for all \(i=1,\dots,g\). Then, the _Schottky space_ \(\mathfrak{S}_{g}\) can be defined as the space of equivalence classes of marked Schottky groups of genus \(g\).
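As a quick illustration of why condition (2.12) only needs to be imposed on a set of generators of \(\Sigma\): by the chain rule, \(\log|(\sigma_{1}\circ\sigma_{2})^{\prime}|^{2}=\log|\sigma_{1}^{\prime}\circ\sigma_{2}|^{2}+\log|\sigma_{2}^{\prime}|^{2}\), so (2.12) propagates to products of generators. A minimal numerical check with two hypothetical Mobius maps (not actual Schottky generators):

```python
import math

def mobius(a, b, c, d):
    """A Mobius map w -> (aw+b)/(cw+d) together with its derivative."""
    f = lambda w: (a*w + b) / (c*w + d)
    df = lambda w: (a*d - b*c) / (c*w + d)**2
    return f, df

A1, A2 = (3, 1, 1, 1), (2, -1, 1, 3)      # two hypothetical Mobius maps
s1, ds1 = mobius(*A1)
s2, ds2 = mobius(*A2)

# The composition s1 o s2 corresponds to the matrix product A1 . A2:
a = A1[0]*A2[0] + A1[1]*A2[2]; b = A1[0]*A2[1] + A1[1]*A2[3]
c = A1[2]*A2[0] + A1[3]*A2[2]; d = A1[2]*A2[1] + A1[3]*A2[3]
s12, ds12 = mobius(a, b, c, d)

w0 = 0.3 + 0.2j
lhs = math.log(abs(ds12(w0))**2)                                  # log|(s1 o s2)'|^2
rhs = math.log(abs(ds1(s2(w0)))**2) + math.log(abs(ds2(w0))**2)   # sum over factors
print(abs(lhs - rhs) < 1e-12)   # True
```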
Now, we can introduce the generalized Schottky space \(\mathfrak{S}_{g,n}(\mathbf{m})\) of Riemann orbisurfaces, both with and without punctures. It is regarded as a holomorphic fibration \(j:\mathfrak{S}_{g,n}(\mathbf{m})\to\mathfrak{S}_{g}\), where the fibers represent configuration spaces of \(n\) labeled points.26 Denote by \(L_{1},\dots,L_{g}\) the system of generators in \(\Sigma\) corresponding to the cosets \(\beta_{1}N,\dots,\beta_{g}N\) in \(\Gamma\). Normalizing the marked Schottky group \((\Sigma;L_{1},\dots,L_{g})\),27 we thereby associate with each marked Schottky group (equivalently, with each \(O\cong\dot{\hat{\Omega}}/\Sigma\)) a point in the generalized Schottky space \(\mathfrak{S}_{g,n}(\mathbf{m})\). Footnote 26: For details, see Section 3.3. Footnote 27: By _normalizing_, we mean using the equivalence notion of marked Schottky groups to set the attracting fixed points of generators \(L_{1}\) and \(L_{2}\) as well as the repelling fixed-point of generator \(L_{1}\) equal to \(0\), \(1\), and \(\infty\) respectively; see Section 3.3 for more details. The Schottky uniformization of an orbisurface \(O\) is connected with the Fuchsian uniformization of it by the commutative diagram (2.13), where each of the mappings is a holomorphic orbifold covering (complex-analytic covering). The normal subgroup \(N\) of \(\Gamma\) corresponds to the group of deck transformations of the covering \(J\). A deck transformation is a homeomorphism \(\mathrm{deck}:\mathbb{H}\to\mathbb{H}\) such that \(J\circ\mathrm{deck}=J\), i.e. such that the diagram of maps commutes. The set of deck transformations forms a group, called the automorphism group of the covering map, \(\mathrm{Aut}(J)\) (see Section A.1.3). Accordingly, the mapping \(J\) can be regarded as a (meromorphic) function on \(\mathbb{H}\), which is automorphic with respect to \(N\) -- i.e. \(J\circ\gamma=J\) for all \(\gamma\in N\). Moreover, \(J\circ\beta_{i}=L_{i}\circ J\) for all \(i=1,\ldots,g\).

### Projective Connections and Energy-Momentum Tensor

Let \(O=(X_{O},\mathscr{D})\) be a hyperbolic Riemann orbisurface with signature \((g;m_{1},\ldots,m_{n_{e}};n_{p})\), and let \(\big{\{}(U_{a},u_{a})\big{\}}\) be a complex-analytic atlas on the underlying Riemann surface \(X_{O}\) with local coordinates \(u_{a}:U_{a}\to\mathbb{C}\) and transition functions \(u_{a}=g_{ab}\circ u_{b}\) on overlaps \(U_{a}\cap U_{b}\). A (meromorphic) _projective connection_ on \(O\) is a collection \(R=\{r_{a}\}_{a\in A}\) of holomorphic functions \(r_{a}\) defined on each \(U_{a}\cap X_{O}^{\mathrm{reg}}\) that satisfy \[r_{b}=r_{a}\circ g_{ab}\,(g^{\prime}_{ab})^{2}+\mathrm{Sch}\,(g_{ab};u_{b})\,, \tag{2.14}\] on every intersection \(U_{a}\cap U_{b}\cap X_{O}^{\mathrm{reg}}\) and are compatible with \(\mathscr{D}\) -- i.e. if \(x_{i}\in U_{a}\cap\mathrm{Sing}(O)\) and \(u_{a}(x_{i})=0\), we have \[r_{a}(u_{a})=\frac{1-1/m_{i}^{2}}{2u_{a}^{2}}+\mathcal{O}\big{(}|u_{a}|^{-1}\big{)}\quad\text{as}\quad u_{a}\to 0. \tag{2.15}\] In the above definition of projective connections, \(\mathrm{Sch}\,(f;z)\) denotes the _Schwarzian derivative_ \[\mathrm{Sch}\,(f;z):=\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3}{2}\left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{2}, \tag{2.16}\] of a holomorphic function \(f(z)\) and can be intuitively viewed as measuring the failure of \(f(z)\) to be the restriction of a Mobius transformation. Such meromorphic projective connections are in one-to-one correspondence with \(\mathbb{CP}^{1}\)-structures on \(O\) (see Section B.3 for more details).
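As a sanity check on definition (2.16), the following SymPy sketch evaluates the Schwarzian derivative symbolically: it confirms that \(\operatorname{Sch}(f;z)\) vanishes identically when \(f\) is a Mobius transformation, is nonzero for a generic map, and satisfies the composition identity quoted as (2.22) below (the test functions are hypothetical):

```python
import sympy as sp

z = sp.symbols('z')

def schwarzian(f, var):
    """Schwarzian derivative Sch(f; var) = f'''/f' - (3/2)(f''/f')^2, cf. (2.16)."""
    f1, f2, f3 = (sp.diff(f, var, k) for k in (1, 2, 3))
    return f3/f1 - sp.Rational(3, 2)*(f2/f1)**2

# Sch vanishes identically on Mobius transformations:
a, b, c, d = sp.symbols('a b c d')
print(sp.simplify(schwarzian((a*z + b)/(c*z + d), z)))    # 0

# ...but not for a generic map:
print(sp.simplify(schwarzian(z**2, z)))                   # -3/(2*z**2)

# Composition identity (quoted as (2.22) below), with hypothetical test functions:
g = z**2 + 1
f = lambda w: w**3 + w
lhs = schwarzian(f(g), z)
rhs = schwarzian(f(z), z).subs(z, g)*sp.diff(g, z)**2 + schwarzian(g, z)
print(sp.simplify(lhs - rhs))                             # 0
```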
**Remark 2.2**.: We note that the coefficient \(h_{i}:=1-1/m_{i}^{2}\) of the leading singular term in the asymptotic behavior (2.15) of projective connections near each singular point \(x_{i}\in\mathrm{Sing}(O)\) does _not_ depend on the choice of chart or complex coordinate \(u\). The above remark means that the difference between two projective connections is a (meromorphic) _quadratic differential_ on \(O\) with only simple poles -- i.e. a collection \(Q=\{q_{a}\}_{a\in A}\) of holomorphic functions on each open subset \(U_{a}\cap X_{O}^{\mathrm{reg}}\) with the transformation law \[q_{b}=q_{a}\circ g_{ab}(g^{\prime}_{ab})^{2}, \tag{2.17}\] and the asymptotic behavior \(q_{a}(u_{a})=\mathcal{O}\big{(}|u_{a}|^{-1}\big{)}\) near each singular point \(x_{i}\in U_{a}\cap\mathrm{Sing}(O)\) with \(u_{a}(x_{i})=0\). Conversely, we can add a meromorphic quadratic differential to a given projective connection to obtain a new projective connection. Since we know that each Riemann orbisurface has at least one \(\mathbb{CP}^{1}\)-structure, i.e. the one given by Poincare-Koebe uniformization, we have the following (see [83]): **Proposition 2.1** (Biswas).: _The space of all \(\mathbb{CP}^{1}\)-structures on \(O\), denoted by \(\mathcal{P}(O)\), is an affine space for the vector space of all meromorphic quadratic differentials on \(O\) with at most simple poles at singularities._ These meromorphic projective connections on \(O\) have the following physical interpretation: For the hyperbolic metric \(ds^{2}_{\mathrm{hyp}}=\big{\{}e^{\varphi_{a}(u_{a},\bar{u}_{a})}|du_{a}|^{2} \big{\}}_{a\in A}\) on \(O\), let us define the following functions on each open subset \(U_{a}\): \[T_{a}=\partial_{u_{a}}^{2}\varphi_{a}-\tfrac{1}{2}(\partial_{u_{a}}\varphi_{a })^{2}\quad\text{and}\quad\bar{T}_{a}=\partial_{\bar{u}_{a}}^{2}\varphi_{a}- \tfrac{1}{2}(\partial_{\bar{u}_{a}}\varphi_{a})^{2}. \tag{2.18}\] The collections \(T_{\varphi}=\{T_{a}\}_{a\in A}\) and \(\bar{T}_{\varphi}=\{\bar{T}_{a}\}_{a\in A}\) are the \((2,0)\) and \((0,2)\) components of the _classical energy-momentum tensor_ on \(O\) and are associated with the _quasi-conformal transformations_ of the hyperbolic metric (see e.g. [19, Appx. B] for more details). In addition, the functions \(T_{a}\big{(}\varphi_{a}(u_{a},\bar{u}_{a})\big{)}\) satisfy the _conservation law_ \[\partial_{\bar{u}_{a}}T_{a}\big{(}\varphi_{a}(u_{a},\bar{u}_{a})\big{)}=0, \tag{2.19}\] on each open subset \(U_{a}\cap X_{O}^{\mathrm{reg}}\subset X_{O}^{\mathrm{reg}}\) and, as a result of (2.5), have the asymptotic behavior \[T_{a}(u_{a})=\frac{h_{i}}{2u_{a}^{2}}+\mathcal{O}\big{(}|u_{a}|^{-1}\big{)} \quad\text{as}\quad u_{a}\to 0, \tag{2.20}\] near each singular point \(x_{i}\in U_{a}\cap\mathrm{Sing}(O)\) with \(u_{a}(x_{i})=0\) and \(h_{i}=1-1/m_{i}^{2}\). The property that functions \(T_{a}(u_{a})\) are meromorphic expresses the fact that the energy-momentum tensor for the classical Liouville theory is traceless and the coefficients \(h_{i}/2\) appearing in the above asymptotics have the interpretation of _conformal weights_ of Liouville vertex operators corresponding to each singular point [84]. Finally, it follows from (2.4) that on every overlap \(U_{a}\cap U_{b}\cap X_{O}^{\mathrm{reg}}\) \[T_{b}=T_{a}\circ g_{ab}\left(g^{\prime}_{ab}\right)^{2}+\mathrm{Sch}\left(g_{ ab};u_{b}\right), \tag{2.21}\] which means that, by definition, \(T_{\varphi}(u)\) is a meromorphic projective connection on \(O\). 
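For orientation, consider the simplest check of (2.18): for the Poincare metric on \(\mathbb{H}\) itself, with \(\varphi=-2\log\operatorname{Im}z\) and no singular points, the tensor \(T\) vanishes identically (consistent with the identification \(T_{\varphi}=\operatorname{Sch}(\pi_{\Gamma}^{-1})\) derived in the next paragraph, since here \(\pi_{\Gamma}=\mathrm{id}\)). A short SymPy verification using Wirtinger derivatives:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True, positive=True)

phi = -2*sp.log(y)            # Poincare metric e^{phi}|dz|^2 on H: phi = -2 log(Im z)

def d_z(F):                   # Wirtinger derivative d/dz = (d/dx - i d/dy)/2
    return sp.simplify((sp.diff(F, x) - sp.I*sp.diff(F, y))/2)

T = sp.simplify(d_z(d_z(phi)) - sp.Rational(1, 2)*d_z(phi)**2)   # cf. (2.18)
print(T)                      # 0
```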
Since the hyperbolic metric \(ds^{2}_{\mathrm{hyp}}\) on \(O\) is a push-forward of the Poincare metric on \(\mathbb{H}\) by the covering map \(\pi_{\Gamma}:\mathbb{H}\to O\), a simple computation gives \(T_{\varphi}(u)=\left\{\operatorname{Sch}\left(\pi_{\Gamma}^{-1};u_{a}\right)\right\}_{a\in A}\). The multi-valued analytic function \(\pi_{\Gamma}^{-1}:O\to\mathbb{H}\) is a locally univalent linear polymorphic function on \(O\) (this means that its branches are connected by fractional linear transformations in \(\Gamma\)) and, using the property \(\operatorname{Sch}\left(\varsigma(z);z\right)=0\) for all \(\varsigma\in\operatorname{PSL}(2,\mathbb{C})\), as well as the Cayley identity \[\operatorname{Sch}\left(f\circ g;z\right)=\operatorname{Sch}\left(f;g\right)\left(g^{\prime}\right)^{2}+\operatorname{Sch}\left(g;z\right), \tag{2.22}\] it is easy to verify directly that \(\operatorname{Sch}\left(\pi_{\Gamma}^{-1};u_{a}\right)\) are well-defined functions on each subset \(U_{a}\cap X_{O}^{\operatorname{reg}}\), which satisfy (2.21). Slightly abusing notation, we will write \(T_{\varphi}(u)=\operatorname{Sch}(\pi_{\Gamma}^{-1})\) and call it the _Fuchsian projective connection_ on \(O\). Similarly, the Schottky global coordinate given by the orbifold covering map \(\pi_{\Sigma}:\dot{\bar{\Omega}}\to O\cong\dot{\bar{\Omega}}/\Sigma\) produces the so-called _Schottky projective connection_ \(\operatorname{Sch}(\pi_{\Sigma}^{-1})\) on \(O\). **Remark 2.3**.: While the Fuchsian projective connection is canonically determined by the Riemann orbisurface \(O\) and does not depend on the choice of marking for \(\Gamma\), the Schottky projective connection is defined only for marked Riemann orbisurfaces and is uniquely determined by the normal subgroup \(N\subset\Gamma\) introduced in the previous subsection. It follows from the commutative diagram (2.13) and the Cayley identity (2.22) for the Schwarzian derivative that the Fuchsian and Schottky projective connections are related by \[\operatorname{Sch}\left(\pi_{\Gamma}^{-1};u_{a}\right)=\operatorname{Sch}\left(J^{-1};w\right)\circ\pi_{\Sigma}^{-1}\,\left(\partial_{u_{a}}\pi_{\Sigma}^{-1}\right)^{2}+\operatorname{Sch}\left(\pi_{\Sigma}^{-1};u_{a}\right)\quad\text{for all}\quad a\in A, \tag{2.23}\] where \(w\) is the global coordinate on \(\Omega\). Therefore, the collection \[\left\{\operatorname{Sch}\left(J^{-1};w\right)\circ\pi_{\Sigma}^{-1}\left(\partial_{u_{a}}\pi_{\Sigma}^{-1}\right)^{2}\right\}_{a\in A} \tag{2.24}\] is a meromorphic quadratic differential on \(O\) and \(T_{\varphi}(w):=\operatorname{Sch}\left(J^{-1};w\right)\) is a meromorphic automorphic form of weight four for the Schottky group \(\Sigma\) -- i.e. \(T_{\varphi}\big{(}\sigma(w)\big{)}\,(\sigma^{\prime})^{2}=T_{\varphi}(w)\) for all \(\sigma\in\Sigma\). With the above explanations in mind, we are now ready to give a summary of our main results: Let \(\mathcal{T}_{g,n}(\boldsymbol{m})\) be the Teichmuller space of marked Riemann orbisurfaces of genus \(g>1\), defined as the space of all equivalence classes of marked Riemann orbisurfaces with signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\).28 Footnote 28: For details, see Section 3.1. The affine spaces \(\mathcal{P}(O)\) for varying Riemann orbisurfaces \(O\) glue together to an affine bundle \(\mathscr{P}_{g,n}(\boldsymbol{m})\to\mathcal{T}_{g,n}(\boldsymbol{m})\), modeled over the holomorphic cotangent bundle of \(\mathcal{T}_{g,n}(\boldsymbol{m})\).
The Fuchsian projective connection \(\operatorname{Sch}(\pi_{\Gamma}^{-1})\) gives a canonical section of the affine bundle \(\mathscr{P}_{g,n}(\boldsymbol{m})\to\mathcal{T}_{g,n}(\boldsymbol{m})\), while the Schottky projective connection \(\operatorname{Sch}(\pi_{\Sigma}^{-1})\) gives a canonical section of the affine bundle \(\mathscr{P}_{g,n}(\boldsymbol{m})\to\mathfrak{S}_{g,n}(\boldsymbol{m})\). Their difference \(\mathscr{Q}:=\operatorname{Sch}(\pi_{\Gamma}^{-1})-\operatorname{Sch}(\pi_{\Sigma}^{-1})\) can be viewed as a (1,0)-form on the Schottky space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) and has the following interesting properties (see Theorems 1 and 2): Let us denote by \(\omega_{\operatorname{WP}}\) the symplectic form of the WP metric, and by \(\omega_{\operatorname{TZ},i}^{\operatorname{ell}}\) and \(\omega_{\operatorname{TZ},j}^{\operatorname{cusp}}\) the symplectic forms of the \(i\)-th elliptic and \(j\)-th cuspidal TZ metrics on the generalized Schottky space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\). Additionally, let \(\partial\) and \(\bar{\partial}\) denote the (1,0) and (0,1) components of the exterior differential d on the Schottky space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) -- i.e. \(\text{d}=\partial+\bar{\partial}\). Then, we have:

1. \(\mathscr{Q}\) is \(\partial\)-exact -- i.e. there exists a smooth function \(\mathscr{S}_{\mathbf{m}}:\mathfrak{S}_{g,n}(\mathbf{m})\to\mathbb{R}\) such that \[\mathscr{Q}=\frac{1}{2}\partial\mathscr{S}_{\mathbf{m}}.\] (2.25)
2. \(\mathscr{Q}\) is a \(\bar{\partial}\)-antiderivative (hence, a d-antiderivative) of the following combination of WP and TZ symplectic forms on \(\mathfrak{S}_{g,n}(\mathbf{m})\): \[\bar{\partial}\mathscr{Q}=-\sqrt{-1}\left(\omega_{\rm WP}-\frac{4\pi^{2}}{3}\omega_{\rm TZ}^{\rm cusp}-\frac{\pi}{2}\sum_{i=1}^{n_{e}}m_{i}h_{i}\;\omega_{\rm TZ,i}^{\rm ell}\right).\] (2.26) Here, \(\omega_{\rm TZ}^{\rm cusp}=\sum_{j=1}^{n_{p}}\omega_{\rm TZ,j}^{\rm cusp}\).
3. It follows immediately from the above two statements (since \(\bar{\partial}\partial\mathscr{S}_{\mathbf{m}}=2\,\bar{\partial}\mathscr{Q}\)) that the function \(-\mathscr{S}_{\mathbf{m}}\) is a Kahler potential for the special combination of WP and TZ metrics on \(\mathfrak{S}_{g,n}(\mathbf{m})\): \[-\bar{\partial}\partial\mathscr{S}_{\mathbf{m}}=2\sqrt{-1}\left(\omega_{\rm WP}-\frac{4\pi^{2}}{3}\omega_{\rm TZ}^{\rm cusp}-\frac{\pi}{2}\sum_{i=1}^{n_{e}}m_{i}h_{i}\;\omega_{\rm TZ,i}^{\rm ell}\right).\] (2.27)

Before ending this section, and in order to avoid confusion in the remainder of this manuscript, let us comment on our notation for coordinate functions on different spaces: In this section, we have used \(u_{a}\) to denote the coordinate function on each open subset \(U_{a}\subset X_{O}\). However, in what follows, we will always use \(w\) to denote the coordinate function on \(X_{O}^{\rm reg}\) when \(O\) has genus \(g=0\).29 When the orbifold Riemann surface \(O\) has genus \(g>1\), \(\{u_{a}\}_{a\in A}\) denotes the set of coordinate functions on \(X_{O}\) while \(w\) is used to denote the coordinate function on \(\Omega^{\rm reg}\subset\mathbb{C}\). This notation is meant to be suggestive of the fact that the difference between Schottky and Fuchsian uniformizations of \(O\cong\dot{\hat{\Omega}}/\Sigma\) is effectively equivalent to Fuchsian uniformization of the planar orbifold \(\dot{\hat{\Omega}}\). Finally, throughout this manuscript, \(z\) has always been used to denote the coordinate function on the upper half-plane \(\mathbb{H}\).
Footnote 29: Note that, in this situation, \(X_{O}\cong\hat{\mathbb{C}}\) needs to be covered with at least two coordinate charts while \(X_{O}^{\rm reg}\subset\mathbb{C}\) can be covered with only one chart.

## 3 Geometry of Teichmuller, Moduli, and Schottky Spaces

In this section, we will recall some well-known facts about the deformation theory of Ahlfors and Bers. More details can be found in [85; 86; 87].

### Teichmuller Space \(\mathcal{T}(\Gamma)\)

Let \(\Gamma\) be a finitely generated Fuchsian group of the first kind that uniformizes the hyperbolic orbifold Riemann surface \(O\) with the signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\). In this situation, the Teichmuller space of Riemann orbisurfaces can be equivalently described as the Teichmuller space \(\mathcal{T}(\Gamma)\) of Fuchsian groups with signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\) -- i.e. the space of all equivalence classes of marked Fuchsian groups with signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\). A _Beltrami differential_ for \(\Gamma\) is defined as \(\mu:=\mu(z)\,\partial_{z}\,\mathrm{d}\bar{z}\) where \(\mu(z)\) is a complex-valued bounded measurable function on \(\mathbb{H}\) with the property that \[\mu(\gamma z)\frac{\overline{\gamma^{\prime}(z)}}{\gamma^{\prime}(z)}=\mu(z)\qquad\text{for all}\qquad\gamma\in\Gamma\quad\text{and}\quad z\in\mathbb{H}. \tag{3.1}\] We will denote by \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) the complex Banach space of Beltrami differentials for \(\Gamma\). Now, let \(\mathfrak{D}(\Gamma)\) denote the open unit ball in \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\), in the sense of the \(L^{\infty}\)-norm: \[\mathfrak{D}(\Gamma)\equiv\left\{\mu\in\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\,\Big{|}\left\|\mu\right\|_{\infty}:=\sup_{z\in\mathbb{H}}|\mu(z)|<1\right\}. \tag{3.2}\] For each \(\mu\in\mathfrak{D}(\Gamma)\), the _Beltrami equation_ \[\partial_{\bar{z}}f^{\mu}(z)=\mu(z)\,\partial_{z}f^{\mu}(z),\qquad z\in\mathbb{H}, \tag{3.3}\] is solvable in the class of quasi-conformal homeomorphisms of \(\mathbb{H}\) onto itself, and any two solutions are connected by a linear fractional transformation in \(\mathrm{PSL}(2,\mathbb{R})\). Let \(f^{\mu}\) be the solution of the Beltrami equation (3.3) that fixes the points \(0,1,\infty\) and define \(\Gamma^{\mu}:=f^{\mu}\circ\Gamma\circ(f^{\mu})^{-1}\), where \(\Gamma^{\mu}\) is a Fuchsian group with the same signature as \(\Gamma\). Thus, each element \(\mu\in\mathfrak{D}(\Gamma)\) gives a faithful representation \(\varrho_{\mu}\) of \(\Gamma\) in \(\mathrm{PSL}(2,\mathbb{R})\) with \(2g\) hyperbolic generators \(\alpha_{i}^{\mu}:=f^{\mu}\circ\alpha_{i}\circ(f^{\mu})^{-1}\) and \(\beta_{i}^{\mu}:=f^{\mu}\circ\beta_{i}\circ(f^{\mu})^{-1}\), \(n_{p}\) parabolic elements \(\kappa_{i}^{\mu}:=f^{\mu}\circ\kappa_{i}\circ(f^{\mu})^{-1}\), as well as \(n_{e}\) elliptic elements \(\tau_{i}^{\mu}:=f^{\mu}\circ\tau_{i}\circ(f^{\mu})^{-1}\) of orders \(m_{1},\dots,m_{n_{e}}\) respectively, satisfying the single relation \[\alpha_{1}^{\mu}\beta_{1}^{\mu}(\alpha_{1}^{\mu})^{-1}(\beta_{1}^{\mu})^{-1}\cdots\alpha_{g}^{\mu}\beta_{g}^{\mu}(\alpha_{g}^{\mu})^{-1}(\beta_{g}^{\mu})^{-1}\tau_{1}^{\mu}\cdots\tau_{n_{e}}^{\mu}\kappa_{1}^{\mu}\cdots\kappa_{n_{p}}^{\mu}=\mathbb{1}. \tag{3.4}\] Two representations \(\varrho_{\mu_{1}}\) and \(\varrho_{\mu_{2}}\) are called equivalent if they differ by an inner automorphism of \(\mathrm{PSL}(2,\mathbb{R})\) -- i.e.
if \(\varrho_{\mu_{2}}=\varsigma\varrho_{\mu_{1}}\varsigma^{-1}\) for a Mobius transformation \(\varsigma\in\mathrm{PSL}(2,\mathbb{R})\). Accordingly, the _Teichmuller space_ \(\mathcal{T}(\Gamma)\) is defined to be the set of all equivalence classes of representations \(\varrho_{\mu}:\Gamma\to\mathrm{PSL}(2,\mathbb{R})\), \(\mu\in\mathfrak{D}(\Gamma)\). In other words, \[\mathcal{T}(\Gamma)\cong\mathfrak{D}(\Gamma)/\sim \tag{3.5}\] where \(\mu_{1}\sim\mu_{2}\) if and only if \(f^{\mu_{1}}\circ\gamma\circ(f^{\mu_{1}})^{-1}=f^{\mu_{2}}\circ\gamma\circ(f^{\mu_{2}})^{-1}\) for all \(\gamma\in\Gamma\) (equivalently, if \(f^{\mu_{1}}\big{|}_{\mathbb{R}}=f^{\mu_{2}}\big{|}_{\mathbb{R}}\)). The base point of \(\mathcal{T}(\Gamma)\) is defined by \(\mu=0\) and it corresponds to the group \(\Gamma\). Last but not least, the projection \(\Phi:\mathfrak{D}(\Gamma)\to\mathcal{T}(\Gamma)\) induces a natural complex-analytic manifold structure on \(\mathcal{T}(\Gamma)\), which will be described in subsection 3.1.1. **Remark 3.1**.: Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be two cofinite Fuchsian groups with the same signature, and let \(f:\mathbb{H}\to\mathbb{H}\) be a quasi-conformal mapping such that \(\Gamma_{2}=f\circ\Gamma_{1}\circ f^{-1}\). Then \(f\) induces a mapping \(f^{*}:\mathcal{T}(\Gamma_{1})\to\mathcal{T}(\Gamma_{2})\) according to the formula \(\varrho_{\mu_{1}}\mapsto\varrho_{\mu_{2}}\) where \(\mu_{1}\in\mathfrak{D}(\Gamma_{1})\) and \[\mu_{2}=\left(\frac{\mu_{1}-(\partial_{\bar{z}}f/\partial_{z}f)}{1-\mu_{1}\overline{(\partial_{\bar{z}}f/\partial_{z}f)}}\,\frac{\partial_{z}f}{\overline{\partial_{z}f}}\right)\circ f^{-1}\in\mathfrak{D}(\Gamma_{2}). \tag{3.6}\] This mapping is a complex-analytic isomorphism in the natural complex structure on \(\mathcal{T}(\Gamma_{1})\) and \(\mathcal{T}(\Gamma_{2})\), which makes a specific choice of base point inessential (see, e.g. [7, Remark 3]). **Remark 3.2**.: Teichmuller space \(\mathcal{T}(\Gamma)\) can be interpreted as the Teichmuller space of marked Riemann orbisurfaces with signature \((g;m_{1},\ldots,m_{n_{e}};n_{p})\) by assigning to each point \(\Phi(\mu)\in\mathcal{T}(\Gamma)\) a marked Riemann orbisurface \(O^{\mu}\cong[\mathbb{H}/\Gamma^{\mu}]\), with the orbisurface \(O\cong[\mathbb{H}/\Gamma]\) playing the role of a base point. It follows from Remark 3.1 that the choice of a base point is inessential and, for this reason, we will sometimes use the notation \(\mathcal{T}_{g,n}(\boldsymbol{m})\) to denote the Teichmuller space of marked Riemann orbisurfaces with signature \((g;m_{1},\ldots,m_{n_{e}};n_{p})\).30 Footnote 30: It follows from the Bers-Greenberg theorem [88] that the complex-analytic structure of \(\mathcal{T}(\Gamma)\) does not depend on the vector of orders — i.e. there exists a complex-analytic isomorphism between \(\mathcal{T}_{g,n}(\boldsymbol{m})\) and the Teichmüller space \(\mathcal{T}_{g,n}\) of punctured Riemann surface \(X_{O}^{\text{reg}}\). However, we will keep using the notation \(\mathcal{T}_{g,n}(\boldsymbol{m})\) for the Teichmüller space of Riemann orbisurfaces in order to emphasize that the natural Kähler structure and the action of orbifold mapping class group on this space does depend on \(\boldsymbol{m}=(m_{1},\ldots,m_{n})\) through dependence on the signature type of \(O\).

#### 3.1.1 Complex Structure on \(\mathcal{T}(\Gamma)\)

The complex structure on \(\mathcal{T}(\Gamma)\) is _uniquely_ characterized by the fact that the mapping \(\Phi:\mathfrak{D}(\Gamma)\to\mathcal{T}(\Gamma)\) is holomorphic.
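As an elementary illustration of the Beltrami equation (3.3), on \(\mathbb{C}\) rather than \(\mathbb{H}\) and for a constant (hypothetical) coefficient \(\mu\) with \(|\mu|<1\), the linear stretch \(f(z)=z+\mu\bar{z}\) is a quasi-conformal solution. A minimal SymPy check:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
mu = sp.Rational(1, 3)        # hypothetical constant Beltrami coefficient, |mu| < 1
z, zbar = x + sp.I*y, x - sp.I*y

f = z + mu*zbar               # candidate quasi-conformal map

d_z    = lambda F: (sp.diff(F, x) - sp.I*sp.diff(F, y))/2   # Wirtinger d/dz
d_zbar = lambda F: (sp.diff(F, x) + sp.I*sp.diff(F, y))/2   # Wirtinger d/dzbar

print(sp.simplify(d_zbar(f) - mu*d_z(f)))   # 0, so f solves (3.3)
```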
For a more explicit description of this canonical complex-analytic structure, we consider the space \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) of _holomorphic quadratic differentials_ (equivalently, _holomorphic cusp forms_ of weight \(4\)) for \(\Gamma\).31 An arbitrary element \(q\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) has the form \(q=q(z)\,\mathrm{d}z^{2}\) where \(q(z)\) is a bounded holomorphic function on \(\mathbb{H}\) that transforms according to the rule \(q(\gamma z)\gamma^{\prime}(z)^{2}=q(z)\) for all \(\gamma\in\Gamma\).32 The dimension of the space of square-integrable meromorphic \(k\)-differentials on \(O\), or cusp forms of weight \(2k\) for \(\Gamma\), is given by the _Riemann-Roch_ formula for orbifolds: Footnote 31: By holomorphic cusp forms, we mean holomorphic \(\Gamma\)-automorphic forms on \(\mathbb{H}\) with zero constant coefficient in their Fourier expansions near the cusps of \(\Gamma\). Footnote 32: As we will see in Subsec. 3.2.1, any element \(q\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) corresponds to a _meromorphic_ quadratic differential \(Q\in\mathcal{H}^{2,0}(O)\) — i.e. a meromorphic (2,0)-tensor on \(X_{O}\) with simple poles at singular points. \[\dim_{\mathbb{C}}\mathcal{H}^{k,0}(O)=\begin{cases}(2k-1)(g-1)+\sum_{i=1}^{n_{e}}\left\lfloor k\left(1-\frac{1}{m_{i}}\right)\right\rfloor+(k-1)n_{p},&k>1,\\ g,&k=1,\\ 1,&k=0,\\ 0,&k<0,\end{cases} \tag{3.7}\] where \(\left\lfloor\cdot\right\rfloor\) denotes the floor function (see Theorem 2.24 of [89]). In particular, the dimension of the Hilbert space of cusp forms of weight \(4\) for \(\Gamma\) is given by \[\dim_{\mathbb{C}}\mathcal{H}^{2,0}(\mathbb{H},\Gamma)=3g-3+n_{e}+n_{p}=3g-3+n. \tag{3.8}\] The Kodaira-Serre pairing \[(\mu,q):=\iint_{\mathcal{F}(\Gamma)}\mu(z)q(z)\,\mathrm{d}^{2}z\,, \tag{3.9}\] is well-defined on the product of \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) and \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\). In the above equation, \(\mathrm{d}^{2}z\equiv\frac{\sqrt{-1}}{2}\,\mathrm{d}z\wedge\mathrm{d}\bar{z}=\mathrm{d}(\mathrm{Re}\,z)\wedge\mathrm{d}(\mathrm{Im}\,z)\) and \(\mathcal{F}(\Gamma)\subset\mathbb{H}\) denotes a fundamental domain for the Fuchsian group \(\Gamma\). The subspace \(\mathcal{N}(\mathbb{H},\Gamma)\subset\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) on which this pairing is degenerate coincides with the kernel of the differential \(\mathrm{d}\Phi\) at \(\mu=0\in\mathfrak{D}(\Gamma)\). Moreover, the spaces \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)/\mathcal{N}(\mathbb{H},\Gamma)\) and \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) are _dual_ with respect to the pairing (3.9). To realize \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)/\mathcal{N}(\mathbb{H},\Gamma)\) as a subspace of \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\), we define the complex anti-linear mapping \(\varLambda:\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\to\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) with the help of the Bergman integral \[\varLambda(\mu)(z)=\frac{12}{\pi}\iint_{\mathbb{H}}\frac{\overline{\mu(\xi)}}{(\xi-z)^{4}}\,\mathrm{d}^{2}\xi\,,\qquad\mu\in\mathcal{A}^{-1,1}(\mathbb{H},\Gamma); \tag{3.10}\] its kernel coincides with \(\mathcal{N}(\mathbb{H},\Gamma)\). The mapping \(\varLambda^{*}:\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\to\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\), given by \[\varLambda^{*}(q)(z)=(\mathrm{Im}\,z)^{2}\,\overline{q(z)},\qquad q\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma), \tag{3.11}\] satisfies the condition \(\varLambda\varLambda^{*}=\mathrm{id}\) on \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\).
In other words, \(\varLambda^{*}\) splits the exact sequence \[0\to\mathcal{N}(\mathbb{H},\Gamma)\hookrightarrow\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\stackrel{{\varLambda}}{{\to}}\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\to 0. \tag{3.12}\] This enables us to realize \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)/\mathcal{N}(\mathbb{H},\Gamma)\) as the subspace \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)=\varLambda^{*}\left(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\right)\) of \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) with complex dimension \(3g-3+n\): the space of so-called _harmonic_ Beltrami differentials. The fact that \(\mathrm{Ker}\,\mathrm{d}\Phi=\mathcal{N}(\mathbb{H},\Gamma)\) at \(0\in\mathfrak{D}(\Gamma)\) implies that \(\Phi:\mathfrak{D}(\Gamma)\to\mathcal{T}(\Gamma)\) maps a sufficiently small neighborhood of the point \(0\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\cap\mathfrak{D}(\Gamma)\) injectively into \(\mathcal{T}(\Gamma)\) and can be regarded as a coordinate chart in a neighborhood of \(\Phi(0)\in\mathcal{T}(\Gamma)\). More explicitly, this coordinate chart can be described as follows: Let \(\mu_{1},\ldots,\mu_{3g-3+n}\) denote a basis in \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) and let \(\mu=t_{1}\mu_{1}+\cdots+t_{3g-3+n}\,\mu_{3g-3+n}\) be any harmonic Beltrami differential with \(\|\mu\|_{\infty}<1\). Then, the correspondence \((t_{1},\ldots,t_{3g-3+n})\mapsto\Phi(\mu)\) defines the so-called _Bers coordinates_ in a neighborhood of the origin \(\Phi(0)\in\mathcal{T}(\Gamma)\). The isomorphism \(\mathcal{T}(\Gamma)\cong\mathcal{T}(\Gamma^{\mu})\) (see Remark 3.1) makes it possible to introduce similar coordinates in a neighborhood of an arbitrary point \(\Phi(\mu)\in\mathcal{T}(\Gamma)\). As a result, the _holomorphic tangent space_ to \(\mathcal{T}(\Gamma)\) at the point \(\Phi(\mu)\) can be identified with \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma^{\mu})\) -- the complex vector space of harmonic Beltrami differentials for \(\Gamma^{\mu}\). The pairing (3.9) lets us regard \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma^{\mu})\), i.e. the vector space of holomorphic cusp forms of weight \(4\) for \(\Gamma^{\mu}\), as the _holomorphic cotangent space_ to \(\mathcal{T}(\Gamma)\) at the point \(\Phi(\mu)\). This collection of charts gives the natural complex structure, mentioned at the beginning of this subsection, on the Teichmuller space \(\mathcal{T}(\Gamma)\). Finally, we point out that one can always associate \(3g-3+n\) vector fields \(\frac{\partial}{\partial t_{i}}\) with the Bers coordinates \((t_{1},\ldots,t_{3g-3+n})\) in a neighborhood of \(\Phi(0)\in\mathcal{T}(\Gamma)\). At any other point \(\Phi(\mu)\) in this neighborhood, we have \(\frac{\partial}{\partial t_{i}}\big{|}_{\Phi(\mu)}=\mu_{i}^{\Phi(\mu)}\) where the harmonic Beltrami differentials \(\mu_{i}^{\Phi(\mu)}\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma^{\mu})\) are given by the formula \[\mu_{i}^{\Phi(\mu)}=\mathrm{Proj}_{{}_{\mathcal{H}^{-1,1}}}\left[\left(\frac{\mu_{i}}{1-|\mu|^{2}}\frac{\partial_{z}f^{\mu}}{\overline{\partial_{z}f^{\mu}}}\right)\circ(f^{\mu})^{-1}\right]. \tag{3.13}\] Here, the mapping \(\mathrm{Proj}_{{}_{\mathcal{H}^{-1,1}}}\) denotes a projection onto the subspace \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma^{\mu})\) of harmonic Beltrami differentials. Moreover, let \(q_{1},\ldots,q_{3g-3+n}\) be the basis in \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\), dual to the basis \(\mu_{1},\ldots,\mu_{3g-3+n}\) for \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\).
Then, at an arbitrary point \(\Phi(\mu)\) in a neighborhood of the origin, holomorphic \(1\)-forms \(\mathrm{d}t_{i}\) are represented by the holomorphic quadratic differentials \(q_{i}^{\Phi(\mu)}\) -- i.e. \(\left.\mathrm{d}t_{i}\,\right|_{\Phi(\mu)}=q_{i}^{\Phi(\mu)}\) where the basis \(q_{1}^{\Phi(\mu)},\ldots,q_{3g-3+n}^{\Phi(\mu)}\in\mathcal{H}^{2,0}(\mathbb{H}, \Gamma^{\mu})\) has the property \[\mathrm{Proj}_{{}_{\mathcal{H}^{2,0}}}\left[q_{i}^{\Phi_{(\mu)}}\circ f^{\mu} \left(\partial_{z}f^{\mu}\right)^{2}\right]=q_{i}. \tag{3.14}\] In the above equation, \(\mathrm{Proj}_{{}_{\mathcal{H}^{2,0}}}\) denotes a projection onto the subspace \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\). #### 3.1.2 Variational Formulas In order to further explore the complex-analytic structure of the Teichmuller space, variational formulas of the hyperbolic metric \(\rho(z)|\,\mathrm{d}z\,|^{2}\) on \(\mathbb{H}\) play a significant role. Let \(\phi^{\varepsilon}\in\mathcal{A}^{k,\ell}(\mathbb{H},\Gamma^{\varepsilon\mu})\) be a smooth family of automorphic forms of weight \((2k,2\ell)\) where \(\mu\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) denotes a harmonic Beltrami differential and \(\varepsilon\in\mathbb{C}\) is a sufficiently small parameter. We denote by \(\left(f^{\varepsilon\mu}\right)^{*}(\phi^{\varepsilon})\) the pullback of the automorphic form \(\phi^{\varepsilon}\) with the unique diffeomorphism \(f^{\varepsilon\mu}:\mathbb{H}\to\mathbb{H}\) that satisfies the Beltrami equation \(\partial_{\bar{z}}f^{\varepsilon\mu}=\left(\varepsilon\mu\right)\partial_{z} f^{\varepsilon\mu}\) and fixes the points \(0,1,\infty\). We have \[\left(f^{\varepsilon\mu}\right)^{*}(\phi^{\varepsilon})=\phi^{\varepsilon} \circ f^{\varepsilon\mu}\left(\frac{\partial f^{\varepsilon\mu}}{\partial z} \right)^{k}\left(\overline{\frac{\partial f^{\varepsilon\mu}}{\partial z}} \right)^{\ell}\in\mathcal{A}^{k,\ell}(\mathbb{H},\Gamma). \tag{3.15}\] In particular, for the density \(\rho(z)=(\mathrm{Im}\,z)^{-2}\) of the Poincare metric, considered as a family of \((1,1)\)-tensors, one has \[\left(f^{\varepsilon\mu}\right)^{*}(\rho)=\frac{|\partial_{z}f^{\varepsilon\mu }|^{2}}{\left(\mathrm{Im}\,f^{\varepsilon\mu}\right)^{2}}. 
\tag{3.16}\] Let the Lie derivatives of the family \(\phi^{\varepsilon}\) in holomorphic and anti-holomorphic tangential directions, \(\mu\) and \(\bar{\mu}\), be defined as \[\begin{cases}\mathcal{L}_{\mu}\phi\,\stackrel{{\text{\tiny def.}}}{{=}}\,\left.\frac{ \partial}{\partial\varepsilon}\right|_{\varepsilon=0}(f^{\varepsilon\mu})^{*} (\phi^{\varepsilon})\in\mathcal{A}^{k+1,\ell}(\mathbb{H},\Gamma),\\ \mathcal{L}_{\bar{\mu}}\phi\,\stackrel{{\text{\tiny def.}}}{{=}}\, \left.\frac{\partial}{\partial\bar{\varepsilon}}\right|_{\varepsilon=0}(f^{ \varepsilon\mu})^{*}(\phi^{\varepsilon})\in\mathcal{A}^{k,\ell+1}(\mathbb{H}, \Gamma).\end{cases} \tag{3.17}\] The first variational formula for \(\rho(z)\) is given by the following lemma due to Ahlfors [90]: **Lemma 3.1** (Ahlfors).: _For any \(\mu\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\), the Lie derivatives of the density \(\rho(z)\) of the Poincare metric in both holomorphic and anti-holomorphic tangential directions vanish:_ \[\mathcal{L}_{\mu}\rho=\mathcal{L}_{\bar{\mu}}\rho=0.\] For the second variation of \(\rho\), the following formula was obtained by Wolpert (see [91, Theorem 3.3 ]): \[\mathcal{L}_{\mu\overline{\mu}^{\prime}}\rho\,\stackrel{{\text {\tiny def.}}}{{=}}\,\left.\frac{\partial^{2}}{\partial\varepsilon_{1}\partial \bar{\varepsilon}_{2}}\right|_{\varepsilon_{1}=\varepsilon_{2}=0}(f^{ \varepsilon_{1}\mu+\varepsilon_{2}\mu^{\prime}})^{*}(\rho)=\frac{1}{2}\rho \left(\Delta_{0}+\frac{1}{2}\right)^{-1}(\mu\overline{\mu^{\prime}})\equiv \frac{1}{2}\rho\cdot f_{\mu\overline{\mu^{\prime}}}, \tag{3.18}\] where \(\mu,\mu^{\prime}\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\). The \(\Gamma\)-automorphic function \(f_{\mu\overline{\mu^{\prime}}}\) is uniquely determined by \[\left(\Delta_{0}+\frac{1}{2}\right)f_{\mu\overline{\mu^{\prime}}}=\mu \overline{\mu^{\prime}}\quad\text{and}\quad\iint_{\mathcal{F}(\Gamma)}|f_{\mu \overline{\mu^{\prime}}}|^{2}\rho(z)\,\mathrm{d}^{2}z<\infty, \tag{3.19}\] where \(\Delta_{0}:=-\rho(z)^{-1}\frac{\partial^{2}}{\partial z\partial\bar{z}}\) is the _Laplace operator_ of the hyperbolic metric acting on \(\mathcal{H}^{0,0}(\mathbb{H},\Gamma)\).33 Footnote 33: See section 2 of [92] for more detailed exposition. #### 3.1.3 Kahler Metrics on \(\mathcal{T}(\Gamma)\) * _Weil-Petersson metric_. Together with the complex anti-linear isomorphism \(q(z)\mapsto\mu(z)=\rho(z)^{-1}\,\overline{q(z)}\), the pairing (3.9) defines the _Petersson inner product_ on \(T_{\Phi(0)}\mathcal{T}(\Gamma)\cong\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\): \[\langle\mu_{1},\mu_{2}\rangle_{\rm WP}=\iint_{\mathcal{F}(\Gamma)}\mu_{1}(z)\, \overline{\mu_{2}(z)}\,\rho(z)\,{\rm d}^{2}z\,,\qquad\mu_{1},\mu_{2}\in \mathcal{H}^{-1,1}(\mathbb{H},\Gamma).\] (3.20) The Petersson inner product on the tangent spaces determines the _Weil-Petersson_ Kahler metric on \(\mathcal{T}(\Gamma)\). Its Kahler \((1,1)\)-form is a symplectic form \(\omega_{\rm WP}\) on \(\mathcal{T}(\Gamma)\) \[\omega_{\rm WP}(\mu_{1},\bar{\mu}_{2})=\frac{\sqrt{-1}}{2}\iint_{\mathcal{F} (\Gamma)}\Big{(}\mu_{1}(z)\,\overline{\mu_{2}(z)}-\overline{\mu_{1}(z)}\,\mu_ {2}(z)\Big{)}\,\,\rho(z)\,{\rm d}^{2}z\,,\] (3.21) where \(\mu_{1},\mu_{2}\in T_{\Phi(0)}\mathcal{T}(\Gamma)\). It is worth mentioning that the Weil-Petersson metric is both invariant under the Teichmuller modular group \({\rm Mod}(\Gamma)\) and real-analytic. 
* _Cuspidal Takhtajan-Zograf metric._ In [92; 93], a new Kahler metric on \(\mathcal{T}(\Gamma)\) was introduced by Takhtajan and Zograf for the case where the Fuchsian group \(\Gamma\) has \(n_{p}>0\) parabolic elements. Let us indicate the fixed points of the parabolic generators \(\kappa_{1},\ldots,\kappa_{n_{p}}\) by \(z_{n_{e}+1},\ldots,z_{n}\in\mathbb{R}\cup\{\infty\}\).34 For each \(i=1,\ldots,n_{p}\) denote by \(\langle\kappa_{i}\rangle\) the cyclic subgroup of \(\Gamma\) generated by \(\kappa_{i}\), and let \(\varsigma_{i}\in{\rm PSL}(2,\mathbb{R})\) be such that \(\varsigma_{i}(\infty)=z_{n_{e}+i}\) and \(\varsigma_{i}^{-1}\kappa_{i}\varsigma_{i}=\begin{pmatrix}1&\pm 1\\ 0&1\end{pmatrix}\). Let \(E_{i}(z,s)\) be the _Eisenstein-Maass series_ associated with the cusp \(z_{n_{e}+i}\), which is defined as (see section 3 in [64]) Footnote 34: Note that \(n_{e}+n_{p}=n\). \[E_{i}(z,s)=\sum_{\gamma\in\langle\kappa_{i}\rangle\backslash\Gamma}{\rm Im}\left(\varsigma_{i}^{-1}\gamma z\right)^{s}.\] (3.22) The series is absolutely convergent for \({\rm Re}\,s>1\), is positive for \(s=2\) and satisfies the equation \[\Delta_{0}E_{i}(z,s)=\frac{1}{4}s(1-s)E_{i}(z,s).\] (3.23) The inner product \[\langle\mu_{1},\mu_{2}\rangle_{\rm TZ,i}^{\rm cusp}=\iint_{\mathcal{F}(\Gamma)}\mu_{1}(z)\,\overline{\mu_{2}(z)}E_{i}(z,2)\rho(z)\,{\rm d}^{2}z\,,\qquad i=1,\ldots,n_{p},\] (3.24) in \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\), and the corresponding inner products in all \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma^{\mu})\), determine another Hermitian metric on \(\mathcal{T}(\Gamma)\) which is Kahler for each \(i=1,\ldots,n_{p}\). The metric \(\langle\cdot,\cdot\rangle_{\rm TZ}^{\rm cusp}=\langle\cdot,\cdot\rangle_{\rm TZ,1}^{\rm cusp}+\cdots+\langle\cdot,\cdot\rangle_{\rm TZ,n_{p}}^{\rm cusp}\), called the _cuspidal Takhtajan-Zograf (TZ)_ metric, is invariant with respect to the Teichmuller modular group \({\rm Mod}(\Gamma)\) (see subsection 3.2). Let \[\omega_{\rm TZ,i}^{\rm cusp}=\frac{\sqrt{-1}}{2}\sum_{j,k=1}^{3g-3+n}\langle\mu_{j},\mu_{k}\rangle_{\rm TZ,i}^{\rm cusp}\,\,{\rm d}t_{j}\wedge{\rm d}\bar{t}_{k}\] (3.25) be the symplectic form of the \(i\)-th cuspidal TZ metric and also define \(\omega^{\rm cusp}_{\rm TZ}=\omega^{\rm cusp}_{\rm TZ,1}+\cdots+\omega^{\rm cusp}_{\rm TZ,n_{p}}\). According to [92, Lemma 2], \[\lim_{\rm Im\,z\to\infty}{\rm Im}(\varsigma_{i}z)f_{\mu_{j}\bar{\mu}_{k}}(\varsigma_{i}z)=\frac{4}{3}\langle\mu_{j},\mu_{k}\rangle^{\rm cusp}_{\rm TZ,i},\qquad i=1,\ldots,n_{p},\] (3.26) where \(\mu_{j},\mu_{k}\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) and \(f_{\mu\overline{\mu^{\prime}}}\) was defined in Eq. (3.19); this shows that the cuspidal TZ metric pertains to the second variation of the hyperbolic metric on \(\mathbb{H}\). * _Elliptic Takhtajan-Zograf metric._ As discussed in [2, 63], when the Fuchsian group has \(n_{e}>1\) elliptic generators \(\tau_{1},\ldots,\tau_{n_{e}}\), the local index theorem of Takhtajan and Zograf [92] for punctured Riemann surfaces can be generalized to include elliptic fixed points. In this case, the role of the Eisenstein-Maass series \(E_{i}(z,s)\) associated with the cusp \(z_{n_{e}+i}\) is played by the automorphic Green's function \(G_{0}(z,z_{j};s)\) associated with the elliptic fixed point \(z_{j}\), \(j=1,\ldots,n_{e}\).
More explicitly, for the elliptic generator \(\tau_{j}\) of \(\Gamma\) define \[\langle\mu_{1},\mu_{2}\rangle^{\rm ell}_{\rm TZ,j}=\iint_{\mathcal{F}(\Gamma)}\mu_{1}(z)\,\overline{\mu_{2}(z)}G(z_{j},z)\rho(z)\,{\rm d}^{2}z\,,\qquad j=1,\ldots,n_{e},\] (3.27) where \(z_{j}\) is the fixed-point of \(\tau_{j}\), and \(G(z,z^{\prime})\equiv G_{0}(z,z^{\prime};2)\) is the integral kernel of the resolvent \(\left(\Delta_{0}+\frac{1}{2}\right)^{-1}\). It was shown in _Theorem 3_ of [2] that the metrics \(\langle\cdot,\cdot\rangle^{\rm ell}_{\rm TZ,j}\) are also Kahler. In addition, if we denote by \(\langle\cdot,\cdot\rangle^{\rm ell}_{\mathsf{s}_{m}}\) the sum over all elliptic TZ metrics \(\langle\cdot,\cdot\rangle^{\rm ell}_{\rm TZ,j}\) associated with the elliptic generators \(\tau_{j}\) that have the same order of isotropy \(m\), we expect \(\langle\cdot,\cdot\rangle^{\rm ell}_{\mathsf{s}_{m}}\) to be invariant under the action of the Teichmuller modular group \({\rm Mod}(\Gamma)\). Moreover, we will denote by \(\omega^{\rm ell}_{\rm TZ,j}\) the symplectic \((1,1)\)-form \[\omega^{\rm ell}_{\rm TZ,j}=\frac{\sqrt{-1}}{2}\sum_{l,k=1}^{3g-3+n}\langle\mu_{l},\mu_{k}\rangle^{\rm ell}_{\rm TZ,j}\,\,{\rm d}t_{l}\wedge{\rm d}\bar{t}_{k}\,.\] (3.28) Finally, the elliptic TZ metric is also intrinsically related to the second variation of the hyperbolic metric on \(\mathbb{H}\): The following result was proven by Takhtajan and Zograf in [2, Lemma 1 part (iii)] \[\lim_{w\to w_{j}}f_{\mu_{l}\bar{\mu}_{k}}\circ J^{-1}(w)=\left\langle\frac{\partial}{\partial w_{l}},\frac{\partial}{\partial w_{k}}\right\rangle^{\rm ell}_{\rm TZ,j},\qquad j=1,\ldots,n_{e},\] (3.29) where \(\mu_{l},\mu_{k}\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) and \(f_{\mu\overline{\mu^{\prime}}}\) was defined in Eq. (3.19).

### Moduli Spaces \(\mathcal{M}_{g,n}\) and \(\mathfrak{M}_{g,n}(\boldsymbol{m})\)

In the previous subsection, we have defined the Teichmuller space \(\mathcal{T}(\Gamma)\) as the space of all equivalence classes of representations \(\varrho_{\mu}:\Gamma\to{\rm PSL}(2,\mathbb{R})\) and we have seen that \(\mathcal{T}(\Gamma)\) can be realized as a bounded complex domain in \(\mathbb{C}^{3g-3+n}\) via the so-called Bers embedding. Let \({\rm Aut}_{*}(\Gamma)\) denote the group of proper automorphisms of \(\Gamma\), which carry parabolic elements into parabolic elements and elliptic elements of order \(m\) into elliptic elements with the same order. The group \({\rm Aut}_{*}(\Gamma)\) acts on \(\mathcal{T}(\Gamma)\) via \[\imath(\varrho_{\mu})=\varrho_{\mu}\circ\imath,\qquad\imath\in{\rm Aut}_{*}(\Gamma). \tag{3.30}\] That this is well-defined, i.e. that \(\imath(\varrho_{\mu})\) is equivalent to another representation \(\varrho_{\mu_{i}}\) for some \(\mu_{i}\in\mathfrak{D}(\Gamma)\), follows from the fact that any automorphism \(\imath\in\mathrm{Aut}_{*}(\Gamma)\) induces a quasi-conformal homeomorphism of \(\mathbb{H}\). The group \(\mathrm{Inn}(\Gamma)\) of inner automorphisms of \(\Gamma\) obviously acts on \(\mathcal{T}(\Gamma)\) as the identity. Let us recall that the factor group \(\mathrm{Mod}(\Gamma):=\mathrm{Aut}_{*}(\Gamma)/\,\mathrm{Inn}(\Gamma)\) is called the Teichmuller modular group and acts discretely on \(\mathcal{T}(\Gamma)\) by complex-analytic automorphisms that only change the marking of \(\Gamma\). Denote by \(\mathrm{Mod}_{0}(\Gamma)\) the subgroup of \(\mathrm{Mod}(\Gamma)\) consisting of _pure mapping classes_ -- i.e.
those fixing the cusps and orbifold points on \(O\) pointwise.35 The full Teichmuller modular group \(\mathrm{Mod}(\Gamma)\) is related to \(\mathrm{Mod}_{0}(\Gamma)\) by the short exact sequence Footnote 35: The Teichmüller modular group \(\mathrm{Mod}(\Gamma)\) acting on \(\mathcal{T}(\Gamma)\) can be identified with the orbifold mapping class group \(\mathrm{MCG}(O)\) acting on \(\mathcal{T}_{g,n}(\mathbf{m})\). Here, \(\mathrm{MCG}(O)\) is defined as \(\mathrm{Homeo}^{+}(O)/\,\mathrm{Homeo}_{0}(O)\) where \(\mathrm{Homeo}^{+}(O)\) is the group of orientation preserving homeomorphisms of \(O\) (in the category of orbifolds), and \(\mathrm{Homeo}_{0}(O)\) is its identity component. \[1\to\mathrm{Mod}_{0}(\Gamma)\to\mathrm{Mod}(\Gamma)\to\operatorname{Symm}\left(\mathsf{s}\right)\to 1, \tag{3.31}\] where \(\operatorname{Symm}\left(\mathsf{s}\right):=\operatorname{Symm}\left(\mathsf{s}_{2}\right)\times\operatorname{Symm}\left(\mathsf{s}_{3}\right)\times\cdots\times\operatorname{Symm}\left(\mathsf{s}_{\infty}\right)\) denotes a subgroup of \(\operatorname{Symm}\left(n\right)\) consisting of all permutations that leave the signature type \(\mathsf{s}=\left\{\mathsf{s}_{m}\right\}_{m\in\hat{\mathbb{N}}^{>1}}\) invariant [94]. Then, the quotient space \(\mathcal{T}(\Gamma)/\,\mathrm{Mod}_{0}(\Gamma)\) is isomorphic to the moduli space \(\mathcal{M}_{g,n}\) of smooth algebraic curves of genus \(g\) with \(n=n_{e}+n_{p}\) labeled points. According to (3.31), the Teichmuller modular group acts on \(\mathcal{M}_{g,n}\) via \(\operatorname{Symm}\left(\mathsf{s}\right)\) and the quotient \(\mathcal{M}_{g,n}/\operatorname{Symm}\left(\mathsf{s}\right)\) is isomorphic to \(\mathfrak{M}_{g,n}(\mathbf{m})\) -- the true moduli space of orbifold Riemann surfaces with signature \((g;m_{1},\ldots,m_{n_{e}};n_{p})\).36 Finally, we remark that both \(\mathcal{T}_{g,n}(\mathbf{m})\) and \(\mathfrak{M}_{g,n}(\mathbf{m})\) depend not on the signature of \(O\), but rather on its signature type (see [77] for more details). Footnote 36: When all singular points have the same order of isotropy, the situation will be similar to what has been previously studied by Zograf [16].

#### 3.2.1 \(\mathfrak{M}_{0,n}(\mathbf{m})\)

In the remainder of this subsection, we will focus on the \(g=0\) case for the sake of simplicity and return to \(g>1\) Riemann orbisurfaces in the next subsection. A _normalized_ orbifold Riemann surface with signature \((0;m_{1},\ldots,m_{n_{e}};n_{p})\) is given by a pair \(O=(\mathbb{C}\backslash\{w_{n_{e}+1},\ldots,w_{n}\},\mathscr{D})\) with \(\mathscr{D}=\sum_{i=1}^{n_{e}}(1-\frac{1}{m_{i}})\,w_{i}\) such that \(w_{n-2}\), \(w_{n-1}\), and \(w_{n}\) are at \(0\), \(1\), \(\infty\) respectively.37 Accordingly, the moduli space \(\mathcal{M}_{0,n}=\mathscr{F}_{n}\left(\hat{\mathbb{C}}\right)/\,\mathrm{PSL}(2,\mathbb{C})\) is given by the following domain in \(\mathbb{C}^{n-3}\) Footnote 37: We are assuming that the stratum of conical points \(\mathrm{Sing}_{m_{\mathrm{max}}}(O)\) with largest order of isotropy \(m_{\mathrm{max}}\in\hat{\mathbb{N}}^{>1}\) has cardinality of at least three. The analysis in cases where this assumption doesn’t hold requires a change of notation, but the fundamental lessons remain the same.
\[\mathcal{M}_{0,n}=\left\{(w_{1},\ldots,w_{n-3})\in\mathbb{C}^{n-3}\Big{|}w_{i}\neq 0,1\quad\text{and}\quad w_{i}\neq w_{k}\quad\text{for}\quad i\neq k\right\}, \tag{3.32}\] where \(\mathscr{F}_{n}\left(\hat{\mathbb{C}}\right)\) is the configuration space of \(n=n_{e}+n_{p}\) labeled distinct points in \(\hat{\mathbb{C}}\). We will show that \(\mathcal{M}_{0,n}\) is covered (in the complex-analytic sense) by the Teichmuller space of orbifold Riemann surfaces with signature \((0;m_{1},\ldots,m_{n_{e}};n_{p})\). This will enable us to express the vector fields \(\frac{\partial}{\partial w_{i}}\) on \(\mathcal{M}_{0,n}\) in terms of sections of the holomorphic tangent bundle of Teichmuller space. Using Theorem A.7, we have \(O\cong[\mathbb{H}/\Gamma]\) where \(\Gamma\) is normalized such that the fixed points of \(\kappa_{n-2},\kappa_{n-1},\kappa_{n}\) are at \(z_{n-2}=0\), \(z_{n-1}=1\), and \(z_{n}=\infty\) respectively. Denote by \(\mathbb{H}^{*}\) the union of \(\mathbb{H}\) and all parabolic points of \(\Gamma\). There is a unique (universal) orbifold covering map \(J:\mathbb{H}\to O\) with \(\operatorname{deck}(J)\cong\Gamma\),39 which extends to a holomorphic isomorphism \([\mathbb{H}^{*}/\Gamma]\xrightarrow{\cong}\hat{O}=(\hat{\mathbb{C}},\hat{\mathscr{D}})\) that fixes the points \(0,1,\infty\)40 and has the property that \(w_{i}=J(z_{i})\) for \(i=1,\ldots,n-3\).41 The function \(J\) is univalent in any _fundamental domain_ \(\mathcal{F}\) for \(\Gamma\) and has the following expansions near cusps and conical singularities (for more details, see Appendix C) Footnote 39: One might correctly want to identify the covering map \(J\) in this subsection with the covering map \(\pi_{\Gamma}\) in the commutative diagram (2.13). For reasons that will become clear later, we have decided to denote the covering map in this subsection by \(J\). Footnote 40: In the literature, \(J\) is called _Klein's Hauptmodul_. It is the unique \(\Gamma\)-automorphic function on \(\mathbb{H}\) that has a simple pole at \(\infty\) and fixes \(0\) and \(1\). Footnote 41: The \(\mathbb{Q}\)-divisor \(\hat{\mathscr{D}}\) refers to \(\mathscr{D}+\sum_{j=n_{e}+1}^{n}w_{j}\). \[J(z)=\begin{cases}w_{i}+\sum_{k=1}^{\infty}J_{k}^{(i)}\left(\frac{z-z_{i}}{z-\bar{z}_{i}}\right)^{km_{i}}&(i=1,\ldots,n_{e}),\quad\ z\to z_{i},\\ w_{i}+\sum_{k=1}^{\infty}J_{k}^{(i)}\exp\!\left(-\frac{2\pi\sqrt{-1}k}{|\delta_{i}|(z-z_{i})}\right)&(i=n_{e}+1,\ldots,n-1),\ z\to z_{i},\\ \sum_{k=-1}^{\infty}J_{k}^{(n)}\exp\!\left(\frac{2\pi\sqrt{-1}kz}{|\delta_{n}|}\right)&z\to z_{n}=\infty.\end{cases} \tag{3.33}\] The first coefficients of the above expansions determine the following smooth positive functions on \(\mathcal{M}_{0,n}\): \[\mathsf{h}_{i}=\begin{cases}\left|J_{1}^{(i)}\right|^{\frac{2}{m_{i}}}&i=1,\ldots,n_{e},\\ \left|J_{1}^{(i)}\right|^{2}&i=n_{e}+1,\ldots,n-1,\\ \left|J_{-1}^{(n)}\right|^{2}&i=n.\end{cases} \tag{3.34}\] Similar to the case of \(\mathcal{M}_{g,n}\) discussed at the beginning of this subsection, the symmetric group \(\operatorname{Symm}\left(\mathsf{s}\right)\) acts on \(\mathcal{M}_{0,n}\) to give \(\mathfrak{M}_{0,n}(\boldsymbol{m})=\mathcal{M}_{0,n}/\operatorname{Symm}\left(\mathsf{s}\right)\), the moduli space of orbifold Riemann surfaces with signature \((0;m_{1},\ldots,m_{n_{e}};n_{p})\).
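Concretely, membership in the domain (3.32) amounts to two finite checks; the following small Python helper (with hypothetical sample points) tests whether a tuple \((w_{1},\ldots,w_{n-3})\) defines a point of \(\mathcal{M}_{0,n}\):

```python
from itertools import combinations

def in_M0n(ws, tol=1e-12):
    """Check that (w_1, ..., w_{n-3}) defines a point of M_{0,n}, cf. (3.32)."""
    if any(abs(w) < tol or abs(w - 1) < tol for w in ws):
        return False   # a free coordinate collides with the fixed points 0 or 1
    return all(abs(a - b) > tol for a, b in combinations(ws, 2))

print(in_M0n([0.5 + 0.1j, -2.0, 3.7]))   # True
print(in_M0n([0.5, 0.5, 2.0]))           # False: coincident points
```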
In order to describe the action of \(\operatorname{Symm}\left(\mathsf{s}\right)\) on \(\mathcal{M}_{0,n}\) in more detail, we will make the simplifying assumption that the signature of the orbifold Riemann surface \(O\) is given by \((0;\underbrace{m,\ldots,m}_{\mathsf{s}},\underbrace{m^{\prime},\ldots,m^{\prime}}_{\mathsf{s}^{\prime}})\) with \(\mathsf{s}\equiv\mathsf{s}_{m}\) and \(\mathsf{s}^{\prime}\equiv\mathsf{s}_{m^{\prime}}>3\). Let us first focus on ordered \(\mathsf{s}^{\prime}\)-tuples \((w_{1},w_{2},\ldots,w_{\mathsf{s}^{\prime}})\) with each \(w_{i}\in\operatorname{Sing}_{m^{\prime}}\). If none of the points \(0\), \(1\), and \(\infty\) lie in this set of points, then \(\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\) will simply be the group of permutations of \(\mathsf{s}^{\prime}\) objects. This group is generated by the set of transpositions \(\{\sigma_{i,i+1}\}_{i=1}^{\mathsf{s}^{\prime}}\) (with the cyclic convention \(\sigma_{\mathsf{s}^{\prime},\mathsf{s}^{\prime}+1}\equiv\sigma_{\mathsf{s}^{\prime},1}\)), whose action only involves interchanging \(w_{i}\) and \(w_{i+1}\) for \(i\neq\mathsf{s}^{\prime}\), and \(w_{1}\) and \(w_{\mathsf{s}^{\prime}}\) for \(i=\mathsf{s}^{\prime}\). The situation will be a bit more complicated when the points \(0\), \(1\), and \(\infty\) are among these points; namely, we have a sector of the form \(\{(w_{1},w_{2},\ldots,w_{\mathsf{s}^{\prime}-3},0,1,\infty)\in\mathbb{C}^{\mathsf{s}^{\prime}}\}\). Then the group \(\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\) will be generated by the transpositions \(\{\sigma_{i,i+1}\}_{i=1}^{\mathsf{s}^{\prime}-1}\), and the action of transpositions will be followed by a \(\operatorname{PSL}(2,\mathbb{C})\) transformation to ensure that the last three coordinates in \(\mathcal{M}_{0,n}\) remain \(0\), \(1\), and \(\infty\). If \(i<\mathsf{s}^{\prime}-3\), the transpositions will not affect the points \(0\), \(1\), and \(\infty\), thus no further action of \(\operatorname{PSL}(2,\mathbb{C})\) will be needed. If \(i=\mathsf{s}^{\prime}-3\), then the set of branch points will change to \(\{(w_{1},w_{2},\ldots,0,w_{\mathsf{s}^{\prime}-3},1,\infty)\}\), and we need a transformation that will take \(w_{\mathsf{s}^{\prime}-3}\to 0\), \(1\to 1\), and \(\infty\to\infty\). This transformation is \(\gamma_{\mathsf{s}^{\prime}-3,\mathsf{s}^{\prime}-2}=(w-w_{\mathsf{s}^{\prime}-3})/(1-w_{\mathsf{s}^{\prime}-3})\). Thus in the end we will arrive at \(\{(\frac{w_{1}-w_{\mathsf{s}^{\prime}-3}}{1-w_{\mathsf{s}^{\prime}-3}},\ldots,\frac{w_{\mathsf{s}^{\prime}-4}-w_{\mathsf{s}^{\prime}-3}}{1-w_{\mathsf{s}^{\prime}-3}},\frac{w_{\mathsf{s}^{\prime}-3}}{w_{\mathsf{s}^{\prime}-3}-1},0,1,\infty)\}\). Repeating the same procedure for \(i=\mathsf{s}^{\prime}-2\) and \(i=\mathsf{s}^{\prime}-1\) will yield the transformations \(\gamma_{\mathsf{s}^{\prime}-2,\mathsf{s}^{\prime}-1}=1-w\) and \(\gamma_{\mathsf{s}^{\prime}-1,\mathsf{s}^{\prime}}=w/(w-1)\).
Putting all these together, the collective action of \(\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\) on \(\mathcal{M}_{0,n}\) can be expressed as \(\sigma_{i,i+1}(w_{1},w_{2},\ldots,w_{\mathsf{s}^{\prime}-3},0,1,\infty)=(\tilde{w}_{1},\ldots,\tilde{w}_{\mathsf{s}^{\prime}-3},0,1,\infty)\) such that \[\tilde{w}_{k}=\begin{cases}w_{k}&(k\neq i,i+1),\qquad\qquad i\leq\mathsf{s}^{\prime}-4,\\ w_{k+1}&(k=i),\qquad\qquad i\leq\mathsf{s}^{\prime}-4,\\ w_{k-1}&(k=i+1),\qquad\qquad i\leq\mathsf{s}^{\prime}-4,\\ \frac{w_{k}-w_{\mathsf{s}^{\prime}-3}}{1-w_{\mathsf{s}^{\prime}-3}}&(k\leq\mathsf{s}^{\prime}-4),\quad i=\mathsf{s}^{\prime}-3,\\ \frac{w_{\mathsf{s}^{\prime}-3}}{w_{\mathsf{s}^{\prime}-3}-1}&(k=\mathsf{s}^{\prime}-3),\quad i=\mathsf{s}^{\prime}-3,\\ 1-w_{k}&(k\leq\mathsf{s}^{\prime}-3),\qquad i=\mathsf{s}^{\prime}-2,\\ \frac{w_{k}}{w_{k}-1}&(k\leq\mathsf{s}^{\prime}-3),\qquad i=\mathsf{s}^{\prime}-1.\end{cases} \tag{3.35}\] As we said, the effect of \(\sigma^{\prime}_{i,i+1}\in\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\) on \(\mathcal{M}_{0,n}\) is followed by a \(\gamma_{i,i+1}\in\operatorname{PSL}(2,\mathbb{C})\) on the orbifold Riemann surface \(O\) with the coordinate \(w\). This transformation is actually an isomorphism that takes \(O\) to another orbifold Riemann surface with the coordinate \(\tilde{w}=\gamma_{i,i+1}(w)\). For \(i\leq\mathsf{s}^{\prime}-4\), \(\gamma_{i,i+1}\) is simply the identity map. For the other cases we have \(\gamma_{i,i+1}:w\to\tilde{w}=(w-w_{\mathsf{s}^{\prime}-3})/(1-w_{\mathsf{s}^{\prime}-3})\) for \(i=\mathsf{s}^{\prime}-3\), \(\gamma_{i,i+1}:w\to\tilde{w}=1-w\) for \(i=\mathsf{s}^{\prime}-2\), and \(\gamma_{i,i+1}:w\to\tilde{w}=w/(w-1)\) for \(i=\mathsf{s}^{\prime}-1\). If one wishes to consider the variation of the objects defined on \(O\) under the action of \(\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\), one should study the effect of these \(\gamma_{i,i+1}\)'s on them. Now keeping \(0\), \(1\), and \(\infty\) fixed in the \(\operatorname{Sing}_{m^{\prime}}\) stratum, we consider ordered \(\mathsf{s}\)-tuples \((w_{1},w_{2},\ldots,w_{\mathsf{s}})\) with \(w_{j}\in\operatorname{Sing}_{m}\) for all \(j=1,\ldots,\mathsf{s}\), \(\mathsf{s}:=|\operatorname{Sing}_{m}|\), and \(m<m^{\prime}\). Here, we no longer make any assumptions about \(\mathsf{s}\). Again, the group \(\operatorname{Symm}\left(\mathsf{s}\right)\) is generated by the transpositions \(\{\sigma_{j,j+1}\}_{j=1}^{\mathsf{s}-1}\) and their action simply involves interchanging \(w_{j}\) and \(w_{j+1}\). Thus, the \(\gamma_{j,j+1}\) defined for this set will all be the identity, meaning that \(\operatorname{Symm}\left(\mathsf{s}\right)\) will not have any non-trivial effect on \(O\). Moving forward, we can define the direct product of the two symmetric groups \(\operatorname{Symm}\left(\mathsf{s}\right)\times\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\) as ordered pairs \((\sigma,\sigma^{\prime})\), \(\sigma\in\operatorname{Symm}\left(\mathsf{s}\right)\), \(\sigma^{\prime}\in\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\).
The group operations are then defined naturally on a pairwise basis, and this group acts on \(\mathcal{M}_{0,n}\), \[\{(w_{1},\ldots,w_{\mathsf{s}})\in\mathbb{C}^{\mathsf{s}}\}\times\{(w_{1},w_{2},\ldots,w_{\mathsf{s}^{\prime}-3},0,1,\infty)\in\mathbb{C}^{\mathsf{s}^{\prime}}\}=\{(w_{1},\ldots,w_{\mathsf{s}},w_{\mathsf{s}+1},w_{\mathsf{s}+2},\ldots,w_{\mathsf{s}+\mathsf{s}^{\prime}-3},0,1,\infty)\}.\] It is now clear that we have chosen to fix \(0\), \(1\), and \(\infty\) in the stratum \(\text{Sing}_{m^{\prime}}\) in order to comply with our convention that branch points are ordered with increasing order of isotropy. In particular, the generators of the direct product group are pairs of transpositions \(\{(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})\}_{j=1,i=1}^{j=\mathsf{s}-1,i=\mathsf{s}^{\prime}-1}\). The action of these pairs on \(\mathcal{M}_{0,n}\) is defined by their separate actions on the corresponding subspaces: \[\begin{split}(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})&\big{(}w_{1},\dots,w_{\mathsf{s}},w_{\mathsf{s}+1},w_{\mathsf{s}+2},\dots,w_{\mathsf{s}+\mathsf{s}^{\prime}-3},0,1,\infty\big{)}\\ &=\Big{(}\sigma_{j,j+1}(w_{1},\dots,w_{\mathsf{s}});\sigma^{\prime}_{i,i+1}(w_{\mathsf{s}+1},w_{\mathsf{s}+2},\dots,w_{\mathsf{s}+\mathsf{s}^{\prime}-3},0,1,\infty)\Big{)}.\end{split} \tag{3.36}\] Similar to the case of a single stratum, these transpositions should be followed by a transformation \(\gamma_{j,j+1;i,i+1}\in\text{PSL}(2,\mathbb{C})\) on the orbifold Riemann surface \(O\). For any \(j\) and \(i\leq\mathsf{s}^{\prime}-4\), the corresponding transformation is simply the identity map, while for the cases with \(i=\mathsf{s}^{\prime}-3,\mathsf{s}^{\prime}-2,\mathsf{s}^{\prime}-1\) and arbitrary \(j\), the \(\gamma_{j,j+1;i,i+1}\) are identical to the \(\gamma_{i,i+1}\) defined for the stratum \(\text{Sing}_{m^{\prime}}\), with \(w_{\mathsf{s}^{\prime}-3}\) replaced by \(w_{\mathsf{s}+\mathsf{s}^{\prime}-3}\). This means that (3.36) can be expressed more explicitly by \((\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})\big{(}w_{1},\dots,w_{\mathsf{s}},w_{\mathsf{s}+1},w_{\mathsf{s}+2},\dots,w_{\mathsf{s}+\mathsf{s}^{\prime}-3},0,1,\infty\big{)}=\big{(}\tilde{w}_{1},\dots,\tilde{w}_{\mathsf{s}},\tilde{w}_{\mathsf{s}+1},\tilde{w}_{\mathsf{s}+2},\dots,\tilde{w}_{\mathsf{s}+\mathsf{s}^{\prime}-3},0,1,\infty\big{)}\) such that \[\tilde{w}_{k}=\begin{cases}w_{k}&(k\neq j,j+1,\mathsf{s}+i,\mathsf{s}+i+1),&i\leq\mathsf{s}^{\prime}-4,\forall j,\\ w_{k+1}&(k=j\text{ or }k=\mathsf{s}+i),&i\leq\mathsf{s}^{\prime}-4,\forall j,\\ w_{k-1}&(k=j+1\text{ or }k=\mathsf{s}+i+1),&i\leq\mathsf{s}^{\prime}-4,\forall j,\\ \frac{w_{k}-w_{\mathsf{s}+\mathsf{s}^{\prime}-3}}{1-w_{\mathsf{s}+\mathsf{s}^{\prime}-3}}&(k\leq\mathsf{s}+\mathsf{s}^{\prime}-4),&i=\mathsf{s}^{\prime}-3,\forall j,\\ \frac{w_{\mathsf{s}+\mathsf{s}^{\prime}-3}}{w_{\mathsf{s}+\mathsf{s}^{\prime}-3}-1}&(k=\mathsf{s}+\mathsf{s}^{\prime}-3),&i=\mathsf{s}^{\prime}-3,\forall j,\\ 1-w_{k}&(k\leq\mathsf{s}+\mathsf{s}^{\prime}-3),&i=\mathsf{s}^{\prime}-2,\forall j,\\ \frac{w_{k}}{w_{k}-1}&(k\leq\mathsf{s}+\mathsf{s}^{\prime}-3),&i=\mathsf{s}^{\prime}-1,\forall j.\end{cases} \tag{3.37}\] It is clear that one can continue to build larger direct products with more strata by making simple modifications to (3.37). As in the case of a single stratum, one needs to take the associated Mobius transformations into account when dealing with the variation of objects with respect to the action of \(\operatorname{Symm}\left(\mathsf{s}\right)\). 
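To make the bookkeeping in (3.35) and (3.37) concrete, the following short numerical sketch (our own illustration, not part of the original text; all helper names are hypothetical) implements a transposition on a point of \(\mathcal{M}_{0,n}\) as a plain swap followed by the unique Mobius renormalization returning the last three slots to \(0\), \(1\), and \(\infty\), and checks the result against the closed-form case \(i=\mathsf{s}^{\prime}-3\) of (3.35).

```python
# Sketch: verify the i = s'-3 case of (3.35) numerically. "INF" stands for the
# point at infinity on the Riemann sphere; all helper names are our own.
import random

INF = None

def apply(M, w):
    """Apply the Mobius map with matrix M = ((a, b), (c, d)) to w (INF allowed)."""
    (a, b), (c, d) = M
    if w is INF:
        return a / c if c != 0 else INF
    num, den = a * w + b, c * w + d
    return INF if abs(den) < 1e-14 else num / den

def to_01inf(p, q, r):
    """Matrix of the Mobius map sending (p, q, r) to (0, 1, INF)."""
    if r is INF:
        return ((1, -p), (0, q - p))
    if q is INF:
        return ((1, -p), (1, -r))
    if p is INF:
        return ((0, q - r), (1, -r))
    return ((q - r, -p * (q - r)), (q - p, -r * (q - p)))

def transpose_and_renormalize(points, i):
    """Swap slots i, i+1 (0-based) and renormalize so the last three slots
    become 0, 1, INF again -- the PSL(2, C) step described in the text."""
    pts = list(points)
    pts[i], pts[i + 1] = pts[i + 1], pts[i]
    M = to_01inf(*pts[-3:])
    return [apply(M, w) for w in pts]

def close(x, y):
    if x is INF or y is INF:
        return x is y
    return abs(x - y) < 1e-9

random.seed(0)
ws = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(4)]
a = ws[-1]                                   # a = w_{s'-3}
out = transpose_and_renormalize(ws + [0.0, 1.0, INF], len(ws) - 1)
expected = ([(w - a) / (1 - a) for w in ws[:-1]]
            + [a / (a - 1), 0.0, 1.0, INF])  # the i = s'-3 rows of (3.35)
assert all(close(o, e) for o, e in zip(out, expected))
print("(3.35), case i = s'-3: verified")
```

The same routine reproduces the remaining rows of (3.35) (and, applied slot-wise, of (3.37)) if the swap index is moved to \(i=\mathsf{s}^{\prime}-2\) or \(i=\mathsf{s}^{\prime}-1\).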
For the symmetric group \(\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\) one can define the \(1\)-cocycle \(\{f_{\sigma^{\prime}}\}_{\sigma^{\prime}\in\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)}\). It is defined for the generators by \[f_{\sigma^{\prime}_{i,i+1}}=\begin{cases}1&i=1,2,\dots,\mathsf{s}^{\prime}-4,\mathsf{s}^{\prime}-2,\\ (w_{\mathsf{s}^{\prime}-3}-1)^{h^{\prime}(\mathsf{s}^{\prime}-2)}&i=\mathsf{s}^{\prime}-3,\\ \prod_{k=1}^{\mathsf{s}^{\prime}-3}(w_{k}-1)^{2h^{\prime}}&i=\mathsf{s}^{\prime}-1,\end{cases} \tag{3.38}\] where \(h^{\prime}/2\) is the conformal weight corresponding to \(m^{\prime}\). Note that if \(m^{\prime}\to\infty\), namely the case of punctures, (3.38) will be slightly different: \[f_{\sigma^{\prime}_{i,i+1}}=\begin{cases}1&i=1,2,\ldots,\mathsf{s}^{\prime}-4,\mathsf{s}^{\prime}-2,\\ (w_{\mathsf{s}^{\prime}-3}-1)^{(\mathsf{s}^{\prime}-2)}&i=\mathsf{s}^{\prime}-3,\\ \prod_{k=1}^{\mathsf{s}^{\prime}-3}(w_{k}-1)^{2}&i=\mathsf{s}^{\prime}-1.\end{cases} \tag{3.39}\] The action of the 1-cocycle \(f_{\sigma^{\prime}}\) can be extended to a general element in \(\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\) by the product rule \(f_{\sigma^{\prime}_{1}\sigma^{\prime}_{2}}=(f_{\sigma^{\prime}_{1}}\circ\sigma^{\prime}_{2})f_{\sigma^{\prime}_{2}}\), \(\sigma^{\prime}_{1},\sigma^{\prime}_{2}\in\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\). For the group \(\operatorname{Symm}\left(\mathsf{s}\right)\), the effect of the 1-cocycle on the generators is trivial, namely \(f_{\sigma_{j,j+1}}=1\). Finally, for the direct product group \(\operatorname{Symm}\left(\mathsf{s}\right)\times\operatorname{Symm}\left(\mathsf{s}^{\prime}\right)\) the 1-cocycle will be given for the generators by \[f_{(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})}=\begin{cases}1&i=1,2,\ldots,\mathsf{s}^{\prime}-4,\mathsf{s}^{\prime}-2,\forall j,\\ (w_{\mathsf{s}+\mathsf{s}^{\prime}-3}-1)^{h\mathsf{s}+h^{\prime}(\mathsf{s}^{\prime}-2)}&i=\mathsf{s}^{\prime}-3,\forall j,\\ \prod_{k=1}^{\mathsf{s}}(w_{k}-1)^{2h}\prod_{k=\mathsf{s}+1}^{\mathsf{s}+\mathsf{s}^{\prime}-3}(w_{k}-1)^{2h^{\prime}}&i=\mathsf{s}^{\prime}-1,\forall j,\end{cases} \tag{3.40}\] where \(h/2,h^{\prime}/2\) are the conformal weights corresponding to \(m,m^{\prime}\). Thus, we see that the non-triviality of the 1-cocycle \(f\) is due only to the fixed points \(0\), \(1\), and \(\infty\): if none of them is involved in the permutation, the 1-cocycle is trivial. **Remark 3.3**.: As we will see in Lemma 4.1, a simple way to determine the 1-cocycles \(f\) is to calculate the variation of the Liouville action due to the transformation of \(\mathcal{M}_{0,n}\) under the symmetric group. In other words, these 1-cocycles are basically the modular anomaly caused by the non-covariance of the action under the effect of the modular group. As in [1, 16], let \(\{f_{\eta}\}_{\eta\in\operatorname{Symm}\left(\mathsf{s}\right)}\) be the 1-cocycle for \(\operatorname{Symm}\left(\mathsf{s}\right)\) on \(\mathcal{M}_{0,n}\). These 1-cocycles can be used to define a holomorphic \(\mathbb{Q}\)-line bundle over the moduli space \(\mathfrak{M}_{0,n}(\boldsymbol{m})\). To do so, one constructs the trivial bundle \(\mathcal{M}_{0,n}\times\mathbb{C}\) and defines the action of \(\operatorname{Symm}\left(\mathsf{s}\right)\) on this bundle by \[(\boldsymbol{w},\tilde{z})\mapsto(\eta\cdot\boldsymbol{w},f_{\eta}(\boldsymbol{w})\tilde{z}),\qquad\boldsymbol{w}\in\mathcal{M}_{0,n},\,\tilde{z}\in\mathbb{C},\,\eta\in\operatorname{Symm}\left(\mathsf{s}\right). \tag{3.41}\] Then, the desired holomorphic \(\mathbb{Q}\)-line bundle \(\lambda_{0,\boldsymbol{m}}=(\mathcal{M}_{0,n}\times\mathbb{C})/\sim\) over the moduli space \(\mathfrak{M}_{0,n}(\boldsymbol{m})=\mathcal{M}_{0,n}/\operatorname{Symm}\left(\mathsf{s}\right)\) is defined by the identification \((\boldsymbol{w},\tilde{z})\sim(\eta\cdot\boldsymbol{w},f_{\eta}(\boldsymbol{w})\tilde{z})\) for all \(\eta\in\operatorname{Symm}\left(\mathsf{s}\right)\). **Lemma 3.2**.: _Let \(O\) be a closed (i.e., \(n_{p}=0\)) orbifold Riemann surface with signature \((0;m_{1},\ldots,m_{n})\) and fix the last three conical points to be at \(0\), \(1\), and \(\infty\) (as always, we assume that these three conical points belong to the same stratum). Define a positive function_ \[\mathsf{H}=\mathsf{h}_{1}^{m_{1}h_{1}}\cdots\mathsf{h}_{n-1}^{m_{n-1}h_{n-1}}\mathsf{h}_{n}^{-m_{n}h_{n}},\] _on \(\mathcal{M}_{0,n}\). Then \(\mathsf{H}\) determines a Hermitian metric in the holomorphic \(\mathbb{Q}\)-line bundle \(\lambda_{0,\boldsymbol{m}}\) over \(\mathfrak{M}_{0,n}(\boldsymbol{m})\), where the \(m_{i}\) are the branching indices and the \(h_{i}/2\) are their corresponding conformal weights._ Proof.: We prove this lemma for the case where we have only two strata of \(s\) branch points of order \(m\) and \(s^{\prime}\) branch points of order \(m^{\prime}\). Furthermore, we assume that \(s^{\prime}>3\), and the last three conical points in the stratum \(\text{Sing}_{m^{\prime}}\) (\(m^{\prime}>m\)) are chosen to be at \(0\), \(1\), \(\infty\). In the end, we shall explain how the rest of the cases can be dealt with similarly. We have \(\mathcal{M}_{0,n}=\left\{(w_{1},\dots,w_{s},w_{s+1},w_{s+2},\dots,w_{s+s^{\prime}-3},0,1)\right\}\). For simplicity of notation, we will from now on write \(s+s^{\prime}=n\) in the proof. Each of the \(\mathsf{h}_{k}\)'s can be viewed as a function on \(\mathbb{C}\) with the appropriate asymptotics (see (3.34), part 3. in Lemma C.1 and Remark C.1). Accordingly, \[\log\mathsf{H}= hm\sum_{k=1}^{s}\left(-2\log m+2\log 2-\lim_{w\to w_{k}}\left(\varphi(w)+\left(1-\frac{1}{m}\right)\log|w-w_{k}|^{2}\right)\right)\] \[+h^{\prime}m^{\prime}\sum_{k=s+1}^{n-1}\left(-2\log m^{\prime}+2\log 2-\lim_{w\to w_{k}}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|w-w_{k}|^{2}\right)\right)\] \[-h^{\prime}m^{\prime}\left(2\log m^{\prime}-2\log 2+\lim_{w\rightarrow\infty}\left(\varphi(w)+\left(1+\frac{1}{m^{\prime}}\right)\log|w|^{2}\right)\right). \tag{3.42}\] Now we need to calculate the variation of \(\log\mathsf{H}\) under the effect of \(\text{Symm}\left(\mathbf{s}\right)=\text{Symm}\left(s\right)\times\text{Symm}\left(s^{\prime}\right)\). For this, it suffices to look at the effect of the generators \(\{(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})\}_{j=1,i=1}^{j=s-1,i=s^{\prime}-1}\). Given the background we provided above on the structure of the generators, the variation is implemented through the effect of the transformation \(\gamma_{j,j+1;i,i+1}\): \[\Delta\log\mathsf{H}=\log\mathsf{H}[\gamma_{j,j+1;i,i+1}]-\log\mathsf{H}. \tag{3.43}\] By looking at (3.37), we see that the index \(j\) does not have any non-trivial effect, and we only have to worry about different values of \(i\). For \(i<s^{\prime}-3\) the isomorphism \(\gamma_{j,j+1;i,i+1}\) is the identity, so in these cases we have \(\Delta\log\mathsf{H}=0\). The non-trivial cases are \(i=s^{\prime}-3\), \(i=s^{\prime}-2\), and \(i=s^{\prime}-1\). 
In the following, we study them separately: * \(i=n-3\) _case:_ The needed Mobius transformation is given by \(\gamma_{j,j+1;i,i+1}=(w-w_{n-3})/(1-w_{n-3})\) in this case. For simplicity, we denote this by just \(\gamma\). Thus, we write: \[\Delta \log\mathsf{H}=\log\mathsf{H}[\gamma]-\log\mathsf{H}= \tag{3.44}\] \[-hm\sum_{k=1}^{s}\left(\lim_{\gamma(w)\to\gamma(w_{k})}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m}\right)\log|\gamma(w)-\gamma(w_{k})|^{2}\right)\right)\] \[+hm\sum_{k=1}^{s}\left(\lim_{w\to w_{k}}\left(\varphi(w)+\left(1-\frac{1}{m}\right)\log|w-w_{k}|^{2}\right)\right)\] \[-h^{\prime}m^{\prime}\sum_{k=s+1}^{n-1}\left(\lim_{\gamma(w)\to\gamma(w_{k})}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m^{\prime}}\right)\log|\gamma(w)-\gamma(w_{k})|^{2}\right)\right)\] \[+h^{\prime}m^{\prime}\sum_{k=s+1}^{n-1}\left(\lim_{w\to w_{k}}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|w-w_{k}|^{2}\right)\right)\] \[-h^{\prime}m^{\prime}\lim_{\gamma(w)\to\gamma(\infty)}\left(\tilde{\varphi}(\gamma(w))+\left(1+\frac{1}{m^{\prime}}\right)\log|\gamma(w)|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to\infty}\left(\varphi(w)+\left(1+\frac{1}{m^{\prime}}\right)\log|w|^{2}\right),\] where \(\tilde{\varphi}\) is the transformed counterpart of \(\varphi\) through the isomorphism \(\gamma\). From the invariance of the hyperbolic metric, these two are related by \[\varphi(w)=\tilde{\varphi}(\gamma(w))+\log\left|\frac{\partial\gamma(w)}{\partial w}\right|^{2}=\tilde{\varphi}(\gamma(w))-2\log|1-w_{n-3}|\,. \tag{3.45}\] We also have \[\log|\gamma(w)-\gamma(w_{k})|=\left\{\begin{aligned} &\log|\gamma(w)-\gamma(w_{k})|=\log\left|\frac{w-w_{k}}{1-w_{n-3}}\right|&\qquad k\leq n-4,\\ &\log|\gamma(w)-\gamma(0)|=\log\left|\frac{w}{1-w_{n-3}}\right|&\qquad k=n-3,\\ &\log|\gamma(w)-\gamma(w_{n-3})|=\log\left|\frac{w-w_{n-3}}{1-w_{n-3}}\right|&\qquad k=n-2,\\ &\log|\gamma(w)-\gamma(1)|=\log\left|\frac{w-1}{1-w_{n-3}}\right|&\qquad k=n-1.\end{aligned}\right. \tag{3.46}\] Note that in finding (3.46), we first included the transposition that exchanges \(w_{n-3}\) and \(w_{n-2}=0\) and then included the effect of \(\gamma\). Putting (3.45) and (3.46) together we look more closely at (3.44). 
The terms with \(k\leq n-4\) in the sum are quite straightforward: \[I_{1}= -hm\sum_{k=1}^{s}\left(\lim_{\gamma(w)\to\gamma(w_{k})}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m}\right)\log|\gamma(w)-\gamma(w_{k})|^{2}\right)\right)\] \[+hm\sum_{k=1}^{s}\left(\lim_{w\to w_{k}}\left(\varphi(w)+\left(1-\frac{1}{m}\right)\log|w-w_{k}|^{2}\right)\right)\] \[-h^{\prime}m^{\prime}\sum_{k=s+1}^{n-4}\left(\lim_{\gamma(w)\to\gamma(w_{k})}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m^{\prime}}\right)\log|\gamma(w)-\gamma(w_{k})|^{2}\right)\right)\] \[+h^{\prime}m^{\prime}\sum_{k=s+1}^{n-4}\left(\lim_{w\to w_{k}}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|w-w_{k}|^{2}\right)\right)\] \[=-hm\sum_{k=1}^{s}\left(\lim_{w\to w_{k}}\left(2\log|1-w_{n-3}|+\left(1-\frac{1}{m}\right)\log\left|\frac{w-w_{k}}{1-w_{n-3}}\right|^{2}\right)\right)\] \[+hm\sum_{k=1}^{s}\left(\lim_{w\to w_{k}}\left(\left(1-\frac{1}{m}\right)\log|w-w_{k}|^{2}\right)\right)\] \[-h^{\prime}m^{\prime}\sum_{k=s+1}^{n-4}\left(\lim_{w\to w_{k}}\left(2\log|1-w_{n-3}|+\left(1-\frac{1}{m^{\prime}}\right)\log\left|\frac{w-w_{k}}{1-w_{n-3}}\right|^{2}\right)\right)\] \[+h^{\prime}m^{\prime}\sum_{k=s+1}^{n-4}\left(\lim_{w\to w_{k}}\left(\left(1-\frac{1}{m^{\prime}}\right)\log|w-w_{k}|^{2}\right)\right)\] \[=-2\left(hs+h^{\prime}(s^{\prime}-4)\right)\log|1-w_{n-3}|\,.\] For \(k=n-3\), we have: \[I_{2}= -h^{\prime}m^{\prime}\lim_{\gamma(w)\to\gamma(0)}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m^{\prime}}\right)\log|\gamma(w)-\gamma(0)|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to w_{n-3}}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|w-w_{n-3}|^{2}\right)\] \[=-h^{\prime}m^{\prime}\lim_{w\to 0}\left(\varphi(w)+2\log|1-w_{n-3}|+\left(1-\frac{1}{m^{\prime}}\right)\log\left|\frac{w}{1-w_{n-3}}\right|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to w_{n-3}}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|w-w_{n-3}|^{2}\right).\] And for \(k=n-2\): \[I_{3}= -h^{\prime}m^{\prime}\lim_{\gamma(w)\to\gamma(w_{n-3})}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m^{\prime}}\right)\log|\gamma(w)-\gamma(w_{n-3})|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to 0}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|w|^{2}\right)\] \[=-h^{\prime}m^{\prime}\lim_{w\to w_{n-3}}\left(\varphi(w)+2\log|1-w_{n-3}|+\left(1-\frac{1}{m^{\prime}}\right)\log\left|\frac{w-w_{n-3}}{1-w_{n-3}}\right|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to 0}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|w|^{2}\right).\] Thus, we have: \[I_{2}+I_{3}=-4h^{\prime}\log\left|1-w_{n-3}\right|.\] Also for \(k=n-1\): \[I_{4}= -h^{\prime}m^{\prime}\lim_{\gamma(w)\rightarrow\gamma(1)}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m^{\prime}}\right)\log\left|\gamma(w)-\gamma(1)\right|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to 1}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log\left|w-1\right|^{2}\right)\] \[=-h^{\prime}m^{\prime}\lim_{w\to 1}\left(2\log\left|1-w_{n-3}\right|+\left(1-\frac{1}{m^{\prime}}\right)\log\left|\frac{w-1}{1-w_{n-3}}\right|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to 1}\left(\left(1-\frac{1}{m^{\prime}}\right)\log\left|w-1\right|^{2}\right)\] \[=-2h^{\prime}\log\left|1-w_{n-3}\right|.\] And finally, for the contribution of infinity: \[I_{5}= -h^{\prime}m^{\prime}\lim_{\gamma(w)\rightarrow\gamma(\infty)}\left(\tilde{\varphi}(\gamma(w))+\left(1+\frac{1}{m^{\prime}}\right)\log\left|\gamma(w)\right|^{2}\right)\] 
\[+h^{\prime}m^{\prime}\lim_{w\rightarrow\infty}\left(\varphi(w)+ \left(1+\frac{1}{m^{\prime}}\right)\log\left|w\right|^{2}\right)\] \[=-h^{\prime}m^{\prime}\lim_{w\rightarrow\infty}\left(2\log\left|1 -w_{n-3}\right|+\left(1+\frac{1}{m^{\prime}}\right)\log\left|\frac{w-w_{n-3}} {1-w_{n-3}}\right|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\rightarrow\infty}\left(\left(1+ \frac{1}{m^{\prime}}\right)\log\left|w\right|^{2}\right)\] \[=2h^{\prime}\log\left|1-w_{n-3}\right|.\] Thus, we have for all \(j\): \[\Delta\log\mathsf{H}=\sum_{i=1}^{5}I_{i}=-2\left(hs+h^{\prime}(s^{\prime}-2) \right)\log\left|1-w_{n-3}\right|=-2\log|f_{(\sigma_{j,j+1},\sigma^{\prime}_{n -3,n-2})}|,\] (3.47) where we used (3.40) in the last equality. * \(i=n-2\) _case:_ Now, we look at the case with \(i=n-2\) and again denote the morphism by \(\gamma\). We have \(\gamma=1-w\) in this case. Furthermore \[\varphi(w)=\tilde{\varphi}(\gamma(w))+\log\left|\frac{\partial \gamma(w)}{\partial w}\right|^{2}=\tilde{\varphi}(\gamma(w)),\] \[\log\left|\gamma(w)-\gamma(w_{k})\right|=\left\{\begin{aligned} &\log\left|\gamma(w)- \gamma(w_{k})\right|=\log\left|w-w_{k}\right|&\quad k\leq n-3,\\ &\log\left|\gamma(w)-\gamma(1)\right|=\log\left|1-w\right|& \quad k=n-2,\\ &\log\left|\gamma(w)-\gamma(0)\right|=\log\left|w\right|& \quad k=n-1.\end{aligned}\right.\] Accordingly, it is quite clear that the variation is zero in this case, which is in agreement with the lemma. * \(i=n-1\) _case:_ The last and perhaps the most subtle case is the case with \(i=n-1\). Here we have \(\gamma=w/(w-1)\), and \[\varphi(w)=\tilde{\varphi}(\gamma(w))+\log\left|\frac{\partial \gamma(w)}{\partial w}\right|^{2}=\tilde{\varphi}(\gamma(w))-2\log\left|1-w \right|^{2},\] \[\log\left|\gamma(w)-\gamma(w_{k})\right|=\left\{\begin{aligned} & \log\left|\gamma(w)-\gamma(w_{k})\right|=\log\left|\frac{w-w_{k}}{(1-w)(1-w_ {k})}\right|&\qquad k\leq n-2,\\ &\log\left|\gamma(w)-\gamma(\infty)\right|=\log\left|\frac{1}{1- w}\right|&\qquad k=n-1.\end{aligned}\right.\] Using these we again decompose (3.44) and write for \(k\leq n-2\): \[\tilde{I}_{1}= -hm\sum_{k=1}^{s}\left(\lim_{\gamma(w)\to\gamma(w_{k})}\left( \tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m}\right)\log\left|\gamma(w)- \gamma(w_{k})\right|^{2}\right)\right)\] \[+hm\sum_{k=1}^{s}\left(\lim_{w\to w_{k}}\left(\varphi(w)+ \left(1-\frac{1}{m}\right)\log\left|w-w_{k}\right|^{2}\right)\right)\] \[-h^{\prime}m^{\prime}\sum_{k=s+1}^{n-2}\left(\lim_{\gamma(w)\to \gamma(w_{k})}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m^{\prime}} \right)\log\left|\gamma(w)-\gamma(w_{k})\right|^{2}\right)\right)\] \[+h^{\prime}m^{\prime}\sum_{k=s+1}^{n-2}\left(\lim_{w\to w_{k}} \left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log\left|w-w_{k}\right|^ {2}\right)\right)\] \[=-hm\sum_{k=1}^{s}\left(\lim_{w\to w_{k}}\left(2\log\left|1-w \right|^{2}+\left(1-\frac{1}{m}\right)\log\left|\frac{w-w_{k}}{(1-w)(1-w_{k}) }\right|^{2}\right)\right)\] \[+hm\sum_{k=1}^{s}\left(\lim_{w\to w_{k}}\left(\left(1-\frac{1}{m} \right)\log\left|w-w_{k}\right|^{2}\right)\right)\] \[-h^{\prime}m^{\prime}\sum_{k=s+1}^{n-2}\left(\lim_{w\to w_{k}} \left(2\log\left|1-w\right|^{2}+\left(1-\frac{1}{m^{\prime}}\right)\log\left| \frac{w-w_{k}}{(1-w)(1-w_{k})}\right|^{2}\right)\right)\] \[+h^{\prime}m^{\prime}\sum_{k=s+1}^{n-2}\left(\lim_{w\to w_{k}} \left(\left(1-\frac{1}{m^{\prime}}\right)\log\left|w-w_{k}\right|^{2}\right)\right)\] \[=-2h\sum_{k=1}^{s}\log\left|1-w_{k}\right|^{2}-2h^{\prime}\sum_{k =s+1}^{n-2}\log\left|1-w_{k}\right|^{2}\] 
\[=-2h\sum_{k=1}^{s}\log\left|1-w_{k}\right|^{2}-2h^{\prime}\sum_{k=s+1}^{n-3}\log\left|1-w_{k}\right|^{2},\] where in the last equality, we used the fact that \(w_{n-2}=0\). For \(k=n-1\) we have \[\tilde{I}_{2}= -h^{\prime}m^{\prime}\lim_{\gamma(w)\to\gamma(\infty)}\left(\tilde{\varphi}(\gamma(w))+\left(1-\frac{1}{m^{\prime}}\right)\log|\gamma(w)-\gamma(\infty)|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to 1}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|1-w|^{2}\right)\] \[=-h^{\prime}m^{\prime}\lim_{w\to\infty}\left(\varphi(w)+2\log|1-w|^{2}+\left(1-\frac{1}{m^{\prime}}\right)\log\left|\frac{1}{1-w}\right|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to 1}\left(\varphi(w)+\left(1-\frac{1}{m^{\prime}}\right)\log|1-w|^{2}\right),\] and for the point at infinity, we have \[\tilde{I}_{3}= -h^{\prime}m^{\prime}\lim_{\gamma(w)\to\gamma(1)}\left(\tilde{\varphi}(\gamma(w))+\left(1+\frac{1}{m^{\prime}}\right)\log|\gamma(w)|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to\infty}\left(\varphi(w)+\left(1+\frac{1}{m^{\prime}}\right)\log|w|^{2}\right)\] \[=-h^{\prime}m^{\prime}\lim_{w\to 1}\left(\varphi(w)+2\log|1-w|^{2}+\left(1+\frac{1}{m^{\prime}}\right)\log\left|\frac{w}{1-w}\right|^{2}\right)\] \[+h^{\prime}m^{\prime}\lim_{w\to\infty}\left(\varphi(w)+\left(1+\frac{1}{m^{\prime}}\right)\log|w|^{2}\right).\] Thus we see that \(\tilde{I}_{2}+\tilde{I}_{3}=0\), and consequently, again for all \(j\): \[\Delta\log\mathsf{H}=\sum_{i=1}^{3}\tilde{I}_{i}=-2h\sum_{k=1}^{s}\log|1-w_{k}|^{2}-2h^{\prime}\sum_{k=s+1}^{n-3}\log|1-w_{k}|^{2}=-2\log|f_{(\sigma_{j,j+1},\sigma^{\prime}_{n-1,n})}|, \tag{3.48}\] where we again used (3.40) in the last equality. By putting together all of the cases above, we see that \[\Delta\log\mathsf{H}=\log\mathsf{H}\circ(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})-\log\mathsf{H}=-2\log|f_{(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})}|,\ \ \ \ \forall i,j. \tag{3.49}\] Had we chosen any other way of fixing \(0\), \(1\), and \(\infty\), finding the appropriate generators and \(1\)-cocycles and performing calculations analogous to the ones above would yield the same result. In fact, the inclusion of the \(h_{k}m_{k}\)'s in the definition of \(\mathsf{H}\) ensures that this lemma holds no matter how we choose to fix these points. Finally, the case with more kinds of branch points can be derived inductively from the calculation above. Thus (3.49) holds in general, and it means that under the action of the elements of \(\operatorname{Symm}\left(\mathsf{s}\right)\), \(\mathsf{H}\) transforms according to the rule \((\mathsf{H}\circ\eta)|f_{\eta}|^{2}=\mathsf{H}\). This means that \(\mathsf{H}\) is a Hermitian metric in the holomorphic \(\mathbb{Q}\)-line bundle \(\lambda_{0,\boldsymbol{m}}\) over \(\mathfrak{M}_{0,n}(\boldsymbol{m})\). **Remark 3.4**.: When \(O\) is a punctured orbifold Riemann surface with signature \((0;m_{1},\ldots,m_{n_{e}};n_{p}>3)\), the Hermitian metric \(\mathsf{H}\) on the \(\mathbb{Q}\)-line bundle \(\lambda_{0,\boldsymbol{m}}\) is defined as (see (3.34)) \[\mathsf{H}=\mathsf{h}_{1}^{m_{1}h_{1}}\cdots\mathsf{h}_{n_{e}}^{m_{n_{e}}h_{n_{e}}}\mathsf{h}_{n_{e}+1}\cdots\mathsf{h}_{n-1}\mathsf{h}_{n}^{-1}.\] It is worth noting that the proof of Lemma 3.2 for the case of punctured Riemann surfaces can be found in [1] and is just a matter of redoing the calculations above with the appropriate 1-cocycles. 
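As a quick consistency check of the product rule \(f_{\sigma^{\prime}_{1}\sigma^{\prime}_{2}}=(f_{\sigma^{\prime}_{1}}\circ\sigma^{\prime}_{2})f_{\sigma^{\prime}_{2}}\), note that every transposition squares to the identity, so the cocycle evaluated on \(\sigma^{\prime 2}\) must be identically \(1\). The sketch below (our own check, not from the text, using the puncture cocycle (3.39) together with the action (3.35); function names are hypothetical) verifies this numerically for \(\sigma^{\prime}_{\mathsf{s}^{\prime}-1,\mathsf{s}^{\prime}}\) and \(\sigma^{\prime}_{\mathsf{s}^{\prime}-3,\mathsf{s}^{\prime}-2}\).

```python
# Sketch: check f_{sigma'^2} = (f_{sigma'} o sigma') * f_{sigma'} = 1 for the
# puncture 1-cocycle (3.39); all names are our own.
import random

def prod(xs):
    out = 1
    for x in xs:
        out *= x
    return out

random.seed(1)
ws = [complex(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(5)]
sp = len(ws) + 3                     # s' = number of free coordinates + 3

# sigma'_{s'-1,s'}: action w_k -> w_k/(w_k - 1); cocycle f = prod_k (w_k - 1)^2.
def act_last(v):
    return [w / (w - 1) for w in v]

def f_last(v):
    return prod((w - 1) ** 2 for w in v)

assert abs(f_last(act_last(ws)) * f_last(ws) - 1) < 1e-9   # f on sigma'^2 is 1

# sigma'_{s'-3,s'-2}: w_k -> (w_k - a)/(1 - a) for k <= s'-4 and a -> a/(a - 1),
# with a = w_{s'-3}; cocycle f = (w_{s'-3} - 1)^{s'-2} in the puncture case.
def act_third(v):
    a = v[-1]
    return [(w - a) / (1 - a) for w in v[:-1]] + [a / (a - 1)]

def f_third(v):
    return (v[-1] - 1) ** (sp - 2)

assert abs(f_third(act_third(ws)) * f_third(ws) - 1) < 1e-9
print("cocycle consistency f on sigma'^2 = 1 verified for (3.39)")
```

The same kind of check can be run with the weighted cocycles (3.38) and (3.40) once rational conformal weights are substituted.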
The Poincare metric on \(\mathbb{H}\) can be pushed forward by the covering map \(J:\mathbb{H}\to O\) to obtain the hyperbolic metric \(e^{\varphi(w)}|\operatorname{d}\!w|^{2}\) on the orbifold Riemann surface \(O\) as follows, \[e^{\varphi(w)}=\frac{\left|J^{-1}(w)^{\prime}\right|^{2}}{\left(\operatorname{Im}J^{-1}(w)\right)^{2}}. \tag{3.50}\] The condition that the curvature is constant and equal to \(-1\), except at singularities, means that the function \(\varphi(w)\) satisfies Liouville's equation \[\partial_{w}\partial_{\bar{w}}\varphi=\frac{1}{2}e^{\varphi}, \tag{3.51}\] on \(X_{O}^{\text{reg}}\). Moreover, as discussed in Lemma C.1, one can derive the following asymptotic behavior for \(\varphi(w)\) near cusps and conical singularities: \[\varphi(w)=\left\{\begin{aligned} &-2(1-\frac{1}{m_{i}})\log|w-w_{i}|+\log\frac{4|J_{1}^{(i)}|^{-\frac{2}{m_{i}}}}{m_{i}^{2}}+\mathcal{O}\left(1\right)& w\to w_{i},\\ &-2\log|w-w_{j}|-2\log\left|\log\left|\frac{w-w_{j}}{J_{1}^{(j)}}\right|\right|+\mathcal{O}\left(1\right)& w\to w_{j},\\ &-2\log|w|-2\log\log\left|\frac{w}{J_{-1}^{(n)}}\right|+\mathcal{O}\big{(}|w|^{-1}\big{)},& w\to\infty,\end{aligned}\right.\] with \(i=1,\ldots,n_{e}\) and \(j=n_{e}+1,\ldots,n-1\). Also, we have42 Footnote 42: Liouville's theorem from complex analysis is used. \[T_{\varphi}(w)=\partial_{w}^{2}\varphi-\frac{1}{2}(\partial_{w}\varphi)^{2}=\sum_{i=1}^{n-1}\left(\frac{h_{i}}{2(w-w_{i})^{2}}+\frac{c_{i}}{w-w_{i}}\right), \tag{3.52}\] with \(h_{i}=1\) for \(i=n_{e}+1,\ldots,n-1\) and \[T_{\varphi}(w)=\frac{1}{2w^{2}}+\frac{c_{n}}{w^{3}}+\mathcal{O}\big{(}|w|^{-4}\big{)}\quad\text{as}\quad w\to\infty, \tag{3.53}\] where \[\left\{\begin{aligned} c_{i}&\equiv-\frac{h_{i}J_{2}^{(i)}}{\left(J_{1}^{(i)}\right)^{2}},&\qquad i=1,\ldots,n_{e},\\ c_{j}&\equiv-\frac{J_{2}^{(j)}}{\left(J_{1}^{(j)}\right)^{2}},&\qquad j=n_{e}+1,\ldots,n-1,\\ c_{n}&\equiv J_{0}^{(n)},&\end{aligned}\right. \tag{3.54}\] are the so-called _accessory parameters_ of the Fuchsian differential equation. The accessory parameters \(c_{k}=c_{k}(w_{1},\ldots,w_{n-3})\) for \(k=1,\ldots,n\) can be regarded as real-analytic functions on \(\mathcal{M}_{0,n}\).43 Footnote 43: This is a consequence of the fact that the Liouville field \(\varphi\) is a real-analytic function on \(\mathcal{M}_{0,n}\). In addition, the expansion of (3.52) around \(w\to\infty\) gives \[T_{\varphi}(w)=\frac{1}{w}\sum_{k=1}^{n-1}c_{k}+\frac{1}{w^{2}}\sum_{k=1}^{n-1}\left(\frac{h_{k}}{2}+c_{k}w_{k}\right)+\frac{1}{w^{3}}\sum_{k=1}^{n-1}w_{k}(h_{k}+c_{k}w_{k})+\mathcal{O}\big{(}|w|^{-4}\big{)}\quad\text{as}\quad w\to\infty.\] By equating the above expansion with (3.53), we get three conditions on the accessory parameters: \[\sum_{k=1}^{n-1}c_{k}=0,\qquad\sum_{k=1}^{n-1}(h_{k}+2c_{k}w_{k})=1,\qquad\sum_{k=1}^{n-1}w_{k}(h_{k}+c_{k}w_{k})=c_{n}, \tag{3.55}\] which enables us to express \(c_{n-2}\), \(c_{n-1}\), and \(c_{n}\) explicitly in terms of \(w_{1},\ldots,w_{n-1}\) and the remaining \(n-3\) accessory parameters. Consider the Riemann orbisurface \(O\cong[\mathbb{H}/\Gamma]\) as a base point in the Teichmuller space \(\mathcal{T}_{0,n}(\mathbf{m})\). Moreover, consider the solution of the Beltrami equation (3.3) such that the fixed points of \(\kappa_{n-2},\kappa_{n-1},\kappa_{n}\) are at \(z_{n-2}=0\), \(z_{n-1}=1\), and \(z_{n}=\infty\). 
Then the generators \(\kappa_{n-2}^{\mu},\kappa_{n-1}^{\mu},\kappa_{n}^{\mu}\) of \(\Gamma^{\mu}=f^{\mu}\circ\Gamma\circ(f^{\mu})^{-1}\) will also have fixed points \(0\), \(1\), and \(\infty\), respectively. Accordingly, the Riemann orbisurface \(O^{\mu}\cong[\mathbb{H}/\Gamma^{\mu}]\) can be uniquely and complex-analytically embedded in \(\hat{\mathbb{C}}\) in such a way that the punctures on \(O^{\mu}\) corresponding to the elements \(\kappa_{n-2}^{\mu}\), \(\kappa_{n-1}^{\mu}\), and \(\kappa_{n}^{\mu}\) are mapped to \(0\), \(1\), and \(\infty\). Denote by \(J_{\mu}\) the normalized covering map \(J_{\mu}:\mathbb{H}\to O^{\mu}\) corresponding to this embedding and let \(w_{i}^{\mu}=\left(J_{\mu}\circ f^{\mu}\right)(z_{i})\). Then, the map \(\Psi:\mathcal{T}_{0,n}(\mathbf{m})\to\mathbb{C}^{n-3}\), defined by \[\Psi\circ\Phi(\mu)=(w_{1}^{\mu},\ldots,w_{n-3}^{\mu})\in\mathbb{C}^{n-3}, \tag{3.56}\] is well defined, and its image in \(\mathbb{C}^{n-3}\) coincides with \(\mathcal{M}_{0,n}\). According to the above considerations, we have the following commutative diagram \[\begin{CD}\mathbb{H}@>{f^{\mu}}>{}>\mathbb{H}\\ @V{}V{J}V@V{}V{J_{\mu}}V\\ O@>{F^{\mu}}>{}>O^{\mu}\end{CD} \tag{3.57}\] where \(F^{\mu}\) is the mapping of \(O\) onto \(O^{\mu}\). From the above commutative diagram, we can deduce that \(F^{\mu}\) is a quasi-conformal homeomorphism of \(\mathbb{C}\) onto itself with \[\partial_{\bar{w}}F^{\mu}=M\partial_{w}F^{\mu}, \tag{3.58}\] where \[M\stackrel{{\text{\tiny def.}}}{{=}}\left(\mu\circ J^{-1}\right)\frac{\overline{(J^{-1})^{\prime}}}{(J^{-1})^{\prime}}. \tag{3.59}\] Consider the Beltrami differential \(\varepsilon\mu\), where \(\mu\in\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) and \(\varepsilon\) is a sufficiently small complex number. The function \(f^{\varepsilon\mu}(z)\) is a real-analytic function of \(\varepsilon\) for each particular \(z\in\mathbb{H}\). Let \[\dot{f}_{+}^{\mu}\stackrel{{\text{\tiny def.}}}{{=}}\left.\left(\frac{\partial}{\partial\varepsilon}f^{\varepsilon\mu}\right)\right|_{\varepsilon=0},\qquad\dot{f}_{-}^{\mu}\stackrel{{\text{\tiny def.}}}{{=}}\left.\left(\frac{\partial}{\partial\bar{\varepsilon}}f^{\varepsilon\mu}\right)\right|_{\varepsilon=0}; \tag{3.60}\] then (see [85, Section V.C]) \[\begin{cases}\dot{f}_{+}^{\mu}=-\frac{1}{\pi}\iint_{\mathbb{H}}\mu(z^{\prime})\,R(z^{\prime},z)\,\mathrm{d}^{2}z^{\prime}\,,\\ \dot{f}_{-}^{\mu}=-\frac{1}{\pi}\iint_{\mathbb{H}}\overline{\mu(z^{\prime})}\,R(\overline{z^{\prime}},z)\,\mathrm{d}^{2}z^{\prime}\,,\end{cases} \tag{3.61}\] where \[R(z^{\prime},z)\stackrel{{\text{\tiny def.}}}{{=}}\frac{1}{z^{\prime}-z}+\frac{z-1}{z^{\prime}}-\frac{z}{z^{\prime}-1}=\frac{z(z-1)}{(z^{\prime}-z)z^{\prime}(z^{\prime}-1)}. \tag{3.62}\] In turn, the function \(F^{\varepsilon\mu}(w)\) is holomorphic with respect to \(\varepsilon\) for each particular \(w\in\mathbb{C}\), and \[\dot{F}^{\mu}(w)=-\frac{1}{\pi}\iint_{\mathbb{C}}M(w^{\prime})\,R(w^{\prime},w)\,\mathrm{d}^{2}w^{\prime}\,, \tag{3.63}\] where \(\dot{F}^{\mu}\) is understood to be given by \((\partial F^{\varepsilon\mu}/\partial\varepsilon)|_{\varepsilon=0}\) and \(M\) was defined by Eq.(3.59).44 Footnote 44: According to (3.58), \(\partial_{\bar{w}}\dot{F}=M\). The Green-function equation and its solution are given by \(\partial_{\bar{w}}R(w^{\prime},w)=-\pi\delta(w^{\prime},w)\) and (3.63), respectively. Therefore, the kernel \(R(w^{\prime},w)\), roughly speaking, inverts the action of the \(\bar{\partial}\)-operator on Beltrami differentials on \(\hat{\mathbb{C}}\). 
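The role of the kernel (3.62) can be seen in a two-line symbolic computation. The sketch below (ours, not from the references) confirms the partial-fraction identity in (3.62) and the normalization \(R(z^{\prime},0)=R(z^{\prime},1)=0\), which is what forces the solutions (3.61) and (3.63) to fix \(0\), \(1\), and \(\infty\).

```python
# Sketch: symbolic checks of the kernel (3.62), using sympy.
import sympy as sp

zp, z = sp.symbols("zp z")  # zp plays the role of z'
R = 1 / (zp - z) + (z - 1) / zp - z / (zp - 1)
closed_form = z * (z - 1) / ((zp - z) * zp * (zp - 1))
assert sp.simplify(R - closed_form) == 0   # the identity stated in (3.62)
assert sp.simplify(R.subs(z, 0)) == 0      # R(z', 0) = 0
assert sp.simplify(R.subs(z, 1)) == 0      # R(z', 1) = 0
print("kernel identities of (3.62) verified")
```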
The precise statement (see [95, Lemma 5] and [85, Section V.C]) is essentially a version of the Pompeiu formula. Proofs of these assertions can be found in [85, 86, 96]. Moreover, let \[R_{i}(w)=-\frac{1}{\pi}R(w,w_{i})=-\frac{w_{i}(w_{i}-1)}{\pi(w-w_{i})w(w-1)},\qquad\qquad i=1,\ldots,n-3; \tag{3.64}\] the \(R_{i}\) are linearly independent and generate the space \(\mathcal{H}^{2,0}(O)\). Denote by \(\{Q_{i}\}_{i=1,\ldots,n-3}\) the basis in \(\mathcal{H}^{2,0}(O)\) biorthogonal to \(\{R_{i}\}_{i=1,\ldots,n-3}\) in the sense of the inner product (C.27) -- i.e. \(\langle R_{i},Q_{j}\rangle=\delta_{ij}\), where \(\delta_{ij}\) is the Kronecker delta. The desired basis in \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\cong T_{\Phi(0)}\mathcal{T}(\Gamma)\) has the form \[\mu_{i}(z)=\rho(z)^{-1}\overline{q_{i}(z)}, \tag{3.65}\] where \(q_{i}(z)=Q_{i}\circ J(z)\,J^{\prime}(z)^{2}\) for \(i=1,\ldots,n-3\) form a basis of the complex vector space \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\cong T_{\Phi(0)}^{*}\mathcal{T}(\Gamma)\). These \(q_{i}\) are biorthogonal to \(r_{i}=R_{i}\circ J\,J^{\prime 2}\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) with respect to the Petersson inner product.45 The bases \(q_{i}^{\Phi(\mu)}\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma^{\mu})\) and \(\mu_{i}^{\Phi(\mu)}\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma^{\mu})\) for \(\Phi(\mu)\in\mathcal{T}(\Gamma)\) can also be defined in a similar way. Then, the following lemma connects the motion of punctures and conical singularities on \(\hat{\mathbb{C}}\) with the geometry of Teichmuller space: Footnote 45: In the space \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\), it is defined as \(\langle q_{1},q_{2}\rangle=\iint_{\mathcal{F}(\Gamma)}q_{1}(z)\,\overline{q_{2}(z)}\,\rho^{-1}(z)\,\mathrm{d}^{2}z\,.\) **Lemma 3.3**.: _The mapping \(\Psi:\mathcal{T}_{0,n}(\boldsymbol{m})\to\mathcal{M}_{0,n}\) is a complex-analytic covering, and we have_ \[\mathrm{d}\Psi_{\Phi(\mu)}\left(\mu_{i}^{\Phi(\mu)}\right)=\frac{\partial}{\partial w_{i}^{\mu}}\quad\text{and}\quad\Psi_{\Phi(\mu)}^{*}(\mathrm{d}w_{i}^{\mu})=\mathrm{r}_{i}^{\Phi(\mu)},\qquad i=1,\ldots,n-3. \tag{3.66}\] Proof.: Let us start by proving the statement \(\mathrm{d}\Psi_{\Phi(\mu)}\left(\mu_{i}^{\Phi(\mu)}\right)=\frac{\partial}{\partial w_{i}^{\mu}}\), which essentially repeats the proof of Lemma 3 in [7]; Remark 3.1 implies that it is sufficient to verify this statement at the point \(\Phi(0)\). In a neighborhood of \(\Phi(0)\in\mathcal{T}(\Gamma)\) the Bers' coordinates \((t_{1},\ldots,t_{n-3})\in\mathbb{C}^{n-3}\) are determined in the basis \(\mu_{1},\ldots,\mu_{n-3}\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) from the expansion \[\mu=\sum_{i=1}^{n-3}t_{i}\mu_{i}, \tag{3.67}\] where \(\mu\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\cap\mathfrak{D}(\Gamma)\). In these coordinates, the mapping \(\Psi\) is given by \[(t_{1},\ldots,t_{n-3})\stackrel{{\Psi}}{{\longmapsto}}\left(F^{\mu}(w_{1}),\ldots,F^{\mu}(w_{n-3})\right), \tag{3.68}\] where \((w_{1},\ldots,w_{n-3})=\Psi\circ\Phi(0)\in\mathcal{M}_{0,n}\). Since \(F^{\mu}\) depends complex-analytically on \(t_{1},\ldots,t_{n-3}\), the mapping \(\Psi\) is also complex-analytic. 
We now compute its differential \(\mathrm{d}\Psi\) at the point \(\Phi(0)\in\mathcal{T}(\Gamma)\): it follows from the definitions of \(\mu_{i}\) and \(M_{i}\), equations (3.65) and (3.59), as well as Eq.(3.50), that \[M_{i}=\mu_{i}\circ J^{-1}\frac{\overline{(J^{-1})^{\prime}}}{(J^{-1})^{\prime}}=\left(\mathrm{Im}\,J^{-1}\right)^{2}\overline{q_{i}\circ J^{-1}}\,\frac{\overline{(J^{-1})^{\prime}}}{(J^{-1})^{\prime}}=e^{-\varphi}\overline{Q_{i}}, \tag{3.69}\] where \(Q_{i}=q_{i}\circ J^{-1}(J^{-1})^{\prime 2}\). Then, using the above equation, the definition of the \(R_{i}\) in (3.64), and Eq.(3.63), we get \[\dot{F}^{\mu_{i}}(w_{j})=\iint_{\mathbb{C}}M_{i}(w)\,R_{j}(w)\,\mathrm{d}^{2}w=\iint_{\mathbb{C}}e^{-\varphi(w)}\,\overline{Q_{i}(w)}\,R_{j}(w)\,\mathrm{d}^{2}w=\langle R_{j},Q_{i}\rangle=\delta_{ij}, \tag{3.70}\] by the biorthogonality of the bases \(\{R_{i}\}\) and \(\{Q_{i}\}\). In view of (3.68), this proves that \(\mathrm{d}\Psi_{\Phi(0)}(\mu_{i})=\partial/\partial w_{i}\); the second statement, \(\Psi_{\Phi(\mu)}^{*}(\mathrm{d}w_{i}^{\mu})=\mathrm{r}_{i}^{\Phi(\mu)}\), then follows by duality. **Remark 3.6**.: We can also rewrite the expression for the Energy-Momentum Tensor (3.52) using \(R_{i}(w)\), \[\operatorname{Sch}\left(J^{-1};w\right)=\sum_{i=1}^{n}h_{i}\mathcal{E}_{i}(w)-\pi\sum_{i=1}^{n-3}c_{i}R_{i}(w), \tag{3.71}\] where \[\begin{cases}\mathcal{E}_{i}(w)=\frac{1}{2(w-w_{i})^{2}}-\frac{1}{2w(w-1)},&i=1,\dots,n-1,\\ \mathcal{E}_{i}(w)=\frac{1}{2w(w-1)},&i=n,\end{cases} \tag{3.72}\] and \(h_{i}=1\) for \(i=n_{e}+1,\dots,n\). Accordingly, one can also define \(e_{i}(z)=\mathcal{E}_{i}\circ J\,J^{\prime 2}\) on \(\mathbb{H}\). These functions are automorphic forms of weight four for \(\Gamma\), and they have non-vanishing constant terms at the singularities. Let us obtain equation (3.71) for the simplest case with one branch point of order \(m\) at \(w_{1}\) and three punctures at \(w_{2}=0\), \(w_{3}=1\), and \(w_{4}\to\infty\), respectively. Solving (3.55) in favor of \(c_{3}\) and \(c_{2}\) gives \[c_{3}=-(c_{1}+c_{2}),\quad\ c_{2}=-\frac{1+h_{1}+2c_{1}(w_{1}-w_{3})}{2(w_{2}-w_{3})}.\] By substituting the above relations in (3.52) and noting that \(w_{2}=0\), \(w_{3}=1\), one obtains \[T_{\varphi} =h_{1}\left(\frac{1}{2(w-w_{1})^{2}}-\frac{1}{2w(w-1)}\right)+\frac{1}{2w^{2}}+\frac{1}{2(w-1)^{2}}-\frac{1}{2w(w-1)}\] \[\quad+c_{1}\left(\frac{1}{w-w_{1}}+\frac{w_{1}-1}{w}-\frac{w_{1}}{w-1}\right),\] which by using (3.72) and (3.64) is equal to (3.71). The desired general case can be obtained in the same way. **Corollary 3.1**.: _The following statements are true:_ 1. \(\dot{F}^{i}(0)=\dot{F}^{i}(1)=0\) _and_ \(\dot{F}^{i}(w)=\mathcal{O}\left(|w|^{2}\right)\) _as_ \(w\to\infty\)_._ 2. \(\partial_{\bar{w}}\dot{F}^{i}=M_{i}=e^{-\varphi}\overline{Q_{i}}\)_._ 3. _The functions_ \(\dot{F}^{i}(w)\) _have the following asymptotics_ \[\dot{F}^{i}(w)=\begin{cases}\delta_{ij}+(w-w_{j})\,\partial_{w}\dot{F}^{i}(w_{j})+\mathcal{O}\left(\frac{|w-w_{j}|}{\log|w-w_{j}|}\right)\;(j=1,\dots,n_{e})&\text{as}\;\,w\to w_{j},\\ \delta_{ij}+(w-w_{j})\,\partial_{w}\dot{F}^{i}(w_{j})+\mathcal{O}\left(\frac{|w-w_{j}|}{\log|w-w_{j}|}\right)\;(j=n_{e}+1,\dots,n-1)&\text{as}\;\,w\to w_{j},\\ w\,\partial_{w}\dot{F}^{i}(\infty)+\mathcal{O}\left(\frac{|w|}{\log|w|}\right)&\text{as}\;\,w\to\infty.\end{cases}\] Proof.: Part _(i)_ can be proved by using equation (3.63) and the expression for \(R(w,w_{i})\) in (3.64). Part _(ii)_ can also be proved by using (3.63) and (3.69). The general idea of the proof of part _(iii)_ can be found in [1, Remark 3]. 
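The algebra in Remark 3.6 above is easy to machine-check. The following sketch (our own verification, with hypothetical variable names, not part of the original text) solves the first two conditions of (3.55) for \(c_{2},c_{3}\) in the \(n=4\) case and confirms that (3.52) then coincides with the decomposition (3.71) built from (3.72) and (3.64).

```python
# Sketch: verify Remark 3.6 (one conical point of order m at w1, punctures at
# 0, 1, infinity) with sympy; h2 = h3 = h4 = 1 at the punctures.
import sympy as sp

w, w1, h1, c1 = sp.symbols("w w1 h1 c1")
c2, c3 = sp.symbols("c2 c3")

# First two accessory-parameter conditions of (3.55), with (w2, w3) = (0, 1):
eqs = [c1 + c2 + c3,
       (h1 + 2 * c1 * w1) + (1 + 2 * c2 * 0) + (1 + 2 * c3 * 1) - 1]
sol = sp.solve(eqs, [c2, c3], dict=True)[0]
# c2 should match the formula quoted in the text, specialized to w2=0, w3=1:
assert sp.simplify(sol[c2] - (1 + h1 + 2 * c1 * (w1 - 1)) / 2) == 0

# T_phi from (3.52), with the solved c2, c3 substituted:
T = (h1 / (2 * (w - w1) ** 2) + c1 / (w - w1)
     + sp.Rational(1, 2) / w ** 2 + sol[c2] / w
     + sp.Rational(1, 2) / (w - 1) ** 2 + sol[c3] / (w - 1))

# The decomposition (3.71): sum_i h_i E_i - pi * c1 * R1, via (3.72) and (3.64).
E1 = 1 / (2 * (w - w1) ** 2) - 1 / (2 * w * (w - 1))
E2 = 1 / (2 * w ** 2) - 1 / (2 * w * (w - 1))
E3 = 1 / (2 * (w - 1) ** 2) - 1 / (2 * w * (w - 1))
E4 = 1 / (2 * w * (w - 1))
R1 = -w1 * (w1 - 1) / (sp.pi * (w - w1) * w * (w - 1))
assert sp.simplify(T - (h1 * E1 + E2 + E3 + E4 - sp.pi * c1 * R1)) == 0
print("Remark 3.6: (3.52) = (3.71) verified for n = 4")
```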
In section 4, we will need the derivatives of \(\varphi\) with respect to the variables \(w_{1},\ldots,w_{n-3}\), and the following lemma will enable us to give a geometric description of them (see [7, Lemma 4]): **Lemma 3.4**.: _The Liouville field \(\varphi\) is a continuously differentiable function on \(\mathcal{M}_{0,n}\), and_ \[\partial_{w_{i}}\varphi+\dot{F}^{i}\partial_{w}\varphi+\partial_{w}\dot{F}^{i}=0,\quad\text{for}\quad i=1,\ldots,n-3. \tag{3.73}\] Proof.: It can be proved exactly in the same way as Lemma 4 in [7]. Let \(\Gamma\) be a Fuchsian group of the first kind that uniformizes the orbifold Riemann surface \(O\) and let \(f^{\mu}(z)\) be the unique solution of the Beltrami equation (3.3) with \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\ni\mu=t_{1}\,\mu_{1}+\cdots+t_{n-3}\,\mu_{n-3}\) that fixes the points \(z_{n-2}=0\), \(z_{n-1}=1\), and \(z_{n}=\infty\). The function \(F^{\mu}(w)=(J_{\mu}\circ f^{\mu}\circ J^{-1})(w)\) is differentiable with respect to \(w\) on \(O\) and depends analytically on the Bers' coordinates \(t_{1},\ldots,t_{n-3}\). It follows from the commutative diagram (3.57) that \(J_{\mu}\) and \(J_{\mu}^{\prime}\) are continuously differentiable with respect to the Bers' coordinates \(t_{1},\ldots,t_{n-3}\) and that suitable branches of the functions \(J_{\mu}^{-1}\) and \((J_{\mu}^{-1})^{\prime}\) have this property locally outside the set of singular points. The continuous differentiability of \(\varphi\) on \(\mathcal{M}_{0,n}\) now follows from Eq.(3.50) and Lemma 3.3. As for equation (3.73), it is a reformulation of Lemma 3.1, due to Ahlfors [90], on the vanishing of the first variation of the area element in the Poincare metric on \(O\) under quasi-conformal mappings corresponding to harmonic Beltrami differentials: for any \(\mu\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) \[\mathcal{L}_{\mu}\rho=\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}(f^{\varepsilon\mu})^{*}(\rho)=\frac{\partial}{\partial\varepsilon}\frac{|\partial_{z}f^{\varepsilon\mu}|^{2}}{(\operatorname{Im}f^{\varepsilon\mu})^{2}}\Bigg{|}_{\varepsilon=0}=0. \tag{3.74}\] Let \(e^{\varphi^{\mu}(w)}\,|\,\mathrm{d}w\,|^{2}\) be the hyperbolic metric on the Riemann orbisurface \(O^{\mu}\), where \(\varphi^{\mu}(w)=\varphi\,(w;F^{\mu}(w_{1}),\ldots,F^{\mu}(w_{n-3}))\). From (3.57) one has \[(f^{\mu})^{*}(\rho)=(f^{\mu})^{*}\underbrace{(J_{\mu})^{*}(e^{\varphi^{\mu}})}_{\rho}.\] Then, using Eq.(3.15) as well as \(F^{\mu}\circ J=J_{\mu}\circ f^{\mu}\), one gets \[\frac{|\partial_{z}f^{\mu}|^{2}}{(\operatorname{Im}f^{\mu})^{2}}=\exp(\varphi^{\mu}\circ J_{\mu}\circ f^{\mu})\,|\partial_{z}(J_{\mu}\circ f^{\mu})|^{2}=\exp(\varphi^{\mu}\circ F^{\mu}\circ J)\,|\partial_{w}F^{\mu}\circ J|^{2}\,|J^{\prime}|^{2}. \tag{3.75}\] Finally, it follows from (3.74) that \[\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}e^{\varphi^{\varepsilon\mu}\circ F^{\varepsilon\mu}\circ J}\,|\partial_{w}F^{\varepsilon\mu}\circ J|^{2}\,|J^{\prime}|^{2}=\frac{\partial\varphi^{\varepsilon\mu}}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}+\partial_{w}\varphi\dot{F}^{\mu}+\partial_{w}\dot{F}^{\mu}=0, \tag{3.76}\] which by setting \(\mu=\mu_{i}\) and recalling Lemma 3.3 (i.e. \(\mathrm{d}\Psi_{\Phi(0)}(\mu_{i})=\partial/\partial w_{i}\)) gives us our desired result (3.73). 
**Corollary 3.2**.: _According to Lemma C.1, \(\partial_{w_{i}}\varphi\) has the following asymptotic expansion near the singular points_ \[\partial_{w_{i}}\varphi(w)=\left\{\begin{aligned} &-\delta_{ij}\partial_{w}\varphi(w)-\big{(}(w-w_{j})\partial_{w}\varphi(w)+1\big{)}\partial_{w}\dot{F}^{i}(w)+\mathcal{O}\,(1)&\text{as}\quad w\to w_{j\neq n},\\ &-\big{(}w\partial_{w}\varphi(w)+1\big{)}\partial_{w}\dot{F}^{i}(w)+\mathcal{O}\,(1)&\text{as}\quad w\to w_{n}=\infty.\end{aligned}\right.\] Proof.: It follows from (3.73) and from the asymptotics of \(\dot{F}^{i}\) (see Corollary 3.1). **Remark 3.7**.: One can also use the results of Ahlfors and Wolpert mentioned in subsection 3.1.2, i.e. Lemma 3.1 and Eq.(3.18), to calculate the variations of \(\exp(\varphi^{\varepsilon\mu}(w))\) on the orbifold Riemann surfaces \(O^{\varepsilon\mu}=F^{\varepsilon\mu}(O)\). To do so, let us use the commutative diagram (3.57) once again to write \[(F^{\mu})^{*}(e^{\varphi^{\mu}})=(J^{-1})^{*}(f^{\mu})^{*}(\rho). \tag{3.77}\] Then, it is easy to show that the variations of hyperbolic metrics on the Riemann orbisurfaces \(O^{\varepsilon\mu}\) are given by the same formulas as in subsection 3.1.2, but with \(\rho\) replaced by \(e^{\varphi}\) and \(f_{\mu\overline{\mu}}\) replaced by \((J^{-1})^{*}(f_{\mu\overline{\mu}})=f_{\mu\overline{\mu}}\circ J^{-1}\). Moreover, for any \(\mathtt{a}\in\mathbb{R}\) we have \[(F^{\mu})^{*}(e^{\mathtt{a}\varphi^{\mu}})=\left((F^{\mu})^{*}(e^{\varphi^{\mu}})\right)^{\mathtt{a}}. \tag{3.78}\] Therefore, we get the following formulas for the first and second variations of \(\exp(\varphi^{\varepsilon\mu}(w))\) \[\left\{\begin{aligned} &\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}(F^{\varepsilon\mu})^{*}(e^{\mathtt{a}\varphi^{\varepsilon\mu}})=0,\\ &\left.\frac{\partial^{2}}{\partial\varepsilon_{1}\partial\varepsilon_{2}}\right|_{\varepsilon_{1}=\varepsilon_{2}=0}(F^{\varepsilon_{1}\mu+\varepsilon_{2}\bar{\mu}})^{*}(e^{\mathtt{a}\varphi^{\varepsilon\mu}})=\frac{\mathtt{a}}{2}e^{\mathtt{a}\varphi}f_{\mu\overline{\mu}}\circ J^{-1}.\end{aligned}\right. \tag{3.79}\] ### Schottky Space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) Let us start this subsection by recalling in more detail how a compact Riemann surface \(X\) of genus \(g\geq 2\) is uniformized by a Schottky group. We begin with a few well-known definitions. Schottky groups are an important class of _Kleinian groups_: discrete subgroups of the Mobius group \(\mathrm{PSL}(2,\mathbb{C})\) that act properly discontinuously on some domain (called the region of discontinuity) of the Riemann sphere \(\hat{\mathbb{C}}\). A _Schottky group_ \(\Sigma\) is a strictly loxodromic Kleinian group which is also free and finitely generated [98]. If we denote its limit set by \(\Lambda\) (which is a Cantor set),48 then the _region of discontinuity_ \(\Omega=\hat{\mathbb{C}}\backslash\Lambda\) is connected. Moreover, a Schottky group \(\Sigma\) of rank \(g\) will be called _marked_ if we choose a relation-free system of generators \(L_{1},\ldots,L_{g}\in\mathrm{PSL}(2,\mathbb{C})\). There is also a notion of equivalence between two marked Schottky groups: \((\Sigma;L_{1},\ldots,L_{g})\) is equivalent to \((\tilde{\Sigma};\tilde{L}_{1},\ldots,\tilde{L}_{g})\) if there exists a Mobius transformation \(\varsigma\in\mathrm{PSL}(2,\mathbb{C})\) such that \(\tilde{L}_{i}=\varsigma L_{i}\varsigma^{-1}\) for all \(i=1,\ldots,g\). 
The set of equivalence classes of marked Schottky groups of genus \(g\) is called the _Schottky space_ of genus \(g\) and is denoted by \(\mathfrak{S}_{g}\). Similar to Fuchsian groups, Schottky groups can be used to construct surfaces, since the action of \(\Sigma\) on \(\Omega\) produces a _compact_ Riemann surface \(\Omega/\Sigma\). An important result is that for every marked Schottky group \((\Sigma;L_{1},\ldots,L_{g})\) there is a _fundamental domain_ \(\mathcal{D}\)49 for \(\Sigma\) in \(\Omega\). This domain is a (connected) region in \(\hat{\mathbb{C}}\), and it is bounded by \(2g\) disjoint Jordan curves \(C_{1},\ldots,C_{g},C^{\prime}_{1},\ldots,C^{\prime}_{g}\) with \(C^{\prime}_{i}=-L_{i}(C_{i})\), \(i=1,\ldots,g\). The orientations of \(C_{i}\) and \(C^{\prime}_{i}\) are opposite, as induced from the components of \(\partial\mathcal{D}\). The standard normal form for each \(L_{i}\) is \[\bigg{(}L_{i}(w)-\tilde{a}_{i}\bigg{)}(w-\tilde{b}_{i})=\lambda_{i}\;\bigg{(}L_{i}(w)-\tilde{b}_{i}\bigg{)}(w-\tilde{a}_{i}),\qquad w\in\hat{\mathbb{C}}, \tag{3.80}\] where \(\tilde{a}_{i}\) and \(\tilde{b}_{i}\) are the respective _attracting_ and _repelling_ fixed points of the loxodromic element \(L_{i}\) and \(0<|\lambda_{i}|<1\) is the corresponding multiplier. Given this normal form, one can explicitly construct a fundamental domain \(\mathcal{D}\) in the following way:50 Let us define the Mobius transformations Footnote 50: For more details, see [99, Appendix C]. \[\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}(w)=\frac{\tilde{b}_{i}w+\tilde{a}_{i}}{w+1}, \tag{3.81}\] satisfying \(\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}(0)=\tilde{a}_{i}\) and \(\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}(\infty)=\tilde{b}_{i}\), so that the generators \(L_{i}\) of the marked Schottky group \((\Sigma;L_{1},\ldots,L_{g})\) can be written as \[L_{i}=\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}\;\varsigma_{\lambda_{i}}\;\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}^{-1}\quad\text{for}\quad i=1,\ldots,g. \tag{3.82}\] In the above equation, the Mobius transformation \(\varsigma_{\lambda_{i}}\) is defined by \(\varsigma_{\lambda_{i}}(w)=\lambda_{i}w\). Then, a fundamental domain for \((\Sigma;L_{1},\ldots,L_{g})\) is given by \[\mathcal{D}\stackrel{{\text{\tiny def.}}}{{\equiv}}\hat{\mathbb{C}}\Big{\backslash}\bigcup_{i=1}^{g}(\mathrm{D}_{i}\cup\mathrm{D}_{-i}), \tag{3.83}\] where \[\begin{cases}\mathrm{D}_{i}=\left\{w\in\mathbb{C}\left|\,\frac{|w-\tilde{a}_{i}|}{|w-\tilde{b}_{i}|}<|\mathrm{R}_{i}|\right\}=\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}\;\varsigma_{\mathrm{R}_{i}}(\mathrm{D}),\\ \\ \mathrm{D}_{-i}=\left\{w\in\mathbb{C}\left|\,\frac{|w-\tilde{b}_{i}|}{|w-\tilde{a}_{i}|}<|\mathrm{R}_{-i}|\right\}=\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}\;\varsigma_{inv}\;\varsigma_{\mathrm{R}_{-i}}(\mathrm{D}),\end{cases} \tag{3.84}\] \(\varsigma_{inv}\) is defined by \(\varsigma_{inv}(w)=-1/w\), and \(\mathrm{D}\) is the unit disk, \[\mathrm{D}=\left\{w\in\mathbb{C}\left|\,|w|<1\right\}. \tag{3.85}\] Here \(\mathrm{R}_{i}\) and \(\mathrm{R}_{-i}\) represent the radii of the disks \(\mathrm{D}_{i}\) and \(\mathrm{D}_{-i}\), respectively, and satisfy51 Footnote 51: Equation (3.86) makes clear the fact that, as mentioned in footnote 49, the fundamental domain \(\mathcal{D}\) cannot be uniquely determined by a choice of marking for the Schottky group \(\Sigma\). \[\mathrm{R}_{i}\mathrm{R}_{-i}=\lambda_{i}\quad\text{for}\quad i=1,\ldots,g. \tag{3.86}\] The boundary \(\partial\mathcal{D}=\bigcup_{i}C_{i}\cup C^{\prime}_{i}\) has components \[C_{i}=\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}\;\varsigma_{\mathrm{R}_{i}}(C)\quad\text{and}\quad C^{\prime}_{i}=\varsigma_{\tilde{a}_{i},\tilde{b}_{i}}\;\varsigma_{inv}\;\varsigma_{\mathrm{R}_{-i}}(C), \tag{3.87}\] where \(C=\partial\mathrm{D}\) is the unit circle. In the rest of this article, we will consistently presume that a marked Schottky group is _normalized_, meaning that \(\tilde{a}_{1}=0\), \(\tilde{b}_{1}=\infty\), and \(\tilde{a}_{2}=1\). In particular, this means that \(\infty\notin\mathcal{D}\). Under the canonical holomorphic map \(\Omega\to\Omega/\Sigma\), the boundary curves of a standard fundamental domain described above are mapped onto smooth non-intersecting simple closed curves \(\mathsf{a}_{1},\ldots,\mathsf{a}_{g}\) on the Riemann surface. This motivates the following terminology introduced by Bers [100]: A complete set of _retrosections_ on a Riemann surface of genus \(g\) is a choice of \(g\) smooth, simple, non-intersecting, homologically independent, closed curves \(\mathsf{a}_{1},\ldots,\mathsf{a}_{g}\). We therefore see that a marked Schottky group, together with the choice of a standard fundamental domain, determines a Riemann surface with a complete set of retrosections. A much more profound statement is that _every_ compact Riemann surface can be obtained in this way, which is the content of Koebe's classical retrosection theorem [101]: **Theorem** (Koebe).: _For every compact Riemann surface \(X\) of genus \(g\) with a complete set of retrosections \((\mathsf{a}_{1},\ldots,\mathsf{a}_{g})\), there exists a marked Schottky group of genus \(g\), \((\Sigma;L_{1},\ldots,L_{g})\), and a fundamental domain \(\mathcal{D}\subset\Omega\) for \(\Sigma\) with \(2g\) boundary curves \(C_{1},\ldots,C_{g},C^{\prime}_{1},\ldots,C^{\prime}_{g}\) such that \(X=\Omega/\Sigma\) and the map \(\Omega\to X\) sends both \(C_{i}\) and \(C^{\prime}_{i}\) to \(\mathsf{a}_{i}\). Moreover, the marked Schottky group is unique up to the equivalence \((\Sigma;L_{1},\ldots,L_{g})\sim(\tilde{\Sigma};\tilde{L}_{1},\ldots,\tilde{L}_{g})\) defined before, together with the replacements \(L_{i}\mapsto L_{i}^{-1}\)._ **Remark 3.8**.: The above theorem implies that given a compact Riemann surface \(X\) uniformized by the marked Schottky group \((\Sigma;L_{1},\ldots,L_{g})\), we can take the homology classes of \(C_{1},\ldots,C_{g}\) as the generators \([\mathsf{a}_{1}],\ldots,[\mathsf{a}_{g}]\) in the symplectic basis of the first homology group \(H_{1}(X,\mathbb{Z})\). However, determining a canonical basis for \(H_{1}(X,\mathbb{Z})\), i.e. a symplectic basis \(\{[\mathsf{a}_{1}],\ldots,[\mathsf{a}_{g}],[\mathsf{b}_{1}],\ldots,[\mathsf{b}_{g}]\}\) with intersection pairings given by \(\#\left([\mathsf{a}_{i}],[\mathsf{a}_{j}]\right)=0=\#\left([\mathsf{b}_{i}],[\mathsf{b}_{j}]\right)\) and \(\#\left([\mathsf{a}_{i}],[\mathsf{b}_{j}]\right)=\delta_{ij}\), depends also on the choice of \(\mathsf{b}\)-cycles on the Riemann surface \(X\). Therefore, we can choose the elements \([\mathsf{b}_{1}],\ldots,[\mathsf{b}_{g}]\) in the canonical basis of \(H_{1}(X,\mathbb{Z})\) such that the projections of their representative curves onto the marked Schottky group \(\Sigma\) are precisely the marked generators \(L_{1},\ldots,L_{g}\). 
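The normal form (3.80)–(3.87) is easy to experiment with numerically. In the sketch below (our own illustration, not from the text; the choices of \(\tilde{a}\), \(\tilde{b}\), \(\lambda_{1}\), and \(\mathrm{R}_{1}\) are arbitrary), a generator \(L_{1}\) is assembled via (3.82), checked against the normal form (3.80), and seen to carry the boundary circle \(C^{\prime}_{1}\) onto \(C_{1}\); note that with \(|\lambda|<1\) the pairing runs in the direction \(L_{1}(C^{\prime}_{1})=C_{1}\), the opposite convention corresponding to \(L_{1}\mapsto L_{1}^{-1}\).

```python
# Sketch: build a normalized loxodromic generator via (3.82) and check (3.80)
# and the pairing of the boundary circles in (3.87). Names are our own.
import cmath
import random

def varsigma(a, b):
    """The map (3.81): w -> (b*w + a)/(w + 1), sending 0 -> a, infinity -> b."""
    return lambda w: (b * w + a) / (w + 1)

def varsigma_inv(a, b):
    return lambda w: (w - a) / (b - w)

def generator(a, b, lam):
    """The generator (3.82): varsigma o (w -> lam*w) o varsigma^{-1}."""
    s, s_inv = varsigma(a, b), varsigma_inv(a, b)
    return lambda w: s(lam * s_inv(w))

a, b = 0.3 + 0.1j, 2.0 - 0.5j        # attracting / repelling fixed points
lam = 0.04 + 0.02j                   # multiplier, |lam| < 1
R1 = 0.3                             # radius parameter of D_1
R_minus1 = lam / R1                  # (3.86): R_1 * R_{-1} = lambda_1
L1, s, s_inv = generator(a, b, lam), varsigma(a, b), varsigma_inv(a, b)

random.seed(3)
for _ in range(5):
    w = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    ratio = (L1(w) - a) * (w - b) / ((L1(w) - b) * (w - a))
    assert abs(ratio - lam) < 1e-9            # the normal form (3.80)
    theta = random.uniform(0, 2 * cmath.pi)
    u = cmath.exp(1j * theta) / R_minus1      # |u| = 1/|R_{-1}|: C'_1 in u-coords
    assert abs(abs(s_inv(L1(s(u)))) - abs(R1)) < 1e-9  # image lands on C_1
print("normal form (3.80) and circle pairing of (3.87) verified")
```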
**Remark 3.9**.: The correspondence between the set of normalized marked Schottky groups and the Schottky space \(\mathfrak{S}_{g}\), realized as a domain within \(\mathbb{C}^{3g-3}\), is evidently bijective.52 Footnote 52: The space \(\mathfrak{S}_{g}\) is a finite covering of the moduli space \(\mathcal{M}_{g}\) of compact Riemann surfaces. For our purposes in the following sections, it will be crucial to give another (equivalent) definition of the Schottky space \(\mathfrak{S}_{g}\). Let \(\mathcal{A}^{-1,1}(\Omega,\Sigma)\) be the complex Banach space of Beltrami differentials for \(\Sigma\) and, in analogy with the Teichmuller case, let us define the deformation space \(\mathfrak{D}(\Sigma)\) to be the open ball of radius \(1\) (in the sense of the \(L^{\infty}\)-norm) in \(\mathcal{A}^{-1,1}(\Omega,\Sigma)\): \[\mathfrak{D}(\Sigma)=\left\{\mu\in\mathcal{A}^{-1,1}(\Omega,\Sigma)\,\big{|}\,\big{\|}\,\mu\big{\|}_{\infty}<1\right\}. \tag{3.88}\] A homeomorphism \(F\) of a plane domain \(\Omega\) onto another plane domain \(\tilde{\Omega}\) is said to be _quasi-conformal_ if it satisfies the Beltrami equation at each point in \(\Omega\). For each \(\mu\in\mathfrak{D}(\Sigma)\), let \(F^{\mu}\) be the unique normalized (i.e. \(F^{\mu}(0)=0\) and \(F^{\mu}(1)=1\)) solution of the corresponding Beltrami equation on \(\mathbb{C}\) that gives a quasi-conformal homeomorphism of \(\mathbb{C}\) onto itself. Then, the restriction of \(F^{\mu}\) to the region of discontinuity \(\Omega\subset\mathbb{C}\) gives the desired quasi-conformal homeomorphism \(F^{\mu}:\Omega\to\Omega^{\mu}\), and each element \(\mu\in\mathfrak{D}(\Sigma)\) gives a faithful representation \(\varrho_{\mu}\) of \(\Sigma\) in \(\mathrm{PSL}(2,\mathbb{C})\) according to the formula \(\sigma\mapsto F^{\mu}\circ\sigma\circ(F^{\mu})^{-1}\), \(\sigma\in\Sigma\). As mentioned before, two representations \(\varrho_{\mu_{1}}\) and \(\varrho_{\mu_{2}}\) are equivalent if they differ by an _inner automorphism_ of \(\mathrm{PSL}(2,\mathbb{C})\), i.e., if \(\varrho_{\mu_{2}}=\varsigma\varrho_{\mu_{1}}\varsigma^{-1}\), \(\varsigma\in\mathrm{PSL}(2,\mathbb{C})\). Accordingly, the Schottky space \(\mathfrak{S}_{g}\) is defined to be the set of equivalence classes of representations \([\varrho_{\mu}]:\Sigma\to\mathrm{PSL}(2,\mathbb{C})\), \(\mu\in\mathfrak{D}(\Sigma)\). In other words, \[\mathfrak{S}_{g}\cong\mathfrak{D}(\Sigma)/\sim, \tag{3.89}\] where \(\mu_{1}\sim\mu_{2}\) if and only if \(F^{\mu_{1}}\circ\sigma\circ(F^{\mu_{1}})^{-1}=F^{\mu_{2}}\circ\sigma\circ(F^{\mu_{2}})^{-1}\) for all \(\sigma\in\Sigma\) (or equivalently, \(F^{\mu_{1}}\big{|}_{\Lambda}=F^{\mu_{2}}\big{|}_{\Lambda}\)). At \(\mu=0\) one recovers the group \(\Sigma\), which corresponds to the base point of \(\mathfrak{S}_{g}\). The above alternative definition of the Schottky space \(\mathfrak{S}_{g}\) gives us the opportunity to also define the generalized Schottky space \(\mathfrak{S}_{g,n}(\mathbf{m})\) for Riemann orbisurfaces with signature \((g;m_{1},\ldots,m_{n_{e}};n_{p})\). 
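To get a feel for the normalization \(F^{\mu}(0)=0\), \(F^{\mu}(1)=1\) (together with \(F^{\mu}(\infty)=\infty\)), consider the simplest possible case: a constant Beltrami coefficient on \(\mathbb{C}\). The sketch below (our own toy example, not from the text) writes the normalized solution in closed form and checks the Beltrami equation by finite differences.

```python
# Sketch: for a constant coefficient mu (|mu| < 1), the normalized solution of
# the Beltrami equation fixing 0, 1, infinity is the real-linear map
#     F(w) = (w + mu * conj(w)) / (1 + mu).
mu = 0.2 + 0.1j

def F(w):
    return (w + mu * w.conjugate()) / (1 + mu)

h, w0 = 1e-6, 0.7 - 0.4j
dx = (F(w0 + h) - F(w0 - h)) / (2 * h)            # partial_x F
dy = (F(w0 + 1j * h) - F(w0 - 1j * h)) / (2 * h)  # partial_y F
dF_dw = (dx - 1j * dy) / 2                        # Wirtinger derivatives
dF_dwbar = (dx + 1j * dy) / 2
assert abs(dF_dwbar - mu * dF_dw) < 1e-8          # the Beltrami equation
assert abs(F(0 + 0j)) < 1e-15 and abs(F(1 + 0j) - 1) < 1e-15
print("normalized constant-coefficient Beltrami solution verified")
```

For non-constant \(\mu\) no such closed form exists, of course, and one relies on the existence and uniqueness theory quoted above.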
Let us consider the configuration spaces \(\mathscr{F}_{n}\,(\Omega^{\mu}/\Sigma^{\mu})=\mathscr{F}_{n}\,(\mathcal{D}^{\mu})\) with \(\Sigma^{\mu}=F^{\mu}\circ\Sigma\circ(F^{\mu})^{-1}\), \(\Omega^{\mu}=F^{\mu}(\Omega)\), and the deformation space of a marked Schottky group \((\Sigma;L_{1},\ldots,L_{g})\) together with a point \((w_{1},\ldots,w_{n_{e}},w_{n_{e}+1},\ldots,w_{n})\in\mathscr{F}_{n}\,(\mathcal{D})\), \[\mathfrak{D}(\Sigma;L_{1},\ldots,L_{g};w_{1},\ldots,w_{n_{e}};w_{n_{e}+1},\ldots,w_{n})\\ =\Big{\{}(\mu;w_{1}^{\mu},\ldots,w_{n_{e}}^{\mu};w_{n_{e}+1}^{\mu},\ldots,w_{n}^{\mu})\in\mathcal{A}^{-1,1}(\Omega,\Sigma)\times\mathscr{F}_{n}\,(\mathcal{D}^{\mu})\;\Big{|}\left\|\mu\right\|_{\infty}<1\Big{\}}, \tag{3.90}\] where \(w_{i}^{\mu}=F^{\mu}(w_{i})\). Just as in the case of \(\mathfrak{S}_{g}\), each element \(\mu\in\mathfrak{D}(\Sigma;L_{1},\ldots,L_{g};w_{1},\ldots,w_{n})\) gives a faithful representation \(\varrho_{\mu}\) of \((\Sigma;L_{1},\ldots,L_{g};w_{1},\ldots,w_{n})\) in \(\mathrm{PSL}(2,\mathbb{C})\times\mathscr{F}_{n}\,(\mathcal{D}^{\mu})\) according to the formula \(L_{i}\mapsto F^{\mu}\circ L_{i}\circ(F^{\mu})^{-1}\) for all marked generators \(L_{i}\in\Sigma\), \(i=1,\ldots,g\), and \(w_{j}\mapsto F^{\mu}(w_{j})\) for \(j=1,\ldots,n\). Two representations are then equivalent, \(\varrho_{\mu_{1}}\sim\varrho_{\mu_{2}}\), if \(F^{\mu_{1}}\circ L_{i}\circ(F^{\mu_{1}})^{-1}=F^{\mu_{2}}\circ L_{i}\circ(F^{\mu_{2}})^{-1}\) for all \(i=1,\ldots,g\), as well as \(w_{j}^{\mu_{1}}=w_{j}^{\mu_{2}}\) for all \(j=1,\ldots,n\). The Schottky space \(\mathfrak{S}_{g,n}(\mathbf{m})\) is then defined to be the set of equivalence classes of representations \([\varrho_{\mu}]\) \[\mathfrak{S}_{g,n}(\mathbf{m})\cong\mathfrak{D}(\Sigma;L_{1},\ldots,L_{g};w_{1},\ldots,w_{n_{e}};w_{n_{e}+1},\ldots,w_{n})/\sim. \tag{3.91}\] Let us recall that the Schottky uniformization of an orbisurface \(O\) is connected with its Fuchsian uniformization by the commutative diagram (13). Accordingly, each marked Fuchsian group \(\Gamma\) with signature \((g;m_{1},\ldots,m_{n_{e}},n_{p})\) corresponds to the unique marked normalized Schottky group \(\Sigma\simeq\Gamma/N\) with the domain of discontinuity \(\mathring{\Omega}\) such that \(\mathbb{H}/\Gamma\cong\mathring{\Omega}/\Sigma\), and this correspondence determines the following map \[\pi:\mathcal{T}_{g,n}(\mathbf{m})\to\mathfrak{S}_{g,n}(\mathbf{m}) \tag{3.92}\] by putting \(w_{i}=J(z_{i})\) for \(i=1,\ldots,n\).53 We can use this map to understand the tangent and cotangent space to the Schottky space \(\mathfrak{S}_{g,n}(\mathbf{m})\): elements of \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) descend to meromorphic quadratic differentials for \(\Sigma\) -- i.e. automorphic forms of weight \(4\) for \(\Sigma\) with simple poles at singular points of \(\mathring{\Omega}\). The space of meromorphic quadratic differentials for \(\Sigma\) will be denoted by \(\mathcal{H}^{2,0}(\mathring{\Omega},\Sigma)\), and each \(Q\in\mathcal{H}^{2,0}(\mathring{\Omega},\Sigma)\) has the form Footnote 53: This map has the same role as the covering map \(\Psi\) in Lemma 3.3 and is a complex covering map. \[Q(w)=(q\circ J^{-1})(w)\left(J^{-1}(w)^{\prime}\right)^{2},\qquad q\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma). \tag{3.93}\] The vector space \(\mathcal{H}^{2,0}(\mathring{\Omega},\Sigma)\) coincides with the holomorphic cotangent space \(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g,n}(\boldsymbol{m})\) to \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) at the origin. 
This implies that the holomorphic tangent space \(T_{\pi\circ\Phi(0)}\mathfrak{S}_{g,n}(\mathbf{m})\) is identified with the complex vector space \(\mathcal{H}^{-1,1}(\mathring{\Omega},\Sigma)\) of harmonic Beltrami differentials.54 Footnote 54: Harmonic with respect to the hyperbolic metric on \(\mathring{\Omega}\). Each \(M\in\mathcal{H}^{-1,1}(\mathring{\Omega},\Sigma)\) has the form \[M(w)=e^{-\varphi(w)}\overline{Q(w)},\qquad Q\in\mathcal{H}^{2,0}(\mathring{\Omega},\Sigma). \tag{3.94}\] Eq. (3.90) implies that there exists a fibration \(j:\mathfrak{S}_{g,n}(\mathbf{m})\to\mathfrak{S}_{g}\) whose fibers over the points \(\pi\circ\Phi(\mu)\in\mathfrak{S}_{g}\) are \(\mathscr{F}_{n}\left(\Omega^{\mu}/\Sigma^{\mu}\right)\); it follows that \(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g,n}(\mathbf{m})\) has a subspace \(J^{*}(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g})\cong\mathcal{H}^{2,0}(\Omega,\Sigma)\). The standard basis in this subspace \(\mathcal{H}^{2,0}(\Omega,\Sigma)\) is given by the following holomorphic automorphic forms of weight four for the Schottky group, \[P_{1}(w),\ldots,P_{3g-3}(w)\in\mathcal{H}^{2,0}(\Omega,\Sigma). \tag{3.95}\] Moreover, the \(P_{i}\)s actually coincide with the following cotangent vectors \[\mathrm{d}\lambda_{1}\,,\ldots,\mathrm{d}\lambda_{g}\,,\mathrm{d}a_{3}\,,\ldots,\mathrm{d}a_{g}\,,\mathrm{d}b_{2}\,,\ldots,\mathrm{d}b_{g}\in J^{*}(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g}). \tag{3.96}\] The subspace isomorphic to \(T^{*}_{\pi\circ\Phi(0)}\mathscr{F}_{n}\left(\mathcal{D}\right)\) corresponds to the complement of \(J^{*}(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g})\) in \(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g,n}(\mathbf{m})\). This subspace is, in fact, the cotangent space to the configuration space at \((w_{1},\ldots,w_{n})\). In other words, we have \[T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g,n}(\mathbf{m})\cong J^{*}(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g})\oplus T^{*}_{\pi\circ\Phi(0)}\mathscr{F}_{n}\left(\mathcal{D}\right). \tag{3.97}\] It follows from Eq.(3.63) that a standard basis for \(T^{*}_{\pi\circ\Phi(0)}\mathscr{F}_{n}\left(\mathcal{D}\right)\) is given by the following meromorphic automorphic forms of weight four, \[P_{3g-3+i}(w)=-\frac{1}{\pi}\sum_{\sigma\in\Sigma}R(\sigma w,w_{i})\sigma^{\prime}(w)^{2},\qquad w\in\Omega^{\text{reg}}, \tag{3.98}\] which represent \(\mathrm{d}w_{i}\) for \(i=1,\ldots,n\). Via the following pairing, which is an analog of the pairing (3.9), \[(Q,M)=\iint_{\mathcal{D}}Q(w)M(w)\,\mathrm{d}^{2}w\,, \tag{3.99}\] we can obtain the basis \(M_{1}(w),\ldots,M_{3g-3+n}(w)\) in \(\mathcal{H}^{-1,1}(\mathring{\Omega},\Sigma)\) dual to \(P_{1}(w),\ldots,P_{3g-3+n}(w)\); its last \(n\) elements coincide with the tangent vectors \(\frac{\partial}{\partial w_{1}},\ldots,\frac{\partial}{\partial w_{n}}\in T_{\pi\circ\Phi(0)}\mathfrak{S}_{g,n}(\mathbf{m})\). Similarly, the corresponding bases in the tangent and cotangent spaces to \(\mathfrak{S}_{g,n}(\mathbf{m})\) at an arbitrary point can also be defined. This implies that \(\mathrm{Sch}\left(J^{-1};w\right)=\partial_{w}^{2}\varphi(w)-\frac{1}{2}\left(\partial_{w}\varphi(w)\right)^{2}\) can be decomposed as follows55 Footnote 55: See the asymptotic behavior of \(\varphi(w)\) and its derivatives as \(w\to w_{i}\) in lemma C.1.
\[\mathrm{Sch}\left(J^{-1};w\right)=\sum_{i=1}^{n}h_{i}\mathscr{E}_{i}(w)-\pi\sum_{i=1}^{3g-3+n}c_{i}P_{i}(w), \tag{3.100}\] where \[\mathscr{E}_{i}(w)=\frac{1}{2}\sum_{\sigma\in\Sigma}\left(\frac{1}{(\sigma w-w_{i})^{2}}-\frac{1}{\sigma w(\sigma w-1)}\right)\sigma^{\prime}(w)^{2},\quad\text{for}\quad i=1,\ldots,n, \tag{3.101}\] are meromorphic automorphic forms of weight four for the Schottky group with second order poles at \(\Sigma\cdot w_{i}\), and \(c_{1},\ldots,c_{3g-3+n}\) are accessory parameters. Moreover, for the variations of the hyperbolic metric we have the same formulas introduced in subsections 3.1.2 and 3.2.1, and the corresponding variational formula remains valid in this case. Finally, on \(\mathcal{T}_{g,n}(\mathbf{m})\), each cuspidal \(\langle\cdot,\cdot\rangle^{\text{cusp}}_{\text{TZ},i}\) and elliptic \(\langle\cdot,\cdot\rangle^{\text{ell}}_{\text{TZ},i}\) metric remains invariant under the automorphism group of the covering \(\pi:\mathcal{T}_{g,n}(\mathbf{m})\to\mathfrak{S}_{g,n}(\mathbf{m})\). Accordingly, each metric descends to a Kähler metric on \(\mathfrak{S}_{g,n}(\mathbf{m})\). All of this also implies that, in analogy with the commutative diagram (3.57), we have \[\begin{CD}\mathbb{H}@>{f^{\mu}}>{}>\mathbb{H}\\ @V{J}V{}V@V{}V{J_{\mu}}V\\ \mathring{\Omega}@>{F^{\mu}}>{}>\mathring{\Omega}^{\mu}\end{CD} \tag{3.102}\] where \[\partial_{\bar{w}}F^{\mu}=M(w)\;\partial_{w}F^{\mu}\qquad\text{for}\quad w\in\mathring{\Omega}, \tag{3.103}\] \(F^{\varepsilon\mu}\) is complex-analytic in \(\varepsilon\), and \(\dot{F}^{\mu}\) is given by the analog of the corresponding formula for \(\dot{f}^{\mu}\), now with \(w\in\mathring{\Omega}\). Before closing this section let us also mention that the mapping \(J\) in diagram (13) has the following expansions near the cusps and branch points of \(\Gamma\) (see Appendix C) \[J(z)=\begin{cases} w_{i}+\sum_{k=1}^{\infty}J_{k}^{(i)}\left(\frac{z-z_{i}}{z-\bar{z}_{i}}\right)^{km_{i}}&(i=1,\ldots,n_{e}),\;\;z\to z_{i},\\ \\ w_{i}+\sum_{k=1}^{\infty}J_{k}^{(i)}\exp\!\left(-\frac{2\pi\,\sqrt{-1}k}{|\delta_{i}|(z-z_{i})}\right)&(i=n_{e}+1,\ldots,n-1),\;\;z\to z_{i},\end{cases} \tag{3.104}\] where \(w_{i}=J(z_{i})\) for \(i=1,\ldots,n-1\).56 Footnote 56: Note that since \(\Sigma\) is normalized, \(\infty\notin\Omega\). If \(e^{\varphi(w)}|\,\mathrm{d}w\,|^{2}\) denotes the push-forward of the hyperbolic metric on \(\mathbb{H}\) by the map \(J\), then the density of the hyperbolic metric on \(\mathring{\Omega}\) is once again \(\rho(w)=e^{\varphi(w)}=|(J^{-1})^{\prime}(w)|^{2}/\big{(}\operatorname{Im}J^{-1}(w)\big{)}^{2}\), where \(\varphi(w)\) is smooth on \(\Omega^{\text{reg}}\). The function \(\varphi(w)\) satisfies57 Footnote 57: This equation follows from the invariance of the hyperbolic metric on \(\Omega^{\text{reg}}\) under the action of \(\Sigma\) — i.e. \(e^{\varphi(w)}\,\mathrm{d}w\,\mathrm{d}\bar{w}=e^{\varphi(\sigma w)}\,\mathrm{d}\sigma(w)\,\mathrm{d}\overline{\sigma(w)}\) for all \(\sigma\in\Sigma\). \[\varphi(\sigma w)=\varphi(w)-\log|\sigma^{\prime}(w)|^{2}\quad\text{for}\quad w\in\Omega^{\text{reg}},\,\forall\sigma\in\Sigma. \tag{3.105}\] According to Lemma C.1, it also has the following asymptotic form \[\varphi(w)=\left\{\begin{aligned} &-2(1-\frac{1}{m_{i}})\log|w-w_{i}|+\log\frac{4|J_{1}^{(i)}|^{-\frac{2}{m_{i}}}}{m_{i}^{2}}+\mathcal{O}\left(1\right)& w\to w_{i},\\ &-2\log|w-w_{j}|-2\log\left|\log\left|\frac{w-w_{j}}{J_{1}^{(j)}}\right|\right|+\mathcal{O}\left(1\right)& w\to w_{j},\\ &-2\log|w|-2\log\log\left|\frac{w}{J_{-1}^{(n)}}\right|+\mathcal{O}\big{(}|w|^{-1}\big{)},& w\to\infty,\end{aligned}\right.
\tag{3.106}\] for \(i=1,\ldots,n_{e}\) and \(j=n_{e}+1,\ldots,n-1\). **Remark 3.10**.: It follows from the above asymptotics that (see statement 3 of lemma C.1 for more details) \[\left\{\begin{aligned} &\log\mathsf{h}_{i}=-2\log m_{i}+2\log 2-\lim_{w\to w_{i}}\left(\varphi(w)+\left(1-\frac{1}{m_{i}}\right)\log|w-w_{i}|^{2}\right),\,\,\,i=1,\ldots,n_{e},\\ &\log\mathsf{h}_{i}=\lim_{w\to w_{i}}\left(\log|w-w_{i}|^{2}+\frac{2e^{-\frac{\varphi(w)}{2}}}{|w-w_{i}|}\right),& i=n_{e}+1,\ldots,n-1,\\ &\log\mathsf{h}_{n}=\lim_{w\to\infty}\left(\log|w|^{2}-\frac{2e^{-\frac{\varphi(w)}{2}}}{|w|}\right),\end{aligned}\right. \tag{3.107}\] with \(\mathsf{h}_{i}=\left|J_{1}^{(i)}\right|^{\frac{2}{m_{i}}}\) for \(i=1,\ldots,n_{e}\), \(\mathsf{h}_{i}=\left|J_{1}^{(i)}\right|^{2}\) for \(i=n_{e}+1,\ldots,n-1\), and \(\mathsf{h}_{n}=\left|J_{-1}^{(n)}\right|^{2}\). Finally, consider the points \(w_{1},\ldots,L_{k}w_{i},\ldots,w_{n}\), corresponding to the branch points and cusps in \(z_{1},\ldots,\beta_{k}z_{i},\ldots,z_{n}\).58 Footnote 58: One can see this by recognizing that \(J\circ\beta_{k}=L_{k}\circ J\). Near the point \(\beta_{k}z_{i}\), the first coefficient in the expansion (3.104) of \(J(z)\) is given by \(L_{k}^{\prime}(w_{i})J_{1}^{(i)}\). Accordingly, the positive functions \(\mathsf{h}_{i}=\left|J_{1}^{(i)}\right|^{\frac{2}{m_{i}}}\) for \(i=1,\ldots,n_{e}\) and \(\mathsf{h}_{i}=\left|J_{1}^{(i)}\right|^{2}\) for \(i=n_{e}+1,\ldots,n\) are respectively replaced with \(\mathsf{h}_{i}\left|L_{k}^{\prime}(w_{i})\right|^{\frac{2}{m_{i}}}\) and \(\mathsf{h}_{i}\left|L_{k}^{\prime}(w_{i})\right|^{2}\) when sending \(w_{i}\) to \(L_{k}w_{i}\). Moreover, let us define \(\mathscr{L}_{i}\) as the \(i\)-th relative cotangent line bundle on \(\mathfrak{S}_{g,n}(\boldsymbol{m})\), situated along the fibers of the projection \(p_{i}:\mathfrak{S}_{g,n}(\boldsymbol{m})\to\mathfrak{S}_{g,n-1}(\boldsymbol{m})\).59 Footnote 59: This projection forgets the point \(w_{i}\). Given this understanding, we can establish the following assertion. **Lemma 3.5**.: _Hermitian metrics in the holomorphic line bundles \(\mathscr{L}_{i}\) for \(i=1,\ldots,n_{e}\) are determined by the quantities \(\mathsf{h}_{i}^{m_{i}}\)._ Proof.: To prove this lemma for the branch points, we use the transformation of the \(\mathsf{h}_{i}\)'s under the action of the generators \(L_{k}\) of the Schottky group. As explained above, sending \(w_{1},\ldots,w_{i},\ldots,w_{n_{e}}\) to \(w_{1},\ldots,L_{k}w_{i},\ldots,w_{n_{e}}\) results in \(\mathsf{h}_{i}\) being replaced by \(\mathsf{h}_{i}\left|L_{k}^{\prime}(w_{i})\right|^{\frac{2}{m_{i}}}\). Thus, we have: \[\Delta\log\mathsf{h}_{i}=\frac{1}{m_{i}}\log\left|L_{k}^{\prime}(w_{i})\right|^{2}. \tag{3.108}\] This means that \(\mathsf{h}_{i}^{m_{i}}\) is a Hermitian metric in the line bundle \(\mathscr{L}_{i}\). For the cusps, a similar argument applies, and we conclude that the \(\mathsf{h}_{i}\) determine Hermitian metrics in the line bundles \(\mathscr{L}_{i}\). ## 4 Classical Liouville Action In this section, we study the classical Liouville action for hyperbolic Riemann orbisurfaces of genus \(g=0\) and \(g>1\), separately. Before proceeding, we should mention that all the proofs in sections 4 and 5 were previously given for the case of punctured Riemann surfaces in [7] and [1]. For calculations involving punctures, we can only direct the reader to those articles.
However, for the reader's convenience and to facilitate a clearer understanding of the distinctions in the presence of conical singularities, we find it appropriate to present all proofs side by side. ### Riemann Orbisurfaces of Genus 0 Let \(O\) be a marked Riemann orbisurface with signature \((0;m_{1},\ldots,m_{n_{e}},n_{p})\) and let \(\mathbf{m}\) denote the vector of orders \((m_{1},\ldots,m_{n})\). The regularized action functional for the Liouville equation (3.51) in the presence of conical singularities with cone angles \(2\pi/m_{1},\ldots,2\pi/m_{n_{e}}\) at \(w_{1},\ldots,w_{n_{e}}\), together with punctures at \(w_{n_{e}+1},\ldots,w_{n-2}=0,w_{n-1}=1,w_{n}=\infty\), is defined as follows (see [7; 18; 95]): \[S_{\mathbf{m}}[\varphi]=\\ \lim_{\epsilon\to 0^{+}}\left(\iint_{O_{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}^{2}w+\frac{\sqrt{-1}}{2}\sum_{i=1}^{n_{e}}\left(1-\frac{1}{m_{i}}\right)\oint_{C_{i}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{i}}-\frac{\mathrm{d}w}{w-w_{i}}\right)\right.\\ \left.-2\pi\sum_{i=1}^{n_{e}}\left(1-\frac{1}{m_{i}}\right)^{2}\log\epsilon+2\pi n_{p}\log\epsilon+4\pi(n_{p}-2)\log|\log\epsilon|\right), \tag{4.1}\] where60 Footnote 60: Note that \(\lim_{\epsilon\to 0}O_{\epsilon}=O^{\text{reg}}=X_{O}^{\text{reg}}\). \[O_{\epsilon}=\mathbb{C}\setminus\left(\bigcup_{i=1}^{n-1}\left\{w\,\Big{|}\,|w-w_{i}|<\epsilon\right\}\cup\left\{w\,\Big{|}\,|w|>\epsilon^{-1}\right\}\right),\] and the circles \[C_{i}^{\epsilon}=\left\{w\,\Big{|}\,|w-w_{i}|=\epsilon\right\},\] are oriented as components of the boundary \(\partial O_{\epsilon}\). **Remark 4.1**.: When \(n_{p}=0\), the appropriate classical Liouville action is given by \[S_{\boldsymbol{m}}[\varphi]=\\ \lim_{\epsilon\to 0^{+}}\Bigg{(}\iint_{O_{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}^{2}w+\frac{\sqrt{-1}}{2}\sum_{i=1}^{n-1}\left(1-\frac{1}{m_{i}}\right)\oint_{C_{i}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{i}}-\frac{\mathrm{d}w}{w-w_{i}}\right)\\ +\frac{\sqrt{-1}}{2}\left(1+\frac{1}{m_{n}}\right)\oint_{C_{n}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\frac{\mathrm{d}w}{w}\right)-2\pi\sum_{i=1}^{n-1}\left(1-\frac{1}{m_{i}}\right)^{2}\log\epsilon-2\pi\left(1+\frac{1}{m_{n}}\right)^{2}\log\epsilon\Bigg{)}. \tag{4.2}\] **Remark 4.2**.: It is worth explaining the structure of the action (4.1), and why the terms associated with conical points look different from those associated with the punctures. The contour integral in the first line of (4.1) is added to ensure a well-defined variational principle around the conical singularities.61 Footnote 61: Alternatively, the line integrals are necessary in order to ensure the proper asymptotic behavior (3.106). See the explanation following [18, Eq. (8)] for more details. The variation of this integral cancels the boundary term arising from the variation of the bulk term. To ensure a well-defined variational principle around the punctures, additional contour integrals must be considered, which are given by [18, §2]
\[\frac{\sqrt{-1}}{2}\sum_{j=n_{e}+1}^{n-1}\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\mathrm{c.c.}\right)+\frac{\sqrt{-1}}{2}\sum_{j=n_{e}+1}^{n-1}\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{(\bar{w}-\bar{w}_{j})\log|w-w_{j}|}-\mathrm{c.c.}\right)\\ -\frac{\sqrt{-1}}{2}\oint_{C_{n}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\mathrm{c.c.}\right)-\frac{\sqrt{-1}}{2}\oint_{C_{n}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}\log|w|}-\mathrm{c.c.}\right). \tag{4.3}\] More concretely, the classical Liouville action (4.1) in the presence of these additional contour integrals takes the form [18, Eq. (8)] \[S_{\boldsymbol{m}}[\varphi]=\\ \lim_{\epsilon\to 0^{+}}\left(\iint_{O_{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}^{2}w+\frac{\sqrt{-1}}{2}\sum_{i=1}^{n-1}\left(1-\frac{1}{m_{i}}\right)\oint_{C_{i}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{i}}-\frac{\mathrm{d}w}{w-w_{i}}\right)\right.\\ \left.+\frac{\sqrt{-1}}{2}\sum_{j=n_{e}+1}^{n-1}\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{(\bar{w}-\bar{w}_{j})\log|w-w_{j}|}-\frac{\mathrm{d}w}{(w-w_{j})\log|w-w_{j}|}\right)-\frac{\sqrt{-1}}{2}\oint_{C_{n}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\frac{\mathrm{d}w}{w}\right)\right.\\ \left.-\frac{\sqrt{-1}}{2}\oint_{C_{n}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}\log|w|}-\frac{\mathrm{d}w}{w\log|w|}\right)-2\pi\sum_{i=1}^{n}\left(1-\frac{1}{m_{i}}\right)^{2}\log\epsilon\right), \tag{4.4}\] where \(m_{i}=\infty\) for \(i=n_{e}+1,\ldots,n\). By substituting the asymptotic form (3.106) of the Liouville field \(\varphi\) near the punctures in the contour integrals of (4.4), one gets, up to \(\mathcal{O}\left(1\right)\) terms, \[\frac{\sqrt{-1}}{2}\sum_{j=n_{e}+1}^{n-1}\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\mathrm{c.c.}\right)+\frac{\sqrt{-1}}{2}\sum_{j=n_{e}+1}^{n-1}\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{(\bar{w}-\bar{w}_{j})\log|w-w_{j}|}-\mathrm{c.c.}\right)\\ -\frac{\sqrt{-1}}{2}\oint_{C_{n}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\mathrm{c.c.}\right)-\frac{\sqrt{-1}}{2}\oint_{C_{n}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}\log|w|}-\mathrm{c.c.}\right)=4\pi n_{p}\log\epsilon+4\pi(n_{p}-2)\log|\log\epsilon|. \tag{4.5}\] Since the contour integral (4.3) evaluated on-shell is merely a divergent term, we can add it to the counterterm \(-2\pi n_{p}\log\epsilon\) in (4.4) to get the Liouville action (4.1). The resulting regulating terms have the opposite sign (i.e., a plus sign instead of a minus sign) and correctly cancel the divergence coming from the bulk term \(\lim_{\epsilon\to 0^{+}}\iint_{O_{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}^{2}w\). Performing a similar calculation for the conical points, i.e., substituting the asymptotic form (3.106) of the Liouville field \(\varphi\) near the conical points in the contour integral of (4.1), one finds \[\frac{\sqrt{-1}}{2}\sum_{i=1}^{n_{e}}\left(1-\frac{1}{m_{i}}\right)\oint_{C_{i}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{i}}-\frac{\mathrm{d}w}{w-w_{i}}\right)=4\pi\sum_{i=1}^{n_{e}}\left[(1-\tfrac{1}{m_{i}})^{2}\log\epsilon+\tfrac{1}{2}(1-\tfrac{1}{m_{i}})\log\mathsf{h}_{i}\right]+\mathcal{O}\left(1\right).
\tag{4.6}\] Notice that, apart from the divergent part, the above expression contains a finite part (i.e., \(2\pi\sum_{i}(1-\frac{1}{m_{i}})\log\mathsf{h}_{i}\)) which behaves non-trivially under quasi-conformal transformations and therefore cannot be ignored (see lemma 5.1 for more details).62 Footnote 62: For both punctures and conical points, there are finite constant terms in the calculation of the contour integrals which can be safely ignored. These terms are not written in equations (4.5) and (4.6). We have decided to keep the line integrals around conical points in their integral form, since this form is more familiar in the literature on Liouville CFT and can also be used when the conical points are of a more general type (see, e.g. [95]). Therefore, one arrives at the classical Liouville action (4.1). **Theorem** (Takhtajan and Zograf).: _For any fixed vector of orders \(\mathbf{m}=(m_{1},\ldots,m_{n})\) such that \(\sum_{j=1}^{n_{e}}(1-\frac{1}{m_{j}})+n_{p}>2\), the function \(S_{\mathbf{m}}:\mathcal{M}_{0,n}\to\mathbb{R}\) is differentiable and_ \[c_{i}=-\frac{1}{2\pi}\frac{\partial S_{\mathbf{m}}}{\partial w_{i}}\qquad\text{for all}\qquad i=1,\ldots,n-3, \tag{4.7}\] _where the \(c_{i}\) are the accessory parameters defined by (3.54).63_ Footnote 63: See Theorem 1 in [7] for genus \(g=0\) punctured Riemann surfaces. Proof.: Let \[\tilde{S}_{\mathbf{m}}^{\epsilon}(w_{1},\ldots,w_{n-3})=\tilde{S}_{\mathbf{m}}^{(B)\epsilon}(w_{1},\ldots,w_{n-3})+\tilde{S}^{(\mathrm{ct})\epsilon}-2\pi\chi(O), \tag{4.8}\] with64 Footnote 64: In view of the Gauss-Bonnet formula for Riemann orbisurfaces [102; 103] \[\frac{\sqrt{-1}}{2}\iint_{O}e^{\varphi}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=2\pi\left(\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)+n_{p}-2\right)=-2\pi\chi(O).\] \[\tilde{S}^{(\mathrm{ct})\epsilon}=-2\pi\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)^{2}\log\epsilon+2\pi(n-n_{e})\log\epsilon+4\pi(n-n_{e}-2)\log|\log\epsilon|. \tag{4.9}\] For any \(\epsilon>0\), the function \(\tilde{S}^{\epsilon}_{\mathbf{m}}\) is continuously differentiable on \(\mathcal{M}_{0,n}\). To prove the theorem, it suffices to show that \(\mathcal{L}_{\mu_{i}}\tilde{S}^{\epsilon}_{\mathbf{m}}\) converges uniformly to \(-2\pi c_{i}\) as \(\epsilon\to 0\) in a neighborhood of any point of the moduli space \(\mathcal{M}_{0,n}\). More explicitly, one needs to show that \[\mathcal{L}_{\mu_{i}}\tilde{S}^{\epsilon}_{\mathbf{m}}=\frac{\partial\tilde{S}^{\epsilon}_{\mathbf{m}}}{\partial w_{i}}\overset{\text{\tiny def.}}{=}\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}\tilde{S}^{\epsilon}_{\mathbf{m}}(w_{1}^{\varepsilon\mu_{i}},\ldots,w_{n-3}^{\varepsilon\mu_{i}})=-2\pi c_{i}\quad\text{for}\quad i=1,\ldots,n-3, \tag{4.10}\] pointwise on the moduli space \(\mathcal{M}_{0,n}\).65 Footnote 65: We remind the reader that the basis \(\{\mu_{i}\}\) for \(T_{\Phi(0)}\mathcal{T}(\Gamma)\) has been defined in Eq.(3.65). Firstly, we can write
\[\frac{\partial\tilde{S}^{(B)\epsilon}_{\mathbf{m}}}{\partial w_{i}}=\frac{\sqrt{-1}}{2}\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}I_{\epsilon}(\varepsilon), \tag{4.11}\] where \[I_{\epsilon}(\varepsilon)=\left(I_{\epsilon}[\varphi]\circ\Psi\circ\Phi\right)(\varepsilon\mu_{i}),\] \[\varphi^{\varepsilon\mu_{i}}(w)=\varphi\Big{(}w;(\Psi\circ\Phi)(\varepsilon\mu_{i})\Big{)}=\varphi(w;\underbrace{F^{\varepsilon\mu_{i}}(w_{1})}_{w_{1}^{\varepsilon\mu_{i}}},\ldots,\underbrace{F^{\varepsilon\mu_{i}}(w_{n-3})}_{w_{n-3}^{\varepsilon\mu_{i}}}),\] and \[I_{\epsilon}[\varphi]=\iint_{O_{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C^{\epsilon}_{j}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right).\] Accordingly, one gets \[\begin{split}I_{\epsilon}(\varepsilon)=&\iint_{F^{\varepsilon\mu_{i}}(O_{\epsilon})}|\partial_{w}\varphi^{\varepsilon\mu_{i}}(w)|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{F^{\varepsilon\mu_{i}}(C^{\epsilon}_{j})}\varphi^{\varepsilon\mu_{i}}(w)\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\mathrm{d}w}{w-w_{j}^{\varepsilon\mu_{i}}}\right),\end{split} \tag{4.12}\] with \[\begin{cases}F^{\varepsilon\mu_{i}}(O_{\epsilon})=\mathbb{C}\backslash\bigcup_{k=1}^{n-1}\left\{w\,\Big{|}\,|w-w_{k}^{\varepsilon\mu_{i}}|<\epsilon\right\}\cup\left\{w\,\Big{|}\,|w|>\epsilon^{-1}\right\},\\ F^{\varepsilon\mu_{i}}(C^{\epsilon}_{j})=\left\{w\,\Big{|}\,|w-w_{j}^{\varepsilon\mu_{i}}|=\epsilon\right\}.\end{cases} \tag{4.13}\] The calculation of (4.11) can be done by repeating, almost verbatim, the corresponding computations in the proof of Theorem 1 in [7].
Accordingly, let us now use the _change of variable formula_ for differential forms, \(\int_{F(X)}\omega=\int_{X}F^{*}(\omega)\), and the commutative diagram (3.57) to write \[\begin{split}I_{\epsilon}(\varepsilon)&=\iint_{O_{\epsilon}(\varepsilon)}(F^{\varepsilon\mu_{i}})^{*}\Big{(}|\partial_{w}\varphi^{\varepsilon\mu_{i}}|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\,\Big{)}\\ &+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C^{\epsilon}_{j}(\varepsilon)}(F^{\varepsilon\mu_{i}})^{*}\left(\varphi^{\varepsilon\mu_{i}}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\mathrm{d}w}{w-w_{j}^{\varepsilon\mu_{i}}}\right)\right)\\ &=\iint_{O_{\epsilon}(\varepsilon)}|\partial_{w}\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}}|^{2}\,\mathrm{d}F^{\varepsilon\mu_{i}}(w)\wedge\mathrm{d}\overline{F^{\varepsilon\mu_{i}}}(w)\\ &+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C^{\epsilon}_{j}(\varepsilon)}(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\left(\frac{\mathrm{d}\overline{F^{\varepsilon\mu_{i}}}(w)}{\overline{F^{\varepsilon\mu_{i}}}(w)-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\mathrm{d}F^{\varepsilon\mu_{i}}(w)}{F^{\varepsilon\mu_{i}}(w)-w_{j}^{\varepsilon\mu_{i}}}\right)\\ &=\iint_{O_{\epsilon}(\varepsilon)}|\partial_{w}\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}}|^{2}\,|\partial_{w}F^{\varepsilon\mu_{i}}|^{2}(1-|\varepsilon M_{i}|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C^{\epsilon}_{j}(\varepsilon)}(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\left(\frac{\overline{\partial_{w}F^{\varepsilon\mu_{i}}}(\bar{\varepsilon}\overline{M}_{i}\,\mathrm{d}w+\mathrm{d}\bar{w})}{\overline{w^{\varepsilon\mu_{i}}}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\partial_{w}F^{\varepsilon\mu_{i}}(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w})}{w^{\varepsilon\mu_{i}}-w_{j}^{\varepsilon\mu_{i}}}\right),\end{split}\] where \[\begin{cases}O_{\epsilon}(\varepsilon)=\mathbb{C}\Big{\backslash}\bigcup_{k=1}^{n-1}\left\{w\,\Big{|}\,|w^{\varepsilon\mu_{i}}-w_{k}^{\varepsilon\mu_{i}}|<\epsilon\right\}\cup\left\{w\,\Big{|}\,|w^{\varepsilon\mu_{i}}|>\epsilon^{-1}\right\},\\ C^{\epsilon}_{j}(\varepsilon)=\left\{w\,\Big{|}\,|w^{\varepsilon\mu_{i}}-w_{j}^{\varepsilon\mu_{i}}|=\epsilon\right\}.\end{cases} \tag{4.14}\] In order to compute \(\partial I_{\epsilon}(\varepsilon)/\partial\varepsilon\big{|}_{\varepsilon=0}\), it is necessary to differentiate both the integrand and the integration domains \(O_{\epsilon}(\varepsilon)\) and \(C^{\epsilon}_{j}(\varepsilon)\): \[\begin{split}\frac{\partial}{\partial\varepsilon}&\Bigg{|}_{\varepsilon=0}I_{\epsilon}(\varepsilon)=\iint_{O_{\epsilon}}\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}|\partial_{w}\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}}|^{2}\,|\partial_{w}F^{\varepsilon\mu_{i}}|^{2}(1-|\varepsilon M_{i}|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &+\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}\iint_{O_{\epsilon}(\varepsilon)}|\partial_{w}\varphi|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C^{\epsilon}_{j}}\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\left(\frac{\overline{\partial_{w}F^{\varepsilon\mu_{i}}}(\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w})}{\overline{w^{\varepsilon\mu_{i}}}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\partial_{w}F^{\varepsilon\mu_{i}}(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w})}{w^{\varepsilon\mu_{i}}-w_{j}^{\varepsilon\mu_{i}}}\right)\\ &+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\frac{\partial}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}\oint_{C^{\epsilon}_{j}(\varepsilon)}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right).\end{split} \tag{4.15}\] The second and fourth terms in Eq.(4.15) can be computed using the formula for differentiating a given \(k\)-form \(\omega\) over a smooth family of variable domains \(O(\varepsilon)\), \[\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}\underbrace{\int\cdots\int_{O_{\epsilon}(\varepsilon)}}_{k}\omega=\underbrace{\int\cdots\int_{\partial O_{\epsilon}}}_{k-1}i_{V}(\omega), \tag{4.16}\] where \(i_{V}(\omega)\) denotes the interior product of the \(k\)-form \(\omega\) with the vector field \(V\) along \(\partial O_{\epsilon}\) corresponding to the family of curves \(\partial O_{\epsilon}(\varepsilon)\). As a result, we get \[\frac{\partial I_{\epsilon}}{\partial w_{i}}=\iint_{O_{\epsilon}}\left[(\partial_{w_{i}}\partial_{w}\varphi+\partial_{w}^{2}\varphi\dot{F}^{i})\partial_{\bar{w}}\varphi+(\partial_{w_{i}}\partial_{\bar{w}}\varphi+\partial_{w}\partial_{\bar{w}}\varphi\dot{F}^{i})\partial_{w}\varphi+|\partial_{w}\varphi|^{2}\partial_{w}\dot{F}^{i}\right]\mathrm{d}w\wedge\mathrm{d}\bar{w}\] \[-\sum_{k=1}^{n}\int_{\partial O_{k}^{\epsilon}}|\partial_{w}\varphi|^{2}\Big{(}\dot{F}^{i}(w)-\dot{F}^{i}(w_{k})\Big{)}\,\mathrm{d}\bar{w}\] \[+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\frac{\partial}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\Bigg{(}\frac{\overline{\partial_{w}F^{\varepsilon\mu_{i}}}(\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w})}{\overline{w^{\varepsilon\mu_{i}}}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\partial_{w}F^{\varepsilon\mu_{i}}(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w})}{w^{\varepsilon\mu_{i}}-w_{j}^{\varepsilon\mu_{i}}}\Bigg{)}\,, \tag{4.17}\] where the last term of Eq.(4.15) has vanished due to the fact that \(\partial C_{j}^{\epsilon}=\emptyset\).
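As an illustrative numerical aside, formula (4.16) is easy to test in the simplest two-dimensional case: for a family of disks \(O(\varepsilon)=\{|w|<1+\varepsilon\}\) and \(\omega=f\,\mathrm{d}x\wedge\mathrm{d}y\), the derivative at \(\varepsilon=0\) must equal the boundary integral of \(f\) against the normal velocity (here \(\equiv 1\)). The sketch below, with our own toy integrand, confirms this; both sides equal \(2\pi\).

```python
import numpy as np

f = lambda x, y: x**2 + y**2            # toy density: omega = f dx ^ dy

def bulk(R):
    """Integral of f over the disk of radius R (polar-coordinate quadrature)."""
    r = np.linspace(0.0, R, 1501)
    t = np.linspace(0.0, 2*np.pi, 1501)
    rr, tt = np.meshgrid(r, t)
    vals = f(rr*np.cos(tt), rr*np.sin(tt)) * rr      # Jacobian factor r
    return np.trapz(np.trapz(vals, r, axis=1), t)

# Left-hand side of (4.16): d/d(eps)|_0 of the integral over O(eps) = disk(1 + eps)
h = 1e-3
lhs = (bulk(1 + h) - bulk(1 - h)) / (2*h)

# Right-hand side: boundary integral of i_V(omega) = f * (normal speed) ds on |w| = 1,
# where the normal speed of this expanding family of disks is identically 1
t = np.linspace(0.0, 2*np.pi, 2001)
rhs = np.trapz(f(np.cos(t), np.sin(t)), t)

print(lhs, rhs)     # both approximately 2*pi
```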
By noting that \[\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)=\\ -2\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}w}{w-w_{j}}\right)-\oint_{C_{j}^{\epsilon}}\partial_{w}\varphi\log|w-w_{j}|^{2}\,\mathrm{d}w-\oint_{C_{j}^{\epsilon}}\partial_{\bar{w}}\varphi\log|w-w_{j}|^{2}\,\mathrm{d}\bar{w},\] we have \[\begin{split}\oint_{C_{j}^{\epsilon}}&(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\left(\frac{\overline{\partial_{w}F^{\varepsilon\mu_{i}}}(\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w})}{\overline{w^{\varepsilon\mu_{i}}}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\partial_{w}F^{\varepsilon\mu_{i}}(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w})}{w^{\varepsilon\mu_{i}}-w_{j}^{\varepsilon\mu_{i}}}\right)\\ &=\underbrace{-2\oint_{C_{j}^{\epsilon}}(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\left(\frac{\partial_{w}F^{\varepsilon\mu_{i}}(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w})}{F^{\varepsilon\mu_{i}}(w)-F^{\varepsilon\mu_{i}}(w_{j})}\right)}_{B_{1}}\\ &\quad\underbrace{-\oint_{C_{j}^{\epsilon}}(\partial_{w}\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\log|F^{\varepsilon\mu_{i}}(w)-F^{\varepsilon\mu_{i}}(w_{j})|^{2}\;\partial_{w}F^{\varepsilon\mu_{i}}\left(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w}\right)}_{B_{2}}\\ &\quad\underbrace{-\oint_{C_{j}^{\epsilon}}(\partial_{\bar{w}}\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\log|F^{\varepsilon\mu_{i}}(w)-F^{\varepsilon\mu_{i}}(w_{j})|^{2}\;\overline{\partial_{w}F^{\varepsilon\mu_{i}}}\left(\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w}\right)}_{B_{3}}.\end{split}\] After simple calculations and using Lemma 3.4, one can see that \[\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}B_{1}=2\oint_{C_{j}^{\epsilon}}\partial_{w}\dot{F}^{i}(w)\left(\frac{\mathrm{d}w}{w-w_{j}}\right)-2\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{M_{i}\;\mathrm{d}\bar{w}}{w-w_{j}}\right)+\mathcal{O}\left(1\right),\] \[\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}B_{2}=\oint_{C_{j}^{\epsilon}}\partial_{w}^{2}\dot{F}^{i}\log|w-w_{j}|^{2}\,\mathrm{d}w-\oint_{C_{j}^{\epsilon}}\partial_{w}\varphi\,\partial_{w}\dot{F}^{i}(w_{j})\,\mathrm{d}w-\oint_{C_{j}^{\epsilon}}\partial_{w}\varphi\;M_{i}\log|w-w_{j}|^{2}\,\mathrm{d}\bar{w}+\mathcal{O}\left(1\right),\] \[\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}B_{3}=\oint_{C_{j}^{\epsilon}}\partial_{w}\varphi\;M_{i}\log|w-w_{j}|^{2}\,\mathrm{d}\bar{w}+\oint_{C_{j}^{\epsilon}}\partial_{\bar{w}}\partial_{w}\dot{F}^{i}\log|w-w_{j}|^{2}\,\mathrm{d}\bar{w}-\oint_{C_{j}^{\epsilon}}\partial_{\bar{w}}\varphi\,\partial_{w}\dot{F}^{i}(w_{j})\,\mathrm{d}\bar{w}+\mathcal{O}\left(1\right),\] as \(\epsilon\to 0\).
This implies that \[\begin{split}\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}&\left(B_{1}+B_{2}+B_{3}\right)\\ &=-\oint_{C_{j}^{\epsilon}}\partial_{w}\dot{F}^{i}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)-\oint_{C_{j}^{\epsilon}}\partial_{w}\dot{F}^{i}(w_{j})\,\mathrm{d}\varphi-2\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{M_{i}\;\mathrm{d}\bar{w}}{w-w_{j}}\right)+\mathcal{O}\left(1\right).\end{split} \tag{4.18}\] Therefore, according to the above result, for the third term in (4.17) we get \[\begin{split}\sum_{j=1}^{n_{e}}&\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\frac{\partial}{\partial\varepsilon}\Bigg{|}_{\varepsilon=0}(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}})\left(\frac{\overline{\partial_{w}F^{\varepsilon\mu_{i}}}(\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w})}{\overline{w^{\varepsilon\mu_{i}}}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\partial_{w}F^{\varepsilon\mu_{i}}(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w})}{w^{\varepsilon\mu_{i}}-w_{j}^{\varepsilon\mu_{i}}}\right)\\ &=-\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\partial_{w}\dot{F}^{i}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)+\mathcal{O}\left(1\right).\end{split} \tag{4.19}\] Next, using Lemma 3.4, we have \[\begin{cases}\partial_{w_{i}}\partial_{w}\varphi+\partial_{w}^{2}\varphi\,\dot{F}^{i}=-\partial_{w}\varphi\,\partial_{w}\dot{F}^{i}-\partial_{w}^{2}\dot{F}^{i},\\ \partial_{w_{i}}\partial_{\bar{w}}\varphi+\partial_{w}\partial_{\bar{w}}\varphi\dot{F}^{i}=-\partial_{w}\varphi\,\partial_{\bar{w}}\dot{F}^{i}-\partial_{\bar{w}}\partial_{w}\dot{F}^{i},\end{cases} \tag{4.20}\] which makes it possible to rewrite (4.17) as \[\begin{split}\frac{\partial I_{\epsilon}}{\partial w_{i}}&=\iint_{O_{\epsilon}}\Big{[}(-\partial_{w}\varphi\,\partial_{w}\dot{F}^{i}-\partial_{w}^{2}\dot{F}^{i})\partial_{\bar{w}}\varphi+(-\partial_{w}\varphi\,\partial_{\bar{w}}\dot{F}^{i}-\partial_{\bar{w}}\partial_{w}\dot{F}^{i})\partial_{w}\varphi+|\partial_{w}\varphi|^{2}\partial_{w}\dot{F}^{i}\Big{]}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &\quad-\sum_{k=1}^{n}\int_{\partial O_{k}^{\epsilon}}|\partial_{w}\varphi|^{2}\Big{(}\dot{F}^{i}(w)-\dot{F}^{i}(w_{k})\Big{)}\,\mathrm{d}\bar{w}-\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\partial_{w}\dot{F}^{i}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)+\mathcal{O}\left(1\right)\\ &=\iint_{O_{\epsilon}}\left[\Big{(}2\partial_{w}^{2}\varphi-(\partial_{w}\varphi)^{2}\Big{)}\partial_{\bar{w}}\dot{F}^{i}-2\frac{\partial}{\partial w}\Big{(}\partial_{w}\varphi\,\partial_{\bar{w}}\dot{F}^{i}\Big{)}+\frac{\partial}{\partial\bar{w}}\Big{(}\partial_{w}\varphi\,\partial_{w}\dot{F}^{i}\Big{)}-\frac{\partial}{\partial w}\Big{(}\partial_{\bar{w}}\varphi\,\partial_{w}\dot{F}^{i}\Big{)}\right]\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &\quad-\sum_{k=1}^{n}\int_{\partial O_{k}^{\epsilon}}|\partial_{w}\varphi|^{2}\Big{(}\dot{F}^{i}(w)-\dot{F}^{i}(w_{k})\Big{)}\,\mathrm{d}\bar{w}-\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\partial_{w}\dot{F}^{i}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)+\mathcal{O}\left(1\right)\\ &=\underbrace{\iint_{O_{\epsilon}}\Big{(}2\partial_{w}^{2}\varphi-(\partial_{w}\varphi)^{2}\Big{)}\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}}_{I_{1}}\underbrace{-2\int_{\partial O_{\epsilon}}\partial_{w}\varphi\,\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}\bar{w}}_{I_{2}}\underbrace{-\int_{\partial O_{\epsilon}}\partial_{w}\varphi\,\partial_{w}\dot{F}^{i}\,\mathrm{d}w}_{I_{3}}\\ &\quad\underbrace{-\int_{\partial O_{\epsilon}}\partial_{\bar{w}}\varphi\,\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}}_{I_{4}}\underbrace{-\int_{\partial O_{\epsilon}}|\partial_{w}\varphi|^{2}\dot{F}^{i}\,\mathrm{d}\bar{w}}_{I_{5}}+\oint_{C_{i}^{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}\bar{w}\\ &\quad-\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\partial_{w}\dot{F}^{i}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)+\mathcal{O}\left(1\right).\end{split} \tag{4.21}\] Let us compute each of the integrals \(I_{1},\ldots,I_{5}\) separately, using Lemma C.1 and Corollary 3.1 as well as equations (3.52) and (3.53). We begin with the integral \(I_{1}\): \[\begin{split}I_{1}&=\iint_{O_{\epsilon}}\Big{(}2\partial_{w}^{2}\varphi-(\partial_{w}\varphi)^{2}\Big{)}\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=-2\int_{\partial O_{\epsilon}}T_{\varphi}\dot{F}^{i}\,\mathrm{d}w\\ &=-2\sum_{j=1}^{n_{e}}\oint_{C_{j}^{\epsilon}}\left(\frac{h_{j}}{2(w-w_{j})^{2}}+\frac{c_{j}}{w-w_{j}}+\cdots\right)\Big{(}\delta_{ij}+(w-w_{j})\partial_{w}\dot{F}^{i}(w_{j})+\cdots\Big{)}\,\mathrm{d}w\\ &\quad-2\sum_{j=n_{e}+1}^{n-1}\oint_{C_{j}^{\epsilon}}\left(\frac{1}{2(w-w_{j})^{2}}+\frac{c_{j}}{w-w_{j}}+\cdots\right)\Big{(}\delta_{ij}+(w-w_{j})\partial_{w}\dot{F}^{i}(w_{j})+\cdots\Big{)}\,\mathrm{d}w\\ &\quad-2\oint_{C_{n}^{\epsilon}}\left(\frac{1}{2w^{2}}+\frac{c_{n}}{w^{3}}+\cdots\right)\Big{(}w\partial_{w}\dot{F}^{i}(\infty)+\cdots\Big{)}\,\mathrm{d}w\\ &=4\pi\sqrt{-1}c_{i}+2\pi\sqrt{-1}\sum_{k=1}^{n-1}h_{k}\partial_{w}\dot{F}^{i}(w_{k})-2\pi\sqrt{-1}\partial_{w}\dot{F}^{i}(\infty)+\mathcal{O}\left(1\right)\quad\text{as}\quad\epsilon\to 0.\end{split}\] In the last line we have used the notation \(h_{k}=1-1/m_{k}^{2}\) for \(k=1,\ldots,n_{e}\) and \(h_{k}=1\) for \(k=n_{e}+1,\ldots,n-1\). In addition, we have \[\begin{split}I_{2}&=-2\int_{\partial O_{\epsilon}}\partial_{w}\varphi\,\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}\bar{w}=-2\int_{\partial O_{\epsilon}}\partial_{w}\varphi(w)\,M_{i}(w)\,\mathrm{d}\bar{w}\\ &=-2\sum_{j=1}^{n_{e}}\oint_{C_{j}^{\epsilon}}\left(-\frac{1-\frac{1}{m_{j}}}{w-w_{j}}+\frac{c_{j}}{1-\frac{1}{m_{j}}}+\cdots\right)\left(\frac{\bar{q}_{1}^{(j)}}{4\bar{J}_{1}^{(j)}}\cdot\left|w-w_{j}\right|^{1-\frac{2}{m_{j}}}+\cdots\right)\mathrm{d}\bar{w}\\ &\quad-2\sum_{j=n_{e}+1}^{n-1}\oint_{C_{j}^{\epsilon}}\left(-\frac{1}{w-w_{j}}\left(1+\frac{1}{\log\left|\frac{w-w_{j}}{J_{1}^{(j)}}\right|}\right)+c_{j}+\cdots\right)\times\\ &\qquad\qquad\qquad\times\left(-\frac{|\delta_{j}|^{2}\bar{q}_{1}^{(j)}}{4\pi^{2}\bar{J}_{1}^{(j)}}\cdot\left|w-w_{j}\right|\log^{2}\left|w-w_{j}\right|+\cdots\right)\mathrm{d}\bar{w}\\ &\quad-2\oint_{C_{n}^{\epsilon}}\left(-\frac{1}{w}\left(1+\frac{1}{\log\left|\frac{w}{J_{-1}^{(n)}}\right|}\right)-\frac{c_{n}}{w^{2}}+\cdots\right)\left(-\frac{|\delta_{n}|^{2}\bar{q}_{1}^{(n)}\bar{J}_{-1}^{(n)}}{4\pi^{2}}\cdot\frac{\log^{2}\left|w\right|}{\left|w\right|}+\cdots\right)\mathrm{d}\bar{w}\\ &=\mathcal{O}\left(1\right),\end{split}\] as \(\epsilon\to 0\). Let us now calculate \(I_{3}+I_{4}\): \[I_{3}+I_{4}=-\int_{\partial O_{\epsilon}}\partial_{w}\varphi\,\partial_{w}\dot{F}^{i}\,\mathrm{d}w-\int_{\partial O_{\epsilon}}\partial_{\bar{w}}\varphi\,\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}=-\int_{\partial O_{\epsilon}}\partial_{w}\dot{F}^{i}\,\mathrm{d}\varphi=\mathcal{O}\left(1\right),\] as \(\epsilon\to 0\).
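The residue bookkeeping in \(I_{1}\) (and in \(I_{5}\) below) reduces to multiplying two short Laurent expansions and picking the coefficient of \((w-w_{j})^{-1}\), which can be delegated to a computer algebra system. A minimal sketch (symbol names are ours):

```python
import sympy as sp

w, w0 = sp.symbols('w w0')
h, c, a = sp.symbols('h c a')           # h_j, c_j and a = (d/dw) Fdot^i at w_j

T    = h/(2*(w - w0)**2) + c/(w - w0)   # local expansion of T_phi near w_j
Fdot = 1 + (w - w0)*a                   # local expansion of Fdot^i near w_j (for j = i)

res = sp.residue(T*Fdot, w, w0)
print(res)                              # -> c + a*h/2

# A counterclockwise circle integral equals 2*pi*I*res; the circles C_j^eps are
# oriented as components of the boundary of O_eps (clockwise around w_j), which
# flips the sign.  Hence -2 * (-2*pi*I) * (c + a*h/2) reproduces the first two
# terms of I_1:
print(sp.expand(-2*(-2*sp.pi*sp.I)*res))    # -> 4*pi*I*c + 2*pi*I*a*h
```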
Finally, we can turn to calculating \(I_{5}\): \[I_{5} =-\int_{\partial O_{\epsilon}}|\partial_{w}\varphi|^{2}\dot{F}^{i}\,\mathrm{d}\bar{w}\] \[=-\sum_{k=1}^{n-1}\oint_{C_{k}^{\epsilon}}|\partial_{w}\varphi|^{2}\left(\delta_{ik}+(w-w_{k})\partial_{w}\dot{F}^{i}(w_{k})+\cdots\right)\mathrm{d}\bar{w}\] \[\quad-\oint_{C_{n}^{\epsilon}}\left(\frac{1}{|w|^{2}}+\cdots\right)\left(w\partial_{w}\dot{F}^{i}(\infty)+\cdots\right)\mathrm{d}\bar{w}\] \[=-\oint_{C_{i}^{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}\bar{w}-\sum_{k=1}^{n-1}\oint_{C_{k}^{\epsilon}}\left(\frac{\left(1-\frac{1}{m_{k}}\right)^{2}}{|w-w_{k}|^{2}}+\cdots\right)\left((w-w_{k})\partial_{w}\dot{F}^{i}(w_{k})+\cdots\right)\mathrm{d}\bar{w}\] \[\quad-\oint_{C_{n}^{\epsilon}}\left(\frac{1}{|w|^{2}}+\cdots\right)\left(w\partial_{w}\dot{F}^{i}(\infty)+\cdots\right)\mathrm{d}\bar{w}\] \[=-\oint_{C_{i}^{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}\bar{w}-2\pi\sqrt{-1}\sum_{k=1}^{n-1}\left(1-\frac{1}{m_{k}}\right)^{2}\partial_{w}\dot{F}^{i}(w_{k})+2\pi\sqrt{-1}\partial_{w}\dot{F}^{i}(\infty)+\mathcal{O}\left(1\right),\] as \(\epsilon\to 0\) with \(m_{k}=\infty\) for \(k=n_{e}+1,\ldots,n-1\). As a result, we have \[I_{1}+\cdots+I_{5}=4\pi\sqrt{-1}c_{i}-\oint_{C_{i}^{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}\bar{w}+2\pi\sqrt{-1}\sum_{j=1}^{n_{e}}\left(-\frac{2}{m_{j}^{2}}+\frac{2}{m_{j}}\right)\partial_{w}\dot{F}^{i}(w_{j})+\mathcal{O}\left(1\right), \tag{4.22}\] as \(\epsilon\to 0\). Moreover, under the change of conformal structure, the variation of the counterterm action \(\tilde{S}^{(\mathrm{ct})\epsilon}\) gives \[\frac{\partial\tilde{S}^{(\mathrm{ct})\epsilon}}{\partial w_{i}}=-2\pi\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)^{2}\partial_{w}\dot{F}^{i}(w_{j})+\mathcal{O}\left(1\right)\quad\text{ as }\ \ \epsilon\to 0. \tag{4.23}\] Notice that we have only taken into account the contributions coming from the regulating terms for conical points and not the punctures. The reason for this can be traced back to equation (4.3). By doing an analysis similar to equation (4.5) for the contour integrals in (4.3), it is easy to see that the quasi-conformal transformations of these contour integrals are canceled by those of the counterterm \(-2\pi n_{p}\log\epsilon\). In other words, it is sufficient to consider only the quasi-conformal transformations of the bulk term for the case of punctures, which is in agreement with the analysis in [7]. Now, using equations (4.11), (4.22), and (4.23), and putting everything together, we arrive at our desired result \[\frac{\partial\tilde{S}^{\epsilon}_{\mathbf{m}}}{\partial w_{i}} =\frac{\sqrt{-1}}{2}\left(4\pi\sqrt{-1}c_{i}-\oint_{C^{\epsilon}_{i}}|\partial_{w}\varphi|^{2}\,\mathrm{d}\bar{w}+2\pi\sqrt{-1}\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\left(\frac{2}{m_{j}}\right)\partial_{w}\dot{F}^{i}(w_{j})\right.\] \[\left.+\oint_{C^{\epsilon}_{i}}|\partial_{w}\varphi|^{2}\,\mathrm{d}\bar{w}-2\pi\sqrt{-1}\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\left(\frac{2}{m_{j}}\right)\partial_{w}\dot{F}^{i}(w_{j})+\mathcal{O}\left(1\right)\right)\] \[=-2\pi c_{i}+\mathcal{O}\left(1\right)\quad\text{ as }\ \ \epsilon\to 0.\] In order to complete the proof, it remains to show that the remainder in the above formula can be estimated uniformly in a neighborhood of an arbitrary point \((w_{1},\dots,w_{n-3})\in\mathcal{M}_{0,n}\). Let \(\Gamma\) be a Fuchsian group uniformizing the orbisurface \(O\).
Then the Hauptmodul \(J_{\mu}\), where \(\mu=t_{1}\mu_{1}+\dots+t_{n-3}\;\mu_{n-3}\in\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\), is continuously differentiable with respect to the Bers coordinates \(t_{i}\), and its coefficients in expansions of the form (3.104) have the same property. From this we can conclude, with the help of Lemma 3.3, that the remainders in assertions (2) and (4) of Lemma C.1 can be estimated uniformly. Using the commutative diagram (3.57), we conclude that an analogous assertion is valid also for the remainders in Corollary 3.1 for \(\dot{F}^{i}\), and this completes the proof of the theorem. **Remark 4.3**.: The above theorem (Eq. (4.7)) means that \(\sum_{i=1}^{n-3}(c_{i}\,\mathrm{d}w_{i}+\bar{c}_{i}\,\mathrm{d}\bar{w}_{i})\) is an exact 1-form on \(\mathcal{M}_{0,n}\) with anti-derivative \(-S_{\mathbf{m}}/2\pi\). In addition, we can also generalize [7, Theorem 2] to include branch points: the Weil-Petersson metric defined on \(\mathcal{T}_{0,n}(\mathbf{m})\) in Sec. 3.1.3 can be projected onto \(\mathcal{M}_{0,n}\), since it is invariant under \(\operatorname{Aut}(\Psi)\). We will continue to call the resulting metric on \(\mathcal{M}_{0,n}\) the Weil-Petersson metric and denote it by the same notation \(\left\langle\cdot,\cdot\right\rangle_{\text{\tiny{WP}}}\). Then, we have the following theorem: **Theorem** (Takhtajan and Zograf).: _For any fixed vector of orders \(\mathbf{m}=(m_{1},\dots,m_{n})\) such that \(\sum_{j=1}^{n_{e}}(1-\frac{1}{m_{j}})+n_{p}>2\), the function \(-S_{\mathbf{m}}\) is a real-analytic Kähler potential for the metric \(\left\langle\cdot,\cdot\right\rangle_{{}_{WP}}\) on \(\mathcal{M}_{0,n}\):66_ \[\bar{\partial}\partial S_{\mathbf{m}}=-2\sqrt{-1}\ \omega_{\text{\it WP}}. \tag{4.24}\] Footnote 66: See Theorem 2 in [7] for the genus \(g=0\) punctured Riemann surfaces. Proof.: In order to prove this theorem, we have to show that the accessory parameters \(c_{1},\ldots,c_{n-3}\) are continuously differentiable on \(\mathcal{M}_{0,n}\), and that \[\frac{\partial c_{i}}{\partial\bar{w}_{j}}=\frac{1}{2\pi}\left\langle\frac{\partial}{\partial w_{i}},\frac{\partial}{\partial w_{j}}\right\rangle_{\text{\it WP}}\quad\text{for}\quad i,j=1,\ldots,n-3. \tag{4.25}\] The continuous differentiability of the functions \(c_{i}\) on \(\mathcal{M}_{0,n}\) follows readily from the definition of the accessory parameters (3.54) and the continuous differentiability of the Hauptmodul \(J\) with respect to the Bers coordinates. We now turn to proving Eq.(4.25), in the same way as for Theorem 2 in [7], which treats the case without branch points. Let \((w_{1},\ldots,w_{n-3})\) be an arbitrary point in \(\mathcal{M}_{0,n}\) and let \(\Gamma\) be a Fuchsian group uniformizing the orbisurface \(O\). It follows from the commutative diagram (3.57) that \[\text{Sch}\left(J_{\varepsilon\mu_{j}}^{-1}\circ F^{\varepsilon\mu_{j}};w\right)=\text{Sch}\left(f^{\varepsilon\mu_{j}}\circ J^{-1};w\right),\] where \(\mu_{j}\) is an element of the basis in \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) given by Eq.(3.65), and \(\varepsilon\in\mathbb{C}\) is sufficiently small.
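The Schwarzian manipulations used next rest on the composition rule and on the vanishing of the Schwarzian of Möbius maps; both are easy to verify symbolically. A minimal sketch (toy maps \(A\), \(B\) are our own choices):

```python
import sympy as sp

z = sp.symbols('z')

def sch(f, z):
    """Schwarzian derivative Sch(f; z) = f'''/f' - (3/2)*(f''/f')**2."""
    fp = sp.diff(f, z)
    return sp.simplify(sp.diff(f, z, 3)/fp - sp.Rational(3, 2)*(sp.diff(f, z, 2)/fp)**2)

# Moebius maps have vanishing Schwarzian:
print(sch((2*z - 1)/(z + 3), z))                          # -> 0

# Composition rule Sch(A o B) = (Sch(A) o B) * B'**2 + Sch(B), for toy A, B:
A, B = z**2, sp.exp(z)
lhs = sch(A.subs(z, B), z)                                # Sch(A o B; z)
rhs = sch(A, z).subs(z, B)*sp.diff(B, z)**2 + sch(B, z)
print(sp.simplify(lhs - rhs))                             # -> 0
```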
By using the following well-known property of the Schwarzian derivative, \[\text{Sch}\left(A\circ B;w\right)=\big{(}\text{Sch}\left(A\right)\circ B\big{)}(w)\,\left(B^{\prime}(w)\right)^{2}+\text{Sch}\left(B;w\right),\] we have \[\text{Sch}\left(J_{\varepsilon\mu_{j}}^{-1};w\right)\circ F^{\varepsilon\mu_{j}}\left(\partial_{w}F^{\varepsilon\mu_{j}}\right)^{2}+\text{Sch}\left(F^{\varepsilon\mu_{j}};w\right)=\text{Sch}\left(f^{\varepsilon\mu_{j}};z\right)\circ J^{-1}\left((J^{-1})^{\prime}\right)^{2}+\text{Sch}\left(J^{-1};w\right).\] We can now differentiate both sides of the above equality with respect to \(\bar{\varepsilon}\) at the point \(\varepsilon=0\), using Eq.(3.52) and the fact that \(F^{\varepsilon\mu_{j}}\), and as a result \(w_{i}^{\varepsilon\mu_{j}}\), are holomorphic functions of \(\varepsilon\) at \(\varepsilon=0\). The left-hand side gives \[\frac{\partial}{\partial\bar{\varepsilon}}\bigg{|}_{\varepsilon=0}\left(\text{Sch}\left(J_{\varepsilon\mu_{j}}^{-1};w\right)\circ F^{\varepsilon\mu_{j}}\left(\partial_{w}F^{\varepsilon\mu_{j}}\right)^{2}+\text{Sch}\left(F^{\varepsilon\mu_{j}};w\right)\right)=\frac{\partial}{\partial\bar{\varepsilon}}\bigg{|}_{\varepsilon=0}\sum_{i=1}^{n-1}\left(\frac{h_{i}}{2(w-w_{i}^{\varepsilon\mu_{j}})^{2}}+\frac{c_{i}^{\varepsilon\mu_{j}}}{w-w_{i}^{\varepsilon\mu_{j}}}\right)=\sum_{i=1}^{n-1}\left(\left.\frac{\partial c_{i}^{\varepsilon\mu_{j}}}{\partial\bar{\varepsilon}}\right|_{\varepsilon=0}\right)\frac{1}{w-w_{i}}, \tag{4.26}\] where \(c_{i}^{\varepsilon\mu_{j}}=c_{i}(w_{1}^{\varepsilon\mu_{j}},\ldots,w_{n-3}^{\varepsilon\mu_{j}})\). To compute the right-hand side, we will use the definition of the Schwarzian derivative (2.16), the fact that \(\partial/\partial\bar{\varepsilon}\) of the functions \(\partial_{z}f^{\varepsilon\mu_{j}}\) and \(\partial_{z}^{2}f^{\varepsilon\mu_{j}}\) vanishes at \(\varepsilon=0\), and the well-known Ahlfors formula67 [85, 90] Footnote 67: The Ahlfors formula can be derived by comparing the expression for \(f_{-}^{\mu_{j}}(z)\), given by Eq.(3.61), and the equality \(\varLambda\varGamma^{*}=\text{id}\) discussed in Section 3.1.1. \[\frac{\partial}{\partial\bar{\varepsilon}}\partial_{z}^{3}f^{\varepsilon\mu_{j}}\bigg{|}_{\varepsilon=0}=-\frac{1}{2}q_{j},\] where \(\mu_{j}\) and \(q_{j}\) are connected by the relation (3.65). Then, we have \[\begin{split}\left.\frac{\partial}{\partial\bar{\varepsilon}}\right|_{\varepsilon=0}&\left(\operatorname{Sch}\left(f^{\varepsilon\mu_{j}};z\right)\circ J^{-1}(J^{-1})^{\prime 2}+\operatorname{Sch}\left(J^{-1};w\right)\,\right)\\ &=\left.\frac{\partial}{\partial\bar{\varepsilon}}\left[\partial_{z}^{3}f^{\varepsilon\mu_{j}}\circ J^{-1}(J^{-1})^{\prime 2}\right]\right|_{\varepsilon=0}=-\frac{1}{2}q_{j}\circ J^{-1}(J^{-1})^{\prime 2}=-\frac{1}{2}Q_{j}.\end{split} \tag{4.27}\] Equating the left-hand side, (4.26), and the right-hand side, (4.27), and using Eq.(3.55), we get \[\begin{split}\sum_{i=1}^{n-1}&\left(\left.\frac{\partial c_{i}^{\varepsilon\mu_{j}}}{\partial\bar{\varepsilon}}\right|_{\varepsilon=0}\right)\frac{1}{w-w_{i}}\\ &=\sum_{i=1}^{n-3}\left(\left.\frac{\partial c_{i}^{\varepsilon\mu_{j}}}{\partial\bar{\varepsilon}}\right|_{\varepsilon=0}\right)\frac{1}{w-w_{i}}+\left(\left.\frac{\partial}{\partial\bar{\varepsilon}}c_{n-2}^{\varepsilon\mu_{j}}\right|_{\varepsilon=0}\right)\frac{1}{w}+\left(\left.\frac{\partial}{\partial\bar{\varepsilon}}c_{n-1}^{\varepsilon\mu_{j}}\right|_{\varepsilon=0}\right)\frac{1}{w-1}\\ &=\sum_{i=1}^{n-3}\left(\left.\frac{\partial c_{i}^{\varepsilon\mu_{j}}}{\partial\bar{\varepsilon}}\right|_{\varepsilon=0}\right)\frac{1}{w-w_{i}}+\left(\left.\frac{\partial}{\partial\bar{\varepsilon}}\left(-1+\frac{n}{2}+\sum_{i=1}^{n-3}c_{i}^{\varepsilon\mu_{j}}(w_{i}^{\varepsilon\mu_{j}}-1)\right)\right|_{\varepsilon=0}\right)\frac{1}{w}\\ &\qquad+\left(\left.\frac{\partial}{\partial\bar{\varepsilon}}\left(1-\frac{n}{2}-\sum_{i=1}^{n-3}c_{i}^{\varepsilon\mu_{j}}w_{i}^{\varepsilon\mu_{j}}\right)\right|_{\varepsilon=0}\right)\frac{1}{w-1}\\ &=\sum_{i=1}^{n-3}\left(\left.\frac{\partial c_{i}^{\varepsilon\mu_{j}}}{\partial\bar{\varepsilon}}\right|_{\varepsilon=0}\right)\left(\frac{1}{w-w_{i}}+\frac{w_{i}-1}{w}-\frac{w_{i}}{w-1}\right)\overset{(4.27)}{=}-\frac{1}{2}Q_{j}(w).\end{split}\] Comparing the two sides and using the definition of the Weil-Petersson metric (see [7] for details), one arrives at Eq. (4.25), which completes the proof. **Lemma 4.1**.: _A Hermitian metric in a holomorphic \(\mathbb{Q}\)-line bundle \(\lambda_{0,\boldsymbol{m}}\) over \(\mathfrak{M}_{0,n}(\boldsymbol{m})\) is determined by \(\exp[S_{\boldsymbol{m}}/\pi]\), so that_ \[\mathsf{c}_{1}\left(\lambda_{0,\boldsymbol{m}},\exp[S_{\boldsymbol{m}}/\pi]\right)=\frac{1}{\pi^{2}}\omega_{WP}. \tag{4.28}\] Proof.: We first need to show that \(\exp[S_{\boldsymbol{m}}/\pi]\) is a Hermitian metric in the holomorphic \(\mathbb{Q}\)-line bundle \(\lambda_{0,\boldsymbol{m}}\) defined in Section 3.2. To do so, we use the same representation of \(\operatorname{Symm}\left(\boldsymbol{s}\right)\) introduced there. For our purposes, it suffices to prove this lemma for the case where the signature of the orbifold Riemann surface \(O\) is given by \((0;\underbrace{m,\ldots,m}_{s},\underbrace{m^{\prime},\ldots,m^{\prime}}_{s^{\prime}})\) with \(s\equiv s_{m}\) and \(s^{\prime}\equiv s_{m^{\prime}}>3\). Namely, we have only two kinds of points, \(s\) branch points of order \(m\) and \(s^{\prime}\) branch points of order \(m^{\prime}\). Furthermore, we will fix the last three points with order \(m^{\prime}>m\) to be at \(0,1,\infty\). Then, the generators and 1-cocycles of \(\operatorname{Symm}\left(\boldsymbol{s}\right)=\operatorname{Symm}\left(s\right)\times\operatorname{Symm}\left(s^{\prime}\right)\) would be the same as the ones we introduced in Section 3.2. Moreover, in this case we have \[S_{\boldsymbol{m}}[\varphi]=\\ \lim_{\epsilon\to 0^{+}}\left(\iint_{O_{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}^{2}w+\frac{\sqrt{-1}}{2}\sum_{k=1}^{s+s^{\prime}-1}\left(1-\frac{1}{m_{k}}\right)\oint_{C_{k}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{k}}-\frac{\mathrm{d}w}{w-w_{k}}\right)\right.\\ \left.+\frac{\sqrt{-1}}{2}\left(1+\frac{1}{m^{\prime}}\right)\oint_{C_{s+s^{\prime}}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\frac{\mathrm{d}w}{w}\right)\right), \tag{4.29}\] where \[\begin{cases}O_{\epsilon}=\mathbb{C}\backslash\bigcup_{k=1}^{s+s^{\prime}-1}\left\{w\,\Big{|}\,|w-w_{k}|<\epsilon\right\}\cup\left\{w\,\Big{|}\,|w|>\epsilon^{-1}\right\},\\ C_{k}^{\epsilon}=\left\{w\,\Big{|}\,|w-w_{k}|=\epsilon\right\},\ \ \ k=1,2,\ldots,s+s^{\prime}-1\\ C_{s+s^{\prime}}^{\epsilon}=\left\{w\,\Big{|}\,|w|=1/\epsilon\right\},\end{cases} \tag{4.30}\] and the circles are oriented as components of \(\partial O_{\epsilon}\). Notice that we did not include the counterterms in (4.29).
Actually, \(\operatorname{Symm}\left(\boldsymbol{s}\right)\) does not change the conformal family; therefore the counterterms do not contribute to the variation under the action of this group, and writing them would be redundant. To study the variation of \(S_{\boldsymbol{m}}[\varphi]\) under the action of \(\operatorname{Symm}\left(\boldsymbol{s}\right)\), it suffices to study its variation under the generators \(\{(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})\}_{j=1,i=1}^{j=s-1,i=s^{\prime}-1}\) of \(\operatorname{Symm}\left(\boldsymbol{s}\right)\). Considering the background provided in Section 3.2 on the structure of the generators, the variation is implemented by the transformation \(\gamma_{j,j+1;i,i+1}\). We have: \[\Delta S_{\boldsymbol{m}}[\varphi]=S_{\boldsymbol{m}}[\varphi]\circ(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})-S_{\boldsymbol{m}}[\varphi]=\frac{\sqrt{-1}}{2}\left[\Delta S^{(1)}_{\mathbf{m}}[\varphi]+\left(1-\frac{1}{m}\right)\Delta S^{(2)}_{\mathbf{m}}[\varphi]+\left(1-\frac{1}{m^{\prime}}\right)\Delta S^{(3)}_{\mathbf{m}}[\varphi]+\left(1+\frac{1}{m^{\prime}}\right)\Delta S^{(4)}_{\mathbf{m}}[\varphi]\right], \tag{4.31}\] where \[\Delta S^{(1)}_{\mathbf{m}}[\varphi] =\iint_{\tilde{O}_{\epsilon}}(|\partial_{\gamma_{j,j+1;i,i+1}w}\tilde{\varphi}|^{2}+e^{\tilde{\varphi}})\,\mathrm{d}(\gamma_{j,j+1;i,i+1}w)\wedge\mathrm{d}(\overline{\gamma_{j,j+1;i,i+1}w})\] \[\qquad-\iint_{O_{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\,,\] \[\Delta S^{(2)}_{\mathbf{m}}[\varphi] =\sum_{k=1}^{s}\bigg{\{}\oint_{\tilde{C}^{\epsilon}_{k}}\tilde{\varphi}\left(\frac{\mathrm{d}(\overline{\gamma_{j,j+1;i,i+1}w})}{\overline{\gamma_{j,j+1;i,i+1}w}-\overline{\gamma_{j,j+1;i,i+1}w_{k}}}-\frac{\mathrm{d}(\gamma_{j,j+1;i,i+1}w)}{\gamma_{j,j+1;i,i+1}w-\gamma_{j,j+1;i,i+1}w_{k}}\right)\] \[\qquad-\oint_{C^{\epsilon}_{k}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{k}}-\frac{\mathrm{d}w}{w-w_{k}}\right)\bigg{\}},\] \[\Delta S^{(3)}_{\mathbf{m}}[\varphi] =\sum_{k=s+1}^{s+s^{\prime}-1}\bigg{\{}\oint_{\tilde{C}^{\epsilon}_{k}}\tilde{\varphi}\left(\frac{\mathrm{d}(\overline{\gamma_{j,j+1;i,i+1}w})}{\overline{\gamma_{j,j+1;i,i+1}w}-\overline{\gamma_{j,j+1;i,i+1}w_{k}}}-\frac{\mathrm{d}(\gamma_{j,j+1;i,i+1}w)}{\gamma_{j,j+1;i,i+1}w-\gamma_{j,j+1;i,i+1}w_{k}}\right)\] \[\qquad-\oint_{C^{\epsilon}_{k}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{k}}-\frac{\mathrm{d}w}{w-w_{k}}\right)\bigg{\}},\] \[\Delta S^{(4)}_{\mathbf{m}}[\varphi] =\oint_{\tilde{C}^{\epsilon}_{s+s^{\prime}}}\tilde{\varphi}\left(\frac{\mathrm{d}(\overline{\gamma_{j,j+1;i,i+1}w})}{\overline{\gamma_{j,j+1;i,i+1}w}}-\frac{\mathrm{d}(\gamma_{j,j+1;i,i+1}w)}{\gamma_{j,j+1;i,i+1}w}\right)-\oint_{C^{\epsilon}_{s+s^{\prime}}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\frac{\mathrm{d}w}{w}\right),\] and \(\tilde{O}_{\epsilon}\), \(\tilde{C}^{\epsilon}_{k}\), \(\tilde{\varphi}\) are the transformed orbifold Riemann surface, circles, and Liouville field, respectively, such that \(\tilde{O}_{\epsilon}=\mathbb{C}\backslash\cup_{k=1}^{s+s^{\prime}}\mathrm{Int}\,\tilde{C}^{\epsilon}_{k}\). By looking at (3.37), we see that the index \(j\) does not have any non-trivial effect and we only have to worry about different values of \(i\). For \(i<s^{\prime}-3\), the transformation \(\gamma_{j,j+1;i,i+1}\) is the identity of \(\mathrm{PSL}(2,\mathbb{C})\), so for these cases we have \(\Delta S_{\boldsymbol{m}}[\varphi]=0\). Accordingly, the only non-trivial cases are \(i=s^{\prime}-3,s^{\prime}-2,s^{\prime}-1\). Let us look at the case \(i=s^{\prime}-3\). Here we have the transformation \(\gamma_{j,j+1;s^{\prime}-3,s^{\prime}-2}=(w-w_{s+s^{\prime}-3})/(1-w_{s+s^{\prime}-3})\), which for simplicity we call \(\gamma\).
Let us calculate each contribution in (4.31) separately.68 Footnote 68: For \(\Delta S^{(1)}_{\mathbf{m}}[\varphi]\), the exponential terms give the same constant contributions, which drop out of the variation. \[\begin{split}\Delta S^{(1)}_{\mathbf{m}}[\varphi]&=\iint_{\tilde{O}_{\epsilon}}|\partial_{\gamma w}\tilde{\varphi}|^{2}\,\mathrm{d}(\gamma w)\wedge\mathrm{d}(\overline{\gamma w})-\iint_{O_{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &=\iint_{\gamma^{-1}(\tilde{O}_{\epsilon})}|\partial_{w}\tilde{\varphi}\circ\gamma|^{2}|\gamma^{\prime}|^{-2}(|\gamma^{\prime}|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w})-\iint_{O_{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &=\iint_{\gamma^{-1}(\tilde{O}_{\epsilon})\backslash O_{\epsilon}}|\partial_{w}\varphi|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\,,\end{split}\] where \(\gamma^{\prime}=\partial\gamma(w)/\partial w\) and in the last line we used the invariance of the hyperbolic metric: \[e^{\tilde{\varphi}\circ\gamma}\,\mathrm{d}(\gamma w)\wedge\mathrm{d}(\overline{\gamma w})=e^{\varphi}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\implies\tilde{\varphi}\circ\gamma+\log|\gamma^{\prime}|^{2}=\varphi\implies\partial_{w}(\tilde{\varphi}\circ\gamma)=\partial_{w}\varphi, \tag{4.32}\] since \(\gamma^{\prime}=1/(1-w_{s+s^{\prime}-3})\) is constant. The region \(\gamma^{-1}(\tilde{O}_{\epsilon})\backslash O_{\epsilon}\) is a part of \(\mathbb{C}\) bounded by the circles \(\tilde{C}_{k}\) and \(C_{k}\). Thus, by integration by parts, using the equations of motion, and taking into account the orientation of these circles, we have: \[\Delta S^{(1)}_{\mathbf{m}}[\varphi]=\sum_{k=1}^{s+s^{\prime}-1}\left(\oint_{\tilde{C}^{\epsilon}_{k}}\varphi\partial_{\bar{w}}\varphi\,\mathrm{d}\bar{w}-\oint_{C^{\epsilon}_{k}}\varphi\partial_{\bar{w}}\varphi\,\mathrm{d}\bar{w}\right)+\left(\oint_{\tilde{C}^{\epsilon}_{s+s^{\prime}}}\varphi\partial_{\bar{w}}\varphi\,\mathrm{d}\bar{w}-\oint_{C^{\epsilon}_{s+s^{\prime}}}\varphi\partial_{\bar{w}}\varphi\,\mathrm{d}\bar{w}\right). \tag{4.33}\] The explicit form of the circles \(\tilde{C}^{\epsilon}_{k}\) is given by \[\tilde{C}^{\epsilon}_{k}=\left\{\begin{aligned} &\left\{w\,\big{|}\,\,\,|\gamma w-\gamma w_{k}|=\epsilon\right\}=\left\{w\,\big{|}\,\,\,|w-w_{k}|=\epsilon|1-w_{s+s^{\prime}-3}|\right\}&\quad k\leq s+s^{\prime}-1,\\ \\ &\left\{w\,\big{|}\,\,\,|\gamma w|=\frac{1}{\epsilon}\right\}=\left\{w\,\big{|}\,\,\,|w|=\frac{1}{\epsilon}|1-w_{s+s^{\prime}-3}|\right\}&\quad k=s+s^{\prime}.\end{aligned}\right. \tag{4.34}\] Now, with the use of equations (4.32), (4.34) and the asymptotic form of \(\varphi\) given by (3.106), one finds69 Footnote 69: Note that the orientation of the contour around the point at infinity is opposite to that of the other points, hence the sign difference in the last term. \[\Delta S^{(1)}_{\mathbf{m}}[\varphi]=-4\pi\sqrt{-1}\left(1-\frac{1}{m}\right)^{2}\!s\log\frac{1}{|1-w_{s+s^{\prime}-3}|}-4\pi\sqrt{-1}\left(1-\frac{1}{m^{\prime}}\right)^{2}\!(s^{\prime}-1)\log\frac{1}{|1-w_{s+s^{\prime}-3}|}+4\pi\sqrt{-1}\left(1+\frac{1}{m^{\prime}}\right)^{2}\log\frac{1}{|1-w_{s+s^{\prime}-3}|}.
\tag{4.35}\] Next, for \(\Delta S^{(2)}_{\mathbf{m}}[\varphi]\) we have \[\Delta S^{(2)}_{\mathbf{m}}[\varphi]=\sum_{k=1}^{s}\left(\oint_{\tilde{C}^{\epsilon}_{k}}\tilde{\varphi}\left(\frac{\mathrm{d}(\overline{\gamma w})}{\overline{\gamma w}-\overline{\gamma w_{k}}}-\frac{\mathrm{d}(\gamma w)}{\gamma w-\gamma w_{k}}\right)-\oint_{C^{\epsilon}_{k}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{k}}-\frac{\mathrm{d}w}{w-w_{k}}\right)\right)\] \[=\sum_{k=1}^{s}\left(\oint_{\tilde{C}^{\epsilon}_{k}}(\varphi+2\log|1-w_{s+s^{\prime}-3}|)\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{k}}-\frac{\mathrm{d}w}{w-w_{k}}\right)-\oint_{C^{\epsilon}_{k}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{k}}-\frac{\mathrm{d}w}{w-w_{k}}\right)\right),\] where we have used (4.32) and \(\gamma^{\prime}=1/(1-w_{s+s^{\prime}-3})\). By using the asymptotics of \(\varphi\) given in Lemma C.1 and equation (4.34), the above expression can be simplified to give \[\Delta S^{(2)}_{\mathbf{m}}[\varphi]=-8\pi\sqrt{-1}\,\,\frac{1}{m}s\log\frac{1}{|1-w_{s+s^{\prime}-3}|}. \tag{4.36}\] By an identical calculation to that of \(\Delta S^{(2)}_{\mathbf{m}}[\varphi]\), we get: \[\Delta S^{(3)}_{\mathbf{m}}[\varphi]=-8\pi\sqrt{-1}\,\,\frac{1}{m^{\prime}}(s^{\prime}-1)\log\frac{1}{|1-w_{s+s^{\prime}-3}|}. \tag{4.37}\] Finally, \[\Delta S^{(4)}_{\mathbf{m}}[\varphi] =\oint_{\tilde{C}^{\epsilon}_{s+s^{\prime}}}\tilde{\varphi}\left(\frac{\mathrm{d}(\overline{\gamma w})}{\overline{\gamma w}}-\frac{\mathrm{d}(\gamma w)}{\gamma w}\right)-\oint_{C^{\epsilon}_{s+s^{\prime}}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\frac{\mathrm{d}w}{w}\right)\] \[=\oint_{\tilde{C}^{\epsilon}_{s+s^{\prime}}}(\varphi+2\log|1-w_{s+s^{\prime}-3}|)\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\frac{\mathrm{d}w}{w}\right)-\oint_{C^{\epsilon}_{s+s^{\prime}}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}}-\frac{\mathrm{d}w}{w}\right),\] where in the last line we approximated \(w-w_{s+s^{\prime}-3}\) by \(w\) in the denominators, since \(|w|\) is large on \(\tilde{C}^{\epsilon}_{s+s^{\prime}}\). Again, using the asymptotics of \(\varphi\) from Lemma C.1, Eq.(4.34), and taking into account the opposite orientation of contours around the point at infinity, we have: \[\Delta S_{\mathbf{m}}^{(4)}[\varphi]=-8\pi\sqrt{-1}\ \frac{1}{m^{\prime}}\log\frac{1}{|1-w_{s+s^{\prime}-3}|}. \tag{4.38}\] Putting (4.31), (4.35), (4.36), (4.37) and (4.38) together, we get \[\Delta S_{\mathbf{m}}[\varphi] = 2\pi\left(\frac{1}{m^{2}}-1\right)s\log|1-w_{s+s^{\prime}-3}|+2\pi\left(\frac{1}{m^{\prime 2}}-1\right)(s^{\prime}-2)\log|1-w_{s+s^{\prime}-3}|. \tag{4.39}\] By comparing (4.39) with equation (3.38), we can readily see that for \(i=s^{\prime}-3\): \[\Delta S_{\mathbf{m}}[\varphi]=S_{\mathbf{m}}[\varphi]\circ(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})-S_{\mathbf{m}}[\varphi]=-2\pi\log|f_{(\sigma_{j,j+1},\sigma^{\prime}_{i,i+1})}|. \tag{4.40}\] Doing the analogous calculations for \(i=s^{\prime}-2,s^{\prime}-1\) would yield similar results. One can inductively extend this proof to the case of a direct product of the symmetric groups of more strata. Furthermore, had we chosen to fix the points \(0,1\) and \(\infty\) in the other stratum, or decided to deal with punctures, we would still get the same result with minor changes in the path of the proof.
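The signs in the contour integrals above hinge on elementary residue-type identities; as a sketch (with \(C\) the circle \(|w-w_{k}|=r\) traversed counterclockwise), parametrizing \(w=w_{k}+re^{\sqrt{-1}\theta}\) gives \[\oint_{C}\frac{\mathrm{d}w}{w-w_{k}}=\int_{0}^{2\pi}\sqrt{-1}\,\mathrm{d}\theta=2\pi\sqrt{-1},\qquad\oint_{C}\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{k}}=\int_{0}^{2\pi}(-\sqrt{-1})\,\mathrm{d}\theta=-2\pi\sqrt{-1},\] so the combination \(\mathrm{d}\bar{w}/(\bar{w}-\bar{w}_{k})-\mathrm{d}w/(w-w_{k})\) integrates to \(\mp 4\pi\sqrt{-1}\) depending on the orientation; this is the source of the relative sign for the contour around infinity noted in footnote 69.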
In summary, we have proved that under the action of any element \(\eta\) of \(\operatorname{Symm}\left(\boldsymbol{s}\right)\), \(S_{\boldsymbol{m}}\) transforms according to the rule \[\exp[S_{\boldsymbol{m}}\circ\eta/\pi]\ |f_{\eta}|^{2}=\exp[S_{\boldsymbol{m}}/\pi]. \tag{4.41}\] This means that \(\exp[S_{\boldsymbol{m}}/\pi]\) is a Hermitian metric in the holomorphic \(\mathbb{Q}\)-line bundle \(\lambda_{0,\boldsymbol{m}}\) defined in Section 3.2. As mentioned before, (4.40) shows that the 1-cocycles can be viewed as a modular anomaly caused by the non-covariance of the action under the modular group. To complete the proof, we recall that the first Chern form \(\mathsf{c}_{1}\left(\lambda_{0,\boldsymbol{m}},\exp[S_{\boldsymbol{m}}/\pi]\right)\) of the metrized \(\mathbb{Q}\)-line bundle \(\lambda_{0,\boldsymbol{m}}\) with the metric \(\exp[S_{\boldsymbol{m}}/\pi]\) is given by \[\mathsf{c}_{1}\left(\lambda_{0,\boldsymbol{m}},\exp[S_{\boldsymbol{m}}/\pi]\right)=-\frac{\sqrt{-1}}{2\pi}\ \partial\bar{\partial}\left(\frac{S_{\boldsymbol{m}}}{\pi}\right). \tag{4.42}\] Thus, using Theorem 4.24, we find that \[\mathsf{c}_{1}\left(\lambda_{0,\boldsymbol{m}},\exp[S_{\boldsymbol{m}}/\pi]\right)=\frac{1}{\pi^{2}}\omega_{\text{WP}}. \tag{4.43}\] This completes the proof. ### Riemann Orbisurfaces of Genus \(>1\) Consider a marked normalized Schottky group of rank \(g>1\) denoted by \((\Sigma;L_{1},\dots,L_{g})\). The Liouville action, at the classical level, is the on-shell value of the Liouville action functional. For the case of a closed Riemann surface with \(g>1\), it was first defined by Zograf and Takhtajan [8] (and was later interpreted by Takhtajan and Teo [54] in cohomological language) to be given by \[S[\varphi]=\iint_{\mathcal{D}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}^{2}w+\frac{\sqrt{-1}}{2}\sum_{k=2}^{g}\oint_{C_{k}}\theta_{L_{k}^{-1}}(\varphi), \tag{4.44}\] where the 1-form \(\theta_{L_{k}^{-1}}(\varphi)\) is given by \[\theta_{L_{k}^{-1}}(\varphi)=\left(\varphi-\frac{1}{2}\log|L_{k}^{\prime}|^{2}-\log|l_{k}|^{2}\right)\left(\frac{L_{k}^{\prime\prime}}{L_{k}^{\prime}}\,\mathrm{d}w-\frac{\overline{L_{k}^{\prime\prime}}}{\overline{L_{k}^{\prime}}}\,\mathrm{d}\bar{w}\right),\qquad\forall\,L_{k}\in(\Sigma;L_{1},\ldots,L_{g}). \tag{4.45}\] In the above equations, \(\mathcal{D}\) is the fundamental domain of the marked normalized Schottky group \((\Sigma;L_{1},\ldots,L_{g})\), \(\partial\mathcal{D}=\bigcup_{k=1}^{g}(C_{k}\cup C_{k}^{\prime})\), and \[l_{k}=\frac{1-\lambda_{k}}{\sqrt{\lambda_{k}}(a_{k}-b_{k})} \tag{4.46}\] is the lower-left element in the matrix representation of the generator \(L_{k}\in\mathrm{PSL}(2,\mathbb{C})\) for \(k=2,\ldots,g\). Notice that since we have chosen the marked Schottky group to be normalized, in particular \(a_{1}=0\) and \(b_{1}=\infty\), we have \(l_{1}=0\) and \(\theta_{L_{1}^{-1}}(\varphi)=0\). Zograf and Takhtajan [8, Theorems 1,2] have proven that \[\partial S=-2\pi\sum_{i=1}^{3g-3}c_{i}P_{i}\qquad\text{and}\qquad\bar{\partial}\partial S=-2\sqrt{-1}\omega_{WP}, \tag{4.47}\] where \(\partial\) and \(\bar{\partial}\) are the \((1,0)\) and \((0,1)\) components of the de Rham differential on \(\mathfrak{S}_{g}\) (see Section 3.3). This implies that \(-S\) is a Kahler potential for the projection of the Weil-Petersson metric to \(\mathfrak{S}_{g}\).
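The expression (4.46) for \(l_{k}\) can be verified directly; as a short check (using only the fixed-point normal form of a loxodromic element, a standard fact we recall here), writing \(L_{k}\) in terms of its fixed points \(a_{k},b_{k}\) and multiplier \(\lambda_{k}\), \[\frac{L_{k}(w)-a_{k}}{L_{k}(w)-b_{k}}=\lambda_{k}\,\frac{w-a_{k}}{w-b_{k}}\quad\Longrightarrow\quad L_{k}=\frac{1}{\sqrt{\lambda_{k}}\,(a_{k}-b_{k})}\begin{pmatrix}a_{k}-\lambda_{k}b_{k}&(\lambda_{k}-1)a_{k}b_{k}\\ 1-\lambda_{k}&\lambda_{k}a_{k}-b_{k}\end{pmatrix},\] where the prefactor normalizes the determinant to \(1\) (the unnormalized matrix has determinant \(\lambda_{k}(a_{k}-b_{k})^{2}\)); the lower-left entry is then exactly \(l_{k}=(1-\lambda_{k})/(\sqrt{\lambda_{k}}(a_{k}-b_{k}))\).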
**Remark 4.5**.: The addition of the second term in the on-shell Liouville action (4.44) makes sure that: * The variation \(\delta S\) of the classical action \(S[\varphi]\) has the form \[\delta S[\varphi]\stackrel{{\text{\tiny def.}}}{{=}}\lim_{t\to 0}\frac{S[\varphi+t\delta\psi]-S[\varphi]}{t}=\iint_{\mathcal{D}}(-2\partial_{\bar{w}}\partial_{w}\varphi+e^{\varphi})\delta\psi\,\mathrm{d}^{2}w\,, \tag{4.48}\] where the variation \(\delta\psi\in\mathcal{C}^{\infty}(\Omega,\mathbb{R})\) is a smooth function on \(\Omega\) which is automorphic with respect to \(\Sigma\). * For the marked Schottky group \((\Sigma;L_{1},\ldots,L_{g})\), \(S[\varphi]\) is _independent_ of the specific choice of a fundamental domain \(\mathcal{D}\).70 Footnote 70: See footnote 51. When conical points and cusps are present, the area integral in (4.44) diverges in the limit \(w\to w_{i}\) due to the asymptotics of the Liouville field \(\varphi\) (see Lemma C.1), and the classical Liouville action needs to be regularized. Let us define \(\hat{\mathcal{D}}\) as the pair \((\mathcal{D}_{\mathsf{0}},\widetilde{\mathscr{D}}|_{{}_{\mathcal{D}}})\), where \(\mathcal{D}_{\mathsf{0}}\) is defined as \(\mathcal{D}\cap\Omega_{\mathsf{0}}\) and \(\widetilde{\mathscr{D}}|_{{}_{\mathcal{D}}}\) denotes the restriction of \(\widetilde{\mathscr{D}}\) to \(\mathcal{D}\). Here, we have assumed that all singular points \(w_{1},\ldots,w_{n}\) belong to the interior of the fundamental domain \(\mathcal{D}\). For sufficiently small \(\epsilon>0\), define71 Footnote 71: Note that \(\lim_{\epsilon\to 0}\hat{\mathcal{D}}_{\epsilon}=\mathcal{D}_{\text{reg}}\) where \(\mathcal{D}_{\text{reg}}:=\mathcal{D}_{0}\backslash\operatorname{Supp}(\widehat{\mathscr{D}}|_{{}_{\mathcal{D}}})\). \[\hat{\mathcal{D}}_{\epsilon}=\hat{\mathcal{D}}\backslash\bigcup_{i=1}^{n}D_{i}^{\epsilon}, \tag{4.49}\] with \(D_{i}^{\epsilon}\stackrel{{\text{\tiny def.}}}{{=}}\left\{w\,\big{|}\,|w-w_{i}|<\epsilon\right\}\). It follows from Lemma C.1 that the following limit exists \[S_{\hat{\mathcal{D}}_{\text{reg}}}[\varphi]=\lim_{\epsilon\to 0^{+}}\left(\iint_{\hat{\mathcal{D}}_{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}^{2}w+\frac{\sqrt{-1}}{2}\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)\right.\\ \left.-2\pi\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)^{2}\log\epsilon+2\pi n_{p}\big{(}\log\epsilon+2\log|\log\epsilon|\big{)}\right). \tag{4.50}\] **Remark 4.6**.: When \(n_{p}=0\), the appropriate \(S_{\hat{\mathcal{D}}_{\text{reg}}}[\varphi]\) is given by \[\lim_{\epsilon\to 0^{+}}\Bigg{(}\iint_{\hat{\mathcal{D}}_{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}^{2}w+\frac{\sqrt{-1}}{2}\sum_{j=1}^{n}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)-2\pi\sum_{j=1}^{n}\left(1-\frac{1}{m_{j}}\right)^{2}\log\epsilon\Bigg{)}. \tag{4.51}\] Now, we can define the regularized action as \[S_{\boldsymbol{m}}[\varphi]=S_{\boldsymbol{m}}(\mathcal{D};w_{1},\ldots,w_{n})=S_{\hat{\mathcal{D}}_{\text{reg}}}[\varphi]+\frac{\sqrt{-1}}{2}\sum_{k=2}^{g}\oint_{C_{k}}\theta_{L_{k}^{-1}}(\varphi). \tag{4.52}\] This completes the definition of \(S_{\boldsymbol{m}}\) provided that all of the fixed points \(w_{1},\ldots,w_{n}\) lie in the interior of the fundamental domain -- i.e. \(w_{1},\ldots,w_{n}\in\operatorname{Int}\mathcal{D}\).
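To see where the \(\log\epsilon\) counterterms in (4.50) come from, note that by Lemma C.1 one has \(\varphi=-2\left(1-\frac{1}{m_{i}}\right)\log|w-w_{i}|+\mathcal{O}(1)\) near a branch point \(w_{i}\); a sketch of the bookkeeping (with \(\alpha_{i}=1-1/m_{i}\)) gives \[|\partial_{w}\varphi|^{2}=\frac{\alpha_{i}^{2}}{|w-w_{i}|^{2}}+\ldots\quad\Longrightarrow\quad\iint_{\epsilon<|w-w_{i}|<r}|\partial_{w}\varphi|^{2}\,\mathrm{d}^{2}w=2\pi\alpha_{i}^{2}\log\frac{r}{\epsilon}+\mathcal{O}(1),\] a logarithmically divergent contribution; the contour term in (4.50) likewise behaves as \(\mathcal{O}(\alpha_{i}^{2}\log\epsilon)\), since \(\varphi\approx-2\alpha_{i}\log\epsilon\) on \(C_{i}^{\epsilon}\), and these divergences are offset by the \(-2\pi\alpha_{i}^{2}\log\epsilon\) counterterm, so the limit exists. The cusp counterterm \(2\pi n_{p}(\log\epsilon+2\log|\log\epsilon|)\) plays the same role for the asymptotics \(\varphi=-2\log|w-w_{i}|-2\log\big{|}\log|w-w_{i}|\big{|}+\mathcal{O}(1)\) at punctures.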
The action \(S_{\boldsymbol{m}}(\mathcal{D};w_{1},\ldots,w_{n})\) depends on the choice of representatives in \(\Sigma\cdot\{w_{1},\ldots,w_{n}\}\) and _no longer determines a function on the Schottky space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\)_. Note that \(w_{i}\) and \(L_{k}(w_{i})\) have the same order and are related by the action of \(\operatorname{Symm}\left(\mathbf{s}\right)\) acting on \(\hat{\Omega}\). The geometric meaning of \(S_{\boldsymbol{m}}\) is given by the following Lemma (also see Lemma 3.5): **Lemma 4.2**.: _The regularized Liouville action determines a Hermitian metric \(\exp[S_{\boldsymbol{m}}/\pi]\) in the holomorphic \(\mathbb{Q}\)-line bundle \(\mathscr{L}=\bigotimes_{i=1}^{n}\mathscr{L}_{i}^{h_{i}}\) over \(\mathfrak{S}_{g,n}(\boldsymbol{m})\)._ Proof.: To establish this claim, it is sufficient to demonstrate that, for \(i=1,\ldots,n\), \[S_{\boldsymbol{m}}(\tilde{\mathcal{D}};w_{1},\ldots,L_{k}w_{i},\ldots,w_{n})-S_{\boldsymbol{m}}(\mathcal{D};w_{1},\ldots,w_{n})=\pi h_{i}\log|L_{k}^{\prime}(w_{i})|^{2}, \tag{4.53}\] where \(w_{1},\ldots,w_{n}\in\operatorname{Int}\mathcal{D}\) and \(w_{1},\ldots,L_{k}w_{i},\ldots,w_{n}\in\operatorname{Int}\tilde{\mathcal{D}}\). Furthermore, it is sufficient to consider the case when \[\tilde{\mathcal{D}}=(\mathcal{D}\backslash\mathcal{D}_{0})\cup L_{k}(\mathcal{D}_{0}),\] where \(\mathcal{D}_{0}\subset\mathcal{D}\) is such that \(\partial\mathcal{D}_{0}\cap\partial\mathcal{D}\subset C_{k}\), \(w_{i}\in\mathcal{D}_{0}\), and all other \(w_{j}\in\mathcal{D}\backslash\mathcal{D}_{0}\) for \(j\neq i\). Note that by a finite combination of such transformations, any choice of a fundamental domain for \(\Sigma\) can be obtained from \(\mathcal{D}\). The computation of (4.53) closely follows the corresponding computation in the proof of Lemma 3 in [1], where no branch points were present. Let \[I_{\epsilon}(\mathcal{D};w_{1},\ldots,w_{n})=\iint_{\hat{\mathcal{D}}_{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}+\sum_{k=2}^{g}\oint_{C_{k}}\theta_{L_{k}^{-1}}(\varphi)+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right),\] where we did not include the counterterms in the action, since the action of \(L_{k}\) does not change the conformal class.
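In the computation below, it is convenient to keep in mind the transformation law of the Liouville field under \(\Sigma\), which follows from the invariance of the hyperbolic metric exactly as in (4.32); a short statement of the identity we use: \[e^{\varphi\circ L_{k}}\,|L_{k}^{\prime}|^{2}=e^{\varphi}\quad\Longleftrightarrow\quad\varphi\circ L_{k}=\varphi-\log|L_{k}^{\prime}|^{2},\] which is what produces both the \(\log|L_{k}^{\prime}|^{2}\) terms in \(\theta_{L_{k}^{-1}}(\varphi)\) and the pullback identity (4.54) below.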
Since \(\tilde{C}_{j}=C_{j}\) for \(j\neq k\) and \(\tilde{C}_{k}=C_{k}-\partial\mathcal{D}_{0}\), we have \[\begin{split}\Delta I_{\epsilon}&=I_{\epsilon}(\tilde{\mathcal{D}};w_{1},\ldots,L_{k}w_{i},\ldots,w_{n})-I_{\epsilon}(\mathcal{D};w_{1},\ldots,w_{n})\\ &=\iint_{L_{k}(\mathcal{D}_{0})\backslash\tilde{D}_{i}^{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\iint_{\mathcal{D}_{0}\backslash D_{i}^{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\oint_{\partial\mathcal{D}_{0}}\theta_{L_{k}^{-1}}(\varphi)\\ &\quad+\sum_{j=1}^{n_{e}}\delta_{i\,j}\left(1-\frac{1}{m_{j}}\right)\left[\oint_{\tilde{C}_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\overline{L_{k}w_{j}}}-\frac{\mathrm{d}w}{w-L_{k}w_{j}}\right)-\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)\right].\end{split}\] According to Eq.(3.105) we have \[\begin{split} L_{k}^{*}\left((|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\right)&=\left((|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\right)\circ L_{k}\,|L_{k}^{\prime}|^{2}\\ &=(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}+\mathrm{d}\theta_{L_{k}^{-1}}(\varphi)\,,\end{split} \tag{4.54}\] which, together with the Stokes theorem, gives \[\begin{split}\Delta I_{\epsilon}&=\iint_{\mathcal{D}_{0}\backslash L_{k}^{-1}(\tilde{D}_{i}^{\epsilon})}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\iint_{\mathcal{D}_{0}\backslash D_{i}^{\epsilon}}(|\partial_{w}\varphi|^{2}+e^{\varphi})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\oint_{\partial L_{k}^{-1}(\tilde{D}_{i}^{\epsilon})}\theta_{L_{k}^{-1}}(\varphi)\\ &\quad+\sum_{j=1}^{n_{e}}\delta_{i\,j}\left(1-\frac{1}{m_{j}}\right)\left[\oint_{L_{k}^{-1}(\tilde{C}_{j}^{\epsilon})}(\varphi-\log|L_{k}^{\prime}w|^{2})\left(\frac{\overline{L_{k}^{\prime}w}\,\mathrm{d}\bar{w}}{\overline{L_{k}w}-\overline{L_{k}w_{j}}}-\frac{L_{k}^{\prime}w\,\mathrm{d}w}{L_{k}w-L_{k}w_{j}}\right)\right.\\ &\qquad\qquad\left.-\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)\right].\end{split}\] Since the integrand of the third term does not have a pole, its contribution will be of \(\mathcal{O}\left(1\right)\) and it can be safely omitted. Furthermore, the exponential term in the Liouville action gives the Euler characteristic, and its contribution cancels between the corresponding terms. Now, we can rewrite the above equation as \[\begin{split}\Delta I_{\epsilon}&=\iint_{D_{i}^{\epsilon}}(|\partial_{w}\varphi|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\iint_{L_{k}^{-1}(\tilde{D}_{i}^{\epsilon})}(|\partial_{w}\varphi|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &\quad+\sum_{j=1}^{n_{e}}\delta_{i\,j}\left(1-\frac{1}{m_{j}}\right)\left[\oint_{L_{k}^{-1}(\tilde{C}_{j}^{\epsilon})}(\varphi-\log|L_{k}^{\prime}(w)|^{2})\Big{(}\partial_{\bar{w}}\log|L_{k}w-L_{k}w_{j}|^{2}
-\partial_{w}\log|L_{k}w-L_{k}w_{j}|^{2}\Big{)}-\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)\right].\end{split} \tag{4.55}\] By noting that \[\partial_{w}\log|L_{k}w-L_{k}w_{j}|^{2}=\partial_{w}\log|w-w_{j}|^{2}+\mathcal{O}\left(1\right),\] we can write (4.55) as follows \[\begin{split}\Delta I_{\epsilon}&=\iint_{D_{i}^{\epsilon}}(|\partial_{w}\varphi|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\iint_{L_{k}^{-1}(\tilde{D}_{i}^{\epsilon})}(|\partial_{w}\varphi|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\\ &\quad+\sum_{j=1}^{n_{e}}\delta_{i\,j}\left(1-\frac{1}{m_{j}}\right)\left[\oint_{L_{k}^{-1}(\tilde{C}_{j}^{\epsilon})}(\varphi-\log|L_{k}^{\prime}(w)|^{2})\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)-\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)\right].\end{split} \tag{4.56}\] By integration by parts, imposing the equations of motion, and taking into account the orientation of the boundary circles, the first line in (4.56) becomes \[\iint_{D_{i}^{\epsilon}}(|\partial_{w}\varphi|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\iint_{L_{k}^{-1}(\tilde{D}_{i}^{\epsilon})}(|\partial_{w}\varphi|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=\oint_{C_{i}^{\epsilon}}\varphi\partial_{\bar{w}}\varphi\,\mathrm{d}\bar{w}-\oint_{\tilde{C}_{i}^{\epsilon}}\varphi\partial_{\bar{w}}\varphi\,\mathrm{d}\bar{w}, \tag{4.57}\] where \[\tilde{C}_{i}^{\epsilon}=\{w\,\big{|}\,|L_{k}w-L_{k}w_{i}|=\epsilon\}\equiv\Big{\{}w\,\big{|}\,|w-w_{i}|=\frac{\epsilon}{|L_{k}^{\prime}(w_{i})|}\Big{\}}. \tag{4.58}\] To be more precise, the right-hand side of (4.57) had an extra contribution \[\frac{1}{2}\iint_{L_{k}^{-1}(\tilde{D}_{i}^{\epsilon})}\varphi e^{\varphi}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\frac{1}{2}\iint_{D_{i}^{\epsilon}}\varphi e^{\varphi}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\,, \tag{4.59}\] which, by defining \(y_{i}=|w-w_{i}|\), \(\alpha_{i}=1-1/m_{i}\), and \([y_{i}]:=\{\epsilon\leq y_{i}\leq\epsilon/|L_{k}^{\prime}(w_{i})|\}\), becomes, for branch points and cusps respectively, \[-\pi\int_{[y_{i}]}\frac{1}{y_{i}^{2\alpha_{i}-1}}\log y_{i}^{2\alpha_{i}}\,\mathrm{d}y_{i}=\pi\left(\frac{\alpha_{i}}{2(1-\alpha_{i})^{2}}y_{i}^{2-2\alpha_{i}}-\frac{\alpha_{i}}{(1-\alpha_{i})}y_{i}^{2-2\alpha_{i}}\log y_{i}\right)\bigg{|}_{\epsilon}^{\epsilon/|L_{k}^{\prime}(w_{i})|} \tag{4.60}\] \[-\pi\int_{[y_{i}]}\frac{1}{y_{i}\log^{2}y_{i}}\log\left(y_{i}^{2}\log^{2}y_{i}\right)\mathrm{d}y_{i}=\pi\left(2+\frac{2}{\log y_{i}}-2\log|\log y_{i}|+\frac{2}{\log y_{i}}\log|\log y_{i}|\right)\bigg{|}_{\epsilon}^{\epsilon/|L_{k}^{\prime}(w_{i})|} \tag{4.61}\] but both of them, and accordingly (4.59), vanish in the limit \(\epsilon\to 0\). Now, using the asymptotic form of \(\varphi\) from Lemma C.1 together with the relation (4.58), Eq.(4.57) simplifies to72 Footnote 72: The boundaries of the \(D_{i}^{\epsilon}\) have the opposite orientation to that of the boundary of the fundamental domain. \[\iint_{D_{i}^{\epsilon}}(|\partial_{w}\varphi|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\iint_{L_{k}^{-1}(\tilde{D}_{i}^{\epsilon})}(|\partial_{w}\varphi|^{2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=-4\pi\sqrt{-1}\left(1-\frac{1}{m_{i}}\right)^{2}\log|L_{k}^{\prime}w_{i}|.
\tag{4.62}\] For the rest of the integrals in (4.56), again by using the asymptotics of \(\varphi\) from Lemma C.1 and (4.58), we have \[\sum_{j=1}^{n_{e}}\delta_{ij}\left(1-\frac{1}{m_{j}}\right)\left[\oint_{L_{k}^{-1}(\tilde{C}_{j}^{\epsilon})}(\varphi-\log|L_{k}^{\prime}(w)|^{2})\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)-\oint_{C_{j}^{\epsilon}}\varphi\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)\right]=8\pi\sqrt{-1}\left(\frac{1}{m_{i}^{2}}-\frac{1}{m_{i}}\right)\log|L_{k}^{\prime}w_{i}|. \tag{4.63}\] Now, combining (4.62) and (4.63), and noting that \((1-1/m_{i})^{2}-2\left(1/m_{i}^{2}-1/m_{i}\right)=1-1/m_{i}^{2}=h_{i}\), gives \[\Delta I_{\epsilon}=-4\pi\sqrt{-1}\ h_{i}\log|L^{\prime}_{k}w_{i}|.\] Thus, we have proved that \[\exp\Bigl{[}S_{\boldsymbol{m}}(\tilde{\mathcal{D}};w_{1},\ldots,L_{k}w_{i},\ldots,w_{n})/(\pi h_{i})\Bigr{]}=\exp[S_{\boldsymbol{m}}(\mathcal{D};w_{1},\ldots,w_{n})/(\pi h_{i})]\ |L^{\prime}_{k}w_{i}|^{2},\] which means that \(\exp[S_{\boldsymbol{m}}/\pi]\) is a Hermitian metric on \(\mathscr{L}=\bigotimes_{i=1}^{n_{p}+n_{e}}\mathscr{L}_{i}^{h_{i}}\). Combining Lemmas 4.2 and 3.5, we can deduce the following statement: **Corollary 4.1**.: _Put \(\mathsf{H}=\mathsf{h}_{1}^{m_{1}h_{1}}\cdots\mathsf{h}_{n_{e}}^{m_{n_{e}}h_{n_{e}}}\mathsf{h}_{n_{e}+1}\cdots\mathsf{h}_{n}\). Then,_ \[\mathscr{S}_{\boldsymbol{m}}=S_{\boldsymbol{m}}-\pi\log\mathsf{H} \tag{4.64}\] _determines a smooth real-valued function on \(\mathfrak{S}_{g,n}(\boldsymbol{m})\)._ The above form of \(\mathscr{S}_{\boldsymbol{m}}\) can be easily understood by demanding that the Liouville action be independent of the choice of a fundamental domain: It is clear from the commutative diagram (2.13) and the definition (4.52) of the regularized Liouville action that the problem of defining the appropriate Liouville action on \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) is closely related to the Fuchsian uniformization of \(\dot{\hat{\Omega}}\). Then, the fact that \(\Omega\subset\hat{\mathbb{C}}\), together with the observation that, roughly speaking, the action of the Schottky group \(\Sigma\) on \(\dot{\hat{\Omega}}\) resembles that of \(\operatorname{\mathrm{Symm}}\left(\mathfrak{s}\right)\) on a genus zero Riemann orbisurface,73 suggests that the same pattern of "anomaly cancellation" observed in Section 3.2.1 should also happen in this case. Footnote 73: More specifically, by acting each generator \(L_{k}\in\Sigma\) on a singular point \(w_{i}\) inside a particular fundamental domain \(\mathcal{D}\), we will get another singular point with the same order of isotropy in a different fundamental domain. ## 5 Potentials for Weil-Petersson and Takhtajan-Zograf Metrics In this section, following [1; 2], we construct Kahler potentials for cuspidal and elliptic TZ metrics on \(\mathcal{M}_{0,n}\) (see Section 3.1.3). We will also prove that the first Chern forms of the line bundles \(\mathscr{L}_{i}\) over the Schottky space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) with Hermitian metrics \(\mathsf{h}_{i}\) are given by \(\frac{1}{2\pi}\omega_{\text{TZ},i}^{\text{ell}}\) for \(i=1,\ldots,n_{e}\) and \(\frac{4}{3}\omega_{\text{TZ},i}^{\text{cusp}}\) for \(i=n_{e}+1,\ldots,n\).
In addition, we will show that \(\frac{1}{\pi^{2}}\omega_{{}_{\text{WP}}}\) is the first Chern form of the \(\mathbb{Q}\)-line bundle \(\mathscr{L}=\bigotimes_{i=1}^{n}\mathscr{L}_{i}^{h_{i}}\) with Hermitian metric \(\exp[S_{\boldsymbol{m}}/\pi]\), where the regularized classical Liouville action \(S_{\boldsymbol{m}}\) is given by Eq.(4.52). Then, it follows readily from these two results that the specific combination \(\omega_{{}_{\text{WP}}}-\frac{4\pi^{2}}{3}\omega_{\text{TZ}}^{\text{cusp}}-\frac{\pi}{2}\sum_{j=1}^{n_{e}}m_{j}h_{j}\,\omega_{\text{TZ},j}^{\text{ell}}\) of the Weil-Petersson metric and the cuspidal and elliptic Takhtajan-Zograf metrics has a _global_ Kahler potential on \(\mathfrak{S}_{g,n}(\boldsymbol{m})\). ### Potentials for Cuspidal and Elliptic TZ Metrics on \(\mathcal{M}_{0,n}\) As in Section 3.2, let \(\Gamma\) be a marked normalized Fuchsian group with signature \((0;m_{1},\ldots,m_{n_{e}},n_{p})\) that uniformizes the orbifold Riemann surface \(O\), and let \(J:\mathbb{H}\to O\) be Klein's Hauptmodul. In addition, let \(\mathsf{h}_{i}=\left|J_{1}^{(i)}\right|^{\frac{2}{m_{i}}}\) for \(i=1,\ldots,n_{e}\), \(\mathsf{h}_{i}=\left|J_{1}^{(i)}\right|^{2}\) for \(i=n_{e}+1,\ldots,n-1\), and \(\mathsf{h}_{n}=\left|J_{-1}^{(n)}\right|^{2}\) be smooth positive functions on \(\mathcal{M}_{0,n}\).74 Footnote 74: See Eq.(3.34). Now, according to the expressions for \(\log\mathsf{h}_{i}\) in Remark 3.10, we prove the following lemma: **Lemma 5.1**.: _For all \(k=1,\ldots,n-3\), we have_ \[\begin{split}\frac{\partial}{\partial w_{k}}\log\mathsf{h}_{i}&=\frac{1}{m_{i}}\partial_{w}\dot{F}^{k}(w_{i}),\qquad i=1,\ldots,n_{e},\\ \frac{\partial}{\partial w_{k}}\log\mathsf{h}_{i}&=\partial_{w}\dot{F}^{k}(w_{i}),\qquad\quad\ i=n_{e}+1,\ldots,n.\end{split} \tag{5.1}\] Proof.: Consider the orbifold Riemann surface \(O\cong[\mathbb{H}/\Gamma]\). Using Lemma 3.3, it is sufficient to demonstrate that \[\left(\frac{\partial\log\mathsf{h}_{i}^{\varepsilon\mu_{k}}}{\partial\varepsilon}\right)\biggr{|}_{\varepsilon=0}=\begin{cases}\frac{1}{m_{i}}\partial_{w}\dot{F}^{k}(w_{i})&\text{for}\quad i=1,\ldots,n_{e},\\ \partial_{w}\dot{F}^{k}(w_{i})&\text{for}\quad i=n_{e}+1,\ldots,n,\end{cases} \tag{5.2}\] for all \(k=1,\ldots,n-3\). The following proof repeats verbatim the proof of Lemma 4 in [1] for the case of punctures.
Using the fact that \(F^{\varepsilon\mu_{k}}\) is holomorphic in \(\varepsilon\) at \(\varepsilon=0\), Corollary 3.1, the formulas in Remark 3.10, and Eq.(3.79), we get: * \(i=1,\ldots,n_{e}\) _case:_ \[\left.\left(\frac{\partial\log\mathsf{h}_{i}^{\varepsilon\mu_{k}}}{\partial\varepsilon}\right)\right|_{\varepsilon=0}=-\left.\left(\frac{\partial}{\partial\varepsilon}\right)\right|_{\varepsilon=0}\lim_{w\to w_{i}}\left(\varphi^{\varepsilon\mu_{k}}\circ F^{\varepsilon\mu_{k}}+\left(1-\frac{1}{m_{i}}\right)\log|F^{\varepsilon\mu_{k}}(w)-w_{i}^{\varepsilon\mu_{k}}|^{2}\right)\] \[\qquad=-\lim_{w\to w_{i}}\left\{\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}\left(\varphi^{\varepsilon\mu_{k}}\circ F^{\varepsilon\mu_{k}}+\left(1-\frac{1}{m_{i}}\right)\log|F^{\varepsilon\mu_{k}}(w)-w_{i}^{\varepsilon\mu_{k}}|^{2}\right)\right\}\] \[\qquad=-\lim_{w\to w_{i}}\left(\partial_{w_{k}}\varphi+\partial_{w}\varphi\dot{F}^{k}(w)+\left(1-\frac{1}{m_{i}}\right)\frac{\dot{F}^{k}(w)-\dot{F}^{k}(w_{i})}{w-w_{i}}\right)\] \[\qquad=\frac{1}{m_{i}}\,\partial_{w}\dot{F}^{k}(w_{i}),\] where the last equality follows from the asymptotics of \(\varphi\) in Lemma C.1. * \(i=n_{e}+1,\ldots,n-1\) _case:_ \[\left(\frac{\partial\log\mathsf{h}_{i}^{\varepsilon\mu_{k}}}{\partial\varepsilon}\right)\biggr{|}_{\varepsilon=0}=\left.\left(\frac{\partial}{\partial\varepsilon}\right)\right|_{\varepsilon=0}\lim_{w\to w_{i}}\left(\log|F^{\varepsilon\mu_{k}}(w)-w_{i}^{\varepsilon\mu_{k}}|^{2}-\frac{2e^{-\frac{1}{2}\varphi^{\varepsilon\mu_{k}}\circ F^{\varepsilon\mu_{k}}(w)}}{|F^{\varepsilon\mu_{k}}(w)-w_{i}^{\varepsilon\mu_{k}}|}\right)\] \[\qquad=\lim_{w\to w_{i}}\left\{\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}\left(\log|F^{\varepsilon\mu_{k}}(w)-w_{i}^{\varepsilon\mu_{k}}|^{2}-2(F^{\varepsilon\mu_{k}})^{*}\left(e^{-\frac{1}{2}\varphi^{\varepsilon\mu_{k}}}\right)\left|\frac{\partial_{w}F^{\varepsilon\mu_{k}}(w)}{F^{\varepsilon\mu_{k}}(w)-w_{i}^{\varepsilon\mu_{k}}}\right|\right)\right\}\] \[\qquad=\lim_{w\to w_{i}}\left\{\frac{\dot{F}^{k}(w)-\dot{F}^{k}(w_{i})}{w-w_{i}}-\frac{e^{-\frac{1}{2}\varphi(w)}\left((w-w_{i})\partial_{w}\dot{F}^{k}(w)-\dot{F}^{k}(w)+\dot{F}^{k}(w_{i})\right)}{(w-w_{i})|w-w_{i}|}\right\}\] \[\qquad=\partial_{w}\dot{F}^{k}(w_{i}).\] * \(i=n\) _case:_ \[\left(\frac{\partial\log\mathsf{h}_{n}^{\varepsilon\mu_{k}}}{\partial\varepsilon}\right)\biggr{|}_{\varepsilon=0}=\left.\left(\frac{\partial}{\partial\varepsilon}\right)\right|_{\varepsilon=0}\lim_{w\to\infty}\left(\log|F^{\varepsilon\mu_{k}}(w)|^{2}-\frac{2e^{-\frac{1}{2}\varphi^{\varepsilon\mu_{k}}\circ F^{\varepsilon\mu_{k}}(w)}}{|F^{\varepsilon\mu_{k}}(w)|}\right)\] \[\qquad=\lim_{w\to\infty}\left\{\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}\left(\log|F^{\varepsilon\mu_{k}}|^{2}-2(F^{\varepsilon\mu_{k}})^{*}\left(e^{-\frac{1}{2}\varphi^{\varepsilon\mu_{k}}}\right)\left|\frac{\partial_{w}F^{\varepsilon\mu_{k}}}{F^{\varepsilon\mu_{k}}}\right|\right)(w)\right\}\] \[\qquad=\lim_{w\to\infty}\left\{\frac{\dot{F}^{k}(w)}{w}-\frac{e^{-\frac{1}{2}\varphi(w)}(w\partial_{w}\dot{F}^{k}(w)-\dot{F}^{k}(w))|w|}{w^{2}\bar{w}}\right\}\] \[\qquad=\partial_{w}\dot{F}^{k}(\infty).\] As before, let \(\partial\) and \(\bar{\partial}\) be the \((1,0)\) and \((0,1)\) components of the de Rham differential \(d=\partial+\bar{\partial}\) on \(\mathcal{M}_{0,n}\).
We have (see [1, 2]): **Lemma** (Takhtajan and Zograf).: _The functions \(-\log\mathsf{h}_{i},-\log\mathsf{h}_{j},\log\mathsf{h}_{n}:\,\mathcal{M}_{0,n}\to\mathbb{R}_{>0}\) for \(i=1,\ldots,n_{e}\) and \(j=n_{e}+1,\ldots,n-1\) are Kahler potentials for the TZ metrics \(\frac{1}{2}\langle\cdot,\cdot\rangle_{\text{TZ},i}^{\text{ell}}\), \(\frac{4\pi}{3}\langle\cdot,\cdot\rangle_{\text{TZ},j}^{\text{cusp}}\), \(-\frac{4\pi}{3}\langle\cdot,\cdot\rangle_{\text{TZ},n}^{\text{cusp}}\), respectively:75_ Footnote 75: See Proposition 1 in [1] for punctured Riemann surfaces. \[\bar{\partial}\partial\log\mathsf{h}_{i}=-\sqrt{-1}\,\omega_{\text{TZ},i}^{\text{ell}},\qquad\bar{\partial}\partial\log\mathsf{h}_{j}=-\frac{8\pi\sqrt{-1}}{3}\,\omega_{\text{TZ},j}^{\text{cusp}},\qquad\bar{\partial}\partial\log\mathsf{h}_{n}=\frac{8\pi\sqrt{-1}}{3}\,\omega_{\text{TZ},n}^{\text{cusp}}. \tag{5.3}\] Proof.: We need to prove that for all \(j,k=1,\ldots,n-3\), \[-\frac{\partial^{2}\log\mathsf{h}_{i}}{\partial w_{j}\partial\bar{w}_{k}}=\left\{\begin{aligned} &\frac{1}{2}\left\langle\frac{\partial}{\partial w_{j}},\frac{\partial}{\partial w_{k}}\right\rangle^{\text{ell}}_{\text{TZ},i}&&\text{for}\quad i=1,\ldots,n_{e},\\ &\frac{4\pi}{3}\left\langle\frac{\partial}{\partial w_{j}},\frac{\partial}{\partial w_{k}}\right\rangle^{\text{cusp}}_{\text{TZ},i}&&\text{for}\quad i=n_{e}+1,\ldots,n-1,\\ &-\frac{4\pi}{3}\left\langle\frac{\partial}{\partial w_{j}},\frac{\partial}{\partial w_{k}}\right\rangle^{\text{cusp}}_{\text{TZ},i}&&\text{for}\quad i=n.\end{aligned}\right.\] Let us consider the three cases \(i=1,\ldots,n_{e}\), \(i=n_{e}+1,\ldots,n-1\), and \(i=n\) separately. The following proof repeats verbatim the proof of Proposition 1 in [1] for the case of punctures. According to Section 3.1, Lemma 3.3 and Eq.(3.79), for a given Riemann orbisurface \(O\cong[\mathbb{H}/\Gamma]\) one can write: * \(i=1,\ldots,n_{e}\) _case:_ \[-\frac{\partial^{2}\log\mathsf{h}_{i}}{\partial w_{j}\partial\bar{w}_{k}}=-\left.\left(\frac{\partial^{2}\log\mathsf{h}_{i}^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right)\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\] \[=\lim_{w\to w_{i}}\left\{\left.\left(\frac{\partial^{2}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right)\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\left(\varphi^{(\varepsilon\mu)_{jk}}\circ F^{(\varepsilon\mu)_{jk}}+\left(1-\frac{1}{m_{i}}\right)\log|F^{(\varepsilon\mu)_{jk}}(w)-w_{i}^{(\varepsilon\mu)_{jk}}|^{2}\right)\right\}\] \[=\lim_{w\to w_{i}}\left.\left(\frac{\partial^{2}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right)\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\left(F^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}\right)^{*}(\varphi).\] In the above equation, we have used the notation \((\varepsilon\mu)_{jk}=\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}\).
Using the commutative diagram (3.57), one has \[(F^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}})^{*}\Big{(}e^{\varphi^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}}\Big{)}=(J^{-1})^{*}(f^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}})^{*}(\rho).\] Taking the logarithm of the above formula, we get \[\varphi^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}\circ F^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}+\log|\partial_{w}F^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}|^{2}=\log\bigl{(}(f^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}})^{*}(\rho)\bigr{)}\circ J^{-1}+\log|(J^{-1})^{\prime}|^{2}.\] Then, using the above equations together with the Ahlfors formulae (3.74) and Wolpert's formula (3.18), we have \[-\left.\left(\frac{\partial^{2}\log\mathsf{h}_{i}^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right)\right|_{\varepsilon_{j}=\varepsilon_{k}=0}=\lim_{w\to w_{i}}\left.\left(\frac{\partial^{2}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right)\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\log\bigl{(}(f^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}})^{*}(\rho)\bigr{)}\circ J^{-1}\\ =\frac{1}{2}\lim_{w\to w_{i}}f_{\mu_{j}\bar{\mu}_{k}}\circ J^{-1}(w)=\frac{1}{2}\left\langle\frac{\partial}{\partial w_{j}},\frac{\partial}{\partial w_{k}}\right\rangle^{\text{ell}}_{\text{TZ},i}.\] * \(i=n_{e}+1,\ldots,n-1\) _case:_ \[-\frac{\partial^{2}\log\mathsf{h}_{i}}{\partial w_{j}\partial\bar{w}_{k}}=-\left.\left(\frac{\partial^{2}\log\mathsf{h}_{i}^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right)\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\\ =2\lim_{w\to w_{i}}\left\{\frac{1}{|w-w_{i}|}\left.\frac{\partial^{2}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\left(F^{(\varepsilon\mu)_{jk}}\right)^{*}\left(e^{-\frac{1}{2}\varphi^{(\varepsilon\mu)_{jk}}(w)}\right)\right.\\ \left.+\,e^{-\frac{1}{2}\varphi(w)}\left.\frac{\partial}{\partial\varepsilon_{j}}\right|_{\varepsilon_{j}=0}\left(\frac{\partial_{w}F^{\varepsilon_{j}\mu_{j}}(w)}{F^{\varepsilon_{j}\mu_{j}}(w)}\right)^{\frac{1}{2}}\left.\frac{\partial}{\partial\bar{\varepsilon}_{k}}\right|_{\varepsilon_{k}=0}\left(\frac{\overline{\partial_{w}F^{\varepsilon_{k}\mu_{k}}(w)}}{\overline{F^{\varepsilon_{k}\mu_{k}}(w)}}\right)^{\frac{1}{2}}\right\}\] \[=\lim_{w\to w_{i}}\left\{-\frac{1}{2}\log|w-w_{i}|f_{\mu_{j}\bar{\mu}_{k}}\circ J^{-1}(w)+\frac{1}{2}e^{-\frac{1}{2}\varphi(w)}\times\right.\\ \left.\frac{\left((w-w_{i})\partial_{w}\dot{F}^{j}(w)-\dot{F}^{j}(w)+\dot{F}^{j}(w_{i})\right)}{|w-w_{i}|(\bar{w}-\bar{w}_{i})^{\frac{1}{2}}}\,\frac{\overline{\left((w-w_{i})\partial_{w}\dot{F}^{k}(w)-\dot{F}^{k}(w)+\dot{F}^{k}(w_{i})\right)}}{|w-w_{i}|(w-w_{i})^{\frac{1}{2}}}\right\}\] \[=-\frac{1}{2}\lim_{w\to w_{i}}\frac{\log|w-w_{i}|}{\operatorname{Im}(\varsigma_{i}^{-1}\circ J^{-1}(w))}(\operatorname{Im}\varsigma_{i}z)f_{\mu_{j}\bar{\mu}_{k}}(\varsigma_{i}z)\] \[=\pi\lim_{w\to w_{i}}(\operatorname{Im}\varsigma_{i}z)f_{\mu_{j}\bar{\mu}_{k}}(\varsigma_{i}z)=\frac{4\pi}{3}\left\langle\frac{\partial}{\partial w_{j}},\frac{\partial}{\partial w_{k}}\right\rangle_{\text{TZ},i}^{\text{cusp}},\] where in the last line we have used the fact that \(\log|w-w_{i}|/\operatorname{Im}(\varsigma_{i}^{-1}\circ J^{-1}(w))\to-2\pi\) as \(w\to w_{i}\); indeed, near the cusp one has \(w-w_{i}=\mathcal{O}(q_{i})\) with \(q_{i}=e^{2\pi\sqrt{-1}\,\varsigma_{i}^{-1}\circ J^{-1}(w)}\), so that \(\log|w-w_{i}|=-2\pi\operatorname{Im}(\varsigma_{i}^{-1}\circ J^{-1}(w))+\mathcal{O}(1)\).
* \(i=n\) _case:_ \[\frac{\partial^{2}\log\mathsf{h}_{n}}{\partial w_{j}\partial\bar{w}_{k}}=\left.\left(\frac{\partial^{2}\log\mathsf{h}_{n}^{\varepsilon_{j}\mu_{j}+\varepsilon_{k}\mu_{k}}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right)\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\\ =\lim_{w\to\infty}\left\{\left.\frac{\partial^{2}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\left(\log\left|F^{(\varepsilon\mu)_{jk}}\right|^{2}-2\left(F^{(\varepsilon\mu)_{jk}}\right)^{*}\left(e^{-\frac{1}{2}\varphi^{(\varepsilon\mu)_{jk}}}\right)\left|\frac{\partial_{w}F^{(\varepsilon\mu)_{jk}}}{F^{(\varepsilon\mu)_{jk}}}\right|\right)(w)\right\}\\ =-2\lim_{w\to\infty}\left\{\frac{1}{|w|}\left.\frac{\partial^{2}}{\partial\varepsilon_{j}\partial\bar{\varepsilon}_{k}}\right|_{\varepsilon_{j}=\varepsilon_{k}=0}\left(F^{(\varepsilon\mu)_{jk}}\right)^{*}\left(e^{-\frac{1}{2}\varphi^{(\varepsilon\mu)_{jk}}}\right)\right.\\ \left.+\,e^{-\frac{1}{2}\varphi(w)}\left.\frac{\partial}{\partial\varepsilon_{j}}\right|_{\varepsilon_{j}=0}\left(\frac{\partial_{w}F^{\varepsilon_{j}\mu_{j}}(w)}{F^{\varepsilon_{j}\mu_{j}}(w)}\right)^{\frac{1}{2}}\left.\frac{\partial}{\partial\bar{\varepsilon}_{k}}\right|_{\varepsilon_{k}=0}\left(\frac{\overline{\partial_{w}F^{\varepsilon_{k}\mu_{k}}(w)}}{\overline{F^{\varepsilon_{k}\mu_{k}}(w)}}\right)^{\frac{1}{2}}\right\}\] \[=\lim_{w\to\infty}\left\{\frac{e^{-\frac{1}{2}\varphi(w)}}{2|w|}f_{\mu_{j}\bar{\mu}_{k}}\circ J^{-1}-\frac{e^{-\frac{1}{2}\varphi(w)}}{2}\frac{\left(w\partial_{w}\dot{F}^{j}(w)-\dot{F}^{j}(w)\right)}{|w|\bar{w}^{\frac{1}{2}}}\frac{\overline{\left(w\partial_{w}\dot{F}^{k}(w)-\dot{F}^{k}(w)\right)}}{|w|w^{\frac{1}{2}}}\right\}\] \[=\lim_{w\to\infty}\left\{\frac{|w|\log|w|}{2|w|}f_{\mu_{j}\bar{\mu}_{k}}\left(J^{-1}(w)\right)\right\}=\frac{1}{2}\lim_{w\to\infty}\frac{\log|w|}{\operatorname{Im}(J^{-1}(w))}(\operatorname{Im}z)f_{\mu_{j}\bar{\mu}_{k}}(z)\] \[=\pi\lim_{w\to\infty}(\operatorname{Im}z)f_{\mu_{j}\bar{\mu}_{k}}(z)=\frac{4\pi}{3}\left\langle\frac{\partial}{\partial w_{j}},\frac{\partial}{\partial w_{k}}\right\rangle_{\operatorname{TZ},n}^{\operatorname{cusp}}.\] In the above equation, we have used the fact that \(\log|w|/\operatorname{Im}(J^{-1}(w))\to 2\pi\) as \(w\to\infty\); this follows as in the previous case, since near the cusp at infinity \(w=\mathcal{O}(q^{-1})\) with \(q=e^{2\pi\sqrt{-1}\,J^{-1}(w)}\), so that \(\log|w|=2\pi\operatorname{Im}(J^{-1}(w))+\mathcal{O}(1)\). Using Lemmas 3.2 and 5.3, we can deduce the following corollary: **Corollary 5.1**.: _The function \(-\log\mathsf{H}=-m_{1}h_{1}\log\mathsf{h}_{1}-\cdots-m_{n-1}h_{n-1}\log\mathsf{h}_{n-1}+\log\mathsf{h}_{n}\) is a Kahler potential for the combination \(\frac{4\pi}{3}\omega_{TZ}^{cusp}+\frac{1}{2}\sum_{j=1}^{n_{e}}m_{j}h_{j}\,\omega_{TZ,j}^{ell}\) on \(\mathcal{M}_{0,n}(\mathbf{m})\). The first Chern form of the Hermitian holomorphic \(\mathbb{Q}\)-line bundle \((\lambda_{0,\mathbf{m}},\mathsf{H})\) over \(\mathfrak{M}_{0,n}(\mathbf{m})\) is given by_ \[\mathsf{c}_{1}\left(\lambda_{0,\mathbf{m}},\mathsf{H}\right)=\frac{\sqrt{-1}}{2\pi}\bar{\partial}\partial\log\mathsf{H}=\frac{4\pi}{3}\omega_{TZ}^{cusp}+\frac{1}{2}\sum_{j=1}^{n_{e}}m_{j}h_{j}\,\omega_{TZ,j}^{ell}. \tag{5.4}\] **Remark 5.1**.: In analogy with the famous accessory parameters generated by \(S_{\mathbf{m}}\), the authors of [59] have defined the so-called "_auxiliary parameters_" as76 Footnote 76: Our definition of auxiliary parameters differs from that of reference [59] by a factor of \(\frac{1}{2}\).
\[d_{i}:=\frac{1}{\mathsf{H}}\frac{\partial\mathsf{H}}{\partial w_{i}}. \tag{5.5}\] As emphasized in [59], the auxiliary parameters play a role just as important as that of the accessory parameters. In particular, it follows from the above corollary that an analog of the relation (4.25) can be found between the auxiliary parameters and the Takhtajan-Zograf metrics. Next, let us recall that the decomposition of \(r(z)\) is given by \[r(z)=\sum_{i=1}^{n-3}\mathsf{a}_{i}\,r_{i}(z), \tag{5.6}\] where \[\mathsf{a}_{i}=\iint_{\mathcal{F}(\Gamma)}\operatorname{Sch}\left(J;z\right)\mu_{i}(z)\,\mathrm{d}^{2}z\,. \tag{5.7}\] According to Section 3.2, for varying \(\Gamma\), \(r(z)\) determines a \((1,0)\)-form \(\mathsf{r}\) on \(\mathcal{T}_{0,n}\), which corresponds to the \((1,0)\)-form77 Footnote 77: See Lemma 3.3. \[\vartheta=\sum_{i=1}^{n-3}\mathsf{a}_{i}\,\mathrm{d}w_{i}\,,\] on \(\mathcal{M}_{0,n}\). Therefore, we also have: **Corollary 5.2**.: _The function \(\mathscr{S}_{\boldsymbol{m}}:\mathcal{M}_{0,n}\to\mathbb{R}\) satisfies_ \[\partial\mathscr{S}_{\boldsymbol{m}}=2\vartheta \tag{5.8}\] _and_ \[\bar{\partial}\partial\mathscr{S}_{\boldsymbol{m}}=-2\sqrt{-1}\left[\omega_{\text{WP}}-\frac{4\pi^{2}}{3}\omega_{TZ}^{cusp}-\frac{\pi}{2}\sum_{j=1}^{n_{e}}m_{j}h_{j}\,\omega_{TZ,j}^{ell}\right]. \tag{5.9}\] To see (5.8), put \(\mathscr{S}_{\boldsymbol{m}}=S_{\boldsymbol{m}}-\pi\log\mathsf{H}\) and combine Lemma 5.1 with the proof of Theorem 4.7, which gives \[\partial\mathscr{S}_{\boldsymbol{m}}=\partial S_{\boldsymbol{m}}-\pi\sum_{j=1}^{n}m_{j}h_{j}\,\partial\log\mathsf{h}_{j}=\sum_{i=1}^{n-3}\left(\frac{\partial S_{\boldsymbol{m}}}{\partial w_{i}}\right)dw_{i}-\pi\sum_{j=1}^{n}\sum_{i=1}^{n-3}m_{j}h_{j}\left(\frac{\partial}{\partial w_{i}}\log\mathsf{h}_{j}\right)dw_{i}\] \[=-2\pi\sum_{i=1}^{n-3}c_{i}dw_{i}-\pi\sum_{i=1}^{n-3}\sum_{j=1}^{n}h_{j}\dot{F}_{w}^{i}(w_{j})dw_{i}=2\sum_{i=1}^{n-3}\left(-\pi c_{i}+\sum_{j=1}^{n}h_{j}(\mathscr{E}_{j},M_{i})\right)dw_{i}\]
\[=2\sum_{i=1}^{n-3}\mathsf{a}_{i}\,\mathrm{d}w_{i}=2\vartheta\,,\] which proves (5.8).
The definition of \(\mathscr{E}_{j}\) is provided by (3.101). The \(\mathsf{R}(w)\) coincides with a \((1,0)\)-form \(\mathscr{Q}\) on the Schottky space \(\mathfrak{S}_{g,n}(\mathbf{m})\) \[\begin{split}\mathscr{Q}=\sum_{i=1}^{3g-3+n}\mathrm{b}_{i}dw_{i}&=\mathrm{b}_{1}d\lambda_{1}+\cdots+\mathrm{b}_{g}d\lambda_{g}+\mathrm{b}_{g+1}da_{3}+\cdots+\mathrm{b}_{2g-2}da_{g}\\ &+\mathrm{b}_{2g-1}db_{2}+\cdots+\mathrm{b}_{3g-3}db_{g}+\mathrm{b}_{3g-2}dw_{1}+\cdots+\mathrm{b}_{3g-3+n}dw_{n}.\end{split} \tag{5.12}\] In the following two theorems, which can be regarded as generalizations of Theorems 1 and 2 of [1], we will explicitly describe the canonical connections and curvature forms of the Hermitian holomorphic (\(\mathbb{Q}\)-)line bundles \(\mathscr{L}_{i}\) and \(\mathscr{L}=\bigotimes_{i=1}^{n}\mathscr{L}_{i}^{h_{i}}\): **Theorem 1**.: _Let \(\partial\) and \(\bar{\partial}\) be \((1,0)\) and \((0,1)\) components of the de Rham differential on Schottky space \(\mathfrak{S}_{g,n}(\boldsymbol{m})\). The following statements are true._ 1.
_On the Hermitian holomorphic line bundle_ \((\mathscr{L}_{i},\mathsf{h}_{i}^{m_{i}})\)_, the canonical connection is given by_78__ Footnote 78: When \(m_{i}=\infty\), we will simply ignore \(m_{i}\) in the following formula. \[\partial\log\mathsf{h}_{i}^{m_{i}}=-\frac{2}{\pi}\mathsf{R}_{i}.\] 2. _On the Hermitian holomorphic_ \(\mathbb{Q}\)_-line bundle_ \((\mathscr{L},e^{S_{\boldsymbol{m}}/\pi})\)_, the canonical connection is given by_ \[\frac{1}{\pi}\partial S_{\boldsymbol{m}}=2\mathsf{R}_{0}.\] 3. _The function_ \(\mathscr{S}_{\boldsymbol{m}}:\mathfrak{S}_{g,n}(\boldsymbol{m})\to\mathbb{R}\) _given by Eq.(4.64) satisfies_ \[\partial\mathscr{S}_{\boldsymbol{m}}=2\mathscr{Q}.\] Proof.: We will prove each statement separately: * In order to prove part \((i)\), it is sufficient to show that \[\left(\frac{\partial\log\mathsf{h}_{i}^{\varepsilon\mu_{j}}}{\partial\varepsilon}\right)\bigg{|}_{\varepsilon=0}=\begin{cases}-\frac{2}{\pi m_{i}}\left(\mathscr{E}_{i},M_{j}\right)&\text{for}\quad i=1,\ldots,n_{e},\\ -\frac{2}{\pi}\left(\mathscr{E}_{i},M_{j}\right)&\text{for}\quad i=n_{e}+1,\ldots,n.\end{cases}\] Using Lemma 5.1, we have \[\left(\frac{\partial\log\mathsf{h}_{i}^{\varepsilon\mu_{j}}}{\partial\varepsilon}\right)\bigg{|}_{\varepsilon=0}=\begin{cases}\frac{1}{m_{i}}\partial_{w}\dot{F}^{j}(w_{i})&\text{for}\quad i=1,\ldots,n_{e},\\ \partial_{w}\dot{F}^{j}(w_{i})&\text{for}\quad i=n_{e}+1,\ldots,n.\end{cases} \tag{5.13}\] On the other hand, according to (3.63) and (3.101), one sees \[\pi\partial_{w}\dot{F}^{j}(w_{i})=-\iint_{\dot{\mathcal{D}}}M_{j}(w)\left(\frac{1}{(w-w_{i})^{2}}-\frac{1}{w(w-1)}\right)\mathrm{d}^{2}w=-2\left(\mathscr{E}_{i},M_{j}\right),\] which, upon substituting in (5.13), gives the desired result. * To prove part \((ii)\), we need to show that \[\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}S_{\boldsymbol{m}}([L_{1}^{\varepsilon\mu_{i}},\ldots,L_{g}^{\varepsilon\mu_{i}}];w_{1}^{\varepsilon\mu_{i}},\ldots,w_{n}^{\varepsilon\mu_{i}})=-2\pi c_{i}\quad\text{for}\quad i=1,\ldots,3g-3+n.\] We have \[\begin{split}\mathcal{L}_{\mu_{i}}S_{\boldsymbol{m}}&=\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}S_{\boldsymbol{m}}\left([L_{1}^{\varepsilon\mu_{i}},\ldots,L_{g}^{\varepsilon\mu_{i}}];w_{1}^{\varepsilon\mu_{i}},\ldots,w_{n}^{\varepsilon\mu_{i}}\right)\\ &=\frac{\sqrt{-1}}{2}\lim_{\epsilon\to 0}\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}I_{\epsilon}(\varepsilon)+\lim_{\epsilon\to 0}\frac{\partial\tilde{S}^{(\text{ct})\epsilon}}{\partial w_{i}},\end{split} \tag{5.14}\] where \(\tilde{S}^{(\text{ct})\epsilon}\) is given in equation (107) and \[\begin{split} I_{\epsilon}(\varepsilon)=&\iint_{F^{\varepsilon\mu_{i}}(\hat{\mathcal{D}}_{\epsilon})}|\partial_{w}\varphi^{\varepsilon\mu_{i}}|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}+\sum_{k=2}^{g}\oint_{F^{\varepsilon\mu_{i}}(C_{k})}\theta_{(L_{k}^{\varepsilon\mu_{i}})^{-1}}(\varphi^{\varepsilon\mu_{i}})\\ &+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{F^{\varepsilon\mu_{i}}(C_{j}^{\epsilon})}\varphi^{\varepsilon\mu_{i}}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\mathrm{d}w}{w-w_{j}^{\varepsilon\mu_{i}}}\right),\end{split}\] where once again, we have used the Gauss-Bonnet formula for Riemann orbisurfaces [102; 103] \[\frac{\sqrt{-1}}{2}\iint_{F^{\varepsilon\mu_{i}}(\hat{\mathcal{D}})}e^{\varphi^{\varepsilon\mu_{i}}}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=2\pi\left(2g+
\sum_{j=1}^{n_{\epsilon}}\left(1-\frac{1}{m_{j}}\right)+n_{p}-2\right)=-2\pi \chi(X),\] to conclude that \[\frac{\sqrt{-1}}{2}\mathcal{L}_{\mu_{i}}\iint_{\hat{\mathcal{D}}}e^{\varphi} \,\mathrm{d}w\wedge\mathrm{d}\bar{w}=0.\] The calculation of \(\mathcal{L}_{\mu_{i}}S_{\mathbf{m}}\) closely follows the corresponding computation in the proof of Theorem 1 in [8], where regularization at the punctures can be found in the proof of Theorem 1 in [7] and for branch points in the proof of Theorem 4.7. More explicitly, by applying the change of variable formula \(\int_{F(\hat{\mathcal{D}})}\omega=\int_{\hat{\mathcal{D}}}F^{*}(\omega)\) and noting to the commutative diagram 3.102, one finds \[I_{\epsilon}(\varepsilon) =\iint_{\hat{\mathcal{D}}_{\epsilon}(\varepsilon)}(F^{\varepsilon \mu_{i}})^{*}\Big{(}|\partial_{w}\varphi^{\varepsilon\mu_{i}}|^{2}\,\mathrm{d}w \wedge\mathrm{d}\bar{w}\,\Big{)}+\sum_{k=2}^{g}\oint_{C_{k}}(F^{\varepsilon \mu_{i}})^{*}\left(\theta_{(L_{k}^{\epsilon\mu_{i}})^{-1}}(\varphi^{ \varepsilon\mu_{i}})\right)\] \[+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{ \epsilon}(\varepsilon)}(F^{\varepsilon\mu_{i}})^{*}\left(\varphi^{\varepsilon \mu_{i}}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-w_{j}^{\varepsilon\mu_{i}}}- \frac{\mathrm{d}w}{w-w_{j}^{\varepsilon\mu_{i}}}\right)\right)\] \[=\iint_{\hat{\mathcal{D}}_{\epsilon}(\varepsilon)}|\partial_{w} \varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}}|^{2}\,\mathrm{d}F^{ \varepsilon\mu_{i}}(w)\wedge\mathrm{d}\overline{F^{\varepsilon\mu_{i}}(w)}\] \[+\sum_{k=2}^{g}\oint_{C_{k}}\left(\varphi^{\varepsilon\mu_{i}} \circ F^{\varepsilon\mu_{i}}-\frac{1}{2}\log|(L_{k}^{\varepsilon\mu_{i}})^{ \prime}\circ F^{\varepsilon\mu_{i}}|^{2}-\log|l_{k}^{\varepsilon\mu_{i}}|^{2}\right)\] \[\times\left(\frac{(L_{k}^{\varepsilon\mu_{i}})^{\prime\prime} \circ F^{\varepsilon\mu_{i}}}{(L_{k}^{\varepsilon\mu_{i}})^{\prime}\circ F^{ \varepsilon\mu_{i}}}\,\mathrm{d}F^{\varepsilon\mu_{i}}(w)-\frac{\overline{( L_{k}^{\varepsilon\mu_{i}})^{\prime\prime}\circ F^{\varepsilon\mu_{i}}}}{(L_{k}^{ \varepsilon\mu_{i}})^{\prime}\circ F^{\varepsilon\mu_{i}}}\,\mathrm{d} \overline{F^{\varepsilon\mu_{i}}(w)}\right)\] \[+\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{ \epsilon}(\varepsilon)}(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_ {i}})\left(\frac{\mathrm{d}\overline{F^{\varepsilon\mu_{i}}(w)}}{F^{ \varepsilon\mu_{i}}(w)-w_{j}^{\varepsilon\mu_{i}}}-\frac{\mathrm{d}F^{ \varepsilon\mu_{i}}(w)}{F^{\varepsilon\mu_{i}}(w)-w_{j}^{\varepsilon\mu_{i}}} \right).\] which by noting that \(\mathrm{d}F^{\varepsilon\mu_{i}}(w)=\partial_{w}F^{\varepsilon\mu_{i}}(dw+ \varepsilon M_{i}\,d\bar{w})\) and \(\mathrm{d}\overline{F^{\varepsilon\mu_{i}}(w)}=\overline{\partial_{w}F^{ \varepsilon\mu_{i}}}(\overline{\varepsilon}\overline{M_{i}}dw+d\bar{w})\), it turns to \[I_{\epsilon}(\varepsilon)=\iint_{\hat{\mathcal{D}}_{\epsilon}( \varepsilon)}|\partial_{w}\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon \mu_{i}}|^{2}\,|\partial_{w}F^{\varepsilon\mu_{i}}|^{2}(1-|\varepsilon M_{i}|^ {2})\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\] \[-2\sum_{k=2}^{g}\oint_{C_{k}}\varphi^{\varepsilon\mu_{i}}\circ F ^{\varepsilon\mu_{i}}\,\frac{\overline{(L_{k}^{\varepsilon\mu_{i}})^{\prime \prime}\circ F^{\varepsilon\mu_{i}}}}{(L_{k}^{\varepsilon\mu_{i}})^{\prime} \circ F^{\varepsilon\mu_{i}}}\,\overline{\partial_{w}F^{\varepsilon\mu_{i}}} (\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w})\] \[-\sum_{k=2}^{g}\oint_{C_{k}}\partial_{w}\varphi^{\varepsilon\mu_ 
{i}}\circ F^{\varepsilon\mu_{i}}\,\log|(L_{k}^{\varepsilon\mu_{i}})^{\prime}\circ F^{\varepsilon\mu_{i}}|^{2}\,\partial_{w}F^{\varepsilon\mu_{i}}(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w})\] \[-\sum_{k=2}^{g}\oint_{C_{k}}\partial_{\bar{w}}\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}}\,\log|(L_{k}^{\varepsilon\mu_{i}})^{\prime}\circ F^{\varepsilon\mu_{i}}|^{2}\,\overline{\partial_{w}F^{\varepsilon\mu_{i}}}(\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w})\] \[+\sum_{k=2}^{g}\oint_{C_{k}}\log|(L_{k}^{\varepsilon\mu_{i}})^{\prime}\circ F^{\varepsilon\mu_{i}}|^{2}\,\frac{\overline{(L_{k}^{\varepsilon\mu_{i}})^{\prime\prime}\circ F^{\varepsilon\mu_{i}}}}{\overline{(L_{k}^{\varepsilon\mu_{i}})^{\prime}\circ F^{\varepsilon\mu_{i}}}}\,\overline{\partial_{w}F^{\varepsilon\mu_{i}}}(\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w})\] \[+8\pi\sqrt{-1}\sum_{k=2}^{g}\log|l_{k}^{\varepsilon\mu_{i}}|^{2}\] \[+\sum_{j=1}^{n_{e}}\!\!\left(1-\frac{1}{m_{j}}\right)\!\!\oint_{C_{j}^{\epsilon}(\varepsilon)}\!\!\left(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}}\right)\!\!\left(\frac{\overline{\partial_{w}F^{\varepsilon\mu_{i}}}(\bar{\varepsilon}\overline{M_{i}}\,\mathrm{d}w+\mathrm{d}\bar{w})}{\overline{w^{\varepsilon\mu_{i}}}-\overline{w_{j}^{\varepsilon\mu_{i}}}}-\frac{\partial_{w}F^{\varepsilon\mu_{i}}(\mathrm{d}w+\varepsilon M_{i}\,\mathrm{d}\bar{w})}{w^{\varepsilon\mu_{i}}-w_{j}^{\varepsilon\mu_{i}}}\right)\] where the \(C_{k}\) are the usual components of \(\partial\hat{\mathcal{D}}\) and \[\begin{cases}\hat{\mathcal{D}}_{\epsilon}(\varepsilon)=\hat{\mathcal{D}}\backslash\bigcup_{i=1}^{n}D_{i}^{\epsilon}(\varepsilon),\\ D_{i}^{\epsilon}(\varepsilon)=\left\{w\in\mathcal{D}\,\Big{|}\,|w^{\varepsilon\mu_{i}}-w_{i}^{\varepsilon\mu_{i}}|<\epsilon\right\},\\ C_{j}^{\epsilon}(\varepsilon)=\partial D_{j}^{\epsilon}(\varepsilon)=\left\{w\in\mathcal{D}\,\Big{|}\,|w^{\varepsilon\mu_{i}}-w_{j}^{\varepsilon\mu_{i}}|=\epsilon\right\}.\end{cases}\] Just as in the proof of Theorem 4.7, to calculate \(\dot{I}_{\epsilon}^{i}\), we must differentiate both the integrand and the integration domain \(\dot{\hat{\mathcal{D}}}_{\epsilon}(\varepsilon)\). Using Eq.(4.16), the second contribution gives \[\frac{\partial}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}\iint_{\hat{\mathcal{D}}_{\epsilon}(\varepsilon)}|\partial_{w}\varphi|^{2}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=-\sum_{j=1}^{n}\oint_{\partial D_{j}^{\epsilon}(\varepsilon)}|\partial_{w}\varphi|^{2}\left(\dot{F}^{i}(w)-\dot{F}^{i}(w_{j})\right)\mathrm{d}\bar{w}\,, \tag{5.15}\] where the boundaries \(\partial D_{j}^{\epsilon}(\varepsilon)\), oriented as boundaries of \(D_{j}^{\epsilon}(\varepsilon)\), carry an orientation opposite to that of \(\partial\hat{\mathcal{D}}_{\epsilon}\). Moreover, the contribution from the variation of the integration contour \(C_{j}^{\epsilon}\) vanishes, since \(\partial C_{j}^{\epsilon}=\emptyset\). The differentiation under the integral sign repeats the calculations done in the proof of [8, Theorem 1] almost word-for-word. The only change is that the integration domain has now changed to \(\mathcal{D}_{\epsilon}\).
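The structure of (5.15) is an instance of a general transport identity; as a sketch (our own rewriting, for a family of domains \(D(\varepsilon)=F^{\varepsilon\mu}(D)\) with \(F^{\varepsilon\mu}\) holomorphic in \(\varepsilon\) and \(\dot{F}=\partial_{\varepsilon}F^{\varepsilon\mu}|_{\varepsilon=0}\)), \[\left.\frac{\partial}{\partial\varepsilon}\right|_{\varepsilon=0}\iint_{F^{\varepsilon\mu}(D)}f\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=\iint_{D}\partial_{w}\big{(}f\dot{F}\big{)}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=\oint_{\partial D}f\dot{F}\,\mathrm{d}\bar{w}\,,\] by the change of variables \(w\mapsto F^{\varepsilon\mu}(w)\) and the Stokes theorem. In (5.15), the velocity is measured relative to the moving center \(w_{j}^{\varepsilon\mu_{i}}\), which replaces \(\dot{F}^{i}(w)\) by \(\dot{F}^{i}(w)-\dot{F}^{i}(w_{j})\), and the overall sign reflects the orientation of \(\partial D_{j}^{\epsilon}(\varepsilon)\) noted above.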
Accordingly, \[\dot{I}_{\epsilon}^{i} =\iint_{\hat{\mathcal{D}}_{\epsilon}}\left[(\partial_{w}\dot{ \varphi}^{i}+\partial_{w}^{2}\varphi\dot{F}^{i})\partial_{\bar{w}}\varphi+( \partial_{\bar{w}}\dot{\varphi}^{i}+\partial_{w}\partial_{\bar{w}}\varphi\dot {F}^{i})\partial_{w}\varphi+|\partial_{w}\varphi|^{2}\partial_{w}\dot{F}^{i} \right]\mathrm{d}w\wedge\mathrm{d}\bar{w}\] \[-\sum_{j=1}^{n}\oint_{\partial D_{j}^{\epsilon}}|\partial_{w} \varphi|^{2}\left(\dot{F}^{i}(w)-\dot{F}^{i}(w_{j})\right)\mathrm{d}\bar{w}-2 \sum_{k=2}^{g}\oint_{C_{k}}(\dot{\varphi}^{i}+\partial_{w}\varphi\dot{F}^{i}) \frac{\overline{L_{k}^{\prime\prime}}}{\overline{L_{k}^{\prime}}}\,\mathrm{d} \bar{w}\] \[-\sum_{k=2}^{g}\oint_{C_{k}}\left[(\partial_{w}\dot{\varphi}^{i} +\partial_{w}^{2}\varphi\dot{F}^{i})\log|L_{k}^{\prime}|^{2}\,\mathrm{d}w+ \partial_{w}\varphi\frac{(\dot{L}_{k}^{i})^{\prime}+L_{k}^{\prime\prime}\dot{ F}^{i}}{L_{k}^{\prime}}\,\mathrm{d}w\right.\] \[\left.\hskip 56.905512pt+\partial_{w}\varphi\log|L_{k}^{\prime}|^ {2}\partial_{w}\dot{F}^{i}\,\mathrm{d}w+\partial_{w}\varphi\log|L_{k}^{ \prime}|^{2}M_{i}\,\mathrm{d}\bar{w}\right]\] \[-\sum_{k=2}^{g}\oint_{C_{k}}\left[(\partial_{\bar{w}}\dot{ \varphi}^{i}+\partial_{w}\partial_{\bar{w}}\varphi\dot{F}^{i})\log|L_{k}^{ \prime}|^{2}+\partial_{w}\varphi\frac{(\dot{L}_{k}^{i})^{\prime}+L_{k}^{\prime \prime}\dot{F}^{i}}{L_{k}^{\prime}}\right]\mathrm{d}\bar{w}\] \[+\sum_{k=2}^{g}\oint_{C_{k}}\frac{(\dot{L}_{k}^{i})^{\prime}+L_{k} ^{\prime\prime}\dot{F}^{i}}{L_{k}^{\prime}}\frac{\overline{L_{k}^{\prime\prime }}}{\overline{L_{k}^{\prime}}}\,\mathrm{d}\bar{w}+8\pi\sqrt{-1}\sum_{k=2}^{g} \frac{\dot{l}_{k}^{i}}{l_{k}}\] \[-\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C_{j}^{ \epsilon}}\partial_{w}\dot{F}^{i}\left(\frac{d\bar{w}}{\bar{w}-\bar{w}_{j}}- \frac{dw}{w-w_{j}}\right)+\mathcal{O}\left(1\right),\] where we have used Eq.(4.18) and the fact that \(F^{\varepsilon\mu_{i}}\) and \(L_{k}^{\varepsilon\mu_{i}}\) are holomorphic in the variable \(\varepsilon\in\mathbb{C}\). 
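For readability, recall the notation implicit above (a sketch of the convention, which follows the proof of Theorem 4.7): for any quantity \(G^{\varepsilon\mu_{i}}\) depending holomorphically on \(\varepsilon\), the dot denotes the holomorphic \(\varepsilon\)-derivative at the origin, taken at a fixed argument,

\[\dot{G}^{i}:=\frac{\partial}{\partial\varepsilon}\bigg|_{\varepsilon=0}G^{\varepsilon\mu_{i}},\qquad\text{so that, e.g.,}\qquad\frac{\partial}{\partial\varepsilon}\bigg|_{\varepsilon=0}\left(\varphi^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}}\right)=\dot{\varphi}^{i}+\partial_{w}\varphi\,\dot{F}^{i},\]

which is the chain-rule step applied term by term in the boundary integrals above.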
As in the proof of Theorem 4.7, we can use Lemma 3.4 and the equality \(\partial_{\bar{w}}\dot{F}^{i}=M_{i}\) to rewrite the above equation as \[\dot{I}^{i}_{\epsilon} =\iint_{\hat{\mathcal{D}}_{\epsilon}}\left[(-\partial_{w}\varphi\partial_{w}\dot{F}^{i}-\partial_{w}^{2}\dot{F}^{i})\partial_{\bar{w}}\varphi+(-\partial_{w}\varphi\partial_{\bar{w}}\dot{F}^{i}-\partial_{w}\partial_{\bar{w}}\dot{F}^{i})\partial_{w}\varphi+|\partial_{w}\varphi|^{2}\partial_{w}\dot{F}^{i}\right]\mathrm{d}w\wedge\mathrm{d}\bar{w}\] \[-\sum_{j=1}^{n}\oint_{\partial D_{j}^{\epsilon}}|\partial_{w}\varphi|^{2}\left(\dot{F}^{i}(w)-\dot{F}^{i}(w_{j})\right)\mathrm{d}\bar{w}+2\sum_{k=2}^{g}\oint_{C_{k}}\partial_{w}\dot{F}^{i}\frac{\overline{L^{\prime\prime}_{k}}}{\overline{L^{\prime}_{k}}}\,\mathrm{d}\bar{w}\] \[-\sum_{k=2}^{g}\oint_{C_{k}}\left[(-\partial_{w}\varphi\partial_{w}\dot{F}^{i}-\partial_{w}^{2}\dot{F}^{i})\log|L^{\prime}_{k}|^{2}\,\mathrm{d}w+\partial_{w}\varphi\frac{(\dot{L}^{i}_{k})^{\prime}+L^{\prime\prime}_{k}\dot{F}^{i}}{L^{\prime}_{k}}\,\mathrm{d}w\right.\] \[\left.\qquad+\partial_{w}\varphi\log|L^{\prime}_{k}|^{2}\partial_{w}\dot{F}^{i}\,\mathrm{d}w+\partial_{w}\partial_{\bar{w}}\dot{F}^{i}\log|L^{\prime}_{k}|^{2}\,\mathrm{d}\bar{w}\,\right]\] \[-\sum_{k=2}^{g}\oint_{C_{k}}\left(\partial_{w}\varphi-\frac{\overline{L^{\prime\prime}_{k}}}{\overline{L^{\prime}_{k}}}\right)\frac{(\dot{L}^{i}_{k})^{\prime}+L^{\prime\prime}_{k}\dot{F}^{i}}{L^{\prime}_{k}}\,\mathrm{d}\bar{w}+8\pi\sqrt{-1}\sum_{k=2}^{g}\frac{\dot{l}^{i}_{k}}{l_{k}}\] \[-\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C^{\epsilon}_{j}}\partial_{w}\dot{F}^{i}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)+\mathcal{O}\left(1\right).\] Let us first compute the integral over \(\hat{\mathcal{D}}_{\epsilon}\): \[\iint_{\hat{\mathcal{D}}_{\epsilon}}\left[(-\partial_{w}\varphi\partial_{w}\dot{F}^{i}-\partial_{w}^{2}\dot{F}^{i})\partial_{\bar{w}}\varphi+(-\partial_{w}\varphi\partial_{\bar{w}}\dot{F}^{i}-\partial_{w}\partial_{\bar{w}}\dot{F}^{i})\partial_{w}\varphi+|\partial_{w}\varphi|^{2}\partial_{w}\dot{F}^{i}\right]\mathrm{d}w\wedge\mathrm{d}\bar{w}\] \[=\iint_{\hat{\mathcal{D}}_{\epsilon}}\left[\big(2\partial_{w}^{2}\varphi-(\partial_{w}\varphi)^{2}\big)\partial_{\bar{w}}\dot{F}^{i}-2\frac{\partial}{\partial w}\Big(\partial_{w}\varphi\partial_{\bar{w}}\dot{F}^{i}\Big)+\frac{\partial}{\partial\bar{w}}\Big(\partial_{w}\varphi\partial_{w}\dot{F}^{i}\Big)-\frac{\partial}{\partial w}\Big(\partial_{\bar{w}}\varphi\partial_{w}\dot{F}^{i}\Big)\right]\mathrm{d}w\wedge\mathrm{d}\bar{w}\] \[=2\iint_{\hat{\mathcal{D}}_{\epsilon}}T_{\varphi}M_{i}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}\underbrace{-2\int_{\partial\hat{\mathcal{D}}_{\epsilon}}\partial_{w}\varphi\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}\bar{w}}_{I_{1}}\underbrace{-\int_{\partial\hat{\mathcal{D}}_{\epsilon}}\partial_{w}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}w}_{I_{2}}\underbrace{-\int_{\partial\hat{\mathcal{D}}_{\epsilon}}\partial_{\bar{w}}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}}_{I_{3}},\] where in the last line we have used the definition of the energy-momentum tensor \(T_{\varphi}=\mathrm{Sch}\left(J^{-1};w\right)=\partial_{w}^{2}\varphi-\frac{1}{2}(\partial_{w}\varphi)^{2}\). In order to compute the integrals \(I_{1}\), \(I_{2}\), and \(I_{3}\), we need some useful identities.
From the equality \(F^{\varepsilon\mu_{i}}\circ L_{k}=L_{k}^{\varepsilon\mu_{i}}\circ F^{\varepsilon\mu_{i}}\) for \(k=1,\ldots,g\), one can see that \[\left\{\begin{aligned}&\dot{F}^{i}\circ L_{k}=\dot{F}^{i}L^{\prime}_{k}+\dot{L}^{i}_{k},\\ &\partial_{w}\dot{F}^{i}\circ L_{k}\,L^{\prime}_{k}=\partial_{w}\dot{F}^{i}L^{\prime}_{k}+\dot{F}^{i}L^{\prime\prime}_{k}+(\dot{L}^{i}_{k})^{\prime},\\ &\partial_{\bar{w}}\dot{F}^{i}\circ L_{k}\,\overline{L^{\prime}_{k}}=\partial_{\bar{w}}\dot{F}^{i}L^{\prime}_{k},\end{aligned}\right.\] where the second and third identities follow from differentiating the first with respect to \(w\) and \(\bar{w}\), respectively. Moreover, from Eq.(3.105) one has \[\partial_{w}\varphi\circ L_{k}\,L^{\prime}_{k}+\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}=\partial_{w}\varphi,\qquad\partial_{\bar{w}}\varphi\circ L_{k}\,\overline{L^{\prime}_{k}}+\frac{\overline{L^{\prime\prime}_{k}}}{\overline{L^{\prime}_{k}}}=\partial_{\bar{w}}\varphi.\] Now, recalling that \(\partial\hat{\mathcal{D}}_{\epsilon}=\left[\bigcup_{k=1}^{g}\left(C_{k}\cup C_{k}^{\prime}\right)\right]\cup\left[\bigcup_{i=1}^{n}C_{i}^{\epsilon}\right]\), where \(C_{k}^{\prime}=-L_{k}(C_{k})\), and using some of the above identities, we get \[I_{1} =-2\int_{\partial\hat{\mathcal{D}}_{\epsilon}}\partial_{w}\varphi\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}\bar{w}\] \[=-2\sum_{k=2}^{g}\oint_{C_{k}}\partial_{w}\varphi\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}\bar{w}-2\sum_{j=1}^{n_{e}}\oint_{C_{j}^{\epsilon}}\partial_{w}\varphi\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}\bar{w}-2\sum_{j=1}^{n_{p}}\oint_{C_{n_{e}+j}^{\epsilon}}\partial_{w}\varphi\partial_{\bar{w}}\dot{F}^{i}\,\mathrm{d}\bar{w}\] \[=-2\sum_{k=2}^{g}\oint_{C_{k}}\partial_{\bar{w}}\dot{F}^{i}\frac{L_{k}^{\prime\prime}}{L_{k}^{\prime}}\,\mathrm{d}\bar{w}-2\sum_{j=1}^{n_{e}}\oint_{C_{j}^{\epsilon}}\partial_{w}\varphi M_{i}\,\mathrm{d}\bar{w}-2\sum_{j=1}^{n_{p}}\oint_{C_{n_{e}+j}^{\epsilon}}\partial_{w}\varphi M_{i}\,\mathrm{d}\bar{w}\,,\] which can be simplified further by using the asymptotic expansions (C.30): \[I_{1} =2\sum_{k=2}^{g}\oint_{C_{k}}\left[\left(\frac{L_{k}^{\prime\prime}}{L_{k}^{\prime}}\right)^{\prime}\dot{F}^{i}+\frac{L_{k}^{\prime\prime}}{L_{k}^{\prime}}\partial_{w}\dot{F}^{i}\right]\mathrm{d}w\] \[-2\sum_{j=1}^{n_{e}}\oint_{C_{j}^{\epsilon}}\left(-\frac{1-\frac{1}{m_{j}}}{w-w_{j}}+\frac{c_{j}}{1-\frac{1}{m_{j}}}+\cdots\right)\left(\frac{\bar{q}_{1}^{(j)}}{4\bar{J}_{1}^{(j)}}\cdot\left|w-w_{j}\right|^{1-\frac{2}{m_{j}}}+\cdots\right)\mathrm{d}\bar{w}\] \[-2\sum_{j=1}^{n_{p}-1}\oint_{C_{n_{e}+j}^{\epsilon}}\left(-\frac{1}{w-w_{n_{e}+j}}\left(1+\frac{1}{\log\left|\frac{w-w_{n_{e}+j}}{J_{1}^{(n_{e}+j)}}\right|}\right)+\cdots\right)\times\] \[\qquad\qquad\times\left(-\frac{|\delta_{n_{e}+j}|\bar{q}_{1}^{(n_{e}+j)}}{4\pi^{2}\bar{J}_{1}^{(n_{e}+j)}}\cdot\left|w-w_{n_{e}+j}\right|\log^{2}\left|w-w_{n_{e}+j}\right|+\cdots\right)\mathrm{d}\bar{w}\] \[-2\oint_{C_{n}^{\epsilon}}\left(-\frac{1}{w}\left(1+\frac{1}{\log\left|\frac{w}{J_{-1}^{(n)}}\right|}\right)-\frac{c_{n}}{w^{2}}+\cdots\right)\left(-\frac{|\delta_{n}|^{2}\bar{q}_{1}^{(n)}\bar{J}_{-1}^{(n)}}{4\pi^{2}}\cdot\frac{\log^{2}\left|w\right|}{\left|w\right|}+\cdots\right)\mathrm{d}\bar{w}\] \[=2\sum_{k=2}^{g}\oint_{C_{k}}\left[\left(\frac{L_{k}^{\prime\prime}}{L_{k}^{\prime}}\right)^{\prime}\dot{F}^{i}+\frac{L_{k}^{\prime\prime}}{L_{k}^{\prime}}\partial_{w}\dot{F}^{i}\right]\mathrm{d}w+\mathcal{O}\left(1\right).\] Moreover,
\[I_{2} =-\int_{\partial\hat{\mathcal{D}}_{\epsilon}}\partial_{w}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}w=-\sum_{k=2}^{g}\oint_{C_{k}}\partial_{w}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}w-\sum_{j=1}^{n}\oint_{C_{j}^{\epsilon}}\partial_{w}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}w\] \[=-\sum_{k=2}^{g}\oint_{C_{k}}\left[\frac{L_{k}^{\prime\prime}}{L_{k}^{\prime}}\partial_{w}\dot{F}^{i}-\left(\partial_{w}\varphi-\frac{L_{k}^{\prime\prime}}{L_{k}^{\prime}}\right)\frac{(\dot{L}_{k}^{i})^{\prime}+L_{k}^{\prime\prime}\dot{F}^{i}}{L_{k}^{\prime}}\right]\mathrm{d}w-\sum_{j=1}^{n}\oint_{C_{j}^{\epsilon}}\partial_{w}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}w\,;\] Furthermore, \[I_{3} =-\int_{\partial\hat{\mathcal{D}}_{\epsilon}}\partial_{\bar{w}}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}=-\sum_{k=2}^{g}\oint_{C_{k}}\partial_{\bar{w}}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}-\sum_{j=1}^{n}\oint_{C_{j}^{\epsilon}}\partial_{\bar{w}}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}\] \[=-\sum_{k=2}^{g}\oint_{C_{k}}\left[\frac{\overline{L_{k}^{\prime\prime}}}{\overline{L_{k}^{\prime}}}\partial_{w}\dot{F}^{i}-\left(\partial_{\bar{w}}\varphi-\frac{\overline{L_{k}^{\prime\prime}}}{\overline{L_{k}^{\prime}}}\right)\frac{(\dot{L}_{k}^{i})^{\prime}+L_{k}^{\prime\prime}\dot{F}^{i}}{L_{k}^{\prime}}\right]\mathrm{d}\bar{w}-\sum_{j=1}^{n}\oint_{C_{j}^{\epsilon}}\partial_{\bar{w}}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}\,;\] Using the fact that \[-\sum_{j=1}^{n}\oint_{C^{\epsilon}_{j}}\partial_{w}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}w-\sum_{j=1}^{n}\oint_{C^{\epsilon}_{j}}\partial_{\bar{w}}\varphi\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}=-\sum_{j=1}^{n}\oint_{C^{\epsilon}_{j}}\partial_{w}\dot{F}^{i}\,\mathrm{d}\varphi=\mathcal{O}\left(1\right)\quad\text{as}\quad\epsilon\to 0,\] we get \[I_{1}+I_{2}+I_{3} =2\sum_{k=2}^{g}\oint_{C_{k}}\left[\left(\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\right)^{\prime}\dot{F}^{i}+\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\partial_{w}\dot{F}^{i}\right]\mathrm{d}w\] \[-\sum_{k=2}^{g}\oint_{C_{k}}\left[\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\partial_{w}\dot{F}^{i}-\left(\partial_{w}\varphi-\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\right)\frac{(\dot{L}^{i}_{k})^{\prime}+L^{\prime\prime}_{k}\dot{F}^{i}}{L^{\prime}_{k}}\right]\mathrm{d}w\] \[-\sum_{k=2}^{g}\oint_{C_{k}}\left[\frac{\overline{L^{\prime\prime}_{k}}}{\overline{L^{\prime}_{k}}}\partial_{w}\dot{F}^{i}-\left(\partial_{\bar{w}}\varphi-\frac{\overline{L^{\prime\prime}_{k}}}{\overline{L^{\prime}_{k}}}\right)\frac{(\dot{L}^{i}_{k})^{\prime}+L^{\prime\prime}_{k}\dot{F}^{i}}{L^{\prime}_{k}}\right]\mathrm{d}\bar{w}+\mathcal{O}\left(1\right).\] Substituting the above results in Eq.(5.16), we get \[\dot{I}^{i}_{\epsilon} =2\iint_{\hat{\mathcal{D}}_{\epsilon}}T_{\varphi}M_{i}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\sum_{j=1}^{n}\oint_{C^{\epsilon}_{j}}|\partial_{w}\varphi|^{2}\left(\dot{F}^{i}(w)-\dot{F}^{i}(w_{j})\right)\mathrm{d}\bar{w}\] \[+\sum_{k=2}^{g}\oint_{C_{k}}\left[\partial_{w}\dot{F}^{i}\left(\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\,\mathrm{d}w+\frac{\overline{L^{\prime\prime}_{k}}}{\overline{L^{\prime}_{k}}}\,\mathrm{d}\bar{w}\right)+\log|L^{\prime}_{k}|^{2}\left(\partial_{w}^{2}\dot{F}^{i}\,\mathrm{d}w+\partial_{\bar{w}}\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}\right)\right]\] \[+\sum_{k=2}^{g}\oint_{C_{k}}\dot{F}^{i}\left[2\left(\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\right)^{\prime}-\left(\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\right)^{2}\right]\mathrm{d}w\] \[+\sum_{k=2}^{g}\oint_{C_{k}}\frac{(\dot{L}^{i}_{k})^{\prime}+L^{\prime\prime}_{k}}{(L^{\prime}_{k})^{2}}\,\mathrm{d}w+8\pi\sqrt{-1}\sum_{k=2}^{g}\frac{\dot{l}^{i}_{k}}{l_{k}}\] \[-\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C^{\epsilon}_{j}}\partial_{w}\dot{F}^{i}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)+\mathcal{O}\left(1\right). \tag{5.17}\]
Using the fact that the second line in (5.17) can be written as \[\sum_{k=2}^{g}\oint_{C_{k}}\left[\partial_{w}\dot{F}^{i}\left(\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\,\mathrm{d}w+\frac{\overline{L^{\prime\prime}_{k}}}{\overline{L^{\prime}_{k}}}\,\mathrm{d}\bar{w}\right)+\log|L^{\prime}_{k}|^{2}\left(\partial_{w}^{2}\dot{F}^{i}\,\mathrm{d}w+\partial_{\bar{w}}\partial_{w}\dot{F}^{i}\,\mathrm{d}\bar{w}\right)\right]=\sum_{k=2}^{g}\oint_{C_{k}}\mathrm{d}\left(\partial_{w}\dot{F}^{i}\log|L^{\prime}_{k}|^{2}\right),\] one can see that this sum vanishes; the third line in (5.17) is also equal to \(0\), since \[\sum_{k=2}^{g}\oint_{C_{k}}\dot{F}^{i}\left[2\left(\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\right)^{\prime}-\left(\frac{L^{\prime\prime}_{k}}{L^{\prime}_{k}}\right)^{2}\right]\mathrm{d}w=2\sum_{k=2}^{g}\oint_{C_{k}}\dot{F}^{i}\,\mathrm{Sch}\left(L_{k};w\right)\mathrm{d}w=0.\] As for the fourth line of Eq.(5.17), since the contour \(C_{k}\) encircles a zero of the function \(L^{\prime}_{k}\), i.e. the point \(L^{-1}_{k}(\infty)\), we get from the Cauchy formula that \[\oint_{C_{k}}\frac{(\dot{L}^{i}_{k})^{\prime}+L^{\prime\prime}_{k}}{(L^{\prime}_{k})^{2}}\,\mathrm{d}w=-8\pi\sqrt{-1}\frac{\dot{l}^{i}_{k}}{l_{k}}\quad\text{for}\quad k=2,\dots,g;\] therefore, the fourth line of Eq.(5.17) vanishes as well. Putting everything together, one has \[\begin{split}\dot{I}^{i}_{\epsilon}=2\iint_{\hat{\mathcal{D}}_{\epsilon}}T_{\varphi}M_{i}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}-\sum_{j=1}^{n}\oint_{C^{\epsilon}_{j}}|\partial_{w}\varphi|^{2}\left(\dot{F}^{i}(w)-\dot{F}^{i}(w_{j})\right)\mathrm{d}\bar{w}\\ -\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\oint_{C^{\epsilon}_{j}}\partial_{w}\dot{F}^{i}\left(\frac{\mathrm{d}\bar{w}}{\bar{w}-\bar{w}_{j}}-\frac{\mathrm{d}w}{w-w_{j}}\right)+\mathcal{O}\left(1\right).\end{split} \tag{5.18}\] Using Eq.(5.14), one finds the limit \[\lim_{\epsilon\to 0}\iint_{\hat{\mathcal{D}}_{\epsilon}}T_{\varphi}M_{i}\,\mathrm{d}w\wedge\mathrm{d}\bar{w}=-2\sqrt{-1}\mathrm{b}_{i},\] and \[\begin{split}&\lim_{\epsilon\to 0}\sum_{j=1}^{n}\oint_{C^{\epsilon}_{j}}|\partial_{w}\varphi|^{2}\left(\dot{F}^{i}(w)-\dot{F}^{i}(w_{j})\right)\mathrm{d}\bar{w}\\ &\stackrel{{\text{(C.1)}}}{{=}}\lim_{\epsilon\to 0}\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)^{2}\oint_{C^{\epsilon}_{j}}\frac{\dot{F}^{i}(w)-\dot{F}^{i}(w_{j})}{|w-w_{j}|^{2}}\,\mathrm{d}\bar{w}\\ &+\lim_{\epsilon\to 0}\sum_{j=1}^{n_{p}-1}\oint_{C^{\epsilon}_{n_{e}+j}}\left(\frac{\dot{F}^{i}(w)-\dot{F}^{i}(w_{n_{e}+j})}{|w-w_{n_{e}+j}|^{2}}+\frac{2\big(\dot{F}^{i}(w)-\dot{F}^{i}(w_{n_{e}+j})\big)}{|w-w_{n_{e}+j}|^{2}\log|w-w_{n_{e}+j}|}\right)\mathrm{d}\bar{w}\\ &+\lim_{\epsilon\to 0}\oint_{C^{\epsilon}_{n}}\left(\frac{\dot{F}^{i}(w)-\dot{F}^{i}(\infty)}{|w|^{2}}+\frac{2\big(\dot{F}^{i}(w)-\dot{F}^{i}(\infty)\big)}{|w|^{2}\log|w|}\right)\mathrm{d}\bar{w}\\ &=2\pi\sqrt{-1}\sum_{j=1}^{n}\left(1-\frac{1}{m_{j}}\right)^{2}\partial_{w}\dot{F}^{i}(w_{j})\quad\text{with}\quad m_{j}=\infty\quad\text{for}\quad j=n_{e}+1,\ldots,n.\end{split}\] Accordingly, Eq.(5.18) simplifies to \[\dot{I}^{i}_{\epsilon}=-4\sqrt{-1}\mathrm{b}_{i}-2\pi\sqrt{-1}\sum_{j=1}^{n}\left(1-\frac{1}{m_{j}}\right)^{2}\partial_{w}\dot{F}^{i}(w_{j})-4\pi\sqrt{-1}\sum_{j=1}^{n_{e}}\left(1-\frac{1}{m_{j}}\right)\partial_{w}\dot{F}^{i}(w_{j}). \tag{5.19}\]
Finally, by putting everything together from (5.14), (5.19) and (4.23), we get \[\mathcal{L}_{\mu_{i}}S_{\mathbf{m}}=2\mathrm{b}_{i}+\pi\sum_{j=1}^{n}h_{j}\partial_{w}\dot{F}^{i}(w_{j})=-2\pi c_{i}+2\sum_{j=1}^{n}h_{j}\left(\mathcal{E}_{j},M_{i}\right)+\pi\sum_{j=1}^{n}h_{j}\partial_{w}\dot{F}^{i}(w_{j})=-2\pi c_{i}.\]

* The proof of part \((iii)\) follows readily from those of parts \((i)\) and \((ii)\).

**Theorem 2**.: _The following statements are also true:_

1. _The first Chern form of the Hermitian holomorphic line bundle_ \((\mathscr{L}_{i},\mathfrak{h}_{i}^{m_{i}})\) _is given by_ \[\mathsf{c}_{1}\left(\mathscr{L}_{i},\mathfrak{h}_{i}^{m_{i}}\right)=\frac{m_{i}}{2\pi}\omega_{TZ,i}^{ell},\qquad i=1,\ldots,n_{e}.\]
2. _The first Chern form of the Hermitian holomorphic_ \(\mathbb{Q}\)_-line bundle_ \((\mathscr{L},\exp[S_{\boldsymbol{m}}/\pi])\) _is given by_ \[\mathsf{c}_{1}\left(\mathscr{L},\exp[S_{\boldsymbol{m}}/\pi]\right)=\frac{1}{\pi^{2}}\omega_{\text{WP}}.\]
3. _The function_ \(\mathscr{S}_{\boldsymbol{m}}=S_{\boldsymbol{m}}-\pi\log\mathsf{H}\) _satisfies_ \[-\bar{\partial}\partial\mathscr{S}_{\boldsymbol{m}}=2\sqrt{-1}\left(\omega_{\text{WP}}-\frac{4\pi^{2}}{3}\omega_{TZ}^{\text{cusp}}-\frac{\pi}{2}\sum_{j=1}^{n_{e}}m_{j}h_{j}\,\omega_{TZ,j}^{ell}\right).\]

Proof.: Since \[\mathsf{c}_{1}\left(\mathscr{L}_{i},\mathfrak{h}_{i}^{m_{i}}\right)=\frac{\sqrt{-1}}{2\pi}\bar{\partial}\partial\log\mathfrak{h}_{i}^{m_{i}},\] the proof of part _(i)_ is exactly the same as that of Lemma 5.3. The proof of part _(ii)_ can be obtained from Theorem 1 by following an analysis similar to that of Theorem 4.24. The last part follows immediately from _(i)_ and _(ii)_.

## 6 Discussion and some Future Directions

This paper explores the semi-classical limit of Liouville field theory on Riemann orbisurfaces of finite conformal type \((g>1,n)\). This is accomplished through the use of Schottky global coordinates. This study can be seen as an extension of previous work by Park, Takhtajan, and Teo [1], in which they examined the classical Liouville action on the Schottky space of compact Riemann surfaces with only punctures. In this paper, we include the contributions of orbifold points to the Liouville action. In Section 4.2, we noted that the Liouville action is independent of the specific choice of fundamental domain. Corollary 4.1 already demonstrates that \(\mathscr{S}_{\boldsymbol{m}}=S_{\boldsymbol{m}}-\pi\log\mathsf{H}\) does not rely on the choice of a fundamental domain for the Schottky group \(\Sigma\); however, this can also be verified using holography. The holographic principle for compact Riemann surfaces \(X\) posits that the classical Liouville action on such surfaces is equal to the renormalized volume of a hyperbolic 3-manifold \(M\) having that surface as its conformal boundary. The works [53, 54, 104] extended this principle to punctured Riemann surfaces (in both the Schottky and quasi-Fuchsian cases), and [44, 56] extended it to orbifold Riemann surfaces (in the quasi-Fuchsian case only). However, this principle is not yet proven for 3-dimensional handlebody orbifolds with Riemann orbisurfaces as their conformal boundary. Thus, in an upcoming paper [57], we aim to demonstrate that the Liouville action \(\mathscr{S}_{\boldsymbol{m}}\) is linked to the renormalized volume of the corresponding hyperbolic 3-dimensional manifold \(M\) up to constants that do not depend on the moduli parameters.
This would also imply that the Liouville action is independent of the fundamental domain chosen. Although the presented proofs only apply to orbifold Riemann surfaces (i.e. conical points of angle \(2\pi/m_{i}\)), we believe that most of our findings can be extended to (hyperbolic) conical Riemann surfaces with genus \(g>1\). Specifically, for weighted punctured Riemann surfaces, the modified classical Liouville action (with Schottky global coordinates) is expected to be accurately given by \(\mathscr{S}_{\boldsymbol{\alpha}}\) with \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\) and the cone angles \(2\pi(1-\alpha_{i})\), where each factor of \((1-\frac{1}{m_{i}})\) in \(\mathscr{S}_{\boldsymbol{m}}\) is replaced by \(\alpha_{i}\). Let us discuss this in more detail. The weighted punctured Riemann surface \((X,\mathscr{D})\) is a compact Riemann surface \(X\) together with an \(\mathbb{R}\)-divisor \(\mathscr{D}=\sum_{i=1}^{n}\alpha_{i}\,x_{i}\) such that the weights \(0<\alpha_{i}\leq 1\) are associated to each puncture (or marked point) \(x_{i}\). Hyperbolic metrics on weighted punctured Riemann surfaces have, by definition, conical singularities at the punctures -- for this reason, the pair \((X,\mathscr{D})\) is also called a Riemann cone-surface (or conical Riemann surface). The existence and uniqueness of a hyperbolic metric with prescribed conical singularities at a finite number of points on a Riemann surface is a classical problem that is closely related (and in special cases, equivalent) to the famous Uniformization Problem of Klein and Poincare. Let us recall that such metrics have been considered beginning with the work of Picard [73]. Starting from the classical results of Kazdan-Warner [105, 106, 107], the existence and uniqueness of conical hyperbolic metrics in every conformal class on \((X,\mathscr{D})\) were proved by McOwen [75] and Troyanov [102]. The necessary and sufficient condition for the existence of a hyperbolic conical metric, according to [75, 102], is that the statement of the Gauss-Bonnet theorem holds -- in other words, the degree of the log-canonical divisor \(\mathcal{K}_{X}+\mathscr{D}\) should be positive, where \(\mathcal{K}_{X}\) denotes the canonical divisor of \(X\). In the genus-zero case, the positivity of the log-canonical divisor amounts to \(\sum_{i=1}^{n}\alpha_{i}>2\). In this case, the unique hyperbolic metric \(ds^{2}\) on \(X\) is said to be compatible with the divisor \(\mathscr{D}\) if \(ds^{2}\) is a hermitian metric of class \(\mathcal{C}^{\infty}\) on the punctured Riemann surface \(X_{\rm reg}:=X\backslash{\rm supp}(\mathscr{D})\) such that if \(u_{i}\) is a holomorphic local coordinate in a neighborhood \(U_{i}\) of \(x_{i}\), then there exists a real-analytic function \(\varphi_{i}(u_{i},\bar{u}_{i})\), smooth on \(U_{i}\backslash\{x_{i}\}\), such that in \(U_{i}\) the metric \(ds^{2}\) is of the form \[ds^{2}=\frac{e^{\varphi_{i}}|du_{i}|^{2}}{|u_{i}-x_{i}|^{2\alpha_{i}}}\] for \(0<\alpha_{i}<1\), and \[ds^{2}=\frac{e^{\varphi_{i}}|du_{i}|^{2}}{|u_{i}-x_{i}|^{2}\log^{2}|u_{i}-x_{i}|}\] for \(\alpha_{i}=1\). Moreover, \(\operatorname{Area}(X,ds^{2})=\pi\deg(\mathcal{K}_{X}+\mathscr{D})=-\pi\chi(X,\mathscr{D})\), where \(\chi(X,\mathscr{D})=\chi(X)-\deg(\mathscr{D})\) is by definition the Euler-Poincare characteristic of the Riemann cone-surface \((X,\mathscr{D})\).
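As a quick arithmetic illustration of the existence criterion above (a worked example with weights chosen only for illustration): take \(g=0\) and \(\mathscr{D}=\tfrac{3}{4}(x_{1}+x_{2}+x_{3}+x_{4})\). Then

\[\deg(\mathcal{K}_{X}+\mathscr{D})=-2+4\cdot\tfrac{3}{4}=1>0,\]

so a compatible hyperbolic cone metric exists, with four cone angles \(2\pi(1-\tfrac{3}{4})=\tfrac{\pi}{2}\) and, in the normalization above, \(\operatorname{Area}(X,ds^{2})=\pi\deg(\mathcal{K}_{X}+\mathscr{D})=\pi\).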
The dependence of these hyperbolic cone metrics on the vector of weights \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\) is characterized by [108, Proposition 2.2]. The (unique) conical metrics of constant negative curvature (with fixed weights) induce new Kahler structures on the Teichmuller spaces of punctured Riemann surfaces that depend on the cone angles: in [95], Takhtajan and Zograf introduced a generalized Weil-Petersson metric, parameterized by the vector of weights \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\), on the moduli spaces of \(n\)-pointed rational curves in the context of Liouville actions. Later, the variation of hyperbolic conical metrics in holomorphic families was studied by Schumacher and Trapani in [108, 109] (see also [110, 111, 112]). They showed that it is still possible to introduce "harmonic Beltrami differentials" with respect to a hyperbolic conical metric, together with the Kodaira-Spencer map needed for the notion of a Weil-Petersson metric. To be more precise, the generalized Kodaira-Spencer map derived in [108, 109] identifies the tangent and cotangent spaces to the Teichmuller space \(\mathcal{T}_{g,n}(\boldsymbol{\alpha})\) of Riemann cone-surfaces \((X,\mathscr{D})\) with appropriately defined "harmonic Beltrami differentials" and "holomorphic quadratic differentials" on the Riemann cone-surface. It turned out that for \(0<\alpha_{i}<1\), the generalized Weil-Petersson metric depends in a smooth monotone way on the weights \(\alpha_{i}\). In particular, for \(\alpha_{i}\to 0\) one recovers the Weil-Petersson metric for non-punctured surfaces, while for \(\alpha_{i}\to 1\), one gets the Weil-Petersson metric for Riemann surfaces with cusps. Note that in the case of general conical singularities, the monodromy group \(\Gamma\) of the Fuchsian differential equation is no longer discrete in \(\mathrm{PSL}(2,\mathbb{R})\); hence, \(\mathcal{T}_{g,n}(\boldsymbol{\alpha})\neq\mathcal{T}(\Gamma)\) (see [95, 113] and [87, §2.2]). Moreover, one can consider the variation of hyperbolic conical metrics in holomorphic families, studied in [108, 109], and introduce a generalized Takhtajan-Zograf (TZ) metric. In particular, one should study the behavior of the integral kernel of the resolvent \((\Delta_{0}+1/2)^{-1}\) (see, e.g. [114, §8]). Although a conical singularity is simple, its presence has a profound impact on the Laplace operator. Unlike surfaces with smooth boundaries, the Laplace operator no longer has a canonical self-adjoint extension. Instead, it has many self-adjoint extensions, and these need not be equivalent [115]. Probably the most notable self-adjoint extension of the Laplace-Beltrami operator on a Riemann surface with conical singularities is given by the so-called Friedrichs extension (see, e.g. [60, 116]). Using the same method as in [108, 109], one can show that this generalized TZ metric depends in a smooth monotone way on the weights \(0<\alpha_{i}<1\) and that, for \(\alpha_{i}=1-1/m_{i}\) with integers \(2\leq m_{i}<\infty\), it corresponds to the elliptic TZ metric introduced in [2]. One can also construct a Kahler potential \((\sim\log\mathsf{H}_{\boldsymbol{\alpha}})\) for these generalized TZ metrics in terms of the solutions of Fuchsian differential equations. To be more precise, Kahler potentials for the generalized TZ metric should be defined in terms of the coefficients of the expansion [95, Eq.(9)] (alternatively, in terms of the expansion of \(\varphi\) as in [60, Eq. (1.2) and (1.5)]).
This generalizes the results of [1] to the case of Riemann cone-surfaces. Lastly, the regularized Liouville action \(S_{\boldsymbol{\alpha}}\) should be defined similarly to [95], and the combination \(\mathscr{S}_{\boldsymbol{\alpha}}=S_{\boldsymbol{\alpha}}-\pi\log\mathsf{H}_{\boldsymbol{\alpha}}\) should define a function on \(\mathfrak{S}_{g,n}(\boldsymbol{\alpha})\) (and \(\mathfrak{M}_{g,n}(\boldsymbol{\alpha})\)) (see also [60, Theorem 1.1]). Moreover, the corresponding asymptotic behavior should be derived using the Fuchsian differential equation, in analogy with [95, Lemma 4]. These asymptotics can be used to derive the first and second variations of \(\mathscr{S}_{\boldsymbol{\alpha}}=S_{\boldsymbol{\alpha}}-\pi\log\mathsf{H}_{\boldsymbol{\alpha}}\). It is worth remembering that, when \(g>1\), the regularized Liouville action \(S_{\boldsymbol{\alpha}}\) has to be defined on the Schottky fundamental domain with the singularities removed; in this case, the fibration \(\jmath:\mathfrak{S}_{g,n}(\boldsymbol{\alpha})\to\mathfrak{S}_{g}\) allows us to write \(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g,n}(\boldsymbol{\alpha})\) as \(\jmath^{*}(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g})\oplus T^{*}_{\pi\circ\Phi(0)}\mathscr{F}_{n}\left(\mathcal{D}\right)\). This decomposition makes it possible to use the methods of [8] on \(\jmath^{*}(T^{*}_{\pi\circ\Phi(0)}\mathfrak{S}_{g})\), while variations with respect to the cone points should be carried out using a method similar to [95]. Accordingly, most of our findings can be extended to conical Riemann surfaces with genus \(g>1\).

Finally, we mention some interpretations related to our results and also future directions:

* Consider the normalized abelian differentials of the first kind \(v_{1},\ldots,v_{g}\) and the period matrix \(\boldsymbol{\tau}\), \[\int_{\alpha_{k}}v_{k^{\prime}}=\delta_{kk^{\prime}},\qquad\boldsymbol{\tau}_{kk^{\prime}}=\int_{\beta_{k}}v_{k^{\prime}},\] with \[\operatorname{Im}\boldsymbol{\tau}_{kk^{\prime}}=\langle v_{k},v_{k^{\prime}}\rangle=\frac{i}{2}\int_{\mathcal{D}}v_{k}\,\bar{v}_{k^{\prime}}\;e^{\phi(w)}dwd\bar{w}.\] By also defining \(\det^{\prime}\Delta_{0}\) (the zeta-function regularized determinant of the Laplace operator in the hyperbolic metric \(\exp(\varphi)|dw|^{2}\) acting on functions) and \(S_{g}=S_{g,\boldsymbol{m}=0}\) as functions on the Schottky space \(\mathfrak{S}_{g}\), Zograf [16] showed that there exists a holomorphic function \(\mathfrak{F}_{g}:\mathfrak{S}_{g}\to\mathbb{C}\) such that \[\frac{\det^{\prime}\Delta_{0}}{\det\operatorname{Im}\boldsymbol{\tau}}=c_{g}\,e^{-\frac{S_{g}}{12\pi}}\,|\mathfrak{F}_{g}|^{2},\] (6.1) with \(c_{g}\) a constant that depends only on the genus \(g\), and \[\mathfrak{F}_{g}=\prod_{\{\gamma\}}\prod_{k=0}^{\infty}\left(1-q_{\gamma}^{1+k}\right),\] (6.2) where \(\{\gamma\}\) runs over all the distinct primitive conjugacy classes in \(\Gamma\) (i.e., classes of elements \(\gamma\in\Gamma\) that cannot be written as a power of any other element of \(\Gamma\)),81 excluding the identity, and \(q_{\gamma}\) is the multiplier of \(\gamma\in\Gamma\) (\(q_{\gamma}+1/q_{\gamma}=|\mathrm{Tr}\,\gamma|\)). Footnote 81: Equivalently, primitive conjugacy classes correspond to primitive closed geodesics on the surface, with \(\log(1/|q_{\gamma}|)\) the corresponding geodesic length. Now, by comparing our result in part \((iv)\) of Theorem 2,
\[\bar{\partial}\partial\,\log\frac{\det\operatorname{Im}\boldsymbol{\tau}}{\det^{\prime}\Delta_{0}}=-\frac{\sqrt{-1}}{6\pi}\left(\omega_{\mathrm{WP}}-\frac{4\pi^{2}}{3}\omega_{\mathrm{TZ}}^{\mathrm{cusp}}-\frac{\pi}{2}\sum_{j=1}^{n_{e}}m_{j}h_{j}\,\omega_{\mathrm{TZ},j}^{\mathrm{ell}}\right),\] we can generalize Zograf's formula (6.1) using a function \(\mathfrak{F}_{g,n}(\boldsymbol{m})\) on \(\mathfrak{S}_{g,n}(\boldsymbol{m})\), i.e. \(\mathfrak{F}_{g,n}(\boldsymbol{m}):\mathfrak{S}_{g,n}(\boldsymbol{m})\to\mathbb{C}\), such that \[\frac{\det^{\prime}\Delta_{0}}{\det\operatorname{Im}\boldsymbol{\tau}}=c_{g,n}(\boldsymbol{m})\;e^{-\frac{\mathscr{S}_{\boldsymbol{m}}}{12\pi}}\;\big{|}\mathfrak{F}_{g,n}(\boldsymbol{m})\big{|}^{2},\] (6.3) where \(\mathscr{S}_{\mathbf{m}}=S_{\mathbf{m}}-\pi\log\mathsf{H}\) is the classical generalized Liouville action and \(c_{g,n}(\mathbf{m})\) is a constant depending only on \(g\), \(n\) and \(\mathbf{m}\). It would be interesting to find the exact form of the function \(\mathfrak{F}_{g,n}(\mathbf{m})\), whose importance, particularly from the perspective of physics, will become clear in the following items.83 Footnote 83: The exact form of this holomorphic anomaly formula for the determinant of the Laplacian on a sphere with just three conical singularities, and on singular genus-zero surfaces that can be glued from copies of (hyperbolic, spherical, or flat) double triangles, has been found in [60] and [61], respectively. As a by-product, for these cases, the accessory parameters, the Liouville action, and \(\log\mathsf{H}\) are also explicitly evaluated. We would like to thank Victor Kalvin for informing us about [61].

* Zograf's formula has an interesting geometric description in the context of the Quillen metric and the local index theorem. The Quillen metric on \(\lambda_{1}\) (the determinant line bundle associated with the Cauchy-Riemann operator \(\bar{\partial}_{1}\), usually called the _Hodge line bundle_) is defined by \[\left(\|v\|_{1}^{\text{Quill}}\right)^{2}=\frac{\det\text{Im}\,\boldsymbol{\tau}}{\det^{\prime}\Delta_{0}},\] and accordingly, the first Chern form of the Hermitian line bundle \((\lambda_{1},\|v\|_{1}^{\text{Quill}})\) over \(\mathfrak{S}_{g}\) is given by \[c_{1}(\lambda_{1},\|v\|_{1}^{\text{Quill}})=\frac{1}{12\pi^{2}}\omega_{\text{WP}}.\] This observation establishes an isometry between the line bundle over \(\mathfrak{M}_{g}\) carrying the Hermitian metric \(\exp[S_{g}/12\pi]\) and the line bundle \(\lambda_{1}\) with the Quillen metric. More generally, Takhtajan and Zograf have studied the local index theorem for families of \(\bar{\partial}\)-operators in the orbifold setting [2]. The main result of that paper (see [2, Theorem 2]) is the following formula on the moduli space \(\mathfrak{M}_{g,n}(\mathbf{m})\) of punctured orbifold Riemann surfaces \(O=[\mathbb{H}/\Gamma]\): \[\mathsf{c}_{1}\left(\lambda_{k},\|\cdot\|_{k}^{\text{Quill}}\right)=\frac{6k^{2}-6k+1}{12\pi^{2}}\,\omega_{\text{WP}}-\frac{1}{9}\,\omega_{\text{TZ}}^{\text{cusp}}-\frac{1}{4\pi}\sum_{j=1}^{n_{e}}m_{j}\left[B_{2}\left(\left\{\frac{k-1}{m_{j}}\right\}\right)-\frac{1}{6m_{j}^{2}}\right]\omega_{\text{TZ},j}^{\text{ell}},\] (6.4) for \(k\geq 1\).
Here \(\lambda_{k}\) is the determinant line bundle associated with the Cauchy-Riemann operator \(\bar{\partial}_{k}\); it is a holomorphic \(\text{Mod}(\Gamma)\)-invariant line bundle on \(\mathcal{T}(\Gamma)\) whose fibers are given by the determinant lines \(\bigwedge^{\max}\ker\bar{\partial}_{k}\otimes\left(\bigwedge^{\max}\text{coker}\,\bar{\partial}_{k}\right)^{-1}\), while \(\|\cdot\|_{k}^{\text{Quill}}=\|\cdot\|_{k}/\sqrt{\det\Delta_{k}}\) denotes the _Quillen norm_ in \(\lambda_{k}\). Since the determinant line bundle \(\lambda_{k}\) is Mod-invariant, one can alternatively think of \(\lambda_{k}\) as a holomorphic \(\mathbb{Q}\)-line bundle on the moduli space \(\mathfrak{M}_{g,n}(\mathbf{m})=\mathcal{T}(\Gamma)/\,\text{Mod}(\Gamma)\). Finally, \(B_{2}(x)=x^{2}-x+1/6\) is the second Bernoulli polynomial, and \(\{x\}\) denotes the fractional part of \(x\in\mathbb{Q}\). It is clear from Eq.(6.4) that for \(k=1\) (using \(B_{2}(\{0\})=1/6\), so that \(m_{j}\big[B_{2}(\{0\})-\frac{1}{6m_{j}^{2}}\big]=\frac{1}{6}m_{j}h_{j}\) with \(h_{j}=1-1/m_{j}^{2}\)), the first Chern form of the determinant line bundle \(\lambda_{1}\) with the Quillen norm is given by \(\frac{1}{12\pi^{2}}(\omega_{\text{WP}}-\frac{4\pi^{2}}{3}\omega_{\text{TZ}}^{\text{cusp}}-\frac{\pi}{2}\sum_{j=1}^{n_{e}}m_{j}h_{j}\omega_{\text{TZ},j}^{\text{ell}})\). Comparing this observation with our result in part \((iv)\) of Theorem 2 and noting [16, Theorem 3.1] suggests that the following remark is correct:

**Remark 6.1**.: The function \(\mathscr{S}_{\mathbf{m}}\) on the Schottky space \(\mathfrak{S}_{g,n}(\mathbf{m})\) determines a holomorphic \(\mathbb{Q}\)-line bundle \(\lambda_{{}_{Sch}}\) on the moduli space \(\mathfrak{M}_{g,n}(\mathbf{m})\) with Hermitian metric \(\left\langle\cdot,\cdot\right\rangle_{{}_{Sch}}\), where \(\left\langle 1,1\right\rangle_{{}_{Sch}}=\exp(\mathscr{S}_{\mathbf{m}}/12\pi)\) (here, \(1\) is understood as the corresponding section of the trivial bundle \(\mathfrak{S}_{g,n}(\mathbf{m})\times\mathbb{C}\to\mathfrak{S}_{g,n}(\mathbf{m})\)). The Hermitian \(\mathbb{Q}\)-line bundle \(\left(\lambda_{{}_{Sch}};\left\langle\cdot,\cdot\right\rangle_{{}_{Sch}}\right)\) is isometrically isomorphic to the Hodge line bundle \(\left(\lambda_{{}_{Hod}};\left\langle\cdot,\cdot\right\rangle_{{}_{Quil}}\right)\) over \(\mathfrak{M}_{g,n}(\mathbf{m})\) -- i.e. there exists an isomorphism \(\imath:\lambda_{{}_{Sch}}\to\lambda_{{}_{Hod}}\) such that \(\left\langle s,s\right\rangle_{{}_{Sch}}=\left\langle\imath\circ s,\imath\circ s\right\rangle_{{}_{Quil}}\) for every local section \(s\) of the bundle \(\lambda_{{}_{Sch}}\).
While we do not attempt to prove the above claim rigorously, we would like to comment that it might be possible to do so in analogy with the proof of [16, Theorem 3.1]. In particular, a very interesting question is to investigate whether it would be possible to determine the constants \(c_{\gamma}\) and \(c\) in the following naive generalization \[f_{\gamma}(t)=\exp\biggl(c\int_{t_{*}}^{t}\partial(\tilde{\mathscr{S}}_{\boldsymbol{m}}-\tilde{\mathscr{S}}_{\boldsymbol{m}}\circ\gamma)+\frac{c}{2}\left(\tilde{\mathscr{S}}_{\boldsymbol{m}}(t_{*})-\tilde{\mathscr{S}}_{\boldsymbol{m}}(\gamma t_{*})+2\pi\sqrt{-1}c_{\gamma}\right)\biggr), \tag{6.5}\] of the \(f_{\gamma}\)'s constructed in [16], such that Eq. (6.5) still defines a 1-cocycle of the action of the Teichmuller modular group \(\operatorname{Mod}(\Gamma)\) (see [16] for more details). In this formula, \(\gamma\in\operatorname{Mod}(\Gamma)\) denotes mapping classes, \(\tilde{\mathscr{S}}_{\boldsymbol{m}}\) is defined as \(\mathscr{S}_{\boldsymbol{m}}\circ\pi\) where \(\pi:\mathcal{T}_{g,n}(\boldsymbol{m})\to\mathfrak{S}_{g,n}(\boldsymbol{m})\) is the natural holomorphic cover, \(t_{*}\) denotes a marked point in the Teichmuller space \(\mathcal{T}_{g,n}(\boldsymbol{m})\) while \(t\) is any other point in this space, and \(\partial\) denotes the (1,0)-component of the exterior differentiation operator on \(\mathcal{T}_{g,n}(\boldsymbol{m})\). It is worth mentioning that determinant line bundles and Quillen metrics in the conical case have been constructed in [108, §6]. The curvature tensor of the Weil-Petersson metric for Teichmuller spaces of compact (or punctured) Riemann surfaces was computed explicitly by Tromba [117] and Wolpert [118]. In [108, §7], the authors show the analogous result for the weighted punctured case.
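For orientation, the 1-cocycle condition referred to above is the usual multiplicative one (our paraphrase, stated here for convenience):

\[f_{\gamma_{1}\gamma_{2}}(t)=f_{\gamma_{1}}(\gamma_{2}\,t)\,f_{\gamma_{2}}(t),\qquad\gamma_{1},\gamma_{2}\in\operatorname{Mod}(\Gamma),\quad t\in\mathcal{T}_{g,n}(\boldsymbol{m}),\]

so that the assignment \(\gamma\mapsto f_{\gamma}\) descends to a holomorphic line bundle on the quotient; determining \(c_{\gamma}\) and \(c\) amounts to checking this identity for the candidate (6.5).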
\(\Psi^{\prime}{}^{i}=U^{i}_{j}e^{-\mathfrak{B}\mathsf{D}}\,\Psi^{j}\), then the functions \[\tilde{\Psi}^{i}=(\mathfrak{F}_{g})^{\mathsf{D}}\,\Psi^{i},\] (6.7) will transform as \[\tilde{\Psi}^{i}\to\tilde{\Psi}^{\prime i}=\mathfrak{A}^{-\mathsf{D}}\,U^{i}_{j}\,\tilde{\Psi}^{j}.\] (6.8) By noting equations (6.6), (6.7) and (6.8), the MCG-invariant scalar product of physical wave functions \(\tilde{\Psi}\) can be defined based on the Quillen norm (see [119; 120] as well as [121]): \[\langle\tilde{\Psi}_{1},\tilde{\Psi}_{2}\rangle=\int_{\mathcal{T}(\Sigma)}\left(\frac{\det^{\prime}\Delta_{0}}{\det\mathrm{Im}\,\boldsymbol{\tau}}\right)^{-\mathsf{D}}\,\overline{\tilde{\Psi}}_{1}\wedge*\tilde{\Psi}_{2}.\] Indeed, under (6.6) the factor \((\det^{\prime}\Delta_{0}/\det\mathrm{Im}\,\boldsymbol{\tau})^{-\mathsf{D}}\) acquires \(|\mathfrak{A}|^{2\mathsf{D}}\), which precisely cancels the factor \(|\mathfrak{A}|^{-2\mathsf{D}}\) produced by (6.8), provided the matrix \(U\) is unitary. Accordingly, the function \(\mathfrak{F}_{g}\) determines the physical wave functions of 3-dimensional pure quantum gravity and their MCG-invariant scalar product. In light of the aforementioned observation, and taking into account the remarks in the Introduction regarding the inclusion of massive particles in the path integral of 3-dimensional gravity (or considering 3-dimensional Seifert manifolds whose KK-reductions are related to conical Riemann surfaces), the function \(\mathfrak{F}_{g,n}(\boldsymbol{m})\) plays a crucial role in defining the physical wave-functions (and their scalar product) of 3-dimensional quantum gravity in the presence of those massive particles (or in the presence of the Seifert manifold's contribution). This itself presents an intriguing problem warranting further exploration. Apart from that, \(\mathfrak{F}_{g}\) plays another significant role in the connection between the physical wave function of 3-dimensional pure gravity and dual two-dimensional Virasoro conformal blocks. According to [122], the 3-dimensional physical wave-functions obeying the Gauss law constraints are conformal blocks of quantum Liouville theory. Exploring this proposal in the presence of conical singularities, based on our results, is also an interesting problem: namely, finding the relation between the wave-function of 3-dimensional AdS gravity in the presence of particles with special masses and quantum Liouville theory with certain special vertex operators, and, even more, determining whether the partition function of gravity can indeed be decomposed into Virasoro characters.

* Our results are applicable to two-dimensional theories of gravity such as deformed Jackiw-Teitelboim gravity [123; 124; 125; 126; 127], where the gravitational path integral can be written as an integral over the moduli space of orbifold (conical) Riemann surfaces.
* Our results, via an analytic continuation of the classical Matschull process, have potential application in the study of black hole production with a non-trivial topology inside the horizon from the collision of massive point particles with a certain mass [128; 129; 58].85 Footnote 85: Also see [130; 131; 132; 133; 134] and [41, §4].

* Zograf's holomorphic factorization (6.1) is closely related to another circle of ideas - that of the Selberg trace formula and its generalizations - where the sum over elements of \(\Gamma\) is used to compute the spectrum of differential operators on \(X\). The Selberg zeta function associated with the Riemann surface is \[Z_{\rm Sz}(s)=\prod_{\{\gamma\}}\prod_{k=0}^{\infty}\left(1-\chi(\gamma)\,q_{\gamma}^{s+k}\right),\] (6.9) where \(\chi\) is a unitary character. For \(\operatorname{Re}s>1\), the above product converges and admits a meromorphic continuation to the complex \(s\)-plane. According to the Selberg trace formula, for the case where \(\Gamma\) has just hyperbolic and parabolic elements, it is shown [135] that \[\det{}^{\prime}\Delta_{0}=c\;Z_{\rm Sz}^{\prime}(1),\] where \(c\) is a constant depending only on the topology. Hence, Zograf's formula (6.1) also gives a factorization of \(Z_{\rm Sz}^{\prime}(1)\) as a function on \(\mathfrak{S}_{g}\). Accordingly, it would be intriguing to find a function akin to the Selberg zeta function acting on \(\mathfrak{S}_{g,n}(\boldsymbol{m})\), utilizing the generalized holomorphic factorization (6.3). That is important, at least, for finding the contribution of 1-loop effects to the partition function of 3-dimensional AdS gravity in the presence of massive particles. Let us recall that the one-loop partition function of pure three-dimensional gravity is given by a ratio of functional determinants: the determinant of the kinetic operator for linearized graviton (symmetric traceless) fluctuations around the chosen background, together with the determinants of the kinetic operators of the vector ghost and the Weyl mode, which enter because, to compute the partition function, we must also include the Faddeev-Popov modes arising from gauge fixing. By computing each determinant concretely, it is shown in [136] that \[Z_{\text{gravity}}^{\text{1-loop}}=\prod_{\{\gamma\}}\prod_{k=2}^{\infty}\frac{1}{|1-q_{\gamma}^{k}|^{2}},\] (6.10) where \(\{\gamma\}\) runs over primitive conjugacy classes, counted up to inversion. When \(M\) is a solid torus and \(\Gamma\cong\mathbb{Z}\), so that the primitive classes are represented by the generator of \(\mathbb{Z}\) and its inverse, the above formula reduces to \[Z_{\text{gravity}}^{\text{1-loop}}=\prod_{k=2}^{\infty}\frac{1}{|1-q^{k}|^{2}},\] (6.11) with \(q=e^{2\pi\sqrt{-1}\tau}\).86 Footnote 86: Unlike the one-loop term, the higher-loop contributions to the partition function may not vanish in the general case. At least for handlebody geometries, there exists a proposal to calculate all-loop expressions [137]. Comparing (6.10) with (6.9) implies that \[Z_{\text{gravity}}^{\text{1-loop}}=Z_{\text{Sz}}^{-1}(2).\] Let us also recall that, according to (6.2), \(\mathfrak{F}_{g}=Z_{\rm Sz}(1)\). Hence, with knowledge of the function \(\mathfrak{F}_{g,n}(\boldsymbol{m})\), we can investigate the appropriate function that assumes the role of the Selberg zeta function \(Z_{\rm Sz}(s)\) in the presence of conical singularities.
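The truncated double products above are easy to evaluate numerically. The following sketch (with hypothetical multipliers chosen only for illustration; `k_max` is a truncation parameter, not part of any formula above) computes \(\mathfrak{F}_{g}=Z_{\rm Sz}(1)\) and \(Z_{\text{gravity}}^{\text{1-loop}}=Z_{\rm Sz}^{-1}(2)\) for a finite list of multipliers with \(|q_{\gamma}|<1\) and trivial character \(\chi\equiv 1\):

```python
def selberg_zeta_truncated(multipliers, s, k_max=200):
    """Truncation of Z_Sz(s) = prod_gamma prod_{k>=0} (1 - q_gamma^(s+k)),
    cf. Eq. (6.9), with trivial character; valid for |q_gamma| < 1."""
    z = 1.0 + 0.0j
    for q in multipliers:
        for k in range(k_max + 1):
            z *= 1.0 - q ** (s + k)
    return z

# Hypothetical multipliers (NOT from an actual Schottky group; illustration only):
qs = [0.10 + 0.02j, 0.05 - 0.01j]

F_g = selberg_zeta_truncated(qs, 1)             # F_g = Z_Sz(1), cf. Eq. (6.2)
Z_one_loop = 1 / selberg_zeta_truncated(qs, 2)  # cf. Z_gravity^{1-loop} = Z_Sz^{-1}(2)
print(F_g, Z_one_loop)
```

Since \(|q_{\gamma}|<1\), the factors approach \(1\) geometrically in \(k\), so a modest `k_max` already gives machine-precision results.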
Consequently, this exploration can also provide insights into the contribution of conical singularities to the 1-loop gravity partition function. For the special case \(M=\mathbb{H}_{3}/(\mathbb{Z}\times\mathbb{Z}_{m})\), it is shown [40] that in the presence of conical singularities, the 1-loop partition function is again given by \(Z_{\rm Sz}(s)\) but evaluated at a different point, i.e. \(Z_{\rm gravity}^{1-{\rm loop}}\sim Z_{\rm Sz}^{-1}(1)\).

* In this paper, we explored further the relation between the modified classical Liouville action (i.e., the one whose Euler-Lagrange equation admits hyperbolic Riemann surfaces with conical singularities as solutions), the uniformization of orbifold Riemann surfaces, and the complex geometry of moduli spaces. One knows that the quantized Liouville theory can potentially describe the quantum corrections to those hyperbolic geometries (outside the singularities). Accordingly, one approach to understanding two-dimensional quantum gravity through quantum Liouville theory is to demand that this theory enjoys the conformal symmetry of its classical counterpart. If the conformal symmetry is also a symmetry of the quantum Liouville theory, it should manifest itself in the conformal Ward identities (CWIs) for correlation functions of the components of the stress-energy tensor with other operators. An important point to consider is the space on which the Liouville action functional is defined; this choice significantly impacts the form of the CWIs and their implications. Let us consider the simplest case, i.e., the Liouville action on the moduli space \(\mathcal{M}_{0,n}\), identify \(X_{\mathbf{m}}\) with \(V_{m_{1}}(x_{1})\cdots V_{m_{n}}(x_{n})\), and restore \(\hbar\). Since the \(S_{\mathbf{m}}\) in (4.1) is a well-defined (single-valued) function on \(\mathcal{M}_{0,n}\), the quantum Liouville theory is defined by \[\left\langle X_{\mathbf{m}}\right\rangle=\int\limits_{\mathscr{C}\mathscr{M}_{\mathbf{m}}(X)}\mathscr{D}\varphi\;e^{-\frac{1}{2\pi\hbar}S_{\mathbf{m}}[\varphi]},\tag{6.12}\] where the integration goes over the space \(\mathscr{C}\mathscr{M}_{\mathbf{m}}(X)\) of smooth conformal metrics on \(X\) with the prescribed conical singularities. The corresponding CWI takes the form \[\frac{1}{\hbar}\left\langle\mathbf{T}(w)X_{\mathbf{m}}\right\rangle=\sum_{i=1}^{n}\left(\frac{\mathbf{h}_{m_{i}}(\hbar)}{(w-w_{i})^{2}}+\frac{1}{(w-w_{i})}\partial_{w_{i}}\right)\left\langle X_{\mathbf{m}}\right\rangle.\tag{6.13}\] In the semi-classical limit \(\hbar\to 0\), one has \(\left\langle X_{\mathbf{m}}\right\rangle\asymp e^{-\frac{1}{2\pi\hbar}S_{\mathbf{m}}[\varphi]}\), \(\left\langle\mathbf{T}(w)X_{\mathbf{m}}\right\rangle\asymp T_{\varphi}(w)\,e^{-\frac{1}{2\pi\hbar}S_{\mathbf{m}}[\varphi]}\), and \(\mathbf{h}_{m_{i}}(\hbar)=\frac{h_{\text{cl}}(m_{i})}{2\hbar}+\mathcal{O}(1)\). By substituting the above relations in (6.13), one gets [17, 66] \[\frac{1}{\hbar}\,T_{\varphi}(w)\;e^{-\frac{1}{2\pi\hbar}S_{\mathbf{m}}[\varphi]}=\sum_{i=1}^{n}\left(\frac{h_{\text{cl}}(m_{i})}{2\hbar(w-w_{i})^{2}}+\frac{1}{(w-w_{i})}\partial_{w_{i}}\right)e^{-\frac{1}{2\pi\hbar}S_{\mathbf{m}}[\varphi]},\] which implies that \[T_{\varphi}(w)\;=\sum_{i=1}^{n}\left(\frac{h_{\text{cl}}(m_{i})}{2(w-w_{i})^{2}}-\frac{1}{2\pi}\frac{1}{(w-w_{i})}\partial_{w_{i}}S_{\mathbf{m}}[\varphi]\right).\tag{6.14}\] Comparing the above result with \(T_{\varphi}(w)=\text{Sch}(J^{-1};w)\), where \(\text{Sch}(J^{-1};w)\) is given by (3.52), implies that \[h_{\text{cl}}(m_{i})=h_{i}=1-\frac{1}{m_{i}^{2}},\qquad\partial_{w_{i}}S_{\mathbf{m}}=-2\pi c_{i},\tag{6.15}\] which are in agreement with Theorem 4.7. Moreover, the conformal symmetry at the quantum level implies that the vertex operators and the components of the energy-momentum tensor satisfy the operator product expansion (OPE) of Belavin-Polyakov-Zamolodchikov, for example, \[\frac{1}{\hbar^{2}}\mathbf{T}(w)\mathbf{T}(w^{\prime})=\frac{\mathsf{c}(\hbar)/2}{(w-w^{\prime})^{4}}+\frac{1}{\hbar}\frac{2\mathbf{T}(w^{\prime})}{(w-w^{\prime})^{2}}+\frac{1}{\hbar}\frac{1}{(w-w^{\prime})}\partial_{w^{\prime}}\mathbf{T}(w^{\prime})+\text{regular terms},\] \[\frac{1}{\hbar^{2}}\mathbf{T}(w)\overline{\mathbf{T}}(\bar{w}^{\prime})=\text{regular terms},\] which yields the following CWIs: \[\begin{split}&\frac{1}{\hbar^{2}}\left\langle\mathbf{T}(w)\mathbf{T}(w^{\prime})X_{\mathbf{m}}\right\rangle=\frac{\mathsf{c}(\hbar)/2}{(w-w^{\prime})^{4}}\left\langle X_{\mathbf{m}}\right\rangle+\frac{1}{\hbar}\left(\frac{2}{(w-w^{\prime})^{2}}+\frac{1}{(w-w^{\prime})}\partial_{w^{\prime}}\right)\left\langle\mathbf{T}(w^{\prime})X_{\mathbf{m}}\right\rangle\\ &\qquad\qquad+\frac{1}{\hbar}\sum_{i=1}^{n}\left(\frac{\mathbf{h}_{m_{i}}(\hbar)}{(w-w_{i})^{2}}+\frac{1}{(w-w_{i})}\partial_{w_{i}}\right)\left\langle\mathbf{T}(w^{\prime})X_{\mathbf{m}}\right\rangle,\\ &\frac{1}{\hbar^{2}}\left\langle\mathbf{T}(w)\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle=\frac{1}{\hbar}\sum_{i=1}^{n}\left(\frac{\mathbf{h}_{m_{i}}(\hbar)}{(w-w_{i})^{2}}+\frac{1}{(w-w_{i})}\partial_{w_{i}}\right)\left\langle\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle.\end{split}\tag{6.16}\] Let us show that our results also agree with the second CWI in (6.16) at the tree level. To see that, let us define the normalized connected two-point function \[\left\langle\mathbf{T}(w)\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle_{\text{nc}}=\frac{\left\langle\mathbf{T}(w)\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle}{\left\langle X_{\mathbf{m}}\right\rangle}-\frac{\left\langle\mathbf{T}(w)X_{\mathbf{m}}\right\rangle}{\left\langle X_{\mathbf{m}}\right\rangle}\frac{\left\langle\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle}{\left\langle X_{\mathbf{m}}\right\rangle}. \tag{6.17}\] By using (6.13) and its analogue for \(\left\langle\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle\), \[\frac{1}{\hbar}\left\langle\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle=\sum_{j=1}^{n}\left(\frac{\bar{\mathbf{h}}_{m_{j}}(\hbar)}{(\bar{w}^{\prime}-\bar{w}_{j})^{2}}+\frac{1}{(\bar{w}^{\prime}-\bar{w}_{j})}\partial_{\bar{w}_{j}}\right)\left\langle X_{\mathbf{m}}\right\rangle,\] the normalized connected correlation function (6.17) simplifies to \[\left\langle\mathbf{T}(w)\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle_{\text{nc}}=\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{(w-w_{i})}\frac{1}{(\bar{w}^{\prime}-\bar{w}_{j})}\partial_{w_{i}}\partial_{\bar{w}_{j}}\log\left\langle X_{\mathbf{m}}\right\rangle. \tag{6.18}\] At the tree level, the right-hand side of the above equation is of order \(\mathcal{O}(\hbar^{-1})\).
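To spell out this \(\hbar\)-counting (a one-line check using the semi-classical relation \(\log\left\langle X_{\mathbf{m}}\right\rangle=-\frac{1}{2\pi\hbar}S_{\mathbf{m}}+\mathcal{O}(1)\) stated above):

\[\partial_{w_{i}}\partial_{\bar{w}_{j}}\log\left\langle X_{\mathbf{m}}\right\rangle=-\frac{1}{2\pi\hbar}\,\partial_{w_{i}}\partial_{\bar{w}_{j}}S_{\mathbf{m}}+\mathcal{O}(1)=\mathcal{O}\left(\hbar^{-1}\right),\]

so each term in (6.18) indeed carries one inverse power of \(\hbar\) at tree level.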
To find the same order of \(\hbar\) on the left-hand side, one can use the path integral representation of the two-point function, \[\left\langle\mathbf{T}(w)\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle=\int\limits_{\mathscr{C}\mathscr{M}_{\mathbf{m}}(X)}\mathscr{D}\varphi\;T_{\varphi}(w)\,\overline{T_{\varphi}(w^{\prime})}\;e^{-\frac{1}{2\pi\hbar}S_{\mathbf{m}}[\varphi]}.\] Evaluating this path integral in the saddle-point approximation around the classical solution and comparing with (6.18), one finds at tree level \[\left\langle\mathbf{T}(w)\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\right\rangle_{\text{nc}}=-\frac{1}{2\pi\hbar}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial_{w_{i}}\partial_{\bar{w}_{j}}S_{\mathbf{m}}}{(w-w_{i})(\bar{w}^{\prime}-\bar{w}_{j})}+\mathcal{O}(1).\tag{6.19}\] Following [17, 66], one can interpret the expectation value \(\langle X\rangle\) as a Hermitian metric in a certain holomorphic line bundle over moduli space and the quadratic differential \(\langle T(w)X\rangle_{nc}\,dw^{2}\) as a (1,0)-component of the canonical metric connection. This interpretation aligns with the tree-level analysis of the CWI (6.13) above. Moreover, they interpret \(\big{\langle}\mathbf{T}(w)\overline{\mathbf{T}}(\bar{w}^{\prime})X_{\mathbf{m}}\big{\rangle}_{\text{nc}}\,dwd\bar{w}\) as the (1,1)-component of the curvature form of that connection; as illustrated in Eq. (6.19), this is also in agreement with the tree-level analysis of (6.18) for the moduli space \(\mathcal{M}_{0,n}\). Now, one can utilize the aforementioned terminology to find new constraints on the Kahler geometry of the moduli space \(\mathfrak{M}_{0,n}(\mathbf{m})\) by demanding that the CWIs hold true when they are projected to this space. Conversely, one can also use the known information about the Kahler geometry to find a "dynamical proof" of the Virasoro symmetry of the Liouville theory. Let us find one of those constraints by studying the quantum Liouville theory and its associated CWIs on the moduli space \(\mathfrak{M}_{0,n}(\mathbf{m})\). In comparison to the previous case, two points should be highlighted. Firstly, to define the path integral representation of quantum Liouville theory, one needs a Liouville functional that is a well-defined (single-valued) function on the moduli space \(\mathfrak{M}_{0,n}(\mathbf{m})\). It is shown that, unlike the function \(S_{\mathbf{m}}\), which is not invariant under the action of \(\text{Symm}\,(\mathbf{m})\), the function \(\mathscr{S}_{\mathbf{m}}\) has this property, at least in the semi-classical limit. Accordingly, for the moduli space \(\mathfrak{M}_{0,n}(\mathbf{m})\), instead of (6.12), we have \[\langle X_{\mathbf{m}}\rangle=\int\limits_{\mathscr{C}\mathscr{M}_{\mathbf{m}}(X)}\mathscr{D}\varphi\;e^{-\frac{1}{2\pi\hbar}\mathscr{S}_{\mathbf{m}}[\varphi]}.\] Secondly, in projecting onto \(\mathfrak{M}_{0,n}(\mathbf{m})\), the kernels \(R_{i}(w)\) and \(\mathcal{E}_{i}(w)\) are replaced by their \(\mathrm{Symm}(\mathbf{m})\)-symmetrized counterparts \(\tilde{R}_{i}(w)\) and \(\tilde{\mathcal{E}}_{i}(w)\), which are given by \[\begin{split}\tilde{R}_{i}(w)&=-\frac{1}{\pi}\sum_{\gamma_{j,j+1}\in\mathrm{Symm}(\mathbf{m})}R\left(\gamma_{j,j+1}(w),w_{i}\right)\gamma^{\prime}_{j,j+1}(w)^{2},\\ \tilde{\mathcal{E}}_{k}(w)&=\frac{1}{2}\sum_{\gamma_{j,j+1}\in\mathrm{Symm}(\mathbf{m})}\left(\frac{1}{(\gamma_{j,j+1}(w)-w_{k})^{2}}-\frac{1}{\gamma_{j,j+1}(w)(\gamma_{j,j+1}(w)-1)}\right)\gamma^{\prime}_{j,j+1}(w)^{2},\\ \tilde{\mathcal{E}}_{n}(w)&=\frac{1}{2}\sum_{\gamma_{j,j+1}\in\mathrm{Symm}(\mathbf{m})}\frac{1}{\gamma_{j,j+1}(w)(\gamma_{j,j+1}(w)-1)}\;\gamma^{\prime}_{j,j+1}(w)^{2},\end{split}\tag{6.20}\] with \(k=1,\ldots,n-1\) and the \(\gamma_{j,j+1}(w)\) defined in Section 3.2.1. In the above, just for simplicity, we assumed that the signature of \(O\) is \((O;m,\ldots,m)\), but it can easily be extended to the general case. Accordingly, the projection of \(\mathrm{Sch}\left(J^{-1};w\right)\) on the moduli space \(\mathfrak{M}_{0,n}(\mathbf{m})\) is given by \[\begin{split}\sum_{i=1}^{n-3}\left(\mathrm{Sch}\left(J^{-1};w\right),M_{i}\right)d\tilde{w}_{i}&=\sum_{i=1}^{n-3}\bigg(-\pi\sum_{j=1}^{n-3}c_{j}\tilde{R}_{j}(w)+\sum_{j=1}^{n}h_{j}\tilde{\mathcal{E}}_{j}(w),M_{i}\bigg)d\tilde{w}_{i}\\ &=\sum_{i=1}^{n-3}\bigg(-\pi c_{i}+\big(T_{s}(w),M_{i}\big)\bigg)d\tilde{w}_{i},\end{split}\tag{6.21}\] where \[\begin{split}T_{s}(w)&=\frac{1}{2}\sum_{i=1}^{n-1}\sum_{\gamma_{j,j+1}\in\mathrm{Symm}(\mathbf{m})}\frac{h_{i}}{(\gamma_{j,j+1}(w)-w_{i})^{2}}\gamma^{\prime}_{j,j+1}(w)^{2}\\ &\quad-\frac{1}{2}\sum_{\gamma_{j,j+1}\in\mathrm{Symm}(\mathbf{m})}\frac{\sum_{i=1}^{n-1}h_{i}-h_{n}}{\gamma_{j,j+1}(w)(\gamma_{j,j+1}(w)-1)}\gamma^{\prime}_{j,j+1}(w)^{2}.\end{split}\tag{6.22}\] In the above, the \(d\tilde{w}_{i}\) are \((1,0)\)-forms on the cotangent space \(T^{*}_{[0]}\mathfrak{M}_{0,n}(\mathbf{m})\). The counterpart of (6.14) for this case is \[T^{\mathfrak{M}_{0,n}(\mathbf{m})}_{\varphi}(w)=\sum_{i=1}^{n}\sum_{\gamma_{j,j+1}\in\mathrm{Symm}(\mathbf{m})}\left(\frac{h_{\mathrm{cl}}(m_{i})}{2(\gamma_{j,j+1}(w)-w_{i})^{2}}-\frac{1}{2\pi}\frac{1}{(\gamma_{j,j+1}(w)-w_{i})}\partial_{\tilde{w}_{i}}\mathscr{S}_{\mathbf{m}}[\varphi]\right),\tag{6.23}\] which, together with the relations (6.21) and (6.22), gives information about the moduli space of the Riemann orbisurface \(O\), i.e., a relation between the accessory parameters \(c_{i}\) in (6.21) for the Fuchsian uniformization and the first variation \(\partial_{\tilde{w}_{i}}\mathscr{S}_{\mathbf{m}}[\varphi]\). It also provides the opportunity to check the observed relation between the universal CWIs and Friedan-Shenker modular geometry. It is noteworthy that, using the same methodology, one can probe the Kahler geometry of the moduli space \(\mathfrak{M}_{g,n}(\mathbf{m})\) based on equations (6.14) and (6.20)-(6.23).
Last but not least, it is shown [92] that the validity of CWI (107) at the 1-loop approximation for the genus \(g=0\) case with just \(n_{p}\) punctures yields a formula for the first derivative of \(Z(2)\), the Selberg zeta function evaluated at \(s=2\), over the moduli space. Moreover, it is shown [92] that when \(g\neq 0\) with just \(n_{p}\) punctures, the CWI (6.18) at the 1-loop level is equivalent to the local index theorem for the families of \(\bar{\partial}\)-operators on the punctured Riemann surfaces. Based on the above-examined cases, it would be interesting to further explore the relation between the CWIs for the multi-point correlation functions \[\frac{1}{\hbar^{2}}\left\langle\prod_{k}\mathbf{T}(w_{k})\prod_{l}\overline{\mathbf{T}}(\bar{w}_{l}^{\prime})\;X_{\mathbf{m}}\right\rangle,\] the uniformization of Riemann surfaces, the complex geometry of the Teichmüller space \(\mathcal{T}_{g,n}(\mathbf{m})\), the Schottky space \(\mathfrak{S}_{g,n}(\mathbf{m})\), and the moduli spaces \(\mathcal{M}_{g,n},\mathfrak{M}_{g,n}(\mathbf{m})\), and, more importantly, 2-dimensional quantum gravity. Moreover, the multi-point correlation functions can provide further evidence in favor of the profound role of Ward identities in Friedan-Shenker modular geometry.
* Also, very recently, it has been shown [139] that the asymptotic series (in powers of the central charge) for the expansion of 2-dimensional conformal blocks involving the exchange of the identity operator necessitates the existence of non-perturbative effects via resurgence analysis. In the dual three-dimensional theory, this implies that the graviton loop expansion is also an asymptotic series and, to cure it, one needs to consider new saddle points, which are particle-like states with large negative mass (non-manifold saddles with conical excesses). In this paper, we focus on geometries with conical defects (orbifold geometries), but studying the relation between our work and [139] would be interesting.

## Acknowledgments

The authors would like to thank Lorenz Eberhardt, Ruben Hidalgo, Kirill Krasnov, Alex Maloney, Leon Takhtajan, Lee-Peng Teo, Tina Torkaman, and Peter Zograf for inspiring discussions. We are especially grateful to Peter Zograf for providing us with an English translation of his paper [16] and to Lorenz Eberhardt and Leon Takhtajan for very helpful feedback on a draft version of this work. We are also grateful to Hossein Mohammadi for pointing out a large number of typos in an earlier version of this manuscript. The research of B.T. and A.N. is supported by the Iranian National Science Foundation (INSF) under Grant No. 4001859.

## Appendix A Introduction to Orbifold Riemann Surfaces

Orbifolds lie at the intersection of many different areas of mathematics and physics, including algebraic and differential geometry, as well as conformal field theory and string theory. Orbifolds were first introduced into topology and differential geometry by Satake [140], who called them _V-manifolds_. Satake described them as a generalization of smooth manifolds that are locally modeled on a quotient of \(\mathbb{R}^{\mathfrak{n}}\) by the action of a finite group and generalized concepts such as de Rham cohomology and the Gauss-Bonnet theorem to orbifolds. Shortly after the original paper of Satake, Baily introduced complex V-manifolds and generalized both the Hodge decomposition theorem [141] and Kodaira's projective embedding theorem [142] to V-manifolds. The concept of V-manifolds was later re-invented by Thurston [143] under the name of "orbifolds", and the notion of fundamental groups was generalized for these objects.
Even though orbifolds were already very important objects in mathematics, the work of Dixon, Harvey, Vafa, and Witten [144, 145] as well as the subsequent work of Dixon, Friedan, Martinec, and Shenker [146] led to a dramatic increase of interest in orbifolds among physicists.88 The main objective of this appendix is to compile some basic facts about orbifolds and fix some notations used throughout this paper. Although we start with a general setting, the main focus of this appendix is on complex 1-dimensional orbifolds, called orbifold Riemann surfaces, as they are the objects of study in the main body of this paper. Footnote 88: Professor H. Arfaei was among the first string theorists who quickly realized the significance of orbifolds in string theory and worked on various aspects of their role within this framework [147, 148, 149]. To motivate our interest in orbifold Riemann surfaces, let us recall that ordinary Riemann surfaces are complex 1-dimensional algebro-geometric objects with a lot of good properties: Geometrical facts about Riemann surfaces are as "nice" as possible, and they often provide the intuition and motivation for generalizations to more complicated manifolds or varieties. The name "surface" comes from the fact that every Riemann surface is a two-dimensional real analytic manifold (i.e., a surface), but it carries more structure: In fact, a Riemann surface is the simplest example of a Kähler manifold, which means that it admits three mutually compatible structures -- a complex structure, a Riemannian structure, and a symplectic structure. In addition, the existence of non-constant meromorphic functions on these surfaces can be used to show that any compact Riemann surface is a _projective algebraic curve_ and, therefore, can be given by polynomial equations inside a projective space. _Orbifold Riemann surfaces_ are the natural generalization of Riemann surfaces in the orbifold world. Just like their manifold counterparts, orbifold Riemann surfaces can be viewed either as a complex orbifold of dimension 1 (_complex analyst's viewpoint_) or as a smooth, proper Deligne-Mumford stack89 (over \(\mathbb{C}\)) of dimension 1 (_algebraic geometer's viewpoint_). These orbisurfaces also admit Riemannian metrics and can be regarded as the simplest examples of Kähler orbifolds. When the emphasis is on the algebro-geometric viewpoint, orbifold Riemann surfaces are usually called _orbifold curves_ or _orbicurves_. Footnote 89: _Deligne–Mumford stacks_ (or _DM stacks_) are an algebraic generalization of orbifolds, and can be roughly thought of as an “orbivariety” or “orbischeme”. Just as orbifolds are locally the quotient of a manifold by a finite group, a DM stack can be characterized as being “locally” the quotient of a scheme by a finite group action. The viewpoint that one takes on the singular points of an orbifold depends a lot on what type of "space" one is working with: When working in the topological realm, one usually treats the orbifold singularities as an _additional structure_ -- an _orbifold structure_ -- on an _underlying topological space_ in the same way that one thinks of a smooth structure as an additional structure on a topological manifold (see [143, 150, 151, 152, 153]). In particular, a topological space is allowed to have several different orbifold structures. On the other hand, from an algebro-geometric viewpoint [154], it is more convenient to consider (analytic or algebraic) _stacks_ as the proper notion of space.
Such a stack is then called an orbifold (be it an analytic or an algebraic one) if it admits a _covering_ by open substacks of the form [\(\tilde{U}/\Gamma\)],90 parameterising families of \(\Gamma\)-orbits in \(\tilde{U}\), where \(\tilde{U}\) is the local model for representable stacks (i.e. manifolds) and \(\Gamma\) is a finite subgroup of the automorphism group of \(\tilde{U}\). This second point of view treats an orbifold singularity as an intrinsic structure of the space. Footnote 90: If \(M\) is a complex manifold of dimension \(\mathfrak{n}\) and \(\Gamma\subset\operatorname{Aut}(M)\subset\operatorname{GL}(\mathfrak{n},\mathbb{C})\) is a finite subgroup of holomorphic automorphisms of \(M\) which does _not_ act _freely_ (i.e. has fixed points on \(M\)), the quotient space \(M/\Gamma\) will have the structure of an analytic stack (or, equivalently, of a complex orbifold) and we will use the notation \([M/\Gamma]\) to mean \(M/\Gamma\) as an analytic orbifold/stack. The notation \(M/\Gamma\) will be reserved for the _coarse moduli space_ or the _underlying analytic space_ of \([M/\Gamma]\), which will be a _variety with quotient singularities_. For us, the appropriate notion of space will be that of _analytic spaces_ [155], by which we mean a generalization of complex manifolds that allows the presence of singularities and is locally isomorphic to the common zero locus of a finite collection of holomorphic functions. Our point of view, which will be reflected in our introduction to orbifolds, lies somewhere in between the two extremes mentioned above: We will treat the subclass of codimension \(\geq 2\) orbifold singularities as the intrinsic structure of an _underlying complex analytic space_,91 while the orbifold singularities of codimension 1 still need to be treated as additional structures on this analytic space (see [156] for more details). Footnote 91: This underlying analytic space is actually the same as the coarse moduli space of the orbifold regarded as an analytic stack. Throughout this paper, we will need to work with different characterizations of orbifold Riemann surfaces that have appeared in the literature: In order to introduce orbifold geometric structures in Appendix B, we will need to work with a definition of complex orbifolds based on _orbifold charts_ [157], while a characterization of Riemann orbisurfaces as _log pairs_ [158] will be more suitable for studying orbifold metrics [75]. A third way of characterizing Riemann orbisurfaces will be as _Riemann surfaces with signature_, and this viewpoint -- which is closely related to the notion of Riemann orbisurfaces as log pairs -- is the one that we have adopted in the main body of this paper. Our presentation in this appendix is closer to that of references [156, §4], [159, Appx. E], and [160]. For more details about other approaches, the reader is encouraged to consult [151, 152, 153, 154] among others.

### Analytic Geometry

We will start our introduction to complex orbifolds with a quick review of some background information about complex analytic spaces and analytic mappings -- particularly, ramified covering maps of analytic spaces. The reader is advised to consult references [155, 160, 162, 163, 164, 165, 166] for more details.
#### a.1.1 Complex Analytic Spaces and Analytic Mappings Let us start our review of complex analytic spaces with defining a complex analytic subvariety: **Definition A.1** (Analytic subvariety).: Let \(U\) be an open subset of \(\mathbb{C}^{\mathfrak{n}}\) (or of any complex analytic manifold \(M\)) and let \(X\) be a subset of \(U\). We say that \(X\) is an _analytic subvariety_ in \(U\) if, for any point \(x\) in \(U\), there exist a neighborhood \(V\) of \(x\) and a finite number of holomorphic functions \(f_{1},\ldots,f_{k}\) on \(V\) such that \[X\cap V=\left\{z\in V\,\Big{|}\,f_{1}(z)=\cdots=f_{k}(z)=0\right\}.\] In other words, in some open neighborhood of each point of \(U\), an analytic subvariety \(X\subset U\) is the set of common zeros of a finite number of complex analytic functions. We call \(f_{1},\ldots,f_{k}\) a system of local defining functions for \(X\) and a non-empty analytic subvariety of \(U\) which is locally defined by a _single_ (not identically zero) holomorphic function will be called an _analytic hypersurface_ in \(U\). Finally, a subvariety \(X\) of \(U\) is called _irreducible_ if it cannot be written as a union \(X=X_{1}\cup X_{2}\) where \(X_{i}\) are analytic subvarieties of \(U\) properly contained in \(X\). As suggested before, complex analytic subvarieties can be viewed as a generalization of complex (sub)manifolds which allow for the presence of singularities: **Definition A.2** (Smooth and singular points of subvarieties).: A point \(x\) on an analytic subvariety \(X\) is said to be _regular_ or _smooth_ if it is possible to choose coordinates \((z_{1},\ldots,z_{\mathfrak{n}})\) in an open neighborhood \(V\subset U\subset\mathbb{C}^{\mathfrak{n}}\) of point \(x\) such that locally \(X\) is a linear subspace \(\{z\in V\,|\,z_{k+1}=\cdots=z_{\mathfrak{n}}=0\}\) -- i.e. if \(X\cap V\) is a \(k\)-dimensional submanifold of \(\mathbb{C}^{\mathfrak{n}}\). The set of all regular points of \(X\) is an open dense subset of \(X\) and will be denoted by \(\operatorname{Reg}(X)\). The points of a subvariety that are not regular points are called the _singular points_. The set \(X\backslash\operatorname{Reg}(X)\) of all singular points of \(X\) will be denoted by \(\operatorname{Sing}(X)\) and is called the _singular locus_ of \(X\). An analytic subvariety \(X\) will be called a _smooth analytic subvariety_ if \(X=\operatorname{Reg}(X)\); evidently, a smooth analytic variety is just a complex analytic manifold itself. When \(X\) is an irreducible analytic subvariety, the _complex dimension of \(X\)_ is defined as the dimension of its smooth part \(\operatorname{Reg}(X)\) regarded as a complex manifold. More generally, if \(X\) is reducible, the dimension of \(X\) is defined as the maximum of the dimensions of its irreducible components. A reducible analytic subvariety \(X\) is called _pure dimensional_ if every irreducible component of \(X\) has the same dimension. 
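As a concrete illustration of the last two definitions, consider the cuspidal cubic -- a standard example, recalled here for orientation rather than taken from the main text: \[X=\left\{(z_{1},z_{2})\in\mathbb{C}^{2}\,\middle|\,f(z_{1},z_{2}):=z_{1}^{2}-z_{2}^{3}=0\right\}.\] Away from the origin, \(df=2z_{1}\,dz_{1}-3z_{2}^{2}\,dz_{2}\) does not vanish on \(X\), so the implicit function theorem exhibits \(X\) locally as a \(1\)-dimensional complex submanifold and \(X\setminus\{0\}\subseteq\operatorname{Reg}(X)\); at the origin, no choice of local coordinates turns \(X\) into a linear subspace, so \(\operatorname{Sing}(X)=\{0\}\). Since \(z_{1}^{2}-z_{2}^{3}\) does not factor, \(X\) is an irreducible analytic hypersurface of pure dimension \(1\).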
It is clear that an analytic subvariety \(X\subset U\) can be endowed with the relative topology coming from \(U\); however, the main point in the study of analytic subvarieties is that one should take into consideration not only the topology of these analytic subvarieties but also their _function-theoretic properties_: For simplicity, let us take the open set \(U\subset\mathbb{C}^{\mathfrak{n}}\) to be a sufficiently small open polydisc \(\Delta(\boldsymbol{\epsilon})\) such that an analytic subvariety \(X\) of \(\Delta(\boldsymbol{\epsilon})\) can be determined as the set of common zeros of a finite number of functions that are analytic throughout \(\Delta(\boldsymbol{\epsilon})\). Under the natural addition and multiplication of complex-valued functions, the set of all holomorphic functions on \(\Delta(\boldsymbol{\epsilon})\) forms a ring \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\) containing the constants \(c\in\mathbb{C}\) -- hence in fact a \(\mathbb{C}\)-algebra. The set of all analytic functions in \(\Delta(\boldsymbol{\epsilon})\) which vanish on \(X\) forms an _ideal_ \(\mathfrak{I}(X)\) in the ring \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\), called the _ideal of \(X\)_. Then, the _ring of holomorphic functions on \(X\)_ is given by the quotient ring \(\mathfrak{O}_{X}:=\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}/\mathfrak{I}(X)\). It is easy to see that a subvariety \(X\) of \(\Delta(\boldsymbol{\epsilon})\) is irreducible precisely when the ideal \(\mathfrak{I}(X)\) is a _prime_ ideal in \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\), by which we mean \(\mathfrak{I}(X)\neq\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\) and for any two holomorphic functions \(f,f^{\prime}\in\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\) the statement \(ff^{\prime}\in\mathfrak{I}(X)\) implies \(f\in\mathfrak{I}(X)\) or \(f^{\prime}\in\mathfrak{I}(X)\) (or both). **Remark A.1**.: Let \(f_{1},\ldots,f_{k}\in\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\) be the system of defining functions for a general (i.e., not necessarily irreducible) subvariety \(X\). Then, the defining functions \(f_{1},\ldots,f_{k}\) generate an _ideal_ \(\mathfrak{I}\) in \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\) which is sometimes called the _defining ideal_ for \(X\). While all holomorphic functions \(f\in\mathfrak{I}\) vanish on \(X\), i.e. \(\mathfrak{I}\subseteq\mathfrak{I}(X)\), the inverse statement is _not_ necessarily true -- i.e. in general \(\mathfrak{I}(X)\nsubseteq\mathfrak{I}\). However, provided that the polydisc \(\Delta(\boldsymbol{\epsilon})\) is small enough, an important fact is that _all_ holomorphic functions \(f\in\mathfrak{I}(X)\) have a power which is contained in \(\mathfrak{I}\). This motivates us to define the _radical_ of \(\mathfrak{I}\) as the set \[\sqrt{\mathfrak{I}}:=\left\{f\in\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\,\Big{|}\,f^{k^{\prime}}\in\mathfrak{I}\text{ for some positive integer }k^{\prime}\right\}.\] This is again an ideal in \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\) and we have \(\sqrt{\mathfrak{I}}=\mathfrak{I}(X)\). When the defining ideal \(\mathfrak{I}\) is prime in \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\), we get \(\sqrt{\mathfrak{I}}=\mathfrak{I}\); hence, \(\mathfrak{I}=\mathfrak{I}(X)\) exactly when \(\mathfrak{I}\) is a radical ideal.
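To see the difference between \(\mathfrak{I}\) and \(\mathfrak{I}(X)=\sqrt{\mathfrak{I}}\) in the simplest possible case, one can take \(\mathfrak{n}=1\); the following toy example is ours and is included only for illustration: \[\mathfrak{I}=(z^{2})\subset\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})},\qquad X=\left\{z\in\Delta(\boldsymbol{\epsilon})\,\big{|}\,z^{2}=0\right\}=\{0\},\qquad\mathfrak{I}(X)=(z)=\sqrt{\mathfrak{I}}\supsetneq\mathfrak{I}.\] Here \(z\) vanishes on \(X\) but \(z\notin\mathfrak{I}\), while \(z^{2}\in\mathfrak{I}\); since \((z^{2})\) is not a radical ideal, \(\mathfrak{I}\subsetneq\mathfrak{I}(X)\), in line with Remark A.1.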
More generally, when the open set \(U\subset\mathbb{C}^{\mathfrak{n}}\) is not restricted to be small (or to be an open polydisc), the above statements hold true in small neighborhoods of each point \(x\in X\). For such local considerations, it is often convenient to introduce the notion of _germs_: To make this precise, consider two pairs \((X,U)\) and \((X^{\prime},U^{\prime})\) where \(U,U^{\prime}\) are open neighborhoods of the origin in \(\mathbb{C}^{\mathfrak{n}}\) and \(X,X^{\prime}\) are analytic subvarieties of \(U,U^{\prime}\), respectively. The two pairs \((X,U)\) and \((X^{\prime},U^{\prime})\) are said to define the same _germ of analytic subvarieties_ at the origin in \(\mathbb{C}^{\mathfrak{n}}\) if there exists a neighborhood \(W\subseteq U\cap U^{\prime}\) of \(0\) such that \(X\cap W=X^{\prime}\cap W\). We will denote the germ of analytic subvariety \(X\) at \(0\) in \(\mathbb{C}^{\mathfrak{n}}\) by \([X]_{0}\). Now, let \(\mathfrak{O}_{U}\) be the ring of holomorphic functions in some open subset \(U\subset\mathbb{C}^{\mathfrak{n}}\) containing the origin. Analogously, we can define an equivalence relation \(\sim_{0}\) between two holomorphic functions \(f,f^{\prime}\in\mathfrak{O}_{U}\) where \(f\sim_{0}f^{\prime}\) if there exists a neighborhood \(W\) of \(0\) such that the restrictions of \(f\) and \(f^{\prime}\) to \(W\) are identical -- i.e. \(f\big{|}_{{}_{W}}=f^{\prime}\big{|}_{{}_{W}}\). The equivalence class of a function \(f\) is called the _germ of holomorphic function \(f\) at the origin_ and will be denoted by \([f]_{0}\). In addition, the quotient ring \(\mathfrak{O}_{U,0}:=\mathfrak{O}_{U}/\sim_{0}\) will be the _ring of germs of holomorphic functions at the origin_.92 Footnote 92: If we denote by \(\mathbb{C}\{z_{1},\ldots,z_{\mathfrak{n}}\}\) the set of power series which converge absolutely in some neighborhood of \(0\), this set also has the structure of a ring. Since, as in the one variable case, \(f\sim_{0}f^{\prime}\) if and only if \(f\) and \(f^{\prime}\) have the same power series expansion, we may identify \(\mathfrak{O}_{U,0}\) with \(\mathbb{C}\{z_{1},\ldots,z_{\mathfrak{n}}\}\). Similar to the case of analytic subvarieties of a sufficiently small open polydisc, to each germ \([X]_{0}\) of an analytic subvariety at the origin in \(\mathbb{C}^{\mathfrak{n}}\) there is canonically associated an ideal \(\mathfrak{I}([X]_{0})\) in the local ring93 \(\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},0}\), which is defined as the ideal of germs of all analytic functions vanishing on the subvariety \(X\) representing the germ \([X]_{0}\). In the other direction, to each ideal \(\mathfrak{I}\subseteq\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},0}\) there is canonically associated a germ of an analytic subvariety at the origin in \(\mathbb{C}^{\mathfrak{n}}\), called the _locus of the ideal_ \(\mathfrak{I}\) and denoted by \([X(\mathfrak{I})]_{0}\). The germ \([X(\mathfrak{I})]_{0}\) is defined as the germ represented by the analytic subvariety \(X=\{z\in U\,|\,f_{1}(z)=\cdots=f_{k}(z)=0\}\) of the open set \(U\subset\mathbb{C}^{\mathfrak{n}}\), where \(f_{i}\in\mathfrak{O}_{U}\) are analytic functions in \(U\) whose germs in \(\mathfrak{O}_{U,0}\) generate the ideal \(\mathfrak{I}\).
Note that, similar to what we saw in Remark A.1, \(\mathfrak{I}\big{(}[X(\mathfrak{I})]_{0}\big{)}=\sqrt{\mathfrak{I}}\), and \(\sqrt{\mathfrak{I}}=\mathfrak{I}\) iff the ideal \(\mathfrak{I}\subseteq\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},0}\) is a radical ideal (in particular, if it is prime -- equivalently, if the germ \([X(\mathfrak{I})]_{0}\) is irreducible). Finally, the residue class ring \(\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},0}/\mathfrak{I}([X]_{0})\) will now be denoted by \(\mathfrak{O}_{X,0}\) and will be called the _ring of germs of holomorphic functions on the subvariety \(X\)_ at the origin in \(\mathbb{C}^{\mathfrak{n}}\). Having learned how to think about the function-theoretic properties of an analytic subvariety locally, we are now ready to study those properties from a global perspective; it is then convenient to think about _sheaves_ of rings/ideals/modules, as we shall now explain: Let us start with the local rings \(\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},z}\) of germs of holomorphic functions at any point \(z\in\mathbb{C}^{\mathfrak{n}}\). The set of rings \(\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},z}\) for all points \(z\in\mathbb{C}^{\mathfrak{n}}\) can be taken to form the _sheaf of germs of holomorphic functions of \(\mathfrak{n}\) complex variables_ \(\mathscr{O}_{\mathbb{C}^{\mathfrak{n}}}\); the restriction \(\mathscr{O}_{\mathbb{C}^{\mathfrak{n}}}\big{|}_{U}\) of the sheaf of rings \(\mathscr{O}_{\mathbb{C}^{\mathfrak{n}}}\) to any open set \(U\subset\mathbb{C}^{\mathfrak{n}}\) will be simply denoted by \(\mathscr{O}_{U}\). Similarly, consider an analytic subvariety \(X\) of an open subset \(U\subset\mathbb{C}^{\mathfrak{n}}\) and to each point \(z\in U\) associate the ideal \(\mathfrak{I}([X]_{z})\subseteq\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},z}\) of the germ of the subvariety \(X\) at that point (if \(z\notin X\), the ideal \(\mathfrak{I}([X]_{z})\) is of course the trivial ideal \(\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},z}\)). The set of all ideals \(\mathfrak{I}([X]_{z})\) at any point \(z\in U\) forms an analytic subsheaf 94 of the sheaf \(\mathscr{O}_{U}\) over the set \(U\), which will be denoted by \(\mathscr{I}(X)\) and called the _sheaf of ideals of the analytic subvariety \(X\)_. Finally, the restriction to the subvariety \(X\) of the analytic sheaf \(\mathscr{O}_{U}/\mathscr{I}(X)\) will be called the _sheaf of germs of holomorphic functions on the subvariety \(X\)_ and is denoted by \(\mathscr{O}_{X}\); the local rings \(\mathfrak{O}_{X,x}=\mathfrak{O}_{U,x}/\mathfrak{I}([X]_{x})\) at any point \(x\in X\) can then be viewed as the _stalks_ of \(\mathscr{O}_{X}\). Footnote 94: An _analytic sheaf_ over an open set \(U\subset\mathbb{C}^{\mathfrak{n}}\) is a sheaf of modules over \(\mathscr{O}_{U}\). Locally, a germ of an analytic subvariety determines a germ of a topological space, and this space further possesses a distinguished subring of the ring of germs of continuous complex-valued functions, namely the ring of germs of holomorphic functions on the subvariety. This observation suggests that the correct way of characterizing an analytic subvariety \(X\) is as a \(\mathbb{C}\)_-ringed space_: **Definition A.3**.: A _ringed space_ \(X\) is a pair \((|X|,\mathscr{O}_{X})\) consisting of a Hausdorff topological space \(|X|\) and a sheaf of rings \(\mathscr{O}_{X}\) on \(|X|\), called the _structure sheaf_ of \(X\). It is called a _locally ringed space_ when, for every \(x\in|X|\), the stalk \(\mathscr{O}_{X,x}\) is a local ring. Its maximal ideal is denoted by \(\mathfrak{m}_{X,x}\).
A locally ringed space is called a \(\mathbb{C}\)_-ringed space_ when furthermore \(\mathscr{O}_{X}\) is a sheaf of \(\mathbb{C}\)-algebras and, for every \(x\in|X|\), there is an isomorphism \(\mathscr{O}_{X,x}/\mathfrak{m}_{X,x}\cong\mathbb{C}\) of \(\mathbb{C}\)-algebras. It is clear that for analytic subvarieties, the role of structure sheaf is played by the _sheaf of germs of holomorphic functions_ on the subvariety. The notion of an analytic subvariety as defined in A.1 depends quite essentially on a particular embedding in the ambient space \(\mathbb{C}^{\mathfrak{n}}\). For example, the germ of an analytic subvariety at the origin in \(\mathbb{C}^{\mathfrak{n}}\) can also be viewed as the germ of an analytic subvariety at the origin in \(\mathbb{C}^{\mathfrak{n}+1}\) through the canonical embedding \(\mathbb{C}^{\mathfrak{n}}\hookrightarrow\mathbb{C}^{\mathfrak{n}+1}\) but these will be inequivalent germs of analytic subvarieties. It is thus evident that there is a point to introducing an equivalence relation among analytic subvarieties in order to investigate those properties, which are to some extent independent of the embeddings of these subvarieties in their ambient complex number spaces. Once again, for the sake of simplicity, let us consider sufficiently small open polydiscs \(\varDelta^{\mathfrak{n}_{1}}(\boldsymbol{\epsilon}_{1})\subset\mathbb{C}^{ \mathfrak{n}_{1}}\) and \(\varDelta^{\mathfrak{n}_{2}}(\boldsymbol{\epsilon}_{2})\subset\mathbb{C}^{ \mathfrak{n}_{2}}\) such that analytic subvarieties \(X_{1}\subset\varDelta^{\mathfrak{n}_{1}}(\boldsymbol{\epsilon}_{1})\) and \(X_{2}\subset\varDelta^{\mathfrak{n}_{2}}(\boldsymbol{\epsilon}_{2})\) can be determined as the set of common zeros of a finite number of functions that are analytic throughout \(\varDelta^{\mathfrak{n}_{1}}(\boldsymbol{\epsilon}_{1})\) and \(\varDelta^{\mathfrak{n}_{2}}(\boldsymbol{\epsilon}_{2})\), respectively. A continuous mapping between two such analytic subvarieties \(X_{1}\subset\varDelta^{\mathfrak{n}_{1}}(\boldsymbol{\epsilon}_{1})\) and \(X_{2}\subset\varDelta^{\mathfrak{n}_{2}}(\boldsymbol{\epsilon}_{2})\) is said to be a _complex analytic mapping_\(f:X_{1}\to X_{2}\) between these two subvarieties if there is a holomorphic mapping \(F:\varDelta^{\mathfrak{n}_{1}}(\boldsymbol{\epsilon}_{1})\to\varDelta^{ \mathfrak{n}_{2}}(\boldsymbol{\epsilon}_{2})\) such that the restriction of \(F\) to the subvariety \(X_{1}\subset\varDelta^{\mathfrak{n}_{1}}(\boldsymbol{\epsilon}_{1})\) is just \(f\) -- i.e. \(F\big{|}_{X_{1}}=f\). Additionally, two analytic subvarieties \(X_{1}\subset\varDelta^{\mathfrak{n}_{1}}(\boldsymbol{\epsilon}_{1})\) and \(X_{2}\subset\varDelta^{\mathfrak{n}_{2}}(\boldsymbol{\epsilon}_{2})\) are said to be _analytically equivalent_ if there are complex analytic mappings \(f:X_{1}\to X_{2}\) and \(g:X_{2}\to X_{1}\) such that the compositions \(f\circ g\) and \(g\circ f\) are the appropriate identity mappings. This notion of equivalence thus allows one to speak of analytic subvarieties without reference to the spaces in which they are embedded; an equivalence class is called an _analytic variety_, and a space which has locally the structure of an analytic variety is called an _analytic space_. For global considerations, it is more convenient to think about an analytic subvariety \(X\) as a \(\mathbb{C}\)-ringed space \((|X|,\mathscr{O}_{X})\). 
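To see that this notion of equivalence is strictly finer than topological homeomorphism, one can compare the cuspidal cubic with the complex line; the example is standard and is phrased here in the language of \(\mathbb{C}\)-ringed spaces: \[\nu:\mathbb{C}\longrightarrow X=\left\{(z_{1},z_{2})\in\mathbb{C}^{2}\,\middle|\,z_{1}^{2}=z_{2}^{3}\right\},\qquad\nu(t)=(t^{3},t^{2}),\] is a holomorphic bijection and, in fact, a homeomorphism, but it is _not_ an analytic equivalence: the stalks at the distinguished points differ, \[\mathscr{O}_{X,0}\cong\mathbb{C}\{t^{2},t^{3}\}\subsetneq\mathbb{C}\{t\}\cong\mathscr{O}_{\mathbb{C},0},\] since every holomorphic function on \(X\) pulls back along \(\nu\) to a power series with no linear term. The structure sheaf thus detects the cusp even though the underlying topological spaces are indistinguishable.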
Complex analytic mappings \(f:X_{1}\to X_{2}\) between two subvarieties \(X_{1}\) and \(X_{2}\) are then viewed as _morphisms of \(\mathbb{C}\)-ringed spaces_ between \((|X_{1}|,\mathscr{O}_{X_{1}})\) and \((|X_{2}|,\mathscr{O}_{X_{2}})\): **Definition A.4** (Morphism of \(\mathbb{C}\)-ringed spaces).: A _morphism_\(f:X_{1}\to X_{2}\) of ringed spaces \((|X_{1}|,\mathscr{O}_{X_{1}})\) and \((|X_{2}|,\mathscr{O}_{X_{2}})\) is a pair \(f=(|f|,f^{*})\) consisting of a continuous map \[|f|:|X_{1}|\to|X_{2}|\] and a homomorphism \[f^{*}:\mathscr{O}_{X_{2}}\to|f|_{*}(\mathscr{O}_{X_{1}})\] of sheaves of rings on \(X_{2}\). For any point \(x\in X_{1}\), we think of \(f^{*}_{x}\) as the ring homomorphism \[f^{*}_{x}:\mathscr{O}_{X_{2},f(x)}\to\mathscr{O}_{X_{1},x}\] defined as the composition of the canonical homomorphisms \[\mathscr{O}_{X_{2},f(x)}\to\left(|f|_{*}(\mathscr{O}_{X_{1}})\right)_{f(x)} \to\mathscr{O}_{X_{1},x}.\] In case \(X_{1}\) and \(X_{2}\) are locally ringed spaces, a morphism by definition has to be _local_, that is, satisfy \[f^{*}_{x}\big{(}\mathfrak{m}_{X_{2},f(x)}\big{)}\subset\mathfrak{m}_{X_{1},x}\] for every \(x\in X_{1}\). A _morphism of \(\mathbb{C}\)-ringed spaces_\(X_{1}\) and \(X_{2}\) is a morphism of ringed spaces, where \(f^{*}\) is furthermore a homomorphism of sheaves of \(\mathbb{C}\)-algebras. In this case, \(f^{*}_{x}\) is automatically local for every \(x\in X_{1}\). It is an immediate consequence of this definition that two analytic subvarieties \(X_{1}\) and \(X_{2}\) determine _equivalent varieties_ if and only if there is a _topological homeomorphism_\(|f|:|X_{1}|\to|X_{2}|\) inducing an isomorphism \(f^{*}:\mathscr{O}_{X_{2}}\xrightarrow{\cong}|f|_{*}(\mathscr{O}_{X_{1}})\) between the sheaves of \(\mathbb{C}\)-algebras. Thus, the coherent analytic sheaf \(\mathscr{O}_{X}\) on an analytic subvariety \(X\) is the complete invariant determining equivalence as varieties -- hence the name _structure sheaf_. To build up complex analytic spaces, we construct local models as follows: Let \(U\subset\mathbb{C}^{\mathfrak{n}}\) be an open subset, and assume that \(\mathscr{I}\) is a coherent sheaf of \(\mathscr{O}_{U}\)-ideals. Then \[\operatorname{Supp}\left(\mathscr{O}_{U}/\mathscr{I}\right)=\left\{x\in U \,\Big{|}\left(\mathscr{O}_{U}/\mathscr{I}\right)_{x}\neq 0\right\}\] is an analytic subset of \(U\) which we will denote by \(\tilde{A}\). The pair \(\left(\tilde{A},\left(\mathscr{O}_{U}/\mathscr{I}\right)\big{|}_{\tilde{A}}\right)\) is a \(\mathbb{C}\)-ringed space, which is called a _local model of an analytic space_. **Definition A.5** (Complex analytic spaces and analytic mappings).: A _complex analytic space_, or an _analytic space_ for short, is a \(\mathbb{C}\)-ringed space \((|X|,\mathscr{O}_{X})\) satisfying the following conditions: 1. \(|X|\) is Hausdorff, 2. For every \(x\in X\), there is an open neighbourhood \(V_{x}\) of \(x\) such that \(\left(V_{x},\mathscr{O}_{X}\big{|}_{V_{x}}\right)\) is isomorphic (as \(\mathbb{C}\)-ringed space) to some local model. If \(X=(|X|,\mathscr{O}_{X})\) and \(Y=(|Y|,\mathscr{O}_{Y})\) are complex analytic spaces, then any morphism \[(|f|,f^{*}):(|X|,\mathscr{O}_{X})\to(|Y|,\mathscr{O}_{Y}),\] of \(\mathbb{C}\)-ringed spaces is called an _analytic map_ (or _holomorphic map_). 
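The simplest local model that is not a reduced point shows why the structure sheaf, and not merely the underlying set, matters; the following toy example is ours: \[U=\mathbb{C},\qquad\mathscr{I}=(z^{2})\subset\mathscr{O}_{\mathbb{C}},\qquad\tilde{A}=\operatorname{Supp}\left(\mathscr{O}_{\mathbb{C}}/\mathscr{I}\right)=\{0\},\qquad\left(\mathscr{O}_{\mathbb{C}}/\mathscr{I}\right)\big{|}_{\tilde{A}}\cong\mathbb{C}\{z\}/(z^{2}).\] The underlying space is a single point, but its ring of functions \(\mathbb{C}\{z\}/(z^{2})\cong\mathbb{C}[z]/(z^{2})\) remembers a first-order infinitesimal direction; this "fat point" is the prototype of the non-reduced analytic spaces discussed below.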
**Remark A.2**.: Note that any complex manifold \(M\) can be considered as an analytic space \((|M|,\mathscr{O}_{M})\) and holomorphic maps \(f:M\to N\) between complex manifolds can be extended to analytic mappings \((|f|,f^{*}):(|M|,\mathscr{O}_{M})\to(|N|,\mathscr{O}_{N})\). Let \(|X|\) be a topological space. Any analytic variety \((|U|,\mathscr{O}_{U})\), where \(|U|\) is an open set in \(|X|\), is called an _analytic chart_ on \(|X|\). A family \(\left\{(|U_{a}|,\mathscr{O}_{a}),g^{*}_{ab}\right\}_{a,b\in A}\) consisting of analytic charts on \(|X|\) and of \(\mathbb{C}\)-algebra isomorphisms \[g^{*}_{ab}:\mathscr{O}_{b}\big{|}_{|U_{a}|\cap|U_{b}|}\to\mathscr{O}_{a}\big{|}_{|U_{a}|\cap|U_{b}|},\] is called an _analytic atlas_ on \(|X|\) if \(\left\{|U_{a}|\right\}_{a\in A}\) is an open covering of \(|X|\), and if furthermore \[g^{*}_{ab}\circ g^{*}_{bc}=g^{*}_{ac}\quad\text{for all}\quad a,b,c\in A\qquad(\text{cocycle condition});\] the maps \(g^{*}_{ab}\) are the _gluing isomorphisms_ of the atlas. Furthermore, we have the following lemma (see e.g. [163, §1.7] for the proof): **Lemma A.1** (Gluing Lemma).: _Let \(\left\{(|V_{i}|,\mathscr{O}_{i}),g^{*}_{ij}\right\}_{i,j\in I}\) be an analytic atlas on a Hausdorff topological space \(|X|\). Then, there exists a unique (up to isomorphism) complex analytic space \((|X|,\mathscr{O}_{X})\) and \(\mathbb{C}\)-algebra isomorphisms \(f^{*}_{i}:\mathscr{O}_{X}\big{|}_{|V_{i}|}\to\mathscr{O}_{i}\) for all \(i\in I\), such that_ \[g^{*}_{ij}=f^{*}_{i}\circ(f^{*}_{j})^{-1}\] _on every intersection \(|V_{i}|\cap|V_{j}|\)._ A complex analytic space \(Y\) is called an _open complex analytic subspace_ of \(X\) if \(|Y|\) is an open subset of \(|X|\) and \(\mathscr{O}_{Y}=\mathscr{O}_{X}\big{|}_{Y}\). In addition, \(Y\) is called a _closed complex analytic subspace_ of \(X\) if there is a coherent ideal \(\mathscr{J}\subset\mathscr{O}_{X}\) such that \(|Y|=\operatorname{Supp}(\mathscr{O}_{X}/\mathscr{J})\) and \(\mathscr{O}_{Y}=(\mathscr{O}_{X}/\mathscr{J})\big{|}_{Y}\). In this case, there is a canonical analytic map determined by the injection, which we denote by \(Y\hookrightarrow X\). A subset \(A\) of a complex analytic space \(X\) is called _analytic_ when there is a coherent ideal \(\mathscr{J}\subset\mathscr{O}_{X}\) such that \(A=\operatorname{Supp}(\mathscr{O}_{X}/\mathscr{J})\). We now discuss several possibilities of how good (respectively, how bad) a given analytic space \(X=(|X|,\mathscr{O}_{X})\) may behave at a point \(x\in X\). The situation is optimal if \(x\) is a smooth point of \(X\): **Definition A.6** (Smooth and singular points of analytic spaces).: A point \(x\) in an analytic space \(X=(|X|,\mathscr{O}_{X})\) is called _smooth_ or _regular_ if there exists a neighborhood \(V_{x}\) of \(x\) in \(|X|\) and an open set \(U\) in some complex number space \(\mathbb{C}^{\mathfrak{n}}\) such that \((V_{x},\mathscr{O}_{X}\big{|}_{V_{x}})\) and \((U,\mathscr{O}_{U})\) are analytically isomorphic -- i.e. if there exist analytic mappings \[(|f|,f^{*}):(V_{x},\mathscr{O}_{X}\big{|}_{V_{x}})\to(U,\mathscr{O}_{U})\quad\text{and}\quad(|g|,g^{*}):(U,\mathscr{O}_{U})\to(V_{x},\mathscr{O}_{X}\big{|}_{V_{x}}),\] such that the compositions \(|f|\circ|g|\), \(|g|\circ|f|\), \(f^{*}\circ g^{*}\), and \(g^{*}\circ f^{*}\) are the appropriate identity mappings. In other words, a point \(x\in X\) is smooth if and only if \(\mathscr{O}_{X,x}\cong\mathscr{O}_{\mathbb{C}^{\mathfrak{n}},x}\).
Of course, the singular locus is defined to be the set of all non-smooth points: \(\operatorname{Sing}(X)=X\backslash\operatorname{Reg}(X)\). When \(X\) is not necessarily smooth (i.e., has a non-empty singular locus), we define the following notions regarding the behavior of an analytic space \(X\) at a point \(x\in X\):
* The analytic space \(X\) is called _irreducible at point_ \(x\) if the stalk \(\mathscr{O}_{X,x}\) is an integral domain; otherwise, \(X\) is called reducible at \(x\). All smooth points are irreducible points, since for such a point \(x\), the stalk \(\mathscr{O}_{X,x}\) is isomorphic to the ring of convergent power series \(\mathbb{C}\{z_{1},\dots,z_{\mathfrak{n}}\}\). The analytic space \(X\) will be called _locally irreducible_ if all points of \(X\) are irreducible; in particular, complex manifolds are locally irreducible.
* The complex analytic space \(X\) is called _reduced at_ \(x\) if the stalk \(\mathscr{O}_{X,x}\) is a reduced ring -- i.e. does not contain nilpotent elements. All irreducible points are reduced points of \(X\). We call \(X\) a _reduced analytic space_ if \(X\) is reduced at all its points; this happens when every local model for the space is defined by a radical sheaf of ideals. An analytic space \(X\) which is not reduced has a reduction \(X_{\text{red}}\), which is a reduced analytic space with the same underlying topological space. There exists a canonical embedding \(\iota:X_{\text{red}}\hookrightarrow X\), and every morphism from a reduced analytic space to \(X\) factors through \(\iota\). In particular, every analytic mapping \(f:Y\to X\) of complex analytic spaces induces a canonical analytic morphism \(f_{\text{red}}:Y_{\text{red}}\to X_{\text{red}}\) of their reductions such that \(f\circ\iota_{Y}=\iota_{X}\circ f_{\text{red}}\), where \(\iota_{X}:X_{\text{red}}\hookrightarrow X\) and \(\iota_{Y}:Y_{\text{red}}\hookrightarrow Y\) are the canonical embeddings. When \(Y=Y_{\text{red}}\), the sheaf homomorphism component, \(f^{*}\), of an analytic map \((|f|,f^{*}):(|Y|,\mathscr{O}_{Y})\to(|X|,\mathscr{O}_{X})\) is _uniquely_ determined by its continuous mapping component \(|f|\).
* A reduced point \(x\in X\) will be called a _normal point_ of \(X\) if the stalk \(\mathscr{O}_{X,x}\) is integrally closed in its quotient ring. Smooth points are normal, and \(X\) is irreducible at every normal point. An analytic space \(X\) will be called _normal_ if every point \(x\in X\) is a normal point; in a normal analytic space, the _singular locus has codimension at least two_. Once again, all non-normal analytic spaces can be turned into normal spaces in a canonical way; this construction is called the _normalization_. (For instance, the normalization of the cuspidal cubic \(X=\{z_{1}^{2}=z_{2}^{3}\}\) considered above is the map \(\nu:\mathbb{C}\to X\), \(t\mapsto(t^{3},t^{2})\).) In the rest of this appendix, we will mainly focus on _normal analytic spaces._

#### a.1.2 An Intermezzo on Line Bundles and Divisors

In this subsection, we will introduce the basic definitions concerning line bundles and divisors. Although we are mainly interested in complex analytic spaces, we will mostly focus our attention on complex manifolds (i.e., smooth analytic spaces) to avoid complications; we will comment on some of the subtleties of generalizing the introduced notions to singular analytic spaces. We start by reviewing the notions of connection, curvature, and Chern classes for complex vector bundles (see [156, 163, 167, 168] for more details).

#### Complex Vector Bundles

Let \(M\) be a complex manifold of dimension \(\mathfrak{n}\).
A _complex vector bundle_ of _rank_ \(r\) over \(M\) is a smooth manifold \(E\) together with a continuous map \(\pi:E\to M\) such that there exists an open covering \(\mathcal{V}=\{V_{\alpha}\}_{\alpha\in A}\) of \(M\) with the following properties: 1. for each \(\alpha\in A\), there is a homeomorphism \[\psi_{\alpha}:\pi^{-1}(V_{\alpha})\xrightarrow{\sim}V_{\alpha}\times\mathbb{C}^{r}\] with \(\operatorname{pr}\circ\psi_{\alpha}=\pi\) where \(\operatorname{pr}\) denotes the projection \(V_{\alpha}\times\mathbb{C}^{r}\to V_{\alpha}\); 2. for each pair \((\alpha,\beta)\in A\times A\), there is a \(\mathcal{C}^{\infty}\) map \[g^{\alpha\beta}:V_{\alpha}\cap V_{\beta}\to\operatorname{GL}(r,\mathbb{C})\] with \[\psi_{\alpha}\circ\psi_{\beta}^{-1}(p,\zeta)=\big{(}p,g^{\alpha\beta}(p)\zeta\big{)}\qquad\text{for}\qquad(p,\zeta)\in\big{(}V_{\alpha}\cap V_{\beta}\big{)}\times\mathbb{C}^{r}.\] We call \(\psi_{\alpha}\) a _trivialization_ of \(E\) on \(V_{\alpha}\). We also call \(g^{\alpha\beta}\) the _transition matrix_ of \(E\) on \(V_{\alpha}\cap V_{\beta}\) and the collection \(\{g^{\alpha\beta}\}_{(\alpha,\beta)\in A\times A}\) the _system of transition matrices_ of \(E\). For each point \(p\) in \(V_{\alpha}\cap V_{\beta}\cap V_{\gamma}\) we have the identity \[g^{\alpha\beta}(p)\,g^{\beta\gamma}(p)=g^{\alpha\gamma}(p)\qquad\qquad\text{(cocycle condition)}.\] (A.1) Thus, in particular, \(g^{\alpha\alpha}(p)=1\) (the identity matrix) and \(g^{\beta\alpha}(p)=\big{(}g^{\alpha\beta}(p)\big{)}^{-1}\). We may think of the system \(\big{\{}(V_{\alpha},\psi_{\alpha},g^{\alpha\beta})\big{\}}_{(\alpha,\beta)\in A\times A}\) as defining a _vector bundle structure_ on \(E\). Conversely, if we are given an open covering \(\mathcal{V}=\{V_{\alpha}\}_{\alpha\in A}\) of \(M\) and a collection \(\{g^{\alpha\beta}\}_{(\alpha,\beta)\in A\times A}\) of \(\mathcal{C}^{\infty}\) maps \[g^{\alpha\beta}:V_{\alpha}\cap V_{\beta}\to\operatorname{GL}(r,\mathbb{C}),\] satisfying the cocycle condition (A.1) for \(p\in V_{\alpha}\cap V_{\beta}\cap V_{\gamma}\), we may construct a vector bundle as follows: For \((p_{\alpha},\zeta_{\alpha})\in V_{\alpha}\times\mathbb{C}^{r}\) and \((p_{\beta},\zeta_{\beta})\in V_{\beta}\times\mathbb{C}^{r}\), we define \((p_{\alpha},\zeta_{\alpha})\sim(p_{\beta},\zeta_{\beta})\) if and only if \[\begin{cases}p_{\alpha}=p_{\beta}(=p)\\ \zeta_{\alpha}=g^{\alpha\beta}(p)\,\zeta_{\beta}.\end{cases}\] Then, it is easy to see that this is an equivalence relation in the disjoint union \(\bigsqcup_{\alpha}(V_{\alpha}\times\mathbb{C}^{r})\). Now, let \(E\) be the quotient space \((\bigsqcup_{\alpha}(V_{\alpha}\times\mathbb{C}^{r}))/\sim\). Then, since \[(V_{\alpha}\times\mathbb{C}^{r})/\sim\,=\,V_{\alpha}\times\mathbb{C}^{r},\] \(E\) has a vector bundle structure with \(\{g^{\alpha\beta}\}_{(\alpha,\beta)\in A\times A}\) as a system of transition matrices. A complex vector bundle \(E\) over a complex manifold \(M\) is said to be _holomorphic_ if it admits a system of transition matrices \(\{g^{\alpha\beta}\}_{(\alpha,\beta)\in A\times A}\) such that each \(g^{\alpha\beta}\) is holomorphic. Note that in this case, \(E\) has the structure of a complex manifold so that the projection \(\pi:E\to M\) is a holomorphic submersion. We will come back to this point later when we study holomorphic vector bundles on complex analytic spaces.
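As a quick computational sanity check of the cocycle condition (A.1), one can verify it symbolically for a concrete bundle. The bundle, the chart conventions, and the use of sympy below are our choices for illustration; they are not taken from the text:

```python
# A symbolic sanity check of the cocycle condition (A.1) for the transition
# functions g^{ab} = x_b / x_a of the tautological line bundle O(-1) on CP^2,
# written in homogeneous coordinates on the standard charts U_a = {x_a != 0}.
import sympy as sp
from itertools import product

x = sp.symbols('x0 x1 x2', nonzero=True)

def g(a: int, b: int):
    """Transition function of O(-1) on the overlap U_a ∩ U_b."""
    return x[b] / x[a]

# Check g^{ab} g^{bc} = g^{ac} on every triple overlap U_a ∩ U_b ∩ U_c.
for a, b, c in product(range(3), repeat=3):
    assert sp.simplify(g(a, b) * g(b, c) - g(a, c)) == 0

# In particular, g^{aa} = 1 and g^{ba} = (g^{ab})^{-1}, as noted in the text.
assert all(g(a, a) == 1 for a in range(3))
print("cocycle condition (A.1) verified for O(-1) on CP^2")
```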
Let \(\pi:E\to M\) be a complex vector bundle of rank \(r\) and \(V\) an open set in \(M\). A smooth complex section of \(E\) on \(V\) is a \(\mathcal{C}^{\infty}\)-map \(s:V\to E\big{|}_{V}:=\pi^{-1}(V)\) such that \(\pi\circ s=\mathrm{id}_{V}\), the identity map of \(V\). A vector bundle, \(E\), always admits the _zero section_ -- i.e., the map \(M\to E\) which assigns to each point \(p\in M\) the zero of the vector space \(E_{p}\). The set of \(\mathcal{C}^{\infty}\) complex sections of \(E\) on \(V\) is denoted by \(\mathcal{C}^{\infty}(V,E)\). This has a natural structure of a vector space by the operations defined by \((s_{1}+s_{2})(p)=s_{1}(p)+s_{2}(p)\) and \((cs)(p)=c\,s(p)\) for \(s_{1},s_{2}\) and \(s\) in \(\mathcal{C}^{\infty}(V,E)\), \(c\in\mathbb{C}\) and \(p\in V\). More precisely, a section \(s\) on \(V\) can be described as follows: We fix a system of transition matrices \(\{g^{\alpha\beta}\}_{(\alpha,\beta)\in A\times A}\) of \(E\) on an open covering \(\mathcal{V}=\{V_{\alpha}\}_{\alpha\in A}\). Using the \(\mathcal{C}^{\infty}\)-diffeomorphism \(\psi_{\alpha}:E\big{|}_{V_{\alpha}}\xrightarrow{\sim}V_{\alpha}\times\mathbb{C}^{r}\), we may write \[\psi_{\alpha}\big{(}s(p)\big{)}=\big{(}p,s^{\alpha}(p)\big{)}\quad\text{for}\quad p\in V\cap V_{\alpha},\] where \(s^{\alpha}\) is a \(\mathcal{C}^{\infty}\)-map from \(V\cap V_{\alpha}\) into \(\mathbb{C}^{r}\). For each point \(p\in V\cap V_{\alpha}\cap V_{\beta}\), we have \[s^{\alpha}(p)=g^{\alpha\beta}(p)\,s^{\beta}(p).\] (A.2) Conversely, suppose we have a system \(\{s^{\alpha}\}_{\alpha\in A}\) of \(\mathcal{C}^{\infty}\)-maps satisfying (A.2). Then, by setting \(s(p)=\psi_{\alpha}^{-1}\big{(}p,s^{\alpha}(p)\big{)}\) for \(p\) in \(V\cap V_{\alpha}\), we have a section \(s\) over \(V\). For \(k=1,\ldots,r\), a _\(k\)-frame_ of \(E\) on an open set \(V\subset M\) is a collection \(\boldsymbol{s}=(s_{1},\ldots,s_{k})\) of \(k\) sections \(s_{i}\) of \(E\) on \(V\) linearly independent at each point in \(V\). An \(r\)-frame is simply called a _frame_. Note that a frame of \(E\) on \(V\) determines a trivialization of \(E\) over \(V\). Let us denote by \(\mathscr{E}_{M}\) the sheaf of germs of \(\mathcal{C}^{\infty}\) complex functions on \(M\). If \(\pi:E\to M\) is a \(\mathcal{C}^{\infty}\) complex vector bundle of rank \(r\) over \(M\), we denote by \(\mathscr{E}(E)\) the sheaf of germs of \(\mathcal{C}^{\infty}\)-sections of \(E\) -- i.e. the sheaf whose space of sections on an open subset \(V\subset M\) is \(\mathscr{E}(E)\big{|}_{V}=\mathcal{C}^{\infty}(V,E)\). It is clear that \(\mathscr{E}(E)\) is an \(\mathscr{E}_{M}\)-module. Furthermore, the sheaf \(\mathscr{E}(E)\) is a _locally free \(\mathscr{E}_{M}\)-module of rank \(r\)_: There exists a covering \(\mathcal{V}=\{V_{\alpha}\}_{\alpha\in A}\) of \(M\) and a sheaf isomorphism \[\psi_{\alpha}^{*}:\mathscr{E}(E)\big{|}_{V_{\alpha}}\to\mathscr{E}_{V_{\alpha}}^{r}\qquad\mathscr{E}_{V_{\alpha}}^{r}:=\underbrace{\mathscr{E}_{V_{\alpha}}\oplus\cdots\oplus\mathscr{E}_{V_{\alpha}}}_{r}.\] Then, we have transition isomorphisms \(\psi_{\alpha}^{*}\circ(\psi_{\beta}^{*})^{-1}:\mathscr{E}_{M}^{r}\to\mathscr{E}_{M}^{r}\) defined on \(V_{\alpha}\cap V_{\beta}\), and each such isomorphism is multiplication by an invertible matrix with \(\mathcal{C}^{\infty}\) coefficients on \(V_{\alpha}\cap V_{\beta}\). The concepts of complex vector bundles and locally free \(\mathscr{E}_{M}\)-modules are thus completely equivalent.
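To make the transformation rule (A.2) concrete, consider the following standard example on \(\mathbb{CP}^{1}\); the two-chart conventions and the name \(\mathcal{O}(n)\) are fixed here only for illustration. Cover \(\mathbb{CP}^{1}\) by \(V_{0}=\{[1:z]\}\) and \(V_{1}=\{[w:1]\}\) with \(w=1/z\) on the overlap, and take the single transition function \(g^{01}(z)=z^{n}\). A global section is then a pair of holomorphic maps \(s^{0},s^{1}\) with \(s^{0}(z)=g^{01}(z)\,s^{1}(1/z)\); for \(n=2\), \[s^{0}(z)=a+bz+cz^{2},\qquad s^{1}(w)=c+bw+aw^{2},\qquad s^{0}(z)=z^{2}\,s^{1}(1/z),\] so the space of global sections is \((n+1)\)-dimensional, matching the homogeneous polynomials of degree \(n\) in two variables; this line bundle is the one usually denoted \(\mathcal{O}(n)\).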
If we are given some vector bundles, we may construct new ones by algebraic operations. Thus, we let \(E_{1}\) and \(E_{2}\) be complex vector bundles of rank \(r_{1}\) and \(r_{2}\) on \(M\). We may construct the direct sum \(E_{1}\oplus E_{2}\), the tensor product \(E_{1}\otimes E_{2}\), and the homomorphism bundle \(\operatorname{Hom}(E_{1},E_{2})\). Note that there is a natural isomorphism \(\operatorname{Hom}(E_{1},E_{2})=E_{1}^{*}\otimes E_{2}\). We may also construct the complex conjugate \(\bar{E}_{1}\) and the \(k\)-th exterior power \(\bigwedge^{k}E_{1}\). The bundles \(E_{1}\) and \(E_{2}\) can be trivialized over the same covering \(\mathcal{V}=\{V_{\alpha}\}_{\alpha\in A}\) of \(M\) (otherwise take a common refinement). If \(\{g_{1}^{\alpha\beta}\}_{\alpha,\beta\in A}\) and \(\{g_{2}^{\alpha\beta}\}_{\alpha,\beta\in A}\) are the corresponding transition matrices of \(E_{1}\) and \(E_{2}\), then for example \(E_{1}\otimes E_{2}\), \(\bigwedge^{k}E_{1}\), \(E_{1}^{*}\) are the bundles defined by the transition matrices \(g_{1}^{\alpha\beta}\otimes g_{2}^{\alpha\beta}\), \(\bigwedge^{k}g_{1}^{\alpha\beta}\), \(\big{(}(g_{1}^{\alpha\beta})^{T}\big{)}^{-1}\), where \((\,\cdot\,)^{T}\) denotes transposition.

#### Connections and Curvature

Now, for a \(\mathcal{C}^{\infty}\) complex vector bundle \(E\) of rank \(r\) on \(M\), we let \(\mathcal{A}^{k}(V,E)\) be the vector space of \(\mathcal{C}^{\infty}\) sections of \(\big{(}\bigwedge^{k}T^{*}M\big{)}\otimes E\) on \(V\subset M\), which are called _differential forms on \(V\) with values in \(E\)_. Thus \(\mathcal{A}^{0}(V,E):=\mathcal{C}^{\infty}(V,E)\) is the \(\mathcal{A}^{0}(V):=\mathscr{E}_{M}\big{|}_{{}_{V}}\)-module of \(\mathcal{C}^{\infty}\) sections of \(E\). **Definition A.7** (Connection).: A _connection_ for \(E\) is a \(\mathbb{C}\)-linear map \[\nabla:\mathcal{A}^{0}(M,E)\to\mathcal{A}^{1}(M,E)\] satisfying \[\nabla(fs)=df\otimes s+f\nabla(s)\qquad\text{for}\qquad f\in\mathcal{A}^{0}(M)\quad\text{and}\quad s\in\mathcal{A}^{0}(M,E).\] A connection \(\nabla\) is a local operator -- i.e., if a section \(s\) is identically \(0\) on an open set \(V\subset M\), so is \(\nabla(s)\). Thus, the restriction of \(\nabla\) to an open set \(V\) makes sense, and it is a connection for \(E\big{|}_{V}\). In addition, from the above definition, we conclude that if \(\nabla_{1},\dots,\nabla_{k}\) are connections for \(E\) and \(f_{1},\dots,f_{k}\) are \(\mathcal{C}^{\infty}\) functions on \(M\) with \(\sum_{i=1}^{k}f_{i}=1\), then \(\sum_{i=1}^{k}f_{i}\nabla_{i}\) will also be a connection on \(E\). If \(\nabla\) is a connection for \(E\), it induces a \(\mathbb{C}\)-linear map \[\nabla:\mathcal{A}^{1}(M,E)\to\mathcal{A}^{2}(M,E)\] satisfying \[\nabla(\omega\otimes s)=d\omega\otimes s-\omega\wedge\nabla(s)\qquad\text{for}\qquad\omega\in\mathcal{A}^{1}(M)\quad\text{and}\quad s\in\mathcal{A}^{0}(M,E).\] The composition \[\Theta=\nabla\circ\nabla:\mathcal{A}^{0}(M,E)\to\mathcal{A}^{2}(M,E)\] is called the _curvature_ of \(\nabla\). It is not difficult to see that \[\Theta(fs)=f\Theta(s)\qquad\text{for}\qquad f\in\mathcal{A}^{0}(M)\quad\text{and}\quad s\in\mathcal{A}^{0}(M,E).\] The fact that a connection is a local operator allows us to get local representations of it and its curvature by matrices whose entries are differential forms. Thus, suppose that \(\nabla\) is a connection for a complex vector bundle \(E\) of rank \(r\) and that \(E\) is trivial on \(V\) -- i.e. \(E\big{|}_{V}\cong V\times\mathbb{C}^{r}\).
If \(\boldsymbol{s}=(s_{1},\ldots,s_{r})\) is a frame of \(E\) on \(V\), then we may write \[\nabla(s_{i})=\sum_{j=1}^{r}A_{ij}\otimes s_{j}\quad\text{with}\quad A_{ij}\in\mathcal{A}^{1}(V)\quad\text{and for}\quad i=1,\ldots,r.\] We call \(A=(A_{ij})\) the _connection matrix with respect to \(\boldsymbol{s}\)_. For an arbitrary section \(s\) on \(V\), we may write \(s=\sum_{i=1}^{r}f_{i}s_{i}\) with \(f_{i}\) being \(\mathcal{C}^{\infty}\)-functions on \(V\), and we compute \[\nabla(s)=\sum_{i=1}^{r}\left(df_{i}+\sum_{j=1}^{r}f_{j}A_{ji}\right)\otimes s_{i}.\] The connection \(\nabla\) is _trivial_ with respect to \(\boldsymbol{s}\) if and only if \(A=0\); thus, in this case we have \(\nabla(s)=\sum_{i=1}^{r}df_{i}\otimes s_{i}\). Also, from the definition, we compute \[\Theta(s_{i})=\sum_{j=1}^{r}\theta_{ij}\otimes s_{j}\quad\text{with}\quad\theta_{ij}=dA_{ij}-\sum_{k=1}^{r}A_{ik}\wedge A_{kj}.\] We call \(\theta=(\theta_{ij})\) the _curvature matrix with respect to \(\boldsymbol{s}\)_. If \(\boldsymbol{s}^{\prime}=(s^{\prime}_{1},\ldots,s^{\prime}_{r})\) is another frame of \(E\) on \(V^{\prime}\), we have \(s^{\prime}_{i}=\sum_{j=1}^{r}\mathfrak{g}_{ij}s_{j}\) for some \(\mathcal{C}^{\infty}\) functions \(\mathfrak{g}_{ij}\) on \(V\cap V^{\prime}\). The matrix \(\mathfrak{g}=(\mathfrak{g}_{ij})\) is non-singular at each point of \(V\cap V^{\prime}\). If we denote by \(A^{\prime}\) and \(\theta^{\prime}\) the connection and curvature matrices of \(\nabla\) with respect to \(\boldsymbol{s}^{\prime}\), we obtain the _gauge transformation law_ \[A^{\prime}=\mathfrak{g}A\mathfrak{g}^{-1}+(d\mathfrak{g})\,\mathfrak{g}^{-1}\qquad\text{and}\qquad\theta^{\prime}=\mathfrak{g}\theta\mathfrak{g}^{-1}\quad\text{on}\quad V\cap V^{\prime}.\] (A.3) Now, suppose that \((M,\mathcal{J})\)95 is a complex manifold, and \(E\) is a complex vector bundle on \(M\). We can consider the tensor product bundle \(\bigwedge^{k,l}T^{*}M\otimes E\), and we let \(\mathscr{E}_{M}^{(k,l)}(E)\) denote the sheaf of germs of smooth sections of \(\bigwedge^{k,l}T^{*}M\otimes E\). Smooth sections of this sheaf are \((k,l)\)-forms with values in \(E\), the set of which we denote by \(\mathcal{A}^{(k,l)}(E)\). The connection \(\nabla\) in \(E\) induces a connection, also written as \(\nabla\), in \(\mathcal{A}^{(k,l)}(E)\). This connection splits as \(\nabla=\nabla^{(1,0)}\oplus\nabla^{(0,1)}\), giving maps Footnote 95: Here \(\mathcal{J}\) denotes an (integrable) almost complex structure. \[\nabla^{(1,0)}:\mathcal{A}^{(k,l)}(E)\to\mathcal{A}^{(k+1,l)}(E)\qquad\text{and}\qquad\nabla^{(0,1)}:\mathcal{A}^{(k,l)}(E)\to\mathcal{A}^{(k,l+1)}(E).\] **Theorem A.1**.: _A smooth, complex vector bundle \(E\) over a complex manifold \(M\) admits a holomorphic structure if and only if there exists a connection \(\nabla\) in \(E\) such that \(\nabla^{0,1}=\bar{\partial}\)._ The holomorphic structure in \(E\) is _uniquely determined_ by the condition \(\nabla^{0,1}=\bar{\partial}\), and this condition says that the \((0,2)\) component \(\nabla^{0,1}\circ\nabla^{0,1}\) of the curvature of \(\nabla\) vanishes. Using partitions of unity, one easily sees that Hermitian metrics exist on every complex vector bundle: **Definition A.8** (Hermitian metric).: A _Hermitian metric_ \(h\) on a complex vector bundle \(E\) is an assignment of a Hermitian inner product to each fibre \(E_{p}\) of \(E\) that varies smoothly with \(p\). A connection \(\nabla\) in \(E\) is called a Hermitian connection if \(\nabla h=0\) for some Hermitian metric \(h\).
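For a holomorphic line bundle, the interplay between \(h\), \(A\), and \(\theta\) can be written in closed form; the following local formulas are standard and anticipate the proposition below, with the choice of frame and the shorthand \(h:=h(s,s)\) being ours. In a local holomorphic frame \(s\), demanding both \(\nabla h=0\) and \(\nabla^{0,1}=\bar{\partial}\) forces \[A=h^{-1}\partial h=\partial\log h,\qquad\theta=dA=\bar{\partial}\partial\log h,\] so the connection matrix is a \((1,0)\)-form determined by the metric alone, and the curvature is a \((1,1)\)-form; note that here \(\theta=dA-A\wedge A=dA\), since \(A\wedge A=0\) for a \(1\times 1\) matrix of \(1\)-forms.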
A vector bundle equipped with a Hermitian metric is often called a _Hermitian vector bundle_. We then have: **Proposition A.1**.: _Let \(E\) be a holomorphic vector bundle with a Hermitian metric \(h\). Then there exists a **unique** Hermitian connection \(\nabla\) such that \(\nabla^{0,1}=\bar{\partial}\). This unique connection is called **the** "Hermitian connection"_.

#### Chern Forms

Consider the space \(\mathsf{M}_{r\times r}(\mathbb{C})\) of complex \(r\times r\) matrices. For any \(\mathtt{M}\in\mathsf{M}_{r\times r}(\mathbb{C})\), we define \[\det(\mathtt{M}+\lambda\mathbb{1})=\sigma_{r}(\mathtt{M})+\lambda\sigma_{r-1}(\mathtt{M})+\cdots+\lambda^{r-1}\sigma_{1}(\mathtt{M})+\lambda^{r}.\] Clearly, for any \(i=1,\ldots,r\) the function \(\sigma_{i}:\mathsf{M}_{r\times r}(\mathbb{C})\to\mathbb{C}\) is a \(\mathrm{GL}(r,\mathbb{C})\)-invariant, complex homogeneous polynomial of \(\deg(\sigma_{i})=i\). Note that \(\sigma_{i}\) is the \(i\)-th elementary symmetric function of the eigenvalues of \(\mathtt{M}\). In particular, \(\sigma_{r}(\mathtt{M})=\det(\mathtt{M})\) and \(\sigma_{1}(\mathtt{M})=\mathrm{tr}(\mathtt{M})\). Since differential forms of even degrees commute with one another with respect to the exterior product, we may treat the curvature 2-form \(\theta\) as an ordinary matrix whose entries are numbers. Thus, we define **Definition A.9** (Chern form).: Let \(E\to M\) be a rank \(r\) complex vector bundle over \(M\), and let \(\nabla\) be a complex connection on \(E\) with curvature 2-form \(\theta\). For each \(i=1,\ldots,r\) we define the \(2i\)-form \[c_{i}(E,\nabla):=\sigma_{i}\left(\frac{\sqrt{-1}}{2\pi}\theta\right)\] and call it the _\(i\)-th Chern form_ of \(E\). Additionally, we have the following definition **Definition A.10** (Chern classes).: Given \((E,\nabla)\) and any \(1\leq i\leq r\), the \(i\)-th Chern form \(c_{i}(E,\nabla)\) is closed. Furthermore, if \(\nabla^{\prime}\) is another complex connection on \(E\), the difference \(c_{i}(E,\nabla)-c_{i}(E,\nabla^{\prime})\) is exact -- i.e. the cohomology class \([c_{i}(E,\nabla)]\in H^{2i}_{dR}(M)\simeq H^{2i}(M,\mathbb{C})\) is independent of \(\nabla\). The resulting cohomology class is called the _\(i\)-th Chern class_ of \(E\) and is denoted by \(c_{i}(E)\). **Remark A.3**.: It is known that \(c_{i}(E)\) is in the image of the canonical homomorphism \[H^{2i}(M,\mathbb{Z})\to H^{2i}(M,\mathbb{C}).\] In fact, it is possible to define \(c_{i}(E)\) in \(H^{2i}(M,\mathbb{Z})\) using the obstruction theory; it is the primary obstruction to constructing \(r-i+1\) sections linearly independent everywhere [168, §I].

#### Complex Line Bundles

Assume now that \(L\to M\) is a line bundle (\(r=1\)). Then, every collection of transition functions \(\{g^{\alpha\beta}\}_{(\alpha,\beta)\in A\times A}\) defines a Čech 1-cocycle with values in the multiplicative sheaf \(\mathscr{E}_{M}^{*}\) of invertible \(\mathcal{C}^{\infty}\) complex functions on \(M\). In fact, the definition of the Čech differential (see e.g. [169, §1.3]) gives \((\delta g)^{\alpha\beta\gamma}=g^{\beta\gamma}(g^{\alpha\gamma})^{-1}g^{\alpha\beta}\), and we have \(\delta g=1\) in view of (A.1). Let \(\psi_{\alpha}^{\prime}\) be another family of trivializations and \(\{g^{\prime\alpha\beta}\}_{(\alpha,\beta)\in A\times A}\) the associated cocycle (it is no loss of generality to assume that both are defined on the same covering since we may otherwise take a refinement).
Then we have \[\psi_{\alpha}^{\prime}\circ\psi_{\alpha}^{-1}:V_{\alpha}\times\mathbb{C}\to V_{\alpha}\times\mathbb{C},\qquad(p,\zeta)\mapsto(p,u_{\alpha}(p)\zeta),\qquad u_{\alpha}\in\mathscr{E}_{M}^{*}(V_{\alpha}).\] It follows that \(g^{\alpha\beta}=g^{\prime\alpha\beta}u_{\alpha}^{-1}u_{\beta}\) -- i.e. the Čech 1-cocycles \(g^{\alpha\beta}\) and \(g^{\prime\alpha\beta}\) differ only by the Čech 1-coboundary \(\delta u\). Therefore, there exists a well-defined map which associates to every complex line bundle \(L\) over \(M\) the Čech cohomology class \(\{g^{\alpha\beta}\}_{(\alpha,\beta)\in A\times A}\in H^{1}(M,\mathscr{E}_{M}^{*})\) of its cocycle of transition functions. It is easy to verify that the cohomology classes associated to two complex line bundles \(L\) and \(L^{\prime}\) are equal if and only if these bundles are isomorphic. It is also clear that the multiplicative group structure on \(H^{1}(M,\mathscr{E}_{M}^{*})\) corresponds to the tensor product of line bundles (the inverse of a line bundle being its dual). We may summarize this discussion by the following: **Proposition A.2**.: _The group of isomorphism classes of complex \(\mathcal{C}^{\infty}\) line bundles is in one-to-one correspondence with the Čech cohomology group \(\check{H}^{1}(M,\mathscr{E}_{M}^{*})\)._ Now, let \(\nabla\) be a connection on \(L\) with curvature 2-form \(\theta\). The de Rham class \([\theta]\in H^{2}_{dR}(M,\mathbb{C})\) does not depend on the particular choice of \(\nabla\). If \(\nabla\) is chosen to be Hermitian with respect to a given Hermitian metric on \(L\) (such a connection can always be constructed by means of a partition of unity), then \(\sqrt{-1}\theta\) is a real 2-form, thus \([\sqrt{-1}\theta]\in H^{2}_{dR}(M,\mathbb{R})\). Consider now the one-to-one correspondence given by Proposition A.2 and the exponential short exact sequence of sheaves on \(M\) \[0\to\mathbb{Z}\xrightarrow{\imath}\mathscr{E}_{M}\xrightarrow{\exp}\mathscr{E}_{M}^{*}\to 0,\] where the map \(\imath\) is \(\imath(k)=2\pi\sqrt{-1}k\) and the exponential map sends the germ \(f\) of any complex \(\mathcal{C}^{\infty}\) function to \(\exp(f)\). Since \(\mathscr{E}_{M}\) is a fine sheaf,96 we have \(H^{q}(M,\mathscr{E}_{M})=0\) for all \(q>0\); in particular \(H^{1}(M,\mathscr{E}_{M})=H^{2}(M,\mathscr{E}_{M})=0\). So, the induced long exact cohomology sequence Footnote 96: A fine sheaf over a _paracompact_ Hausdorff space \(M\) is one with “partitions of unity.” More precisely, for any open cover of the space \(M\), we can find a family of homomorphisms from the sheaf to itself with sum 1 such that each homomorphism is 0 outside some element of the open cover (see e.g., [170, Def. 4.35] for more details). \[\cdots\to H^{1}(M,\mathscr{E}_{M})\to H^{1}(M,\mathscr{E}_{M}^{*})\to H^{2}(M,\mathbb{Z})\to H^{2}(M,\mathscr{E}_{M})\to\cdots\] gives an isomorphism \(H^{1}(M,\mathscr{E}_{M}^{*})\simeq H^{2}(M,\mathbb{Z})\) which says that the _topological invariant_ \(H^{2}(M,\mathbb{Z})\) can be thought of as the _group of complex line bundles on \(M\)_. This isomorphism is realized by associating to a complex line bundle \(L\) its first Chern class \(c_{1}(L)\).
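As a numerical illustration of this correspondence (and of the theorem below), one can integrate the first Chern form of a line bundle over \(\mathbb{CP}^{1}\); the metric, the affine chart, and the sign conventions in the comments are our choice of example, not part of the text:

```python
# Integrate the first Chern form of a line bundle on CP^1 (affine chart
# z = r e^{it}).  For the Hermitian (Chern) connection of a metric h, one has
# theta = delbar del log h and c_1 = (i/2pi) theta.  For h = 1/(1+|z|^2),
# the natural metric on O(1), this gives the Fubini-Study density
#   (i/2pi) dz ∧ dzbar / (1+|z|^2)^2  =  r dr dt / (pi (1+r^2)^2),
# whose total integral is deg O(1) = 1; the tautological bundle O(-1),
# with h = 1+|z|^2, gives the opposite sign.
import sympy as sp

r, t = sp.symbols('r t', positive=True)

density = r / (sp.pi * (1 + r**2)**2)   # first Chern form density for O(1)
c1 = sp.integrate(density, (t, 0, 2*sp.pi), (r, 0, sp.oo))

print(c1)    # 1   -> first Chern number of O(1)
print(-c1)   # -1  -> first Chern number of O(-1)
```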
The natural morphism \[H^{2}(M,\mathbb{Z})\to H^{2}(M,\mathbb{R})\simeq H^{2}_{dR}(M,\mathbb{R})\] results in the following theorem **Theorem A.2**.: _The image of \(c_{1}(L)\) in \(H^{2}_{dR}(M,\mathbb{R})\) coincides with the de Rham cohomology class \(\left[\frac{\sqrt{-1}}{2\pi}\theta\right]\) associated to any (Hermitian) connection \(\nabla\) on \(L\)._

#### Holomorphic Vector Bundles on Analytic Spaces

Let us now generalize our discussion of holomorphic vector bundles to those defined over complex analytic spaces. We have: **Definition A.11** (Holomorphic vector bundle).: Let \(\pi:E\to X\) be an analytic map between reduced analytic spaces such that every fiber \(E_{x}:=\pi^{-1}(x)\) over a point \(x\in X\) is equipped with the structure of an \(r\)-dimensional complex vector space. Then, \(\pi:E\to X\) will be called a _holomorphic vector bundle of rank \(r\)_ on \(X\) if every point \(x\in X\) has an open neighborhood \(V\) in \(X\) such that the restricted map \(\pi\big|_{E|_{V}}:E\big|_{V}:=\pi^{-1}(V)\to V\) is (analytically) _trivial_ -- i.e. there exists a biholomorphic map \(E\big|_{V}\xrightarrow{\cong}V\times\mathbb{C}^{r}\), called the _holomorphic trivialization_ of \(E\) over \(V\), which maps every fiber \(E_{x}\) for \(x\in V\) onto \(\{x\}\times\mathbb{C}^{r}\) as an isomorphism of complex vector spaces. A holomorphic vector bundle of rank \(r=1\) on \(X\) is called a _holomorphic line bundle_ on \(X\). An analytic map \(E\to E^{\prime}\) between holomorphic vector bundles is called a _bundle map_ if it is fiber preserving and if all induced maps \(E_{x}\to E^{\prime}_{x}\) are linear; clearly, the holomorphic vector bundles with bundle maps as morphisms form a category. Assume now that \(E\) is a holomorphic vector bundle on \(X\) of rank \(r\). A _holomorphic section_ of \(E\) over \(V\subset X\) is a holomorphic map \(s:V\to\pi^{-1}(V)\subset E\) such that \(\pi\circ s=\operatorname{id}_{V}\). If \(\pi^{-1}(V)\cong V\times\mathbb{C}^{r}\) then \(s\) is simply given by an \(r\)-tuple of holomorphic functions on \(V\). Hence, local holomorphic sections in \(E\) determine, in a natural way, a canonical presheaf on \(X\) which gives rise to the analytic sheaf \(\mathscr{O}_{X}(E)\) on \(X\) of germs of holomorphic sections in \(E\). Such a sheaf is always _locally free of the same rank as the rank of the vector bundle_: If \(E\big|_{V}\) is trivial, we have an isomorphism \(\mathscr{O}(E)\big|_{V}=\mathscr{O}_{V}^{r}\). It follows that if \(E\) is the trivial line bundle \(X\times\mathbb{C}\), the sheaf \(\mathscr{O}(E)\) coincides with the structure sheaf \(\mathscr{O}_{X}\) of the complex analytic space \(X\). Moreover, the cohomology of a holomorphic vector bundle \(E\) over \(X\) is defined to be the sheaf cohomology of \(\mathscr{O}(E)\). In particular, we have \[H^{0}\big(X,\mathscr{O}(E)\big)=\Gamma\big(X,E\big)\] the _space of global holomorphic sections of \(E\)_. To study the holomorphic line bundles on the analytic space \(X\), we consider the exact sequence \[0\to\mathbb{Z}\xrightarrow{\imath}\mathscr{O}_{X}\xrightarrow{\exp}\mathscr{O}_{X}^{*}\to 0,\] where the maps \(\imath\) and \(\exp\) are defined as before and \(\mathscr{O}_{X}^{*}\) denotes the sheaf of invertible elements in \(\mathscr{O}_{X}\) -- in other words, the sheaf of nowhere-vanishing holomorphic functions on \(X\).
This induces a long exact sequence in cohomology, \[\cdots\to H^{1}(X,\mathscr{O}_{X})\to H^{1}(X,\mathscr{O}_{X}^{*})\xrightarrow{\delta}H^{2}(X,\mathbb{Z})\to H^{2}(X,\mathscr{O}_{X})\to\cdots.\] The group \(H^{1}(X,\mathscr{O}_{X}^{*})\) represents the group of holomorphic line bundles on the analytic space \(X\) with group multiplication being the tensor product, and the inverse bundle being the dual bundle. This group is called the _Picard group_ of \(X\) and often denoted by \(\operatorname{Pic}(X)\). As seen above, the connecting homomorphism \(\delta\) takes a holomorphic line bundle \(\mathscr{L}\) to its first Chern class \(c_{1}(\mathscr{L})\), and the group \(H^{2}(X,\mathbb{Z})\) is isomorphic to the group of topological complex line bundles on \(X\). So if \(H^{2}(X,\mathscr{O}_{X})\neq 0\), we see that not every complex line bundle gives rise to a holomorphic line bundle. Similarly, if \(H^{1}(X,\mathscr{O}_{X})\neq 0\), there can be inequivalent holomorphic line bundles associated to the same complex line bundle. The kernel of the map \(\delta\) is denoted by \(\operatorname{Pic}^{0}(X)\) and represents the subgroup of holomorphic line bundles that are topologically trivial. There is a holomorphic line bundle canonically associated with every analytic space: **Definition A.12** (Canonical line bundle).: Let \(X\) be a reduced analytic space of dimension \(\mathfrak{n}\). The \(\mathfrak{n}\)-th exterior power \(\bigwedge^{\mathfrak{n}}T^{*}_{(1,0)}X\) is a holomorphic line bundle, called the _canonical line bundle_ and denoted by \(\mathcal{K}_{X}\). The dual or inverse line bundle \(\mathcal{K}_{X}^{-1}\) is called the _anticanonical line bundle_. When the underlying analytic space \(X\) is understood we often write just \(\mathcal{K}\) for \(\mathcal{K}_{X}\). It is easy to see that **Proposition A.3**.: _The first Chern class of \(\mathcal{K}_{X}\) satisfies \(c_{1}(\mathcal{K}_{X})=-c_{1}(X)\)._

### Meromorphic Functions and Divisors

There are two equivalent ways to describe divisors on smooth complex manifolds. However, they are not equivalent for singular analytic spaces. We discuss both of these notions here. **Definition A.13** (Weil divisor).: A _Weil divisor_ \(\mathcal{D}\) on an analytic space \(X\) is a locally finite formal linear combination of irreducible analytic hypersurfaces \(H_{i}\) \[\mathcal{D}=\sum_{i}a_{i}H_{i}\quad\text{with}\quad a_{i}\in\mathbb{Z},\] where locally finite means that every point \(x\in X\) has a neighborhood intersecting only finitely many of the \(H_{i}\)'s. \(\mathcal{D}\) is said to be _effective_ if \(a_{i}\geq 0\) for all \(i\) (with not all \(a_{i}\) equal to zero). For a Weil divisor \(\mathcal{D}=\sum_{i}a_{i}H_{i}\), we set \(\operatorname{Supp}(\mathcal{D}):=\bigcup_{i}H_{i}\) and call it the _support_ of \(\mathcal{D}\). Additionally, the coefficient \(a_{i}\) is called the _multiplicity of \(\mathcal{D}\) along \(H_{i}\)_ and will be denoted by \(\operatorname{mult}_{H_{i}}(\mathcal{D})\); we set \(\operatorname{mult}_{H}(\mathcal{D})=0\) for every irreducible hypersurface \(H\) different from all the \(H_{i}\). Finally, the degree of \(\mathcal{D}\) is denoted by \(\deg(\mathcal{D})\) and is defined as the sum of the coefficients \(a_{i}\) -- i.e. \(\deg(\mathcal{D}):=\sum_{i}\operatorname{mult}_{H_{i}}(\mathcal{D})=\sum_{i}a_{i}\). Under the formal sum operation, Weil divisors form a group called the divisor group and denoted by \(\operatorname{Div}(X)\).
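As a simple example (standard, and recorded here only for orientation): on the Riemann sphere \(X=\hat{\mathbb{C}}\), irreducible analytic hypersurfaces are single points, so \[\mathcal{D}=2\cdot\{0\}-3\cdot\{\infty\}\] is a Weil divisor with \(\operatorname{Supp}(\mathcal{D})=\{0,\infty\}\), \(\operatorname{mult}_{\{0\}}(\mathcal{D})=2\), \(\operatorname{mult}_{\{\infty\}}(\mathcal{D})=-3\), and \(\deg(\mathcal{D})=-1\); it is not effective because of the negative coefficient.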
It then follows, from definition A.1 of hypersurfaces, that a Weil divisor is described locally by the zero set of holomorphic functions. Now, let us recall that a _meromorphic function_ on an open set \(V\subset X\) is a ratio \(f/g\) of relatively prime holomorphic functions \(f\) and \(g\) on \(V\). We will denote by \(\mathscr{M}_{X}\) the sheaf of meromorphic functions on \(X\) and by \(\mathscr{M}_{X}^{*}\) the subsheaf of _not-identically-zero meromorphic functions_. We have also denoted by \(\mathscr{O}_{X}^{*}\) the subsheaf of invertible elements in \(\mathscr{O}_{X}\) and called it the sheaf of _nowhere-vanishing holomorphic functions_. We have the following short, exact sequence \[0\to\mathscr{O}_{X}^{*}\to\mathscr{M}_{X}^{*}\to\mathscr{M}_{X}^{*}/\mathscr{O}_{X}^{*}\to 0. \tag{A.4}\] Then, we can define: **Definition A.14** (Cartier divisor).: A _Cartier divisor_ on \(X\) is a global section of the sheaf \(\mathscr{M}_{X}^{*}/\mathscr{O}_{X}^{*}\) -- i.e. an element of the group \(H^{0}(X,\mathscr{M}_{X}^{*}/\mathscr{O}_{X}^{*})\). Any Cartier divisor can be represented by giving an open covering \(\mathcal{V}:=\{V_{\alpha}\}_{\alpha\in A}\) of \(X\) and, for all \(\alpha\in A\), an element \(\phi^{\alpha}:=f^{\alpha}/g^{\alpha}\in\mathscr{M}_{X}^{*}(V_{\alpha})\) such that \(\phi^{\alpha}=u^{\alpha\beta}\phi^{\beta}\) on any intersection \(V_{\alpha}\cap V_{\beta}\) with \(u^{\alpha\beta}\in\mathscr{O}_{X}^{*}(V_{\alpha}\cap V_{\beta})\). A Cartier divisor \(\mathcal{D}\) on \(X\) is called _effective_ if it can be represented by a system \(\{(V_{\alpha},f^{\alpha})\}_{\alpha\in A}\) with all local equations \(f^{\alpha}\in\Gamma(V_{\alpha},\mathscr{O}_{X})\). Additionally, two systems \(\{(V_{\alpha},\phi^{\alpha})\}_{\alpha\in A}\) and \(\{(V_{\beta}^{\prime},\phi^{\prime\beta})\}_{\beta\in B}\) represent the same Cartier divisor if and only if on \(V_{\alpha}\cap V_{\beta}^{\prime}\), \(\phi^{\alpha}\) and \(\phi^{\prime\beta}\) differ by a multiplicative factor in \(\mathscr{O}_{X}^{*}(V_{\alpha}\cap V_{\beta}^{\prime})\). The abelian group of Cartier divisors on \(X\) will be denoted by \(H^{0}(X,\mathscr{M}_{X}^{*}/\mathscr{O}_{X}^{*})\). If \(\mathcal{D}_{1}:=\{(V_{\alpha}^{1},\phi_{1}^{\alpha})\}_{\alpha\in A}\) and \(\mathcal{D}_{2}:=\{(V_{\beta}^{2},\phi_{2}^{\beta})\}_{\beta\in B}\) then \(\mathcal{D}_{1}+\mathcal{D}_{2}=\{(V_{\alpha}^{1}\cap V_{\beta}^{2},\phi_{1}^{\alpha}\phi_{2}^{\beta})\}_{\alpha\in A,\beta\in B}\). Since on a smooth analytic space \(X\) (i.e. a complex manifold) the local rings \(\mathscr{O}_{X,x}\) are unique factorization domains (UFD), Weil divisors and Cartier divisors coincide: If we cover \(X\) by open sets \(\{V_{\alpha}\}_{\alpha\in A}\) so that \(H_{i}\) is defined by \(f_{i}^{\alpha}\) on \(V_{\alpha}\), we have the meromorphic function \(\phi^{\alpha}=\prod_{i}(f_{i}^{\alpha})^{a_{i}}\) which is determined by the expression of the Weil divisor \(\mathcal{D}=\sum_{i}a_{i}H_{i}\). The systems \(\{(V_{\alpha},\phi^{\alpha})\}_{\alpha\in A}\) would then correspond to a Cartier divisor. **Theorem A.3**.: _Let \(M\) be a smooth complex manifold. Then there is an isomorphism_ \[\operatorname{Div}(M)\simeq H^{0}(M,\mathscr{M}_{M}^{*}/\mathscr{O}_{M}^{*}).\] On smooth complex manifolds \(M\), such as the regular locus of an analytic space, we shall often identify Weil divisors and Cartier divisors by just referring to a _divisor_, \(\operatorname{Div}(M)\).
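To make definition A.14 concrete, consider a standard example (not part of the text above): on \(X=\hat{\mathbb{C}}\), with the charts \(V_{0}=\hat{\mathbb{C}}\backslash\{\infty\}\) (coordinate \(z\)) and \(V_{1}=\hat{\mathbb{C}}\backslash\{0\}\) (coordinate \(w=1/z\)), the point \(\{0\}\) defines the Cartier divisor represented by the system \[\{(V_{0},\phi^{0}=z),\;(V_{1},\phi^{1}=1)\};\] on the overlap \(V_{0}\cap V_{1}\) the two local equations differ by the multiplicative factor \(u^{01}=z\in\mathscr{O}_{X}^{*}(V_{0}\cap V_{1})\), and on the smooth space \(\hat{\mathbb{C}}\) this Cartier divisor is identified with the Weil divisor \(1\cdot\{0\}\), in line with Theorem A.3.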
This isomorphism does _not_ hold on singular analytic spaces: Let \(X\) be a normal analytic space and let \(\mathcal{D}^{\text{reg}}=\sum_{i}a_{i}H_{i}^{\text{reg}}\) be a Weil divisor defined on the regular locus \(\text{Reg}(X)\). Since the singular set of \(X\) has codimension at least \(2\), the Remmert-Stein extension theorem (see e.g. [163, p. 181]) ensures that \(\mathcal{D}^{\text{reg}}\) admits a unique extension to a Weil divisor \(\mathcal{D}\) on \(X\). However, not every Cartier divisor on \(\text{Reg}(X)\) extends to a Cartier divisor on \(X\). Thus, the group of Cartier divisors \(H^{0}(X,\mathscr{M}_{X}^{*}/\mathscr{O}_{X}^{*})\) on a singular analytic space \(X\) is identified with a subgroup of \(\operatorname{Div}(X)\). Any global section \(\phi\in\Gamma(X,\mathscr{M}_{X}^{*})\) determines a _principal Cartier divisor_ \((\phi):=\{(X,\phi)\}\) by taking all local equations equal to \(\phi\). Equivalently, if \(\phi\) is a global meromorphic function on \(X\) which can be written locally as \(\phi=f/g\), we may consider the Weil divisor \((\phi)=\operatorname{ord}(f)Z_{f}-\operatorname{ord}(g)Z_{g}\), where \(Z_{f}\) denotes the zero set of the holomorphic function \(f\) and \(\operatorname{ord}(f)\) denotes its order of vanishing. Then, two divisors \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) on \(X\) are said to be _linearly equivalent_, written \(\mathcal{D}\sim\mathcal{D}^{\prime}\), if \(\mathcal{D}^{\prime}=\mathcal{D}+(\phi)\), where \((\phi)\) denotes the principal divisor defined by the global meromorphic function \(\phi\). We denote by \([\mathcal{D}]\) the set of all divisors on \(X\) that are linearly equivalent to \(\mathcal{D}\). It is called the _linear system_ of divisors defined by \(\mathcal{D}\). The common intersection \(\bigcap_{\mathcal{D}^{\prime}\in[\mathcal{D}]}\mathcal{D}^{\prime}\) is called the _base locus_ of the linear system \([\mathcal{D}]\). We will also denote by \(\mathrm{Cl}(X)\) the _divisor class group_ of Weil divisors modulo linear equivalence, and by \(\mathrm{CaCl}(X)\) the _group of Cartier divisor classes_ (Cartier divisors modulo principal divisors). On a singular analytic space, the group \(\mathrm{CaCl}(X)\) is generally a subgroup of the divisor class group \(\mathrm{Cl}(X)\). We now describe the relationship between line bundles and divisors: From the short exact sequence (A.4) one has \[0\to H^{0}(M,\mathscr{M}_{M}^{*}/\mathscr{O}_{M}^{*})/H^{0}(M,\mathscr{M}_{M}^{*})\to H^{1}(M,\mathscr{O}_{M}^{*})\to H^{1}(M,\mathscr{M}_{M}^{*}).\] This says that every divisor \(\mathcal{D}\) on \(M\) determines a holomorphic line bundle \(\mathscr{L}(\mathcal{D})\), and the line bundle \(\mathscr{L}(\mathcal{D})\) is holomorphically trivial if and only if \(\mathcal{D}\) is a _principal divisor_ -- i.e. the divisor \((\phi)\) of a global meromorphic function. The holomorphic line bundle \(\mathscr{L}(\mathcal{D})\) has as its system of transition functions the collection \(\{u^{\alpha\beta}\}\) of nowhere vanishing holomorphic functions \(u^{\alpha\beta}\in\mathscr{O}_{X}^{*}(V_{\alpha}\cap V_{\beta})\) defined uniquely in terms of Cartier divisors in definition A.14. If \(\sum_{i}a_{i}H_{i}\) is the Weil divisor corresponding to the Cartier divisor \(\mathcal{D}\), we may write \(\mathscr{L}(\mathcal{D})=\bigotimes_{i}\mathscr{L}_{i}^{a_{i}}\) where \(\mathscr{L}_{i}:=\mathscr{L}(H_{i})\) and \(\mathscr{L}_{i}^{a_{i}}\) denotes the tensor product of \(a_{i}\) copies of \(\mathscr{L}_{i}\), for \(a_{i}>0\), and the tensor product of \(-a_{i}\) copies of \(\mathscr{L}_{i}^{*}\), for \(a_{i}<0\).
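As an illustration (standard, and consistent with Proposition A.4 below): on \(M=\hat{\mathbb{C}}\) the global meromorphic function \(\phi(z)=(z-a)/(z-b)\) has principal divisor \[(\phi)=\{a\}-\{b\},\] so any two points are linearly equivalent and a divisor \(\mathcal{D}\) is principal exactly when \(\deg(\mathcal{D})=0\). Consequently \(\mathscr{L}(\mathcal{D})\) depends only on \(\deg(\mathcal{D})\), recovering \(\operatorname{Pic}(\hat{\mathbb{C}})\simeq\mathbb{Z}\).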
The quotient \(H^{0}(M,\mathscr{M}_{M}^{*}/\mathscr{O}_{M}^{*})/H^{0}(M,\mathscr{M}_{M}^{*})\) is precisely the Cartier divisor class group \(\mathrm{CaCl}(M)\) defined before. Furthermore, if \(H^{1}(M,\mathscr{M}_{M}^{*})=0\), then every holomorphic line bundle on \(M\) has a global meromorphic section. In this case, we get an isomorphism between the Cartier divisor class group and the Picard group. This happens, for example, for smooth projective algebraic varieties (such as compact Riemann surfaces): **Proposition A.4**.: _Let \(X\) be a smooth projective algebraic variety. Then, we have \(\mathrm{Cl}(X)\simeq\mathrm{CaCl}(X)\simeq\mathrm{Pic}(X)\)._ Finally, let us finish this subsection by pointing out that if each _prime divisor_ (i.e. irreducible hypersurface) \(H_{i}\) in a Weil divisor \(\mathcal{D}=\sum_{i}a_{i}H_{i}\) is compact, \(\mathcal{D}\) defines a homology class \([\mathcal{D}]=\sum_{i}a_{i}[H_{i}]\) in \(H_{2\mathtt{n}-2}(M,\mathbb{Z})\) which depends only on the linear equivalence class of \(\mathcal{D}\). Furthermore, it is known that if \(M\) is compact, the homology class \([\mathcal{D}]\in H_{2\mathtt{n}-2}(M,\mathbb{Z})\) is the Poincaré dual of the cohomology class \(c_{1}(\mathscr{L}(\mathcal{D}))\in H^{2}(M,\mathbb{Z})\).

#### A.1.3 Finite Group Actions, Quotient Singularities, and Galois Coverings

Let \(Y=(|Y|,\mathscr{O}_{Y})\) be a reduced normal complex analytic space and let \(\Gamma\) be a finite subgroup of the group \(\mathrm{Aut}(Y)\) of analytic automorphisms of \(Y\). Our main goal in this subsection is to study the analytic quotient of \(Y\) by the group \(\Gamma\). More concretely, we want to construct a reduced normal complex analytic space \(X=(|X|,\mathscr{O}_{X})\) together with a surjective analytic map \(\varpi:Y\to X\) which is invariant under \(\Gamma\) -- i.e. \(\varpi\circ\gamma=\varpi\) for all \(\gamma\in\Gamma\).

#### Finite Group Actions and Analytic Quotients

Since any analytic space \(Y\) has an underlying (Hausdorff) topological space \(|Y|\), we start our study of analytic quotients with a few definitions regarding the action of a topological group on a topological space: **Definition A.15** (Group action).: A topological group \(\Gamma\) induces a _(left) group action_ on a topological space \(|Y|\) if there is a map \(\Gamma\times|Y|\to|Y|\) such that: 1. For any \(y\in|Y|\), \((\mathbb{1},y)\mapsto y\) where \(\mathbb{1}\in\Gamma\) is the identity element, 2. For any \(y\in|Y|\) and any two group elements \(\gamma_{1},\gamma_{2}\in\Gamma\), \((\gamma_{1}\gamma_{2},y)=\big(\gamma_{1},(\gamma_{2},y)\big)\). We will usually write \(\gamma\cdot y\) or even \(\gamma(y)\) instead of \((\gamma,y)\). There is also a notion of _right group action_, which we will avoid introducing. **Remark A.4**.: Throughout this paper, all groups are assumed to act _effectively_, which means that for every two distinct elements of the group, there is some point in the space at which they differ. Given a group action \(\Gamma\times|Y|\to|Y|\), we can associate to every element \(\gamma\in\Gamma\) a homeomorphism \(\imath_{\gamma}:|Y|\to|Y|\) which is defined as \(\imath_{\gamma}(y)=\gamma\cdot y\) for all \(y\in|Y|\). The map \(\gamma\mapsto\imath_{\gamma}\) induces a group homomorphism \(\imath:\Gamma\to\operatorname{Aut}\big(|Y|\big)\) where \(\operatorname{Aut}\big(|Y|\big)\) is the group of automorphisms of the topological space \(|Y|\).
Conversely, it is easy to see that any group homomorphism \(\imath:\Gamma\to\operatorname{Aut}\big(|Y|\big)\) yields a group action \(\Gamma\times|Y|\to|Y|\), by setting \(\gamma\cdot y=\imath_{\gamma}(y)\). Observe that a group action is _effective_ if and only if \(\imath:\Gamma\to\operatorname{Aut}\big(|Y|\big)\) is a _monomorphism_. Let us now run through some terms that are associated with this group action: The _isotropy subgroup_, also called the _stabilizer subgroup_, of any point \(y\in|Y|\) is defined as the set \(\Gamma_{y}:=\{\gamma\in\Gamma\,|\,\gamma\cdot y=y\}\) and is a closed subgroup of \(\Gamma\). The action of \(\Gamma\) on \(|Y|\) is said to be _free_ if \(\Gamma_{y}=\{\mathbb{1}\}\) for all \(y\in|Y|\). The set \(\Gamma(y):=\big\{\gamma\cdot y\in|Y|\,\big|\,\gamma\in\Gamma\big\}\) denotes the _orbit_ of the point \(y\). The action of \(\Gamma\) on \(|Y|\) is called _transitive_ if \(\Gamma(y)=|Y|\) for any point \(y\in|Y|\) -- i.e. if for any two points \(y_{1},y_{2}\in|Y|\) there is an element \(\gamma\in\Gamma\) such that \(\gamma\cdot y_{1}=y_{2}\). Moreover, we call an action _regular_ if it is both transitive and free. We always denote by \(|Y|/\Gamma\) the set of all \(\Gamma\)-orbits in \(|Y|\).98 The orbit space \(|Y|/\Gamma\) will be called the _topological quotient_ of \(|Y|\) by (the left action of) \(\Gamma\) and the natural map \(|\varpi|:|Y|\to|Y|/\Gamma\), sending \(y\) to its left orbit \(\Gamma(y)\), is called the corresponding _quotient map_. To make the quotient map \(|\varpi|:|Y|\to|Y|/\Gamma:=|X|\) continuous for an arbitrary topological \(\Gamma\)-action on \(|Y|\), we have to endow \(|X|\) with the _quotient topology_: \(W\subset|X|\) is open if and only if \(|\varpi|^{-1}(W)\) is open in \(|Y|\). Footnote 98: More concretely, we should denote the set of all _left_ \(\Gamma\)-orbits in \(|Y|\) by "\(\Gamma\backslash|Y|\)" instead of "\(|Y|/\Gamma\)" (the latter should be reserved for the set of all _right_ \(\Gamma\)-orbits). However, we will continue to use the notation \(|Y|/\Gamma\) with the understanding that it represents the quotient of \(|Y|\) by the left action of \(\Gamma\). **Remark A.5**.: For an open set \(V\subset|Y|\), the image \(\gamma(V)\) is also open for all \(\gamma\in\Gamma\). Hence, \[|\varpi|^{-1}\big(|\varpi|(V)\big)=\bigcup_{\gamma\in\Gamma}\gamma(V)\] is an open set -- i.e. \(|\varpi|(V)\) is open in \(|X|\). In other words, the quotient map \(|\varpi|:|Y|\to|X|\) is an _open map_. In particular, if \(|\varpi|\) is (locally) bijective, it is (locally) a homeomorphism. Since analytic subvarieties (representing an analytic variety) inherit a locally compact Hausdorff structure with a countable basis from their ambient complex number spaces, we shall assume that all topological spaces in the present text have these properties, at least locally. However, while patching local models together, we also want to avoid the creation of new pathologies. Therefore, we always assume \(|Y|\) to be globally Hausdorff and to have a countable basis; in particular, all topological spaces in this paper are paracompact. Then, we have to put strong conditions on the action of the group \(\Gamma\) in order to make sure that the topological quotient \(|Y|/\Gamma\) preserves these properties (e.g. being Hausdorff).
For this reason, we will always assume that \(\Gamma\) acts _properly discontinuously_ on \(|Y|\) by which we mean that for all compact sets \(V\subset|Y|\), the set \[\left\{\gamma\in\Gamma\,\Big|\,\gamma(V)\cap V\neq\emptyset\right\}\] is _finite_; this ensures that the topological quotient \(|Y|/\Gamma\) is indeed a Hausdorff space. Note that _finite groups_ always have this property for trivial reasons. **Remark A.6**.: Since the finite group \(\Gamma\) acts properly discontinuously on \(|Y|\), there exist for all \(y\in|Y|\) (arbitrarily small) neighborhoods \(V_{y}\) of \(y\) such that \[\begin{cases}\gamma\big(V_{y}\big)=V_{y}\quad\text{for all}\quad\gamma\in\Gamma_{y},\\ \gamma\big(V_{y}\big)\cap V_{y}=\emptyset\quad\text{for all}\quad\gamma\in\Gamma\backslash\Gamma_{y}.\end{cases}\] It is then sufficient to construct the analytic quotient of \(\big(V_{y},\mathscr{O}_{Y}\big|_{V_{y}}\big)\) by \(\Gamma_{y}\) for all \(y\in Y\) (which is obviously identical with the quotient of \(\bigcup_{\gamma\in\Gamma}\gamma(V_{y})\) by \(\Gamma\)).99 Footnote 99: In fact, these spaces may be glued together (in a uniquely determined manner) to a space which possesses the desired property, if and only if the underlying topological space is Hausdorff. When \(Y\) carries more structure, we are often compelled to equip the quotient \(Y/\Gamma\) with a comparable structure: Let \(Y=(|Y|,\mathscr{O}_{Y})\) be a reduced complex analytic space and let \(\Gamma\) be a finite subgroup of \(\operatorname{Aut}(Y)\), the group of complex analytic automorphisms of \(Y\).100 The orbit space \(X:=Y/\Gamma\) is then called the _analytic quotient_ of \(Y\) by \(\Gamma\) and is constructed as follows: Define topologically \(|X|:=|Y|/\Gamma\) and denote by \(|\varpi|:|Y|\to|X|\) the corresponding (topological) quotient map; according to remark A.6, it is sufficient to consider the case in which \(|Y|\) is a small neighborhood of a point \(y\in|Y|\) and \(\Gamma=\Gamma_{y}\) is finite. Then, for \(W\) open in \(|X|\), the set \(|\varpi|^{-1}(W)\) is open and \(\Gamma\)-invariant in \(|Y|\) such that we can form the invariant algebra Footnote 100: As a general rule, \(\operatorname{Aut}(Y)\) refers to the automorphisms of an object \(Y\) in a category which will sometimes not be mentioned explicitly, if in the given context there is no ambiguity. \[\mathscr{O}_{X}(W):=\big(\mathscr{O}_{Y}\left(|\varpi|^{-1}(W)\right)\big)^{\Gamma}\] (A.5) where \(\Gamma\) acts in the obvious manner on the algebra \(\mathscr{O}_{Y}\left(|\varpi|^{-1}(W)\right)\) of holomorphic functions. Hence, we have furnished the topological space \(|X|\) with a ringed structure \((|X|,\mathscr{O}_{X})\).
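To see the construction (A.5) at work in the simplest case (a standard computation, not part of the text above): let \(Y=\mathbb{C}\) and let \(\Gamma=\mathbb{Z}_{2}=\{\pm 1\}\) act by \(z\mapsto-z\). For a \(\Gamma\)-invariant disc \(V\) centered at the origin, a holomorphic function \(f(z)=\sum_{k}c_{k}z^{k}\) on \(V\) is invariant precisely when all odd coefficients vanish, so \[\mathscr{O}_{Y}(V)^{\Gamma}=\big\{g(z^{2})\;\big|\;g\ \text{holomorphic}\big\},\] and the quotient map is realized by \(z\mapsto z^{2}\); in particular, \(\mathbb{C}/\mathbb{Z}_{2}\cong\mathbb{C}\) as analytic spaces.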
In order to see that \((|X|,\mathscr{O}_{X})\) is indeed a (reduced) complex analytic space, we need the following local identity: \[\mathscr{O}_{X,|\varpi|(y)}=\mathscr{O}_{Y,y}^{\Gamma}.\] (A.6) In fact, since \(|\varpi|\) is finite and \(|\varpi|^{-1}\big(|\varpi|(y)\big)=\{y\}\), we have \[\mathscr{O}_{X,|\varpi|(y)}=\varinjlim_{W\ni|\varpi|(y)}H^{0}(W,\mathscr{O}_{X})=\varinjlim_{W\ni|\varpi|(y)}H^{0}\big(|\varpi|^{-1}(W),\mathscr{O}_{Y}\big)^{\Gamma}=\Big[\varinjlim_{W\ni|\varpi|(y)}H^{0}\big(|\varpi|^{-1}(W),\mathscr{O}_{Y}\big)\Big]^{\Gamma}=\mathscr{O}_{Y,y}^{\Gamma}.\] (A.7) More concretely, we have the following theorem [164, Theorem 8.1]: **Theorem A.4** (Analytic quotient).: _If \(\Gamma\) acts properly discontinuously on the reduced complex analytic space \(Y\), then there exists the analytic quotient \(X=Y/\Gamma\). The analytic quotient map \(\varpi:Y\to X\) is locally finite and surjective and (near any point \(y\)) isomorphic to the quotient map \(Y\to Y/\Gamma_{y}\), where \(\Gamma_{y}\) denotes the finite stabilizer subgroup of \(\Gamma\) at \(y\). In particular, the analytic algebra \(\mathscr{O}_{X,x}\) can be identified with the invariant algebra \(\mathscr{O}_{Y,y}^{\Gamma_{y}}\) for an arbitrary point \(y\in|\varpi|^{-1}(x)\), and \(\varpi_{y}^{*}:\mathscr{O}_{X,x}\to\mathscr{O}_{Y,y}\) is just the finite inclusion \(\mathscr{O}_{Y,y}^{\Gamma_{y}}\hookrightarrow\mathscr{O}_{Y,y}\)._ **Remark A.7**.: If \(\mathscr{O}_{Y,y}\) is reduced or an integral domain, \(\mathscr{O}_{Y,y}^{\Gamma_{y}}\) is obviously reduced or an integral domain for arbitrary automorphism groups \(\Gamma_{y}\). Since the inclusion of \(\mathbb{C}\)-algebras \(\mathscr{O}_{Y,y}^{\Gamma_{y}}\hookrightarrow\mathscr{O}_{Y,y}\) is a finite homomorphism for finite groups \(\Gamma_{y}\), both algebras have the same dimension. Hence, under our standard assumptions, the analytic quotient \(X=Y/\Gamma\) has in \(x=\varpi(y)\) the same dimension as \(Y\) in \(y\). We finally note that normality is likewise inherited from \(Y\).

#### Quotient Singularities

Anticipating the introduction of complex orbifolds as objects which locally look like the quotient of a complex manifold with a finite group action, we turn to studying the singularities of an analytic quotient of a complex manifold by a finite subgroup of its holomorphic automorphisms. Such singularities are called quotient singularities: **Definition A.16** (Quotient singularity).: By a _quotient singularity_, we understand a singular point \(x\) of an analytic quotient \(X=Y/\Gamma\), where \(Y\) is a smooth analytic space and \(\Gamma\) is a finite group acting on \(Y\) by analytic automorphisms. Since a smooth analytic space is, in fact, a complex analytic manifold, we will use \(M\) (instead of \(Y\)) to denote such spaces. Following Cartan, one can then show that quotient singularities are locally analytically isomorphic to quotients of affine spaces by linear actions:101 Footnote 101: This result can be applied to show that quotient singularities are in fact _algebraic_ in all dimensions. **Theorem A.5** (Cartan).: _Each quotient singularity is isomorphic to a quotient \(\mathbb{C}^{\mathfrak{n}}/\Gamma\), where \(\Gamma\) is a finite subgroup of \(\mathrm{GL}(\mathfrak{n},\mathbb{C})\)._ Let \(M\) be a complex analytic manifold and let \(\Gamma\) be a finite subgroup of its holomorphic automorphisms \(\operatorname{Aut}(M)\).
We will denote the quotient singularity \(M/\Gamma\) by \(X\) and the corresponding analytic quotient map by \(\varpi:M\to X\). Since \(M\) is a complex manifold of dimension \(\mathfrak{n}\), it is locally biholomorphic to open subsets of the complex number space \(\mathbb{C}^{\mathfrak{n}}\); in particular, the stalks \(\mathscr{O}_{M,p}\) for all points \(p\in M\) are isomorphic with the \(\mathbb{C}\)-algebra \(\mathscr{O}_{\mathbb{C}^{\mathfrak{n}},p}=\mathbb{C}\{z_{1},\ldots,z_{\mathfrak{n}}\}\) of convergent power series at that point. Then, it follows from theorem A.4 and the above definition A.16 that quotient singularities are completely determined by the normal invariant algebra \[\mathscr{O}_{X,x}=\mathscr{O}_{\mathbb{C}^{\mathfrak{n}},p}^{\Gamma_{p}}\] for any point \(p\in|\varpi|^{-1}(x)\). As the ring \(\mathscr{O}_{X,x}\) depends only on the conjugacy class of \(\Gamma_{p}\), the quotients for conjugate groups are isomorphic. Hence, we consider only a representative for each conjugacy class. **Definition A.17** (Reflection groups and small groups).: An element \(\gamma\in\operatorname{Aut}(M)\), \(M\) a connected complex manifold, is called a _reflection_ (or, perhaps more precisely, a _pseudoreflection_) if it is of finite order and if the (analytic) fixpoint set \[\operatorname{Fix}(\gamma):=\big\{p\in M\,\big|\,\gamma\cdot p=p\big\},\] is of pure codimension-1 in \(M\). A finite group \(\Gamma\subset\operatorname{Aut}(M)\) is called a _reflection group_ if it is generated by pseudoreflections in \(\operatorname{Aut}(M)\). Of course, an element \(\gamma\) of finite order in \(\operatorname{GL}(\mathfrak{n},\mathbb{C})\) is a reflection if and only if it leaves a hyperplane in \(\mathbb{C}^{\mathfrak{n}}\) pointwise fixed; this is equivalent to \(\gamma\) having the eigenvalues 1 (of multiplicity \(\mathfrak{n}-1\)) and \(e^{\frac{2\pi\sqrt{-1}}{m}}\) -- a primitive \(m\)-th root of unity with \(m\geq 2\). On the other hand, a finite subgroup \(\Gamma\subset\operatorname{GL}(\mathfrak{n},\mathbb{C})\) will be called _small_ if it contains no pseudoreflections. Then, we have the following well-known result due to Prill [171]: **Theorem A.6** (Classification of quotients of complex manifolds).: _Let \(\Gamma\subset\operatorname{GL}(\mathfrak{n},\mathbb{C})\) be a finite subgroup. The following two statements are true:_ 1. _The analytic quotient_ \(\mathbb{C}^{\mathfrak{n}}/\Gamma\) _is smooth if and only if_ \(\Gamma\) _is a reflection group;_ 2. _There exists a small group_ \(\digamma\) _such that_ \(\mathbb{C}^{\mathfrak{n}}/\Gamma\) _and_ \(\mathbb{C}^{\mathfrak{n}}/\digamma\) _are analytically isomorphic._ This, of course, is equivalent to the claim that the invariant algebra \(\mathscr{O}_{\mathbb{C}^{\mathfrak{n}},p}^{\Gamma_{p}}\) is isomorphic to the convergent power series ring \(\mathbb{C}\{z_{1},\ldots,z_{\mathfrak{n}}\}\) if and only if \(\Gamma\subset\operatorname{GL}(\mathfrak{n},\mathbb{C})\) is a finite reflection group (see e.g. [164, §8.8] for more details). **Proposition A.5** (Analytic spaces with at most quotient singularities).: _An analytic space \(X=(|X|,\mathscr{O}_{X})\) admitting only quotient singularities has the following properties:_ 1. \(X=(|X|,\mathscr{O}_{X})\) _is always a reduced normal analytic space;_ 2. _The singular locus_ \(\operatorname{Sing}(X)\) _is a closed reduced analytic subspace of_ \(X\) _and has complex codimension at least two in_ \(X\);
3. _The smooth locus_ \(\operatorname{Reg}(X)\) _is a complex manifold and a dense open subset of_ \(X\)_._ Finally, since all finite subgroups of \(\operatorname{GL}(1,\mathbb{C})\cong\mathbb{C}^{*}\) are reflection groups, we have the following important corollary to the above theorem: **Corollary A.1**.: _If \(M\) is a 1-dimensional complex analytic manifold (i.e., a Riemann surface) and \(\Gamma\subset\operatorname{Aut}(M)\) is a finite subgroup of its holomorphic automorphisms, the analytic quotient \(M/\Gamma\) will always be a smooth, complex analytic space -- i.e., another Riemann surface._

#### Ramified Analytic Coverings

As we saw in theorem A.4, an analytic quotient map \(\varpi:Y\to X:=Y/\Gamma\) between two complex analytic spaces \(Y\) and \(X\) is locally finite and surjective. This motivates us to study such analytic mappings in more detail (see e.g., [163, §7.2]): **Definition A.18** (Analytic covering map).: A finite surjective analytic map \(\varpi:Y\to X\) between irreducible analytic spaces is called an _analytic covering_. This means that there exists a thin subset102 \(T\subset X\), called the _critical locus_ of the covering, such that Footnote 102: A subset \(T\subset X\) is called _thin_ if it has the property that each point has a neighborhood on which some non-zero holomorphic function vanishes. Since the set on which a holomorphic function vanishes is closed and has an empty interior, a thin set is _nowhere dense_, and the closure of a thin set is also thin. 1. \(\varpi^{-1}(T)\) is thin in \(Y\), and 2. the restriction \(\varpi\big|_{Y\backslash\varpi^{-1}(T)}:Y\backslash\varpi^{-1}(T)\to X\backslash T\) is locally an analytic isomorphism. The second condition means that for a sufficiently small open neighborhood \(W_{x}\subset X\backslash T\) of any point \(x\in X\backslash T\), the inverse image \(\varpi^{-1}(W_{x})\) consists of a finite number of components, called _sheets_ of \(\varpi\), such that the restriction of \(\varpi\) to each component is a complex analytic isomorphism between that component and \(W_{x}\). **Remark A.8**.: We will always assume that our analytic spaces are irreducible so that "analytic covering" and "finite analytic surjection" can be regarded as synonyms. If \(\varpi:Y\to X\) is an analytic covering, the restriction of \(\varpi\) to the complement of the critical locus is necessarily a finite-sheeted covering map; the number of sheets of this covering map will be called the (total) _degree_ of the analytic covering \(\varpi\) and will be denoted by \(\deg(\varpi)\). Additionally, for any point \(y\in Y\), there are arbitrarily small open neighborhoods \(V_{y}\subset Y\) of \(y\) such that the restriction \(\varpi\big|_{V_{y}}\) is also an analytic covering (see e.g. [162, §5]). Since the degrees of these local analytic coverings can only decrease as the neighborhoods \(V_{y}\) shrink to the point \(y\), it is evident that the degree is the same for all sufficiently small such neighborhoods; this common degree will be called the _ramification index_ (or the _local degree_ or _multiplicity_) of the mapping \(\varpi\) at the point \(y\), and will be denoted by \(\deg_{\varpi}(y)\). **Remark A.9**.: Note that if \(\varpi:Y\xrightarrow{\;d:1\;}X\) is an analytic covering of total degree \(d\), then selecting any point \(x\in X\) and letting \(\{y_{i}\}_{i\in I}:=\varpi^{-1}(x)\subset Y\) be the collection of distinct points in the pre-image of \(x\), it follows that \(\sum_{i\in I}\deg_{\varpi}(y_{i})=d\).
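As a simple illustration of these notions (a standard example, not taken from the references above), consider \[\varpi:\mathbb{C}\to\mathbb{C},\qquad z\mapsto z^{m},\qquad m\geq 2.\] This is an analytic covering of total degree \(m\) with critical locus \(T=\{0\}\): every point \(x\neq 0\) has \(m\) distinct preimages, each with ramification index \(1\), while \(x=0\) has the single preimage \(y=0\) with \(\deg_{\varpi}(0)=m\); in both cases the local degrees sum to \(m\), as required by remark A.9.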
For any point \(y\notin\varpi^{-1}(T)\), it is clear that \(\deg_{\varpi}(y)=1\). However, in general, there may very well be points \(y^{\prime}\in\varpi^{-1}(T)\) for which \(\deg_{\varpi}(y^{\prime})=1\). This is because not all the points of \(\varpi^{-1}(\varpi(y^{\prime}))\) need necessarily have the same ramification index, even when the critical locus is chosen to be as small as possible. Hence, one usually introduces the subset \[R_{\varpi}:=\left\{y\in Y\,\Big|\,\deg_{\varpi}(y)>1\right\}\] of \(\varpi^{-1}(T)\) which will be called the _ramification locus_ of the analytic covering \(\varpi:Y\to X\) and is a closed analytic subspace of \(Y\). The set \(B_{\varpi}:=\varpi(R_{\varpi})\) is called the _branching locus_ of \(\varpi\) and is a closed analytic subspace of \(X\); note that \(B_{\varpi}\) is clearly a subset of the critical locus \(T\). An analytic covering \(\varpi:Y\to X\) is said to be _branched at most at \(T\)_ if the branch locus \(B_{\varpi}\) is contained in \(T\). In addition, the analytic covering \(\varpi\) will be called _unbranched_ if \(B_{\varpi}\) is empty. Observe that when \(X\) is singular, \(R_{\varpi}\) and \(B_{\varpi}\) can be of codimension \(>1\), even when \(\varpi\) is a non-trivial branched covering. However, when \(X\) is a smooth, complex analytic space, \(R_{\varpi}\) will be a hypersurface in \(Y\), and \(B_{\varpi}\) will be a hypersurface in \(X\). Now, let \(\varpi:Y\to X\) be an analytic covering of normal complex spaces. We define an automorphism of this analytic covering as a complex analytic automorphism \(f:Y\to Y\) with the property that the diagram \[\begin{CD}Y@>{f}>{}>Y\\ @V{\varpi}V{}V@V{}V{\varpi}V\\ X@=X\end{CD}\] (A.8) commutes -- i.e. \(\varpi\circ f=\varpi\). The group \[\operatorname{Aut}(\varpi):=\left\{f\in\operatorname{Aut}(Y)\,\Big|\,\varpi\circ f=\varpi\right\} \tag{A.9}\] of all such automorphisms of \(\varpi:Y\to X\) is called the _group of covering transformations_ or _deck transformations_. An analytic covering \(\varpi:Y\to X\) will be called a _Galois covering_ (or, in topologists' language, _regular covering_) if \(\operatorname{Aut}(\varpi)\) acts _transitively_ on every fiber of \(\varpi\); the group \(\operatorname{Aut}(\varpi)\) itself will be called the _Galois group_ of \(\varpi\) and will be denoted by \(\operatorname{Gal}(Y/X)\) or \(\operatorname{Gal}(\varpi)\). In this case, the analytic quotient \(Y/\operatorname{Gal}(\varpi)\) is complex analytically equivalent to \(X\) and we get the following corollary of theorem A.4: **Corollary A.2**.: _Every analytic quotient map \(\varpi:Y\to X:=Y/\Gamma\) is locally a Galois covering map._ When the branched analytic covering \(\varpi:Y\to X\) is Galois, the total degree of \(\varpi\) is given by the order of its Galois group -- i.e. \(\deg(\varpi)=\#\operatorname{Gal}(\varpi)\). Consequently, using the fact that the restriction of \(\varpi\) to any (sufficiently small) neighborhood \(V_{y}\) of a point \(y\) is again a branched Galois covering, the ramification index of each point \(y\in Y\) will be equal to the order of its stabilizer subgroup \(\operatorname{Gal}(\varpi)_{y}:=\{\gamma\in\operatorname{Gal}(\varpi)\,|\,\gamma\cdot y=y\}\).
Additionally, since the stabilizer subgroups of the different points in the \(\operatorname{Gal}(\varpi)\)-orbit \(\varpi^{-1}(\varpi(y))\) of any point \(y\in Y\) vary only up to conjugation by elements of \(\operatorname{Gal}(\varpi)\), one can always choose the critical locus \(T\) such that \[R_{\varpi}:=\left\{y\in Y\,\Big|\,\#\operatorname{Gal}(\varpi)_{y}>1\right\}=\varpi^{-1}(T)\quad\text{and}\quad B_{\varpi}:=\varpi(R_{\varpi})=T\] for any branched Galois covering \(\varpi:Y\to X\). In this case, we say that the Galois covering \(\varpi:Y\to X\) is _branched along_ \(B_{\varpi}\). Moreover, since for an arbitrary lift \(y\) of any point \(x\in X\) the ramification index \(\deg_{\varpi}(y)=\#\operatorname{Gal}(\varpi)_{y}\) is independent of our choice of \(y\in\varpi^{-1}(x)\), we can define the _branching index_ of \(\varpi\) at any point \(x\in X\) to be given by this common value. More generally, one defines a _branching function_ for \(\varpi:Y\to X\) on \(X\) as \[\nu_{\varpi}:X\ni x\to\#\operatorname{Gal}(\varpi)_{y}\in\mathbb{N}\] where \(y\) is any lift of \(x\) -- i.e. \(y\in\varpi^{-1}(x)\). We are finally ready to introduce the notions of _ramification divisor_ and _branch divisor_ for a ramified Galois covering \(\varpi:Y\to X\) between connected normal analytic spaces: Let us start by defining \[X^{\prime}:=\Big\{x\in\operatorname{Reg}(X)\,\Big|\,\varpi^{-1}(x)\subset\operatorname{Reg}(Y)\Big\}\quad\text{and}\quad Y^{\prime}:=\varpi^{-1}(X^{\prime})\] such that \(Y^{\prime}\) and \(X^{\prime}\) are open analytic subsets of \(Y\) and \(X\) respectively and their complements have codimension at least \(2\). Then, the restriction \(\varpi\big|_{Y^{\prime}}:Y^{\prime}\to X^{\prime}\) is a Galois covering map between complex analytic manifolds. Next, let us pick local coordinates \(z_{1},\ldots,z_{\mathfrak{n}}\) on a neighborhood \(V_{y}\) of a point \(y\in Y^{\prime}\) and let \(w_{1},\ldots,w_{\mathfrak{n}}\) be the coordinates around its image \(\varpi(y)\in X^{\prime}\). Then, \(w_{i}=\varpi_{i}(z_{1},\ldots,z_{\mathfrak{n}})\) gives the local expression of \(\varpi\) near the point \(y\) and the set \[\mathcal{R}^{\prime}:=\left\{y\in Y^{\prime}\,\Bigg|\,\det\!\left(\frac{\partial\varpi_{i}}{\partial z_{j}}\right)=0\right\}\] can be viewed as the ramification locus of the restriction \(\varpi\big|_{Y^{\prime}}\) -- i.e. the set of all points \(y\in Y^{\prime}\) around which \(\varpi\big|_{V_{y}}\) is not a biholomorphism. Notice that since both \(Y^{\prime}\) and \(X^{\prime}\) are by definition smooth, \(\mathcal{R}^{\prime}\) is necessarily a hypersurface in \(Y^{\prime}\). Since the complement of \(Y^{\prime}\) has codimension at least \(2\), the Remmert-Stein extension theorem (see e.g., [163, p. 181]) ensures that the topological closure of this set will be a hypersurface \(\mathcal{R}_{\varpi}\) in \(Y\). Additionally, since the Galois covering \(\varpi\) is finite, the set \(\mathcal{B}_{\varpi}:=\varpi(\mathcal{R}_{\varpi})\) will be a hypersurface in \(X\). We will denote the irreducible components of the hypersurface \(\mathcal{R}_{\varpi}\subset Y\) by \(\mathcal{R}_{i}\); for each irreducible hypersurface \(\mathcal{R}_{i}\subset Y\), the image \(\mathcal{B}_{i}:=\varpi(\mathcal{R}_{i})\) is also an irreducible hypersurface in \(X\) such that \(\mathcal{B}_{\varpi}=\bigcup_{i}\mathcal{B}_{i}\).
**Remark A.10**.: Observe that if the Galois covering \(\varpi:Y\to X\) is ramified only along a _singular part_ of \(Y\), the sets \(\mathcal{R}_{\varpi}\) and \(\mathcal{B}_{\varpi}\) as defined above will be empty. Since we are assuming that the analytic spaces \(Y\) and \(X\) are normal (i.e. their singular locus has codimension \(\geq 2\)), one concludes that \(\mathcal{R}_{\varpi},\mathcal{B}_{\varpi}\neq\emptyset\) if and only if \(\operatorname{Gal}(\varpi)\) contains at least one pseudoreflection. Now, let us consider the sets \[Y^{\prime\prime}:=Y^{\prime}\Big\backslash\Big(\operatorname{Sing}(\mathcal{R}_{\varpi})\cup\varpi^{-1}\big(\operatorname{Sing}(\mathcal{B}_{\varpi})\big)\Big)\quad\text{and}\quad X^{\prime\prime}:=\varpi(Y^{\prime\prime}).\] Both subsets are open and have complements of codimension at least \(2\) in \(Y\) and \(X\), respectively. Note that if \(y\in Y^{\prime\prime}\) either \(y\notin\mathcal{R}_{\varpi}\) or \(y\) belongs to one and only one irreducible component \(\mathcal{R}_{i}\). In the first case, we say that \(\varpi\) is _unramified_ at \(y\); then \(\varpi\) is a local biholomorphism at \(y\). In the latter case (i.e., when \(y\) has ramification index \(>1\)), let \(\mathcal{R}_{i}\) be the unique irreducible component of \(\mathcal{R}_{\varpi}\) passing through \(y\). Then, there are local coordinates \(z_{1},\ldots,z_{\mathfrak{n}}\) on \(V\subset Y^{\prime\prime}\) and \(w_{1},\ldots,w_{\mathfrak{n}}\) on \(W\subset X^{\prime\prime}\) centered at \(y\) and \(x=\varpi(y)\) respectively, such that locally \(\mathcal{R}_{i}\cap V=\{z_{1}=0\}\), \(\mathcal{B}_{i}\cap W=\{w_{1}=0\}\) and \[\varpi\big|_{V}:(z_{1},\ldots,z_{\mathfrak{n}})\mapsto(w_{1}=z_{1}^{m},w_{2}=z_{2},\ldots,w_{\mathfrak{n}}=z_{\mathfrak{n}}), \tag{A.10}\] where \(m\in\mathbb{N}^{>1}:=\mathbb{N}\backslash\{1\}\) denotes the ramification index of \(\varpi\) at the point \(y\) -- i.e. \(m=\deg_{\varpi}(y)\). For any irreducible component \(\mathcal{R}_{i}\), the ramification index \(\deg_{\varpi}(y)=\#\operatorname{Gal}(\varpi)_{y}\) will be the same for all points \(y\in\mathcal{R}_{i}\cap Y^{\prime\prime}\); this common value is denoted by \(\deg_{\varpi}(\mathcal{R}_{i})\) and will be called the _ramification index of \(\varpi\) along \(\mathcal{R}_{i}\)_. This enables us to define the _ramification divisor_ of a branched Galois covering \(\varpi\) as the formal linear combination \[\mathscr{R}_{\varpi}:=\sum_{i}(m_{i}-1)\mathcal{R}_{i}, \tag{A.11}\] where \(m_{i}=\deg_{\varpi}(\mathcal{R}_{i})\) are the ramification indices (or multiplicities) of \(\varpi\) along the irreducible hypersurfaces (or prime divisors) \(\mathcal{R}_{i}\). **Remark A.11**.: The ramification divisor \(\mathscr{R}_{\varpi}\) defined in this way is an _effective Weil divisor_ on \(Y\). As we discussed earlier, the notions of Weil and Cartier divisors coincide only on smooth analytic spaces (e.g., on \(Y^{\prime}\)). However, when \(Y\) only contains quotient singularities, every Weil divisor is \(\mathbb{Q}\)_-Cartier_ by which we mean some multiple of it is a Cartier divisor; such normal reduced analytic spaces are said to be \(\mathbb{Q}\)_-factorial_ (see [172] for more details). When the ramified analytic covering \(\varpi:Y\to X\) is Galois, it is possible to define the branch divisor on \(X\) as follows: As mentioned before, for each prime divisor \(\mathcal{R}_{i}\) on \(Y\), the image \(\mathcal{B}_{i}=\varpi(\mathcal{R}_{i})\) will be a prime divisor on \(X\).
If \(\nu_{\varpi}:X\to\mathbb{N}\) denotes the branching function associated to the Galois covering \(\varpi\), the restriction \(\nu_{\varpi}\big|_{\mathcal{B}_{i}\cap X^{\prime\prime}}:\mathcal{B}_{i}\cap X^{\prime\prime}\to\mathbb{N}\) will be a constant function for each prime divisor \(\mathcal{B}_{i}\); this constant value will be denoted by \(\nu_{\varpi}(\mathcal{B}_{i})\) and is called the _branching index of \(\varpi\) along \(\mathcal{B}_{i}\)_. Then, we can define the _branch divisor_ of the Galois covering \(\varpi\) as the effective \(\mathbb{Q}\)-divisor \[\mathscr{B}_{\varpi}:=\sum_{i}\left(1-\frac{1}{\nu_{\varpi}(\mathcal{B}_{i})}\right)\mathcal{B}_{i}.\] Here, by a \(\mathbb{Q}\)-divisor, we simply mean a formal finite sum of irreducible hypersurfaces with coefficients in \(\mathbb{Q}\) (i.e., a Weil divisor with rational coefficients). Note that with this convention, \(\mathscr{R}_{\varpi}=\varpi^{*}(\mathscr{B}_{\varpi})\), that is, the ramification divisor is the pull-back of the branch divisor: in the local form (A.10) one has \(\varpi^{*}\mathcal{B}_{i}=m_{i}\mathcal{R}_{i}\) with \(m_{i}=\nu_{\varpi}(\mathcal{B}_{i})\), so \(\varpi^{*}\big((1-\frac{1}{m_{i}})\mathcal{B}_{i}\big)=(m_{i}-1)\mathcal{R}_{i}\). Let us finish our discussion of analytic coverings by focusing on the very important example of branched Galois coverings \(\varpi:Y\to X\) of 1-dimensional complex analytic spaces -- i.e., when both analytic spaces \(X\) and \(Y\) are Riemann surfaces. In that case, any non-constant map between compact Riemann surfaces is a finite branched covering. Let \(f:Y\to X\) be such a non-constant, holomorphic map between compact Riemann surfaces \(Y\) and \(X\). For every \(y\in Y\) there exist charts around \(y\) and \(f(y)\) such that the local expression of the branched covering \(f\) is of the form \(z\mapsto z^{m}\) where \(m=\deg_{f}(y)\) is the ramification index of \(f\) at the point \(y\), \(z\) is the local coordinate on the covering Riemann surface \(Y\), and \(w=z^{m}\) is the local coordinate on the base Riemann surface \(X\) (see Fig. 2).

Figure 2: _A model branched covering._

#### A.1.4 Complex Analytic Orbifolds

Let \(X_{O}\) be an analytic space of dimension \(\mathfrak{n}\) admitting only quotient singularities. An _orbifold chart_ or _local uniformizing system_ on an open subset \(U\subset X_{O}\) is a tuple \((U,\tilde{U},\Gamma,\mathrm{f})\) where the connected and open set \(\tilde{U}\subset\mathbb{C}^{\mathfrak{n}}\) is biholomorphic to the open unit ball \(B^{\mathfrak{n}}\), \(\Gamma\subset\mathrm{GL}(\mathfrak{n},\mathbb{C})\) is a finite group acting effectively103 on \(\tilde{U}\) as holomorphic automorphisms, and the ramified covering \(\mathrm{f}:\tilde{U}\to U\), called a _folding map_, is a \(\Gamma\)-invariant map which induces a biholomorphism \(\tilde{U}/\Gamma\xrightarrow{\cong}U\). The pair \((\tilde{U},\Gamma)\) is called a _local model_ for \(U\). Footnote 103: The condition that local uniformizing groups act effectively is not always imposed in the literature, and there are occasions when this requirement is too restrictive. However, since we are exclusively concerned with _effective orbifolds_ or _reduced orbifolds_ in this paper, it is convenient to incorporate this condition as part of our definition. **Definition A.19** (Analytic orbifold atlas).: An _analytic orbifold atlas_ \(\mathcal{U}\) on an analytic space \(X_{O}\) (admitting only quotient singularities) is a collection \(\{(U_{a},\tilde{U}_{a},\Gamma_{a},\mathrm{f}_{a})\}_{a\in A}\) of charts on this analytic space such that the following conditions are satisfied: 1. \(\{U_{a}\}_{a\in A}\) is an open cover of the underlying complex space \(X_{O}\);
2. If \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathrm{f}_{a})\) and \((U_{b},\tilde{U}_{b},\Gamma_{b},\mathrm{f}_{b})\) for \(a,b\in A\) are two orbifold charts with \(U_{a}\cap U_{b}\neq\emptyset\), then for each \(x\in U_{a}\cap U_{b}\) there exists an orbifold chart \((U_{c},\tilde{U}_{c},\Gamma_{c},\mathrm{f}_{c})\in\mathcal{U}\) that contains \(x\) -- i.e. \(x\in U_{c}\subseteq U_{a}\cap U_{b}\); 3. If \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathrm{f}_{a})\) and \((U_{b},\tilde{U}_{b},\Gamma_{b},\mathrm{f}_{b})\) for \(a,b\in A\) are two orbifold charts with \(U_{a}\subset U_{b}\), then there exists a holomorphic embedding, \[\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\] called the _change of charts_ (or _embedding_ or _gluing map_), such that the folding maps satisfy \(\mathrm{f}_{a}=\mathrm{f}_{b}\circ\eta_{ab}\). Moreover, for any three orbifold charts labeled by \(a,b,c\in A\) and having the property \(U_{a}\subset U_{b}\subset U_{c}\), the corresponding embeddings should satisfy \(\eta_{ac}=\eta_{bc}\circ\eta_{ab}\). **Remark A.12**.: The choice of embedding \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) is unique only up to the action of \(\Gamma_{b}\): Let \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathrm{f}_{a})\) and \((U_{b},\tilde{U}_{b},\Gamma_{b},\mathrm{f}_{b})\) be two orbifold charts on \(X_{O}\) with \(U_{a}\subset U_{b}\). If \(\eta_{ab},\eta^{\prime}_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) are two embeddings, then there exists a _unique_ \(\gamma\in\Gamma_{b}\) such that \(\eta^{\prime}_{ab}=\gamma\circ\eta_{ab}\). As a result, an embedding \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) induces a _monomorphism_104 \(\Upsilon_{ab}:\Gamma_{a}\to\Gamma_{b}\) which is given by Footnote 104: We use the term "monomorphism" to mean an injective group homomorphism. \[\eta_{ab}\circ\gamma=\Upsilon_{ab}(\gamma)\circ\eta_{ab},\] (A.12) that is \(\eta_{ab}(\gamma\tilde{x})=\Upsilon_{ab}(\gamma)\eta_{ab}(\tilde{x})\) for all \(\gamma\in\Gamma_{a}\) and \(\tilde{x}\in\tilde{U}_{a}\). An analytic orbifold atlas \(\mathcal{U}\) is said to be a refinement of another analytic orbifold atlas \(\mathcal{V}\) if there exists an embedding of every chart in \(\mathcal{U}\) into some chart of \(\mathcal{V}\). This enables us to define an _equivalence relation_ between orbifold atlases where two orbifold atlases are said to be equivalent if they have a common refinement. Then, **Definition A.20** (Complex analytic orbifolds).: A _complex analytic orbifold_ \(O\) of dimension \(\mathsf{n}\) is a pair \((X_{O},[\mathcal{U}])\) where \(X_{O}\) is the _underlying (complex) analytic space_ with at most finite quotient singularities and \([\mathcal{U}]\) is an equivalence class of analytic orbifold atlases on \(X_{O}\). A \(1\)-dimensional complex analytic orbifold will be called an _orbifold Riemann surface_ or a _Riemann orbisurface_. **Remark A.13**.: As in the manifold case, an orbifold atlas is always contained in a _unique maximal_ one and two orbifold atlases are equivalent if, and only if, they are contained in the same maximal atlas. Therefore, we can equivalently define an _analytic orbifold structure_ on a complex analytic space \(X_{O}\) (with at most finite quotient singularities) as the _datum of a maximal (analytic) orbifold atlas_ on this space. **Remark A.14**.: Let \(O=(X_{O},[\mathcal{U}])\) be a complex analytic orbifold where \(X_{O}\) is the underlying complex analytic space.
Remember that \(X_{O}\) can be characterized as a \(\mathbb{C}\)-ringed space \((|X_{O}|,\mathscr{O}_{X})\) where \(|X_{O}|\) is a Hausdorff paracompact topological space; we will denote the topological space \(|X_{O}|\) simply by \(|O|\). Then, one can define \(O\) alternatively as a pair \((|O|,[\mathscr{U}])\) where the charts of \(\mathscr{U}\) are now given by a quadruple \((|U|,\tilde{U},\Gamma,|\mathsf{f}|)\). Here, \(|U|\subset|O|\) denotes the underlying topological space of each open analytic subset \(U\subset X_{O}\) and \(|\mathsf{f}|\) denotes the continuous map associated with the \(\Gamma\)-invariant analytic map \(\mathsf{f}=(|\mathsf{f}|,\mathsf{f}^{*})\) that induces a homeomorphism between \(|U|\) and \(\tilde{U}/\Gamma\) as topological spaces; as evident from our notation, the local models \((\tilde{U},\Gamma)\) are defined as before. This alternative definition of complex orbifolds is more common among topologists and closely resembles the definition of a _smooth orbifold_ (where the only differences are in local models and embeddings).

#### Local Groups and Canonical Stratification

Consider a point \(x\in X_{O}\) and a local chart \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathrm{f}_{a})\in\mathcal{U}\) containing \(x\). In addition, let us choose \(\tilde{x}\in\tilde{U}_{a}\) to be a particular pre-image of \(x\) and denote by \(\Gamma_{\tilde{x}}\) the subgroup of \(\Gamma_{a}\) that fixes \(\tilde{x}\). As our choice of \(\tilde{x}\in\operatorname{f}_{a}^{-1}(x)\) varies, the stabilizer subgroup of \(\tilde{x}\) varies only up to conjugation by the elements of \(\Gamma_{a}\). Similarly, as our choice of the chart \(U_{a}\) containing \(x\) varies, the stabilizer varies only up to conjugation by a transition map. Therefore, we can define the _isotropy group_ or the _local group_ \(\Gamma_{x}\) to be the conjugacy class of the stabilizer subgroup \(\Gamma_{\tilde{x}}\subset\Gamma_{a}\) for some \(\tilde{x}\in\operatorname{f}_{a}^{-1}(x)\). It is then clear that the local group \(\Gamma_{x}\) for a point \(x\in X_{O}\) is _independent_ of both the _chart_, \(U_{a}\), and the _lift_ \(\tilde{x}\in\tilde{U}_{a}\). **Remark A.15**.: The main observation we would like to make about this definition is that an orbifold chart \((U,\tilde{U},\Gamma,\operatorname{f})\) contains more data than simply the analytic quotient \(\tilde{U}/\Gamma\). In particular, an orbifold chart "remembers" the pseudoreflections contained in a local uniformizing group \(\Gamma_{a}\) and their corresponding fixed-point sets. This allows us to define the _singular points_ of \(O\) as points whose local isotropy group \(\Gamma_{x}\neq\{1\}\); those points with \(\Gamma_{x}=\{1\}\) are called _regular points_ of \(O\). The set \(\big\{x\in X_{O}\,\big|\,\Gamma_{x}\neq\{1\}\big\}\) of singular points of \(O\) is called the _singular locus_ or the _singular set_ of \(O\) and will be denoted by \(\operatorname{Sing}(O)\). It follows from theorem A.6 (or remark A.15) that if any local uniformizing group contains a pseudoreflection, the orbifold singular set \(\operatorname{Sing}(O)\) will be bigger than the singular set of the underlying analytic space \(\operatorname{Sing}(X_{O})\) and that \(\operatorname{Sing}(O)=\operatorname{Sing}(X_{O})\) if and only if none of the local uniformizing groups contain a pseudoreflection.
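A one-dimensional illustration may be helpful here (a standard example, revisiting the \(z\mapsto-z\) action considered earlier): let \(\Gamma=\mathbb{Z}_{2}\) act on \(\tilde{U}=\mathbb{C}\) by the reflection \(z\mapsto-z\), with folding map \[\mathrm{f}:\tilde{U}\to U,\qquad\mathrm{f}(z)=z^{2}.\] The underlying analytic space \(U\cong\tilde{U}/\mathbb{Z}_{2}\cong\mathbb{C}\) is smooth, so \(\operatorname{Sing}(X_{O})=\emptyset\); nevertheless, the chart remembers the pseudoreflection, and the orbifold singular set is \(\operatorname{Sing}(O)=\{0\}\), the image of its fixed point, with local group \(\Gamma_{0}=\mathbb{Z}_{2}\).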
The subset of all orbifold regular points, denoted by \(X_{O}^{\operatorname{reg}}\), is an open dense subset of \(X_{O}\); in particular, since \(\operatorname{Sing}(X_{O})\subseteq\operatorname{Sing}(O)\), the orbifold regular locus \(X_{O}^{\operatorname{reg}}:=X_{O}\backslash\operatorname{Sing}(O)\) will always be a complex sub-manifold of \(\operatorname{Reg}(X_{O})=X_{O}\backslash\operatorname{Sing}(X_{O})\). The local isotropy groups give a _canonical stratification_ of \(X_{O}\) by stating that two points lie in the same _stratum_ \(\operatorname{Sing}_{j}(O)\) if their local groups are conjugate. Thus, we get a decomposition of \(X_{O}\) as \[X_{O}=X_{O}^{\operatorname{reg}}\bigsqcup_{j}\operatorname{Sing}_{j}(O),\qquad\operatorname{Sing}(O)=\bigsqcup_{j}\operatorname{Sing}_{j}(O),\] (A.13) where the union is taken over all conjugacy classes. The dense open subset of regular points, \(X_{O}^{\operatorname{reg}}\), is sometimes called the _principal stratum_ and corresponds to the trivial conjugacy class.

#### Analytic Orbifold Maps and Orbifold Covering

The standard notion of structure preserving maps between complex analytic orbifolds can be given in an analogous manner to complex manifolds (see [141, 142]). However, it was not realized until recently (see, e.g., [173, 174]) that certain problems arise with the usual definition of an orbifold map; namely, as we will see later, this definition does not, in general, induce morphisms of sheaves or V-bundles. This led to the introduction of the notion of "good maps" in [173]: **Definition A.21** (Analytic orbifold Maps).: Let \(O=(X_{O},\mathcal{U})\) and \(O^{\prime}=(X^{\prime}_{O},\mathcal{U}^{\prime})\) be two complex analytic orbifolds (not necessarily of the same dimension). A map \(f:O\to O^{\prime}\) is said to be an _analytic orbifold map_ (or a _holomorphic orbifold map_) if \(f\) gives an analytic mapping between the underlying complex analytic spaces, denoted by \((|f|,f^{*}):(|X_{O}|,\mathscr{O}_{X})\to(|X^{\prime}_{O}|,\mathscr{O}_{X^{\prime}})\), which admits a _local lift_ at each point \(x\in X_{O}\): For every pair of orbifold charts \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathsf{f}_{a})\in\mathcal{U}\) and \((U^{\prime}_{a},\tilde{U}^{\prime}_{a},\Gamma^{\prime}_{a},\mathsf{f}^{\prime}_{a})\in\mathcal{U}^{\prime}\) containing an arbitrary point \(x\in X\) and its image \(f(x)\in X^{\prime}\) respectively, with \(f(U_{a})\subset U^{\prime}_{a}\), the orbifold map \(f\) induces a group homomorphism between local isotropy groups \(\bar{f}_{x}:\Gamma_{x}\to\Gamma^{\prime}_{f(x)}\) and a holomorphic \(\bar{f}_{x}\)-equivariant map \(\tilde{f}_{x}:\tilde{U}_{a}\to\tilde{U}^{\prime}_{a}\) such that the diagram \[\begin{CD}\tilde{U}_{a}@>{\tilde{f}_{x}}>{}>\tilde{U}^{\prime}_{a}\\ @V{\mathsf{f}_{a}}V{}V@V{}V{\mathsf{f}_{a}^{\prime}}V\\ U_{a}@>{(|f|,f^{*})}>{}>U^{\prime}_{a}\end{CD}\] (A.14) commutes (see Figure 3). Moreover, a holomorphic orbifold map \(f_{\mathrm{orb}}\) is said to be _good_ if it is compatible with the embeddings: If \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) is an embedding on \(O\), then there is an embedding \(\hat{f}(\eta_{ab}):\tilde{U}^{\prime}_{a}\hookrightarrow\tilde{U}^{\prime}_{b}\) on \(O^{\prime}\), such that 1. \(\tilde{f}_{\tilde{U}_{b}}\circ\eta_{ab}=\hat{f}(\eta_{ab})\circ\tilde{f}_{\tilde{U}_{a}}\), and 2. \(\hat{f}(\eta_{bc}\circ\eta_{ab})=\hat{f}(\eta_{bc})\circ\hat{f}(\eta_{ab})\).
Note that conditions (i) and (ii) imply that the composition of good orbifold maps is again a good orbifold map. Finally, a holomorphic orbifold map \(f_{\mathrm{orb}}:O\to O^{\prime}\) is called an _analytic orbifold automorphism_ (or an _orbifold biholomorphism_) if it admits an analytic inverse. In this case, we clearly have \(\Gamma_{x}\cong\Gamma^{\prime}_{f(x)}\) for all \(x\in X\cong X^{\prime}\) -- i.e., biholomorphisms must preserve the orbifold stratification.

**Remark A.16**.: Considering \(\mathbb{C}\) or \(\hat{\mathbb{C}}\) as orbifolds with an empty singular set, one can define _holomorphic/meromorphic orbifold functions_ on a complex orbifold \(O\) as holomorphic orbifold maps \(f_{\mathrm{orb}}:O\to\mathbb{C}\) or \(\hat{\mathbb{C}}\) (see remark A.24).

Now, we can easily define an orbifold Galois covering using the above definitions:

**Definition A.22** (Orbifold Galois covering).: An _orbifold Galois covering_ \(\varpi_{\mathrm{orb}}:\tilde{O}=(\tilde{X},\mathcal{V})\to O=(X,\mathcal{U})\) is an analytic orbifold map such that \(\varpi:\tilde{X}\to X\) is a Galois analytic covering and \(\mathrm{Gal}(\varpi)\subset\mathrm{Aut}(O)\).105 Footnote 105: Remember that for any orbifold \(O=(X_{O},\mathcal{U})\), we have \(\mathrm{Aut}(O)\subset\mathrm{Aut}(X_{O})\).

In other words, the holomorphic orbifold map \(\varpi_{\mathrm{orb}}:\tilde{O}\to O\) is a projection map such that each point \(x\in X_{O}\) has a neighborhood \(U\cong\tilde{U}/\Gamma\) for which the connected components \(V_{i}\) of \(\varpi^{-1}(U)\) are analytically isomorphic to \(\tilde{U}/\Gamma_{i}\) where \(\Gamma_{i}\subset\Gamma\). Therefore, the restriction of the projection map \(\varpi\) to each sheet \(V_{i}\), i.e. \(\varpi\big{|}_{V_{i}}:V_{i}\to U\), corresponds to the natural projection \(\tilde{U}/\Gamma_{i}\to\tilde{U}/\Gamma\) (see Figure 4). By the _Galois group_ and _degree_ of an orbifold Galois covering, we mean its Galois group and degree as an analytic cover. Finally, in cases where we need to keep track of base points on the covering and base spaces, we will use the notation \((\tilde{O},\tilde{x}_{0})\xrightarrow{\varpi_{\mathrm{orb}}}(O,x_{0})\) to refer to an orbifold covering for which \(\varpi(\tilde{x}_{0})=x_{0}\).

#### Global Quotient Orbifolds

As pointed out in remark A.12, the choice of embedding \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) is in general _not unique_; so, the charts do not have to satisfy a cocycle condition upstairs, though of course they do downstairs, where the open sets \(U_{a}\) glue to give the analytic space \(X_{O}\) (see the discussion around Lemma A.1). That is, the orbifold charts need not glue, since an orbifold need not be a global quotient by a finite group.

Figure 4: _Orbifold covering_. The figure on the left compares the covering maps for manifolds and orbifolds. The figure on the right shows how the local groups of an orbifold and its covering orbifold are related.
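A minimal example of definition A.22 (again, ours): for \(k\,|\,m\), the map of disk orbifolds \[\varpi_{\mathrm{orb}}:[\mathrm{D}/\mathbb{Z}_{k}]\longrightarrow[\mathrm{D}/\mathbb{Z}_{m}],\qquad\varpi:z\mapsto z^{m/k},\] is an orbifold Galois covering of degree \(m/k\) with \(\operatorname{Gal}(\varpi_{\mathrm{orb}})\cong\mathbb{Z}_{m/k}\). Over the cone point, \(\varpi^{-1}(U)\) has a single connected component \(V\cong\tilde{U}/\mathbb{Z}_{k}\) with \(\mathbb{Z}_{k}\subset\mathbb{Z}_{m}\), exactly as in the local picture \(V_{i}\cong\tilde{U}/\Gamma_{i}\) above, while away from the cone point \(\varpi\) is an ordinary \((m/k)\)-sheeted covering.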
Figure 3: _A local lift corresponding to an orbifold map._ Every holomorphic orbifold map between two complex orbifolds \(f_{\text{orb}}:O\to O^{\prime}\) consists of an analytic map \(f=(|f|,f^{*})\) between the underlying analytic spaces together with a holomorphic local lift, which is given by a group homomorphism \(\bar{f}_{x}:\Gamma_{x}\to\Gamma_{f(x)}\) between local isotropy groups at each point \(x\in O\) and its corresponding image \(f(x)\in O^{\prime}\), and a holomorphic \(\bar{f}_{x}\)-equivariant map \(\tilde{f}_{\tilde{U}}:\tilde{U}\to\tilde{U}^{\prime}\) between local uniformizing neighborhoods.

However, the most natural examples of complex orbifolds appear precisely when we take the quotient space \(M/\Gamma\) of a complex manifold \(M\) by a finite subgroup \(\Gamma\subset\operatorname{Aut}(M)\) of its analytic automorphisms:

**Proposition A.6**.: _If \(M\) is a complex manifold and \(\Gamma\) is a group acting holomorphically, effectively, and properly discontinuously on \(M\), the quotient \(M/\Gamma\) has the structure of a complex orbifold. Such orbifolds are called "(effective) global quotient orbifolds" and we will denote them by \([M/\Gamma]\) in order to emphasize their orbifold structure.106_ Footnote 106: More concretely, one must use the geometric invariant theory (GIT) quotient to define the global quotient orbifolds. This is due to the fact that, as we have already seen, the analytic quotient defined in theorem A.4 is unable to produce the codimension-1 orbifold singularities created by the action of pseudoreflections contained in \(\Gamma\). Therefore, with this notation, the analytic quotient \(M/\Gamma\) plays the role of the underlying analytic space of the complex orbifold \([M/\Gamma]\).

Proof.: For any \(x\in M/\Gamma\), choose a point \(\tilde{x}\in M\) that projects onto \(x\) and let \(\Gamma_{\tilde{x}}:=\left\{\gamma\in\Gamma\,\big{|}\,\gamma\tilde{x}=\tilde{x}\right\}\) denote the stabilizer subgroup (or isotropy subgroup) of \(\tilde{x}\). Since the action of \(\Gamma\) on \(M\) is properly discontinuous, according to Remark A.6, there exists a (small enough) neighborhood \(\tilde{U}_{\tilde{x}}\subset M\) containing \(\tilde{x}\) such that \(\gamma(\tilde{U}_{\tilde{x}})=\tilde{U}_{\tilde{x}}\) for all \(\gamma\in\Gamma_{\tilde{x}}\) and \(\tilde{U}_{\tilde{x}}\cap\gamma(\tilde{U}_{\tilde{x}})=\emptyset\) for all elements of \(\Gamma\) not in \(\Gamma_{\tilde{x}}\). Then, the analytic quotient map \(\varpi:M\to M/\Gamma\) will be a (branched) Galois covering which induces a biholomorphism between an open analytic subset \(U\subset M/\Gamma\) containing \(x\) and \(\tilde{U}_{\tilde{x}}/\Gamma_{\tilde{x}}\). Then, the quadruple \((U_{x},\tilde{U}_{\tilde{x}},\Gamma_{\tilde{x}},\varpi|_{\tilde{U}_{\tilde{x}}})\) will be an orbifold chart on \(M/\Gamma\) containing the point \(x\). By augmenting some cover \(\{U_{x}\}\) of \(M/\Gamma\) via adjoining finite intersections,107 one obtains a natural orbifold structure on \(M/\Gamma\) induced by the atlas \(\mathcal{U}=\{(U_{a},\tilde{U}_{a},\Gamma_{a},\varpi|_{\tilde{U}_{a}})\}_{a\in A}\). The corresponding embeddings \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) are induced by transition functions on the complex manifold \(M\) and are thus guaranteed to be holomorphic; this ensures that the quotient orbifold \([M/\Gamma]=(M/\Gamma,\mathcal{U})\) inherits a complex structure that is uniquely determined by the complex structure of \(M\) and the group \(\Gamma\). Footnote 107: see [143, Chap. 14] for more details.
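A classical instance of proposition A.6 (included only as an illustration) is the "pillowcase": let \(E=\mathbb{C}/\Lambda\) be an elliptic curve and let \(\Gamma=\mathbb{Z}_{2}=\{\pm 1\}\) act by \(z\mapsto-z\). The involution has exactly four fixed points (the two-torsion points of \(E\)), so \[[E/\mathbb{Z}_{2}]\qquad\text{with}\qquad E/\mathbb{Z}_{2}\cong\hat{\mathbb{C}}\] is the Riemann sphere equipped with four cone points of order \(2\); since \(z\mapsto-z\) acts as a pseudoreflection in dimension one, the underlying analytic space is smooth even though \(\operatorname{Sing}(O)\) consists of four points.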
Since any complex manifold can also be regarded as a complex orbifold for which all of the local uniformizing groups \(\Gamma_{i}\) are given by the trivial group \(\{1\}\), the analytic Galois covering map \(\varpi:M\to X_{O}:=M/\Gamma\) is naturally promoted to an orbifold Galois covering map \(\varpi_{\mathrm{orb}}:M\to O:=[M/\Gamma]\) for which \(\Gamma\) is the group of covering transformations -- i.e. \(\operatorname{Gal}(\varpi_{\mathrm{orb}})\cong\Gamma\). More generally, if \(\Gamma\) is a discrete subgroup of holomorphic automorphisms of a complex manifold \(M\) that acts effectively and properly discontinuously on it and if \(\Gamma^{\prime}\) is a subgroup of \(\Gamma\), i.e. \(\Gamma^{\prime}\subset\Gamma\subset\operatorname{Aut}(M)\), the holomorphic orbifold map \([M/\Gamma^{\prime}]\to[M/\Gamma]\) will be an orbifold Galois covering.

**Remark A.17**.: When the manifold \(M\) in the above construction of \([M/\Gamma]\) is _simply connected_, \(M\) plays the role of the _universal covering space_ and \(\Gamma\) plays the role of the _orbifold fundamental group_.

Due to the many nice features that global quotient orbifolds share with ordinary manifolds, they were given the name "good orbifolds" by Thurston [143, Chap. 14]:

**Definition A.23** (Good orbifolds).: A complex orbifold \(O\) is called _good_ or _developable_ if it is analytically isomorphic to a global quotient orbifold \([M/\Gamma]\) in an orbifold sense (\(\Gamma\) is discrete but not necessarily finite). When the group \(\Gamma\) is also finite, the orbifold \(O\cong[M/\Gamma]\) will be called _very good_. In other words, a (complex) orbifold \(O\) is good (respectively, very good) _if and only if_ \(O\) has a covering (respectively, finite covering) that is a (complex) manifold. Otherwise, we have a _bad_ orbifold.

#### Complex Analytic Orbifolds as Log Pairs

When local isotropy groups contain pseudoreflections, each reflection fixes a hyperplane in \(\tilde{U}_{i}\), and the folding map \(\mathsf{f}_{i}:\tilde{U}_{i}\to U_{i}\) will have a _ramification divisor_ \(\mathscr{R}_{i}\) on \(\tilde{U}_{i}\) and a _branch divisor_ \(\mathscr{B}_{i}\) on \(U_{i}\). Let \(\mathcal{B}_{ij}\) denote the irreducible components of the branch divisor \(\mathscr{B}_{i}\) and let \(m_{ij}\) be the _branching index_ (or the _multiplicity_) of \(\mathsf{f}_{i}\) along each prime divisor \(\mathcal{B}_{ij}\) such that \(\mathscr{B}_{i}=\sum_{j}\big{(}1-\frac{1}{m_{ij}}\big{)}\mathcal{B}_{ij}\). Then, the compatibility condition between orbifold charts means that there are _global prime divisors_ \(\mathcal{B}_{j}\subset X_{O}\) and _ramification indices_ \(m_{j}\) such that \(\mathcal{B}_{ij}=U_{i}\cap\mathcal{B}_{j}\) and \(m_{ij}=m_{j}\) (after suitable re-indexing). Therefore, it will be convenient to codify the above data by a single effective \(\mathbb{Q}\)-divisor \[\mathscr{D}:=\sum_{\mathcal{B}_{j}\subset\mathrm{Sing}(O)}\left(1-\frac{1}{m_{j}}\right)\mathcal{B}_{j},\] (A.15) called the _branch divisor_ of \(O\). It turns out that a complex orbifold \(O=(X_{O},\mathcal{U})\) can be uniquely determined by the pair \((X_{O},\mathscr{D})\), called a _log pair_: An orbifold atlas \(\mathcal{U}=\{(U_{a},\tilde{U}_{a},\Gamma_{a},\mathsf{f}_{a})\}_{a\in A}\) on a normal analytic space \(X_{O}\) is said to be _compatible_ with \(\mathscr{D}\) if every branch divisor \(\mathscr{B}_{i}\) associated with the Galois covering \(\mathsf{f}_{i}\) coincides with \(\mathscr{D}\cap U_{i}\).
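For the pillowcase orbifold above, the data (A.15) can be written down explicitly: \[\mathscr{D}=\tfrac{1}{2}\,p_{1}+\tfrac{1}{2}\,p_{2}+\tfrac{1}{2}\,p_{3}+\tfrac{1}{2}\,p_{4},\qquad p_{j}\in\hat{\mathbb{C}},\] with all branching indices \(m_{j}=2\), so the orbifold is recovered from the log pair \((\hat{\mathbb{C}},\mathscr{D})\). Similarly, the modular orbifold \([\mathbb{H}/\operatorname{PSL}(2,\mathbb{Z})]\) has branch divisor \(\frac{1}{2}\,p_{2}+\frac{2}{3}\,p_{3}\) on its underlying \(j\)-line, coming from the elliptic points of orders \(2\) and \(3\).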
Therefore, we can alternatively characterize (although slightly inaccurately) a complex orbifold \(O\) as being defined by the pair \(\big{(}X_{O},\mathscr{D}\big{)}\). This point of view was taken in [158].108 Footnote 108: Traditional approaches studied either the singularities of a normal analytic space \(X\), or the singularities of a divisor \(\mathscr{D}\) on a smooth analytic space, but did not concentrate on problems that occur when both \(X\) and \(\mathscr{D}\) are singular.

As in definition A.13, let us define the multiplicity \(\mathrm{mult}_{\mathscr{D}}(H)\) of \(\mathscr{D}\) along any irreducible divisor \(H\subset X_{O}\) as the rational number \(1-\frac{1}{m_{j}}\), if \(H=\mathcal{B}_{j}\) for some \(j\), and as \(0\) if \(H\neq\mathcal{B}_{j}\) for all \(j\). Then, we put \[\deg_{\mathscr{D}}(H):=\frac{1}{1-\mathrm{mult}_{\mathscr{D}}(H)},\] (A.16) and call it the _branching index of \(\mathscr{D}\) along \(H\)_. Observe that \[\deg_{\mathscr{D}\cap U_{i}}(\mathcal{B}_{ij})=\deg_{\mathsf{f}_{i}}(\mathcal{B}_{ij})=m_{ij},\] and the branching index of \(\mathscr{D}\) along any \(H\neq\mathcal{B}_{j}\) \(\forall j\) is equal to \(1\). Then, definition A.21 of analytic orbifold maps is equivalent to the following one:

**Definition A.24** (Analytic orbifold maps between log pairs).: Let \((Y,\mathscr{D}_{Y})\) and \((X,\mathscr{D}_{X})\) be two complex orbifolds. A finite analytic map \(f:Y\to X\) between the underlying analytic spaces is called an _orbifold analytic map_ \(f_{\mathrm{orb}}:(Y,\mathscr{D}_{Y})\to(X,\mathscr{D}_{X})\) if \[\deg_{\mathscr{D}_{X}}\big{(}f(H)\big{)}\;\Big{|}\;\deg_{\mathscr{D}_{Y}}(H)\cdot\deg_{f}(H),\] for every irreducible hypersurface \(H\subset Y\). The notions of orbifold biholomorphism and orbifold Galois covering can similarly be defined in this language. In particular, a branched Galois covering \(\varpi:Y\to X\) will be called an orbifold Galois covering \(\varpi_{\mathrm{orb}}:(Y,\mathscr{D}_{Y})\to(X,\mathscr{D}_{X})\) if \[\deg_{\mathscr{D}_{X}}\left(\varpi(H)\right)=\deg_{\mathscr{D}_{Y}}(H)\cdot\deg_{\varpi}(H),\] for every irreducible hypersurface \(H\subset Y\).

### Orbifold Riemann Surfaces

Now that we have seen the basic definitions for the case of an \(\mathfrak{n}\)-dimensional complex orbifold, let us specialize to the case of \(\mathfrak{n}=1\), where everything greatly simplifies (see e.g., [159, Appx. E] and [175] for more details). The most significant simplification comes from the fact that, by corollary A.1, every \(1\)-dimensional complex analytic space is smooth. In other words, the underlying analytic space of an orbifold Riemann surface is an ordinary (i.e., smooth) Riemann surface. Moreover, the orbifold charts on a Riemann surface can always be chosen to have the form \((\mathrm{D},\mathrm{D},\mathbb{Z}_{m_{i}},\mathsf{f}_{i})\) where \(\mathrm{D}\subset\mathbb{C}\) is the unit disk, \(\mathbb{Z}_{m_{i}}\) denotes the cyclic group of order \(m_{i}\geq 2\) which acts on \(\mathrm{D}\) in the standard way as the \(m_{i}\)-th roots of unity, and the folding map \(\mathsf{f}_{i}:\mathrm{D}\to\mathrm{D}/\mathbb{Z}_{m_{i}}\cong\mathrm{D}\) is a branched Galois covering of the form \(z\mapsto z^{m_{i}}\) (see Figure 2).
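As a quick consistency check of definition A.24 (our own computation), apply it to the folding map itself: \[\mathsf{f}:\big{(}\mathrm{D},\,\mathscr{D}_{Y}=0\big{)}\longrightarrow\big{(}\mathrm{D},\,\mathscr{D}_{X}=(1-\tfrac{1}{m})\cdot 0\big{)},\qquad z\mapsto z^{m}.\] Taking \(H=\{0\}\), equation (A.16) gives \(\deg_{\mathscr{D}_{X}}\big{(}\mathsf{f}(H)\big{)}=m\), while \(\deg_{\mathscr{D}_{Y}}(H)=1\) and \(\deg_{\mathsf{f}}(H)=m\); hence \(\deg_{\mathscr{D}_{X}}\big{(}\mathsf{f}(H)\big{)}=\deg_{\mathscr{D}_{Y}}(H)\cdot\deg_{\mathsf{f}}(H)\), and \(\mathsf{f}\) is an orbifold Galois covering in the log-pair language, as it must be.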
Additionally, observe that divisors on Riemann surfaces are nothing but formal linear combinations of points with coefficients in \(\mathbb{Z}\) (or in \(\mathbb{Q}\), for \(\mathbb{Q}\)-divisors).109 Therefore, we have Footnote 109: Since Riemann surfaces are smooth, we don't have to differentiate between Weil and Cartier divisors.

**Definition A.25** (Orbifold Riemann surface).: A closed _orbifold Riemann surface_ \(O\) is a pair \((X_{O},\mathscr{D})\) consisting of a closed Riemann surface \(X_{O}\), called the _underlying Riemann surface structure_ of \(O\), together with a _branch divisor_ \(\mathscr{D}=\sum_{x_{i}\in\mathrm{Sing}(O)}\left(1-\frac{1}{m_{i}}\right)x_{i}\), where \(x_{i}\in\mathrm{Sing}(O)\) are pairwise distinct marked points on \(X_{O}\), called the _conical points_, and each integer \(m_{i}\geq 2\) is the corresponding _branching index_ (or the _order of isotropy_) of the conical point \(x_{i}\).

**Remark A.18**.: The above definition of a closed Riemann orbisurface can be generalized to include punctured orbifold Riemann surfaces as well. Formally, cusps can be viewed as a limit \(m\to\infty\) (or equivalently cone angle \(2\pi/m\to 0\)) of a conical singularity on a Riemann orbisurface (see Figure 5). However, this limit is _singular_ and cannot be blindly taken from the formulas for conical singularities.

The above definition makes it clear that the whole theory of orbifold Riemann surfaces can be phrased in terms of Riemann surfaces with signature, where the signature is the map \(X_{O}\to\mathbb{N}\cup\{\infty\}\) taking a point to its order:

**Definition A.26** (Riemann surfaces with signature).: By a _Riemann surface with signature_ we mean a Riemann surface \(X\) of finite type110 \((g,n)\) together with an assignment of a _branching index_ \(m_{i}\) to each marked point; these branching indices \(m_{i}\) are either integers \(\geq 2\) or the symbol \(\infty\). The _signature_ of \(O\) is the tuple \((g;m_{1},\ldots,m_{n})\) where \(g\) is the genus of \(X\) and the branching indices \(m_{i}\) are ordered such that \(2\leq m_{1}\leq m_{2}\leq\cdots\leq m_{n}\).

**Remark A.19**.: More precisely, by assigning the branching index \(1\) to every other point (except the marked points), we can define a Riemann surface with signature as a pair \((X,\nu)\) where \(\nu:X\to\mathbb{N}\cup\{\infty\}\) is a branching function. Note that if \(\mathcal{U}=\{(U_{i},\mathrm{D},\mathbb{Z}_{m_{i}},\mathsf{f}_{i})\}_{i\in I}\) is an orbifold atlas on \(X\) which is compatible with \(\nu\), the restriction \(\nu|_{{}_{U_{i}}}\) is precisely the same as the branching function associated to the branched Galois covering \(\mathsf{f}_{i}\) -- i.e. \(\nu|_{{}_{U_{i}}}\equiv\nu_{\mathsf{f}_{i}}\).

Let \((\tilde{X},\tilde{\nu})\) and \((X,\nu)\) be two orbifold Riemann surfaces. A Galois branched covering map \(\varpi:\tilde{X}\to X\) is said to yield an orbifold Galois covering \(\varpi_{\mathrm{orb}}:(\tilde{X},\tilde{\nu})\to(X,\nu)\) if \[\nu\big{(}\varpi(\tilde{x})\big{)}=\tilde{\nu}(\tilde{x})\deg_{\varpi}(\tilde{x})\qquad\text{for all}\qquad\tilde{x}\in\tilde{X}.\] (A.17)
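Condition (A.17) can be verified directly on the pillowcase example above (an illustrative computation): for the degree-two branched Galois covering \(\varpi:E\to\hat{\mathbb{C}}\) with \(\tilde{\nu}\equiv 1\), we have \(\deg_{\varpi}(\tilde{x})=2\) at each of the four two-torsion points, so \[\nu\big{(}\varpi(\tilde{x})\big{)}=2=\tilde{\nu}(\tilde{x})\deg_{\varpi}(\tilde{x}),\] while at every other point both sides equal \(1\). The base therefore carries the signature \((0;2,2,2,2)\) in the sense of definition A.26.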
Note that a conical point \(x_{j}\in\mathrm{Sing}(X,\nu)\) may be obtained in two different ways: 1. If \(\tilde{x}\notin\mathrm{Sing}(\tilde{X},\tilde{\nu})\) but it is a ramification point of \(\varpi\) with ramification index \(\deg_{\varpi}(\tilde{x})>1\), then \(\varpi(\tilde{x})\) is a conical point of order \(\deg_{\varpi}(\tilde{x})\). 2. If \(\tilde{x}\in\mathrm{Sing}(\tilde{X},\tilde{\nu})\) and the local degree of \(\varpi\) at \(\tilde{x}\) is \(\deg_{\varpi}(\tilde{x})\geq 1\), then \(\varpi(\tilde{x})\) is also a conical point of order \(\tilde{\nu}(\tilde{x})\deg_{\varpi}(\tilde{x})\). As a result, we will always have \(|\,\mathrm{Sing}(\tilde{X},\tilde{\nu})|\leq|\,\mathrm{Sing}(X,\nu)|\). When the branching function \(\tilde{\nu}\) is the trivial branching function \(\tilde{\nu}\equiv 1\), we conclude:

**Proposition A.7**.: _Let \(\varpi:\tilde{X}\to X\) be a branched Galois covering between two Riemann surfaces. The base Riemann surface \(X\) can be naturally given the structure of a (developable) Riemann orbisurface \(O=(X,\nu_{\varpi})\) where \(\nu_{\varpi}:X\to\mathbb{N}\) is the branching function associated to the branched Galois covering \(\varpi\). Note that \(O\cong[\tilde{X}/\operatorname{Gal}(\varpi)]\)._

Figure 5: _Cusps as limits of cone points._ Formally, cusps can be viewed as a limit \(m\to\infty\) (or cone angle \(\theta_{m}=2\pi/m\to 0\)) of a conical singularity on a Riemann orbisurface.

### Universal Orbifold Covering and Orbifold Fundamental Group

In Remark A.17, we briefly mentioned universal covers of good orbifolds. However, even when an orbifold \(O\) is not necessarily assumed to be developable, it is rather easy to define a _universal covering orbifold_ of \(O\) in much the same way as one defines the universal covering of manifolds, and the same uniqueness property holds in the orbifold case as well:

**Definition A.27** (Universal covering orbifold).: An orbifold Galois covering \[\varpi_{\mathrm{orb}}:\tilde{O}\to O\] (A.18) is called a _universal orbifold covering_ of \(O\) if for any other orbifold covering \[\varpi^{\prime}_{\mathrm{orb}}:\tilde{O}^{\prime}\to O\] (A.19) there exists a lifting of \(\varpi_{\mathrm{orb}}\) to an orbifold covering map \(\varpi^{\prime\prime}_{\mathrm{orb}}:\tilde{O}\to\tilde{O}^{\prime}\) such that \[\varpi_{\mathrm{orb}}=\varpi^{\prime}_{\mathrm{orb}}\circ\varpi^{\prime\prime}_{\mathrm{orb}}\] (A.20) -- i.e., the corresponding triangle of covering maps commutes.

In the case of developable orbifolds \(O\cong[M/\Gamma]\), any (unramified) manifold covering \(\tilde{M}\to M\) gives an orbifold covering by composition with the quotient map \(M\to[M/\Gamma]\). In particular, the universal covering of \(M\) gives rise to a universal orbifold covering of \(O\), and the orbifold fundamental group is given by the short exact sequence \[1\to\pi_{1}(M)\to\pi_{1}(O)\to\Gamma\to 1\,.\] (A.21)

Once again, the situation is particularly nice for the case of orbifold Riemann surfaces:

**Theorem A.7** ([159] Theorem E.1).: _With the following two exceptions, every orbifold Riemann surface of finite type \(O=(X_{O},\mathscr{D})\) admits as its universal cover either the Riemann sphere \(\hat{\mathbb{C}}\), the complex plane \(\mathbb{C}\), or the hyperbolic plane \(\mathbb{H}\); this universal covering is a Galois branched covering, unique up to conformal isomorphism over \(X_{O}\), and its existence is a consequence of the classical uniformization theorem. The only exceptions, called "bad" orbisurfaces in Thurston's terminology, are given by:_ 1. _Teardrop orbisurface_: the Riemann sphere \(\hat{\mathbb{C}}\) with just one ramified point (see Figure 6), or
2. _Spindle orbisurface_: the Riemann sphere \(\hat{\mathbb{C}}\) with two ramified points for which the ramification indices are different (see Figure 6).

**Remark A.20**.: The statement that _every_ developable Riemann orbisurface of finite type is _finitely_ covered by a smooth Riemann surface (itself a quotient of \(\hat{\mathbb{C}}\), \(\mathbb{C}\), or \(\mathbb{H}\)) is equivalent to asserting that _any_ finitely generated, discrete subgroup \(\Gamma\) of (orientation-preserving) isometries \(\operatorname{Isom}^{+}(\hat{\mathbb{C}})\cong\operatorname{PSL}(2,\mathbb{C})\), \(\operatorname{Isom}^{+}(\mathbb{C})\), or \(\operatorname{Isom}^{+}(\mathbb{H})\cong\operatorname{PSL}(2,\mathbb{R})\) with quotient space of finite type has a _torsion-free subgroup of finite index_. The conjugacy classes of torsion elements in \(\Gamma\) correspond to the conical points of \([\mathbb{X}/\Gamma]\), for \(\mathbb{X}\in\{\hat{\mathbb{C}},\mathbb{C},\mathbb{H}\}\), and one obtains a torsion-free subgroup of finite index of \(\Gamma\) by avoiding these (finitely many) conjugacy classes. The resulting torsion-free subgroup is the fundamental group of a smooth Riemann surface that finitely covers \([\mathbb{X}/\Gamma]\).

More generally, Thurston proved that every orbifold \(O\) has a universal cover regardless of being developable or not (see [143, Prop. 13.2.4]) and also defined the orbifold fundamental group \(\pi_{1}(O)\) as the group of covering transformations of its universal covering:

**Definition A.28** (Orbifold fundamental group).: The _orbifold fundamental group_ \(\pi_{1}(O)\) of an orbifold \(O\) is the Galois group of its universal covering.

We need an interpretation of \(\pi_{1}(O)\) in terms of homotopy classes of loops in \(O\). However, defining the correct notion of homotopy requires some care: For instance, a disk \(\operatorname{D}_{m}\cong[\operatorname{D}/\mathbb{Z}_{m}]\) with one cone point of order \(m\) has as its universal covering a non-singular disk, with covering transformation group \(\mathbb{Z}_{m}\) acting by rotations. Thus, intuitively, a loop in \(\operatorname{D}_{m}\) winding \(m\) times around the cone point should be null-homotopic (see Figure 7).

Figure 6: _Bad orbisurfaces._ The top figure shows the (3)-teardrop orbisurface, which consists of the tuple \(\big{(}\hat{\mathbb{C}},\mathscr{D}=3N\big{)}\) where the Riemann sphere \(\hat{\mathbb{C}}\) is the underlying Riemann surface and the north pole \(N\) is the only marked point with multiplicity 3. The bottom figure shows the (3,4)-spindle orbisurface that is given by the tuple \(\big{(}\hat{\mathbb{C}},\mathscr{D}=3N+4S\big{)}\) where now both the north pole \(N\) and the south pole \(S\) are marked points with inequivalent cone orders. Note that the \((m_{N})\)-teardrop can be viewed as the \((m_{N},1)\)-spindle. Both of these orbisurfaces are "bad" orbisurfaces in the sense that they admit no Riemann surface as their universal covering.

In order to introduce the correct notion of orbifold homotopy, it is more convenient to view the complex \(\mathfrak{n}\)-dimensional orbifold \(O\) as a smooth (oriented) real orbifold of dimension \(2\mathfrak{n}\) and work with the topologists' definition of an orbifold (see remark A.14): Let \(O=(|O|,\mathscr{U})\) be a smooth orbifold and let \((|U|,\tilde{U}\subset\mathbb{R}^{2\mathfrak{n}},\Gamma\subset\mathrm{SO}(2\mathfrak{n}),|\mathsf{f}|)\in\mathscr{U}\) be an orbifold chart on the underlying topological space \(|O|\).
Then, \(([0,1]\times|U|,[0,1]\times\tilde{U},\Gamma,\mathrm{id}\times|\mathsf{f}|)\), where \(\gamma\in\Gamma\) acts on \([0,1]\times\tilde{U}\) via \(\gamma(t,x)=(t,\gamma x)\), is an orbifold chart on \([0,1]\times O\). The collection of all such orbifold charts forms an orbifold atlas, giving \([0,1]\times O\) the structure of a smooth orbifold. It thus makes sense to say that two orbifold maps \[f_{\mathrm{orb}},f^{\prime}_{\mathrm{orb}}:O\to O^{\prime}\] are _homotopic in the category of orbifolds_ if there is an orbifold map \[F:[0,1]\times O\to O^{\prime},\qquad F(t,x)=F_{t}(x)\] with \(F_{0}=f_{\mathrm{orb}}\) and \(F_{1}=f^{\prime}_{\mathrm{orb}}\). Armed with the notion of homotopy of orbifold maps, one can define the orbifold fundamental group \(\pi_{1}(O)\) exactly as one does for the usual fundamental group, just replacing homotopies by orbifold homotopies. One should note that any two orbifold maps which are homotopic as orbifold maps are also homotopic as maps between topological spaces, but the converse need not be true. In fact, there are plenty of orbifolds which are simply connected as topological spaces but whose orbifold fundamental group is non-trivial -- i.e., there are orbifold maps \(\ell:S^{1}\to O\) which, as orbifold maps, are not homotopic to the constant map. See e.g. [176, §2.3] and [177, §2.2] for more details.

At this stage, the reader might wonder how one can _compute_ fundamental groups of orbifolds. This can be done by studying the fundamental group of the regular locus \(O\backslash\operatorname{Sing}(O)\). Indeed, if the singular locus \(\operatorname{Sing}(O)\) has _real_ codimension at least two (which is always the case if \(O\) is complex), then \(O\!\setminus\!\operatorname{Sing}(O)\) is connected, and we have a surjective homomorphism \(\pi_{1}\big{(}O\!\setminus\!\operatorname{Sing}(O)\big{)}\to\pi_{1}(O)\) induced by the inclusion \(O\!\setminus\!\operatorname{Sing}(O)\hookrightarrow O\). The surjectivity of this homomorphism comes from the fact that any loop on \(O\) can be perturbed to avoid the singular locus. To compute \(\pi_{1}(O)\), we only need to find the kernel of \(\pi_{1}\big{(}O\!\setminus\!\operatorname{Sing}(O)\big{)}\to\pi_{1}(O)\) -- i.e. which elements of \(\pi_{1}\big{(}O\!\setminus\!\operatorname{Sing}(O)\big{)}\) get killed. For the special case of orbifold Riemann surfaces, we can use the _orbifold Seifert-van Kampen theorem_ to arrive at the following proposition (see e.g. [143, 151] for more details):111 Footnote 111: While the proof of the following proposition requires more complicated machinery for orbifolds of general dimension, its conclusion is valid in all dimensions.

Figure 7: _Orbifold covering and homotopy classes of loops._ This figure compares the loops around a ramified point of index three on an orbifold Riemann surface with those around the pre-image of this cone point on the covering space. We can see that circling once around the pre-image of the ramification point on the covering (orbi)surface is equivalent to circling three times around the ramified point on the base orbisurface.
**Proposition A.8**.: _The group \(\pi_{1}(O)\) is the quotient of \(\pi_{1}\big{(}O\!\setminus\!\operatorname{Sing}(O)\big{)}\) by the group normally generated by the elements \(\mu_{x}^{m_{x}}\), for all \(x\in\operatorname{Sing}(O)\), where \(m_{x}:=\#\Gamma_{x}\) is the order of the local isotropy group of \(x\) and \(\mu_{x}\) is a meridian around \(x\)._

Thus, if \(O\) is an orbifold Riemann surface with signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\), \(\pi_{1}(O)\) can be presented as (see Figures 1 and 8) \[\pi_{1}(O,x_{*})=\Big{\langle}\mathsf{A}_{1},\mathsf{B}_{1},\dots,\mathsf{A}_{g},\mathsf{B}_{g},\mathsf{C}_{1},\dots,\mathsf{C}_{n_{e}},\mathsf{P}_{1},\dots,\mathsf{P}_{n_{p}}\;\Big{|}\;\mathsf{C}_{1}^{m_{1}}=\dots=\mathsf{C}_{n_{e}}^{m_{n_{e}}}=\prod_{i=1}^{g}[\mathsf{A}_{i},\mathsf{B}_{i}]\prod_{j=1}^{n_{e}}\mathsf{C}_{j}\prod_{k=1}^{n_{p}}\mathsf{P}_{k}=\operatorname{id}\Big{\rangle},\] (A.22) where the \(\mathsf{A}_{i}\)s and \(\mathsf{B}_{i}\)s are homotopy classes of loops (based at \(x_{*}\)) that span \(H_{1}(X_{O},\mathbb{Z})\), the \(\mathsf{C}_{j}\)s and \(\mathsf{P}_{k}\)s are meridians around conical points and cusps respectively, the commutator \([\mathsf{A}_{i},\mathsf{B}_{i}]\) is defined as \(\mathsf{A}_{i}\mathsf{B}_{i}\mathsf{A}_{i}^{-1}\mathsf{B}_{i}^{-1}\), and the relation \(\prod_{i=1}^{g}[\mathsf{A}_{i},\mathsf{B}_{i}]\prod_{j=1}^{n_{e}}\mathsf{C}_{j}\prod_{k=1}^{n_{p}}\mathsf{P}_{k}=\operatorname{id}\) comes from cutting open \(O\) along the chosen basis for \(\pi_{1}(O,x_{*})\).
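The presentation (A.22) makes the "badness" of the teardrop transparent (a standard observation, spelled out here for convenience): for the \((m)\)-teardrop, with \(g=0\), \(n_{e}=1\), and \(n_{p}=0\), \[\pi_{1}(O)=\big{\langle}\,\mathsf{C}_{1}\;\big{|}\;\mathsf{C}_{1}^{m}=\mathsf{C}_{1}=\operatorname{id}\,\big{\rangle}\cong\{1\},\] so the teardrop is orbifold simply connected: its universal cover is the teardrop itself, which is not a manifold, and the orbifold is therefore bad. For the \((m_{N},m_{S})\)-spindle the same computation gives \(\pi_{1}(O)\cong\mathbb{Z}_{\gcd(m_{N},m_{S})}\), consistent with the fact that the spindle is good precisely when \(m_{N}=m_{S}\), in which case it is \([\hat{\mathbb{C}}/\mathbb{Z}_{m}]\).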
### Orbifold Euler Characteristic and Riemann-Hurwitz Formula

The _orbifold Euler characteristic_ is a generalization of the notion of Euler characteristic for manifolds that includes contributions coming from nontrivial automorphisms. In particular, while every manifold has an integer Euler characteristic, the orbifold Euler characteristic is, in general, a rational number. In this subsection, we will only focus on studying the orbifold Euler characteristic for the case of developable Riemann orbisurfaces.112 As we will see, the Euler characteristic has an important connection to branched Galois coverings, and this allows us to calculate the orbifold Euler characteristic for developable orbifold Riemann surfaces using the so-called Riemann-Hurwitz formula.

Figure 8: _Orbifold fundamental group_. A choice of generators for the homotopy group of a surface with marked points.

For simplicity, let us start by only considering closed Riemann surfaces. We remember from elementary topology that a closed Riemann surface \(Y\) has only one topological invariant, which we may take to be its genus \(g\). In this case, the Euler characteristic of \(Y\), denoted by \(\chi(Y)\), is found by triangulating \(Y\) and using the formula \(\chi(Y)=F-E+V\), where \(F\), \(E\), and \(V\) denote the numbers of faces, edges, and vertices of the triangulation. As expected, the result only depends on the genus \(g\) and is given by \(\chi(Y)=2-2g\). Now, let \(\varpi:Y\to X\) be a Galois covering map between closed Riemann surfaces. There is a formula relating the various invariants involved: the genus of \(Y\), the genus of \(X\), the degree of \(\varpi\), and the amount of ramification:

**Theorem A.8** (Riemann-Hurwitz relation).: _Let \(\varpi:Y\xrightarrow{d:1}X\) be a branched Galois covering map of degree \(d\) -- i.e. \(\deg(\varpi)=\#\operatorname{Gal}(Y/X)=d\). We have the relation_ \[\chi(Y)=d\left[\chi(X)-\deg(\mathscr{B}_{\varpi})\right]\] _where \(\mathscr{B}_{\varpi}=\sum_{x_{j}\in B_{\varpi}}\big{(}1-\frac{1}{\nu_{\varpi}(x_{j})}\big{)}x_{j}\) is the branch divisor associated to the branched Galois covering \(\varpi\) and \(\deg(\mathscr{B}_{\varpi})=\sum_{x_{j}\in B_{\varpi}}\big{(}1-\frac{1}{\nu_{\varpi}(x_{j})}\big{)}\)._

Proof.: Here, we will provide a simple topological proof of the Riemann-Hurwitz (RH) formula:113 Choose a sufficiently small triangulation of \(X\) so that each triangle is contained in an evenly covered neighborhood and such that every branch point of \(\varpi\) is a vertex in this triangulation. Then, as mentioned above, \(\chi(X)=F-E+V\) where \(F\), \(E\), and \(V\) are the number of faces, edges, and vertices (respectively) of the chosen triangulation. Since \(\varpi:Y\to X\) is surjective, the pullback of this triangulation is clearly a triangulation of \(Y\). Thus, we just need to count the number of faces, edges, and vertices of this pulled-back triangulation to calculate the Euler characteristic of \(Y\). We denote these numbers by \(\tilde{F}\), \(\tilde{E}\), and \(\tilde{V}\) respectively. Footnote 113: One can also prove the RH relation by using a mixture of the Gauss-Bonnet formula and topological considerations [179, §2.1] or by using the relation \(\mathcal{K}_{Y}=\varpi^{*}\mathcal{K}_{X}+\mathscr{R}_{\varpi}\) between the corresponding canonical divisors. We will come back to this point later in this appendix.

The pull-back of each evenly covered neighborhood will contain \(d\) copies of the triangle contained within it. Thus we have \(d\) faces and \(d\) edges -- i.e. \(\tilde{F}=d\,F\) and \(\tilde{E}=d\,E\). Naively, one would expect there to be \(d\) vertices as well; however, since a branch point \(x_{j}\in B_{\varpi}\) has \[\left|\varpi^{-1}(x_{j})\right|=d-\sum_{y_{i}\in\varpi^{-1}(x_{j})}\big{(}\deg_{\varpi}(y_{i})-1\big{)}=d/\nu_{\varpi}(x_{j})\] (A.23) distinct pre-images, we have114 Footnote 114: Note that \(\sum_{y_{i}\in R_{\varpi}}=\sum_{x_{j}\in B_{\varpi}}\sum_{y_{i}\in\varpi^{-1}(x_{j})}\). \[\tilde{V}=d\,V-\sum_{y_{i}\in R_{\varpi}}\big{(}\deg_{\varpi}(y_{i})-1\big{)}=d\,V-\sum_{x_{j}\in B_{\varpi}}\frac{d}{\nu_{\varpi}(x_{j})}\big{(}\nu_{\varpi}(x_{j})-1\big{)}.\] (A.24) Therefore, \[\chi(Y)=\tilde{F}-\tilde{E}+\tilde{V}=d\Big{[}(F-E+V)-\sum_{x_{j}\in B_{\varpi}}(1-\tfrac{1}{\nu_{\varpi}(x_{j})})\Big{]}=d\,\big{[}\chi(X)-\deg(\mathscr{B}_{\varpi})\big{]}.\] (A.25)

**Remark A.21**.: The RH relation proved above assumes that the branched covering \(\varpi\) is Galois. However, it is possible to write the RH relation in a way which is true for all branched coverings regardless of whether they are Galois or not. We have to pay attention to two main differences in this case: (i) For a general branched covering, the degree \(d\) of the covering is not necessarily equal to the order of the covering transformation group; in general, \(\#\operatorname{Aut}(\varpi)\leq d\), with equality happening only when \(\varpi\) is Galois. (ii) When \(\varpi\) is not required to be Galois, the ramification indices of ramified points \(y_{1},y_{2}\in\varpi^{-1}(x)\) need not be the same. Therefore, the last equalities in both (A.23) and (A.24) are not true in this general setting; in fact, when \(\varpi\) is not necessarily Galois, we cannot even define branching indices and branch divisors. However, we can still write \(\tilde{V}=d\,V-\sum_{y_{i}\in R_{\varpi}}\big{(}\deg_{\varpi}(y_{i})-1\big{)}\), which results in the following form of the RH relation: \[\chi(Y)=d\,\chi(X)-\deg(\mathscr{R}_{\varpi}).\] This form of the Riemann-Hurwitz formula holds true for a general branched covering and is obviously equivalent to the previous form when \(\varpi\) is Galois.
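A worked instance of theorem A.8 (our own check): the elliptic involution realizes an elliptic curve \(E\) as a double cover of the sphere branched over four points with \(\nu_{\varpi}\equiv 2\) there, so \[\chi(E)=d\,\big{[}\chi(\hat{\mathbb{C}})-\deg(\mathscr{B}_{\varpi})\big{]}=2\,\Big{[}2-4\cdot\tfrac{1}{2}\Big{]}=0,\] which is indeed the Euler characteristic of a genus-one surface.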
Now, let the orbifold Riemann surface \(O=(X_{O},\mathscr{D})\) be a developable Riemann orbisurface with signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\). Any such orbifold Riemann surface is finitely covered by a Riemann surface \(Y\) such that \(\mathscr{D}\) is the branch divisor of the branched Galois covering \(\varpi:Y\to X_{O}\). Then, it immediately follows from proposition A.7 that there exists a corresponding orbifold Galois covering \(\varpi_{\text{orb}}:Y\xrightarrow{d:1}O=(X_{O},\mathscr{D})\), and it is natural to define the orbifold Euler characteristic of \(O\) by using the equation \(\chi(Y)=d\,\chi(O)\). Hence, we get \[\chi(O)=\frac{1}{d}\,\chi(Y)=\chi(X_{O})-\deg(\mathscr{D})=2-2g-n_{p}-\sum_{i=1}^{n_{e}}\left(1-\frac{1}{m_{i}}\right).\] (A.26) In the following subsection, we will derive this relation in an equivalent way using the notion of _orbifold canonical divisor_.

Let us end this subsection by making theorem A.7 a little sharper. It immediately follows from the equation \(\chi(Y)=d\,\chi(O)\) that the Euler characteristic of a developable Riemann orbisurface \(O\) should have the same sign as the Euler characteristic of its universal covering. Hence, we have the following corollary:

**Corollary A.3**.: _Let \(O\) be a closed (or possibly punctured) orbifold Riemann surface, which is not a teardrop or a spindle. Then, \(O\) admits \(\mathbb{H}\), \(\mathbb{C}\), or \(\hat{\mathbb{C}}\) as its universal covering if and only if \(\chi(O)<0\), \(\chi(O)=0\), or \(\chi(O)>0\) respectively._

The orbifold Riemann surfaces with \(\chi(O)<0\) are called _hyperbolic_ and are the main focus of our study in the main body of this paper. Notice that all hyperbolic Riemann orbisurfaces are, by corollary A.3, developable.
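Plugging the modular orbifold into (A.26) gives a quick numerical illustration (ours): \(O=[\mathbb{H}/\operatorname{PSL}(2,\mathbb{Z})]\) has signature \((0;2,3;1)\), so \[\chi(O)=2-0-1-\big{(}1-\tfrac{1}{2}\big{)}-\big{(}1-\tfrac{1}{3}\big{)}=-\tfrac{1}{6}<0,\] and the modular orbifold is hyperbolic; its hyperbolic area \(-2\pi\chi(O)=\pi/3\) is the familiar area of a fundamental domain for \(\operatorname{PSL}(2,\mathbb{Z})\) acting on \(\mathbb{H}\).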
### Orbisheaves, Orbibundles, and Orbidivisors

The notions of bundle theory and, more generally, sheaf theory are fundamental to doing geometry on any object. Fortunately, these notions and many other usual differential-geometric concepts can be generalized to the orbifold case with the help of orbifold maps (see, e.g., [156, §4.2]).

#### V-bundles

In this section, we define holomorphic vector V-bundles (or orbibundles) as a reasonable generalization of holomorphic vector bundles over complex manifolds. Remember that we defined a rank \(r\) holomorphic vector bundle \(E\) over an analytic space \(X\) as an analytic map \(\pi:E\to X\) such that \(\pi\) is locally a projection \(V\times\mathbb{C}^{r}\to V\). Similarly, a holomorphic vector V-bundle of rank \(r\) over an orbifold \(O=(X,\mathcal{U})\) should be thought of as a pair \(\big{(}\mathcal{E}=(E,\mathcal{V}),\,\pi_{\mathrm{orb}}:\mathcal{E}\to O\big{)}\) where \(\mathcal{E}\) is a complex orbifold and \(\pi_{\mathrm{orb}}\) is an analytic orbifold map. Thus, starting from an analytic map \(\pi:E\to X\) between the underlying analytic spaces, our task reduces to the _construction of the appropriate local lifts of \(\pi\)_ (as in definition A.21):

**Definition A.29** (Holomorphic vector V-bundle).: Let \(O=(X,\mathcal{U})\) be a complex orbifold. A _holomorphic vector V-bundle_ (or a _holomorphic vector orbibundle_) of rank \(r\) over \(O\) is a collection of holomorphic vector bundles \(\tilde{\pi}_{a}:\tilde{E}_{a}\to\tilde{U}_{a}\) with fiber \(\mathbb{C}^{r}\) for each orbifold chart \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathsf{f}_{a})\) of \(O\), together with a collection of group homomorphisms \(\bar{\pi}_{a}:\Gamma_{a}\to\bar{\pi}_{a}(\Gamma_{a})\) defining an action of \(\Gamma_{a}\) on \(\tilde{E}_{a}\) by (ordinary) holomorphic bundle maps, such that: 1. Each \(\tilde{\pi}_{a}\) is \(\Gamma_{a}\)-equivariant, so that the following diagram is commutative for any \(\gamma\in\Gamma_{a}\): \[\begin{CD}\tilde{E}_{a}@>{\bar{\pi}_{a}(\gamma)}>{}>\tilde{E}_{a}\\ @V{\tilde{\pi}_{a}}V{}V@V{}V{\tilde{\pi}_{a}}V\\ \tilde{U}_{a}@>{\gamma}>{}>\tilde{U}_{a}\end{CD}\] 2. For any holomorphic embedding \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) of charts on \(O\), there exists a holomorphic bundle isomorphism \(\widehat{\eta}_{ab}:\tilde{E}_{a}\to\tilde{E}_{b}\big{|}_{\eta_{ab}(\tilde{U}_{a})}:=\tilde{\pi}_{b}^{-1}\big{(}\eta_{ab}(\tilde{U}_{a})\big{)}\), such that \(\widehat{\eta}_{ab}\) is \(\bar{\pi}_{a}\)-equivariant. 3. For two embeddings \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) and \(\eta_{bc}:\tilde{U}_{b}\hookrightarrow\tilde{U}_{c}\), we have \(\widehat{\eta_{bc}\circ\eta_{ab}}=\widehat{\eta}_{bc}\circ\widehat{\eta}_{ab}\).

**Remark A.22**.: The total (underlying) analytic space \(E\) of an orbibundle is obtained from the local bundles \(\tilde{E}_{a}\) in the following way: Choosing small enough orbifold charts on \(O\), there always exists a local trivialization \(\tilde{E}_{a}\cong\tilde{U}_{a}\times\mathbb{C}^{r}\) such that \(\tilde{\pi}_{a}:\tilde{U}_{a}\times\mathbb{C}^{r}\to\tilde{U}_{a}\) is the holomorphic projection onto the first factor and the action of \(\bar{\pi}_{a}(\Gamma_{a})\) on \(\tilde{U}_{a}\times\mathbb{C}^{r}\) is diagonalized -- i.e. for any pair \((\tilde{x},v)\in\tilde{U}_{a}\times\mathbb{C}^{r}\) and any \(\gamma\in\Gamma_{a}\), we have \(\bar{\pi}_{a}(\gamma)\cdot(\tilde{x},v):=(\gamma\cdot\tilde{x},\widehat{\Upsilon}_{a}(\gamma)\cdot v)\) where \(\widehat{\Upsilon}_{a}:\Gamma_{a}\to\mathrm{GL}(r,\mathbb{C})\) is a monomorphism. Then, we have a branched Galois covering \(\mathsf{f}_{a}^{b}:\tilde{E}_{a}\to E_{a}:=\tilde{E}_{a}/\bar{\pi}_{a}(\Gamma_{a})\). As a result, since \(\tilde{\pi}_{a}\) is \(\Gamma_{a}\)-equivariant, we get a unique _analytic projection map_ \(\pi_{a}:E_{a}\to U_{a}\) such that the following diagram commutes: \[\begin{CD}\tilde{E}_{a}@>{\mathsf{f}_{a}^{b}}>{}>E_{a}\\ @V{\tilde{\pi}_{a}}V{}V@V{}V{\pi_{a}}V\\ \tilde{U}_{a}@>{\mathsf{f}_{a}}>{}>U_{a}\end{CD}\] Now, we can glue the analytic varieties \(E_{a}\) in the following way, stemming from the gluing condition on \(X\): Let \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathsf{f}_{a})\) and \((U_{b},\tilde{U}_{b},\Gamma_{b},\mathsf{f}_{b})\) be any two orbifold charts in \(\mathcal{U}\) with \(U_{a}\cap U_{b}\neq\emptyset\) and let \(x\in U_{a}\cap U_{b}\) be a point. Then, according to definition A.19, there always exists another orbifold chart \((U_{c}\subset U_{a}\cap U_{b},\tilde{U}_{c},\Gamma_{c},\mathsf{f}_{c})\in\mathcal{U}\) containing \(x\) such that the embeddings \(\eta_{ca}:\tilde{U}_{c}\hookrightarrow\tilde{U}_{a}\) and \(\eta_{cb}:\tilde{U}_{c}\hookrightarrow\tilde{U}_{b}\) induce bundle biholomorphisms \(\widehat{\eta}_{ca}:\tilde{E}_{c}\to\tilde{E}_{a}\big{|}_{\eta_{ca}(\tilde{U}_{c})}\) and \(\widehat{\eta}_{cb}:\tilde{E}_{c}\to\tilde{E}_{b}\big{|}_{\eta_{cb}(\tilde{U}_{c})}\).
Gluing \(E_{a}\) and \(E_{b}\) according to this data results in a complex orbifold \(\mathcal{E}\) with an underlying analytic space \(E\) and an analytic orbifold map \(\pi_{\mathrm{orb}}:\mathcal{E}\to O\), which is determined by the analytic map \(\pi:E\to X\) (obtained by gluing the analytic projection maps \(\pi_{a}=\pi|_{E_{a}}\)) and the local lifts \(\tilde{\pi}_{a}:\tilde{E}_{a}\to\tilde{U}_{a}\).

The next concept we need to define is that of holomorphic sections of holomorphic vector V-bundles. Globally, this is easy: a section of the orbibundle \(\pi_{\mathrm{orb}}:\mathcal{E}\to O\) is a holomorphic orbifold map \(s:O\to\mathcal{E}\) satisfying \(\pi_{\mathrm{orb}}\circ s=\mathrm{id}_{O}\). Locally, this concept can be defined as follows:

**Definition A.30** (Sections of V-bundles).: If we consider the holomorphic vector V-bundle \(\pi_{\mathrm{orb}}:\mathcal{E}\to O\), a _holomorphic section_ of \(\mathcal{E}\) can be defined in either of the following two equivalent ways: 1. \(s:O\to\mathcal{E}\) is a holomorphic orbifold map satisfying \(\pi_{\mathrm{orb}}\circ s=\mathrm{id}_{O}\). 2. A collection of \(\Gamma_{a}\)_-equivariant_ holomorphic sections \(s_{a}:\tilde{U}_{a}\to\tilde{E}_{a}\) such that for any embedding \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) the following diagram commutes: \[\begin{CD}\tilde{U}_{a}@>{s_{a}}>{}>\tilde{E}_{a}\\ @V{\eta_{ab}}V{}V@V{}V{\widehat{\eta}_{ab}}V\\ \tilde{U}_{b}@>{s_{b}}>{}>\tilde{E}_{b}\end{CD}\]

To glue the local sections \(s_{a}\) into a global section \(s:O\to\mathcal{E}\), one should demand the equivariance of the local sections. We will call a local holomorphic section \(s_{a}:\tilde{U}_{a}\to\tilde{E}_{a}\) \(\Gamma_{a}\)_-invariant_ (as opposed to \(\Gamma_{a}\)_-equivariant_) if \(\gamma\circ s_{a}=s_{a}\) for every \(\gamma\in\Gamma_{a}\). Given local holomorphic sections \(s_{a}:\tilde{U}_{a}\to\tilde{E}_{a}\) of a holomorphic vector V-bundle \(\mathcal{E}\), we can always construct \(\Gamma_{a}\)-invariant local sections \(s_{a}^{\Gamma_{a}}\) by "averaging over the group" -- i.e. we define an invariant local section by \[s_{a}^{\Gamma_{a}}=\frac{1}{\#\Gamma_{a}}\sum_{\gamma\in\Gamma_{a}}s_{a}\circ\gamma.\] (A.27) Notice that this determines a well-defined map from the underlying analytic space \(X_{O}\), namely \(s_{U_{a}}:=(\mathsf{f}_{a})_{*}\big{(}s_{a}^{\Gamma_{a}}\big{)}:U_{a}\to E_{a}\). Gluing these invariant local sections over each orbifold chart, we obtain _global invariant sections_ and view them _interchangeably_ as invariant objects on the \(\tilde{U}_{a}\)s or as objects on the \(U_{a}\)s. However, note that smoothness in the orbifold sense is somewhat different from ordinary smoothness, and holomorphic invariant sections can have singular behavior (although usually in a controlled way) when viewed as objects on the open analytic subsets \(U_{a}\).

**Remark A.23**.: Consider the easiest (but still important) example of a _trivial holomorphic line V-bundle_: This line V-bundle is given by trivial holomorphic line bundles \(\tilde{E}_{a}\cong\tilde{U}_{a}\times\mathbb{C}\) on each local uniformizing neighborhood \(\tilde{U}_{a}\) together with a trivial action of \(\Gamma_{a}\) on the second factor -- i.e. \(\widehat{\Upsilon}_{a}(\Gamma_{a})=1\in\operatorname{GL}(1,\mathbb{C})\). Then, clearly \(E_{a}\cong U_{a}\times\mathbb{C}\) and the total space \(\mathcal{E}\) is just \(O\times\mathbb{C}\). Holomorphic sections of this bundle are clearly in a one-to-one correspondence with analytic orbifold maps from \(O\) to \(\mathbb{C}\) endowed with the trivial orbifold structure.
So, according to remark A.16, they seem to be a good candidate for a _structure orbisheaf_ on \(O\); however, in order to get coherent sheaves on the underlying space \(X_{O}\), we have to deal with _invariant sections_ of the trivial holomorphic line bundles \(\tilde{U}_{a}\times\mathbb{C}\to\tilde{U}_{a}\), or of sheaves \(\tilde{\mathcal{F}}_{a}\) on the local uniformizing neighborhoods \(\tilde{U}_{a}\) (see remark 17).

All of the standard notions of the tangent bundle, cotangent bundle, and the different associated tensor bundles have V-bundle analogues: On every local uniformizing neighborhood \(\tilde{U}_{a}\), take the holomorphic tangent bundle \(T^{1,0}\tilde{U}_{a}\cong\tilde{U}_{a}\times\mathbb{C}^{\mathfrak{n}}\) and, for any change of charts \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) on \(O\), construct the corresponding bundle biholomorphism \(\widehat{\eta}_{ab}:T^{1,0}\tilde{U}_{a}\to T^{1,0}\tilde{U}_{b}|_{\eta_{ab}(\tilde{U}_{a})}\) by defining it to be given by \(\eta_{ab}\) on the first factor and the Jacobian \(\operatorname{Jac}[\eta_{ab}]\) on the second one. If we denote by \((\partial_{1}^{(a)},\dots,\partial_{\mathfrak{n}}^{(a)})\) the local coordinate basis on each \(T^{1,0}\tilde{U}_{a}\), the Jacobian matrix \(\operatorname{Jac}[\eta_{ab}]\in\operatorname{GL}(\mathfrak{n},\mathbb{C})\) is defined entrywise as \[\Big{(}\operatorname{Jac}[\eta_{ab}]\Big{)}_{k,l}=\partial_{l}^{(a)}\big{(}\eta_{ab}^{(k)}\big{)},\] the derivative of the \(k\)-th component of \(\eta_{ab}\) along \(\partial_{l}^{(a)}\).

**Remark A.24**.: Locally, around any point \(x\in X_{O}\), the fiber \(\big{(}\pi_{\text{orb}}^{1,0}\big{)}^{-1}(x)\subset T^{1,0}O\) is not, in general, biholomorphic to \(\mathbb{C}^{\mathfrak{n}}\); rather, \(\big{(}\pi_{\text{orb}}^{1,0}\big{)}^{-1}(x)\cong\mathbb{C}^{\mathfrak{n}}/\Gamma_{x}\), which is precisely the local model of a small neighborhood of \(x\in X_{O}\). This is because, in a local chart, the actions of \(\gamma\in\Gamma_{a}\) on \(\tilde{U}_{a}\) and of \(\operatorname{Jac}(\gamma)\) on \(T^{1,0}_{\mathsf{f}_{a}^{-1}(x)}\tilde{U}_{a}\) are essentially the same. On the other hand, the underlying analytic space of \(T^{1,0}O\) is not necessarily the ordinary tangent space \(T^{1,0}X_{O}\).

The above construction obviously generalizes to the anti-holomorphic tangent V-bundle \(T^{0,1}O\), holomorphic and anti-holomorphic cotangent V-bundles, symmetric and antisymmetric tensor V-bundles of type \((k,l)\), etc. In particular, if \(O\) has dimension \(\mathfrak{n}\), we denote the highest exterior power of the holomorphic cotangent V-bundle, \(\bigwedge^{\mathfrak{n}}T^{*}_{1,0}O\), by \(K_{O}\) and call it the _orbifold canonical bundle_. Additionally, one can easily generalize notions such as Riemannian and Hermitian metrics, orbifold \((p,q)\)-differential forms, Hermitian and Chern connections, Chern forms, etc., to the orbifold setting. We will come back to these notions in the next subsections.

**Remark A.25**.: Notice that all of the above definitions simplify for the case of developable orbifolds \(O\cong[M/\Gamma]\). In this case, we can always view objects defined on \(O\) -- such as tensors, differential forms, connections, etc. -- as globally defined ordinary objects on \(M\) that are invariant under the action of \(\Gamma\). We will come back to this point in the later subsections when we study differential forms and metrics on hyperbolic Riemann orbisurfaces in greater detail.

Now, consider a Weil divisor \(\mathcal{D}\) on the underlying analytic space \(X_{O}\).
We can lift its restriction \(\mathcal{D}\cap U_{a}\) to a divisor \(\tilde{D}_{\tilde{U}_{a}}\) on the local uniformizing neighborhood \(\tilde{U}_{a}\) by \(\tilde{D}_{\tilde{U}_{a}}:=\mathsf{f}_{a}^{-1}(\mathcal{D}\cap U_{a})\). The collection of all such divisors \(\tilde{D}_{\tilde{U}_{a}}\) on each \(\tilde{U}_{a}\) defines an _orbidivisor_ (or a _Baily divisor_) on the orbifold \(O\) (see e.g. [156, Def. 4.4.11] for more details). In fact, we have the following proposition:

**Proposition A.9**.: _The branch divisor \(\mathscr{D}\), or more generally any \(\mathbb{Q}\)-divisor on \(X_{O}\) of the form \(\sum_{i}\frac{b_{i}}{m_{i}}\mathcal{D}_{i}\), where \(m_{i}\) is a ramification index and \(b_{i}\in\mathbb{Z}\), lifts to an orbidivisor on \(O=(X_{O},\mathcal{U})\)._

An orbidivisor obtained as the lift of a branch divisor is called a _ramification divisor_. The following is straightforward:

**Proposition A.10**.: _To each Baily divisor \(\mathcal{D}\) on the orbifold \(O\), there corresponds a complex line V-bundle \(\mathscr{L}(\mathcal{D})\)._

The most important Baily divisor on a complex orbifold \(O\) is the _orbifold canonical divisor_ \(\mathcal{K}_{O}\), which is any Baily divisor associated to the canonical orbibundle \(K_{O}\). In the presence of a branch divisor \(\mathscr{D}\), an orbifold canonical divisor \(\mathcal{K}_{O}\) is not the same (meaning not linearly equivalent) as the canonical divisor \(\mathcal{K}_{X}\) of the underlying analytic space \(X_{O}\). In fact, we have (for a proof, see [156, Prop. 4.4.15]):

**Proposition A.11**.: _The orbifold canonical divisor \(\mathcal{K}_{O}\) and the canonical divisor \(\mathcal{K}_{X}\) of its underlying analytic space are related by_ \[\mathcal{K}_{O}\cap U_{i}\equiv\mathsf{f}_{i}^{*}(\mathcal{K}_{X}\cap U_{i})+\sum_{j}(1-\tfrac{1}{m_{j}})\mathsf{f}_{i}^{*}(\mathcal{D}_{j}\cap U_{i}).\] _In terms of the orbifold rational Chern class, the above equation implies_ \[c_{1}(O)=-c_{1}(\mathcal{K}_{O})=\underbrace{-c_{1}(\mathcal{K}_{X})}_{c_{1}(X)}-\sum_{j}(1-\tfrac{1}{m_{j}})c_{1}\big{(}\mathscr{L}(\mathcal{D}_{j})\big{)}\in H^{2}(X,\mathbb{Q}).\]

Let \(O=(X,\mathscr{D})\) be a good orbifold Riemann surface. An orbifold canonical divisor is given by \[\mathcal{K}_{O}=\mathsf{f}^{*}\mathcal{K}_{X}+\mathscr{D},\] (A.28) where \(\mathcal{K}_{X}\) is an ordinary canonical divisor on the underlying Riemann surface \(X\). Thus, if \(g\) denotes the genus of \(X\), the orbifold Chern number (obtained by integrating the first Chern class over \(X\)) is \[c_{1}(O)=-\deg(\mathcal{K}_{O})=-\deg(\mathcal{K}_{X})-\deg(\mathscr{D})=2-2g-n_{p}-\sum_{i=1}^{n_{e}}(1-\tfrac{1}{m_{i}}),\] (A.29) which equals the orbifold Euler characteristic \(\chi(O)\) defined before (this follows from the equivalence of the top Chern class with the Euler class).

#### Orbisheaves

We first introduce the notion of an orbisheaf following [156, Def. 4.2.1]. Similar to V-bundles, _orbifold sheaves_ or _orbisheaves_ consist of a collection of sheaves defined on the disjoint union \(\bigsqcup_{a}\tilde{U}_{a}\) of the local uniformizing neighborhoods that satisfy certain compatibility conditions with respect to the local uniformizing groups and embeddings:

**Definition A.31** (Orbisheaf).: Let \(O=(X_{O},\mathcal{U})\) be a complex orbifold.
An _orbisheaf_ \(\mathcal{F}_{O}\) on \(O\) consists of a collection of sheaves \(\{\tilde{\mathcal{F}}_{a}\}_{a\in A}\) defined over each local uniformizing neighborhood \(\tilde{U}_{a}\) of \(O\), such that for each embedding \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) there exists an isomorphism of sheaves \(\eta_{ab}^{*}:\tilde{\mathcal{F}}_{a}\to(\eta_{ab})^{*}\big{(}\tilde{\mathcal{F}}_{b}\big{)}\), which is functorial.

Let \(\mathcal{F}_{O}\) be an orbisheaf on \(O\), and \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathsf{f}_{a})\) an orbifold chart. Then, one can define an action of \(\Gamma_{a}\) on the sheaf \(\tilde{\mathcal{F}}_{a}\), making \(\tilde{\mathcal{F}}_{a}\) a _\(\Gamma_{a}\)-equivariant_ sheaf on \(\tilde{U}_{a}\). So, every orbisheaf \(\mathcal{F}_{O}\) is equivariant under the local uniformizing groups \(\Gamma_{a}\). We now have:

**Definition A.32** (Structure orbisheaf).: The _structure orbisheaf_ \(\mathscr{O}_{O}\) of an orbifold \(O\) is the orbisheaf defined by the collection of structure sheaves \(\mathscr{O}_{\tilde{U}_{a}}\) defined on each local uniformizing neighborhood \(\tilde{U}_{a}\).

The structure orbisheaf \(\mathscr{O}_{O}\) is well-defined since each embedding \(\eta_{ab}:\tilde{U}_{a}\hookrightarrow\tilde{U}_{b}\) induces an isomorphism \(\mathscr{O}_{\tilde{U}_{a}}\approx(\eta_{ab})^{*}(\mathscr{O}_{\tilde{U}_{b}})\) by sending \(f\in\mathscr{O}_{\tilde{U}_{a},\tilde{x}}\) to \(f\circ\eta_{ab}^{-1}\in(\eta_{ab})^{*}(\mathscr{O}_{\tilde{U}_{b}})\). This definition evidently does not align with the holomorphic sections of the trivial line V-bundle, nor does it yield a sheaf on the underlying space \(X_{O}\). Therefore, we need to utilize local \(\Gamma_{a}\)-invariant sections (in contrast to \(\Gamma_{a}\)-equivariant ones) of such sheaves and then assemble them across \(X_{O}\) [156, Lemma 4.2.4]. Accordingly, sheaves \(\mathcal{F}_{X}\) on \(X_{O}\) are defined by taking the invariant local sections of orbisheaves \(\mathcal{F}_{O}\). In this regard, \(H^{0}(\tilde{U}_{a},\mathscr{O}_{O})^{\Gamma_{a}}\simeq H^{0}(U_{a},\mathscr{O}_{X})\) holds for the structure sheaves. For a coherent orbisheaf \(\mathcal{F}_{O}\) of \(\mathscr{O}_{O}\)-modules, the \(\Gamma_{a}\)-invariant sections form a coherent sheaf of \(\mathscr{O}_{X}\)-modules. Interestingly, this helps one to construct the orbisheaf cohomology. One should note, however, that this cohomology only probes the topology of the underlying analytic space \(X_{O}\). Hence, a more refined notion of cohomology, the so-called _Chen-Ruan cohomology_ of an orbifold, is needed to probe the full topological features of an orbifold. See e.g., [173] and [161] for more details.

There are several important orbisheaves on complex orbifolds that we shall work with: First, there is the structure orbisheaf \(\mathscr{O}_{O}\) defined in definition A.32, where each \(\mathscr{O}_{\tilde{U}_{a}}\) is the sheaf of holomorphic functions on \(\tilde{U}_{a}\). Similarly, there is the meromorphic orbisheaf \(\mathscr{M}_{O}\) consisting of meromorphic functions on each local uniformizing neighborhood \(\tilde{U}_{a}\). Finally, there is the _canonical orbisheaf_ of a complex orbifold: On a complex orbifold \(O\) of complex dimension \(\mathfrak{n}\), we denote by \(\Omega_{O}^{k}\) the orbisheaf of _holomorphic differential \(k\)-forms_ on \(O\). This is the orbisheaf constructed from the collection of ordinary canonical sheaves \(\Omega_{\tilde{U}_{a}}^{k}\) on each orbifold uniformizing neighborhood \(\tilde{U}_{a}\).
\(\Omega_{O}^{k}\) is a locally free orbisheaf of rank \(\binom{\mathfrak{n}}{k}\). Then:

**Definition A.33** (Canonical orbisheaf).: The _canonical orbisheaf_ of a complex orbifold \(O\) of complex dimension \(\mathfrak{n}\) is the orbisheaf \(\Omega_{O}^{\mathfrak{n}}\).

### Orbifold Metrics

In this section, we examine metrics on orbifolds. It is clear that, on each \(\tilde{U}_{a}\) of \((X_{O},\mathcal{U})\), such a metric should be defined as a \(\Gamma_{a}\)-invariant metric:

**Definition A.34** (Hermitian orbifold metrics).: A Hermitian metric \(\mathsf{h}\) on a complex orbifold \(O=(X_{O},\mathcal{U})\) can be characterized as a family of \(\Gamma_{a}\)-invariant (local) Hermitian metrics \(\tilde{\mathsf{h}}_{a}^{\Gamma_{a}}\) defined on each neighborhood \(\tilde{U}_{a}\) such that the changes of charts are Hermitian isometries. A complex orbifold with a Hermitian metric is called a _Hermitian orbifold_.

**Remark A.26**.: A slight modification of the usual partition-of-unity arguments assures us that _every complex orbifold_ admits a _Hermitian metric_ (see [180] for more details).

**Remark A.27**.: There is a beautiful connection between the preceding discussion and the geometry of the situation, which is provided by the _Gauss-Bonnet theorem_: Consider a good compact orbifold Riemann surface \(O\) which is expressed as \([\tilde{X}/\Gamma]\) where \(\tilde{X}\in\{\hat{\mathbb{C}},\mathbb{C},\mathbb{H}\}\) and \(\Gamma<\text{Isom}^{+}(\tilde{X})\) is a discrete group. There exists a canonical constant-curvature Hermitian metric on \(\tilde{X}\) which induces a Hermitian metric of constant curvature on \(O\). \(O\) has a well-defined area \(A(O)\) which has the same naturality property under finite coverings as the Euler number, i.e. if \(\tilde{O}\) is an orbifold covering of \(O\) of degree \(d\), then \(A(\tilde{O})=d\cdot A(O)\). Hence, we can use the fact that \(O\) is finitely covered by some Riemann surface and apply the usual Gauss-Bonnet theorem to this Riemann surface. In particular, if \(O\) is \([\hat{\mathbb{C}}/\Gamma]\), we deduce that \(A(O)=2\pi\chi(O)\), and if \(O\) is \([\mathbb{H}/\Gamma]\), we deduce that \(A(O)=-2\pi\chi(O)\).

More generally, one can define a Hermitian metric on every holomorphic V-bundle:

**Definition A.35** (Hermitian metrics on holomorphic V-bundles).: Let \(O=(X_{O},\mathcal{U})\) be a complex orbifold and \(\pi_{\text{orb}}:\mathcal{E}\to O\) a holomorphic vector V-bundle. A _Hermitian orbifold metric_ \(\mathsf{h}\) on \(\mathcal{E}\) is a collection of local \(\Gamma_{a}\)-invariant Hermitian metrics \(\tilde{\mathsf{h}}_{a}^{\Gamma_{a}}\) on each local holomorphic vector bundle \(\tilde{E}_{a}\to\tilde{U}_{a}\), such that all embeddings are _Hermitian isometries_.

Finally, let \(\mathcal{E}\to O\) be a holomorphic vector V-bundle endowed with a Hermitian metric \(\mathsf{h}\). A _Hermitian connection_ \(\nabla\) on \(\mathcal{E}\) is defined to be a collection \(\{\nabla_{a}\}\) of \(\Gamma_{a}\)-equivariant Hermitian connections supported on each local uniformizing neighborhood \(\tilde{U}_{a}\) such that the \(\nabla_{a}\)s are compatible with changes of charts. Then, the first Chern class or degree of a V-bundle can be defined using Chern-Weil theory; notice that the degree of a V-bundle is a rational number. Sobolev spaces and Hodge theory for V-bundles follow in the same way.

Let \(O=(X_{O},\mathscr{D})\) be an orbifold Riemann surface.
We say that a hermitian metric of class \(C^{\infty}\) on the underlying Riemann surface \(X_{O}\) is compatible with the branch divisor \(\mathscr{D}=\sum(1-1/m_{i})x_{i}\) if in a holomorphic local coordinate system centered at \(x_{i}\) the metric is of the form \(\big{(}\rho(u)/|u|^{2-2/m_{i}}\big{)}|du|^{2}\) for \(m_{i}\neq\infty\), whereas it is of the form \(\big{(}\rho(u)\big{/}\big{(}|u|^{2}\log^{2}(|u|^{-2})\big{)}\big{)}|du|^{2}\) if \(m_{i}=\infty\). Here, \(\rho\) is continuous and positive at the marked points. The cone angle is \(2\pi/m_{i}\), including the complete case with angle zero. Let \(K_{X}\) be the canonical divisor of \(X_{O}\); the orbifold Riemann surface is called _stable_ if the degree of the divisor \(K_{X}+\mathscr{D}\) is positive. In this case, by a result of McOwen [75] and Troyanov [102, 181], there exists a unique conical metric \(ds_{\text{hyp}}^{2}(\mathscr{D})\) on \(X_{O}\) in the given conformal class, which has constant curvature \(-1\) and prescribed cone angles. Moreover, \(\mathrm{Vol}(X_{O},ds^{2}_{\mathrm{hyp}}(\mathscr{D}))/2\pi=\deg(K_{X}+\mathscr{D})=-\chi(O)\), where by definition \(\chi(O)=\chi(X_{O})-\deg(\mathscr{D})\) is the Euler characteristic of the Riemann orbisurface \(O=(X_{O},\mathscr{D})\). #### a.6.1 Hyperbolic metric on Riemann Orbisurfaces The Poincare metric on \(\mathbb{H}\) \[\mathrm{d}s^{2}_{\mathrm{hyp}}=\frac{|\,\mathrm{d}z\,|^{2}}{(\mathrm{Im}\,z)^{2}},\] (A.30) is the _unique_ (up to multiplicative constant) Riemannian metric that is invariant under \(\mathrm{PSL}(2,\mathbb{R})\), and descends to a Riemannian metric on \(O=[\mathbb{H}/\Gamma]\). As a metric on \(O\), it has singularities at the elliptic and parabolic fixed points. One can describe the local geometry of a hyperbolic cusp and a hyperbolic cone using a distinguished holomorphic coordinate \(w\) (called _rotationally symmetric_ (\(rs\)) by Wolpert [182, 183]) that is unique up to a constant of modulus \(1\): * _The model cusp:_ Let \(\mathcal{C}_{\infty}\) denote the hyperbolic cusp with apex at infinity, realized as the quotient of \(\mathbb{H}\) by the parabolic translation \(z\mapsto z+1\), with fundamental domain the strip \(\{z\in\mathbb{H}\,|\,0\leq\operatorname{Re}z<1\}\) - i.e. \(\mathcal{C}_{\infty}\cong\mathbb{S}^{1}\times\mathbb{R}^{+}\). The isotropy group that corresponds to the above fundamental domain consists of \(\mathbb{Z}\) acting by addition. Let \(\mathcal{C}_{\infty,\epsilon}\), the hyperbolic cusp with apex at infinity and horocycle at height \(\epsilon\), denote the submanifold of \(\mathcal{C}_{\infty}\) obtained by restricting the previous fundamental domain to \(\mathrm{Im}\,z>1/\epsilon\) - i.e. \(\mathcal{C}_{\infty,\epsilon}\cong\mathbb{S}^{1}\times[1/\epsilon,\infty)\). This fundamental domain can be endowed with the Poincare metric (see Lemma 2.1.1 of [184]) \[\mathrm{d}s^{2}_{\mathrm{cusp}}=\frac{|\,\mathrm{d}z\,|^{2}}{(\mathrm{Im}\,z)^{2}}.\] (A.31) Observe that this is a _complete_ metric of Gaussian curvature \(-1\) and finite volume, \(\mathrm{Vol}(\mathcal{C}_{\infty,\epsilon})=\epsilon\). The hyperbolic cusp \(\mathcal{C}_{\infty,\epsilon}\) can equivalently be presented as a Riemann surface with boundary, parameterized by the complex coordinate \(w\equiv e^{i2\pi z}\), valued in the punctured disk \(\mathbb{D}^{*}(0,e^{-2\pi/\epsilon})\). The hyperbolic metric can then be written as \[\mathrm{d}s^{2}_{\mathrm{cusp}}=\frac{|\,\mathrm{d}w\,|^{2}}{(|w|\log|w|)^{2}}.\] (A.32) The coordinate \(w\) is uniquely determined by this condition, up to a factor of modulus \(1\). Following [182, 183], we will call \(w\) an \(rs\)-coordinate in a neighborhood of a parabolic fixed-point (see Figure 9).
* _The model cone:_ For a given positive integer \(m\), let \(\mathcal{C}_{m}\) denote the infinite hyperbolic cone of angle \(2\pi/m\) (see section 2 of [185]). One can realize \(\mathcal{C}_{m}\) as a half-infinite cylinder \(\mathbb{S}^{1}\times\mathbb{R}^{+}\), equipped with the constant curvature \(-1\) metric \[\mathrm{d}s^{2}_{\mathrm{cone}}=\left(\frac{2\pi}{m}\right)^{2}\frac{|\,\mathrm{d}z\,|^{2}}{\sinh^{2}\bigl{(}\frac{2\pi}{m}\,\mathrm{Im}\,z\bigr{)}}.\] (A.33) In contrast to the cusp case, this metric is _not_ complete. A suitable change of variables provides a parameterization of the hyperbolic cone by \(\mathcal{C}_{m}\cong(0,\infty)\times(0,2\pi]\) with coordinates \((\rho,\theta)\). The metric in this coordinate becomes \[\mathrm{d}s_{\mathrm{cone}}^{2}=\mathrm{d}\rho^{2}+m^{-2}\sinh^{2}(\rho)\,\mathrm{d}\theta^{2}\,,\] (A.34) having volume form \[\star\mathbb{1}=m^{-1}\sinh(\rho)\,\mathrm{d}\rho\wedge\mathrm{d}\theta\,.\] (A.35) (The \(\star\) in Eq. (A.35) is the Hodge star, and the notation \(\star\mathbb{1}\) emphasizes that the volume form is the Hodge dual of the constant map on the manifold.) A fundamental domain for \(\mathcal{C}_{m}\) in the hyperbolic unit disk \(\mathbb{D}\) is provided by a sector with a vertex at the origin and with angle \(2\pi/m\) - i.e. \(\left\{\mathfrak{u}_{\mathbb{D}}\in\mathbb{D}\,\middle|\,0\leq\arg(\mathfrak{u}_{\mathbb{D}})<\frac{2\pi}{m}\right\}\). The hyperbolic metric on \(\mathcal{C}_{m}\) is the metric induced onto the fundamental domain (viewed as a subset of \(\mathbb{D}\) endowed with its complete hyperbolic metric). The isotropy group which corresponds to this fundamental domain is the group \(\mathbb{Z}_{m}\) consisting of the numbers \(\exp(\sqrt{-1}\,2\pi j/m)\) for \(j=1,2,\ldots,m\) acting by multiplication. As before, let the hyperbolic cone of angle \(2\pi/m\) and a boundary at height \(\epsilon\), \(\mathcal{C}_{m,\epsilon}\cong\mathbb{S}^{1}\times[\epsilon,\infty)\), be the submanifold of \(\mathcal{C}_{m}\) obtained by restricting the \((\rho,\theta)\)-coordinate to \(0\leq\rho<\cosh^{-1}(1+\epsilon m/2\pi)\). A fundamental domain for \(\mathcal{C}_{m,\epsilon}\) in the unit disk model is obtained by adding the restriction that \(|u|<\sqrt{\epsilon m/(4\pi+\epsilon m)}\). An elementary calculation shows that the volume of this manifold is finite and is given by \(\mathrm{vol}(\mathcal{C}_{m,\epsilon})=\frac{2\pi}{m}\big{(}\cosh\rho_{0}-1\big{)}=\epsilon\), with \(\rho_{0}=\cosh^{-1}(1+\epsilon m/2\pi)\). Finally, the hyperbolic cone can also be seen as a Riemann surface with boundary, parameterized by the complex coordinate \(\tilde{w}\in\mathbb{D}^{*}(0,R)\), such that \[\mathrm{d}s_{\mathrm{cone}}^{2}=\frac{4|\,\mathrm{d}\tilde{w}\,|^{2}}{m^{2}|\tilde{w}|^{2-2/m}\left(1-|\tilde{w}|^{2/m}\right)^{2}}.\] (A.36) As for the case of the cusp, a coordinate \(\tilde{w}\) with this property is unique up to a factor of modulus \(1\) and was also called an \(rs\)-coordinate in [63], after Wolpert [182, 183] (see Figure 9). The parameter \(R\) can be easily obtained by computing and comparing volumes in different coordinates. In particular, as \(\epsilon m\to 0\), we have \(R\sim(\epsilon m/4\pi)^{m/2}\). ### Orbifold Differential Forms and Automorphic Forms In this section, we will study differential forms on hyperbolic Riemann orbisurfaces in more detail (see, e.g., [2, 186, 92]).
We start by defining orbifold differential forms on a general complex orbifold as a collection of invariant differential forms on each local uniformizing neighborhood: **Definition A.36** (Orbifold differential forms).: If \(O=(X_{O},\mathcal{U})\) is a complex orbifold with an atlas of analytic orbifold charts \(\mathcal{U}=\{(U_{a},\tilde{U}_{a},\Gamma_{a},\mathrm{f}_{a})\}_{a\in A}\), we can define a complex _orbifold \(k\)-form_\(\phi\) on \(O\) as a collection of local \(\Gamma_{a}\)-invariant complex \(k\)-forms \(\{\tilde{\phi}_{a}^{\Gamma_{a}}\}_{a\in A}\) defined on each local uniformizing neighborhood \(\tilde{U}_{a}\) such that every \(\tilde{\phi}_{a}^{\Gamma_{a}}\) is preserved by all the changes of charts. We say that the complex orbifold \(k\)-form \(\phi\) is _bigraded_ of type \((p,q)\), with \(k=p+q\), if \(\phi\) is an invariant section of the V-bundle \(\bigwedge^{p,q}T^{*}O:=\left(\bigwedge^{p}T^{*}_{1,0}O\right)\wedge\left(\bigwedge^{q}T^{*}_{0,1}O\right)\). We will denote the vector space of all such orbifold \((p,q)\)-forms on \(O\) by \(\mathcal{E}^{p,q}(O)\). **Remark A.28**.: Integration theory also goes through: Let \((U_{a},\tilde{U}_{a},\Gamma_{a},\mathsf{f}_{a})\in\mathcal{U}\) be an orbifold chart and let \(\phi\) be an orbifold differential form compactly supported on \(V\subset X_{O}\). The characterization of \(\phi\) as a collection of local \(\Gamma_{a}\)-invariant differential forms \(\{\tilde{\phi}_{a}^{\Gamma_{a}}\}\) that are supported on each \(\tilde{U}_{a}\) enables us to define the integration of \(\phi\) over \(V\) as \[\int_{V}\phi=\sum_{a\in A}\frac{1}{\#\Gamma_{a}}\int_{\mathsf{f}_{a}^{-1}(U_{a}\cap V)}\tilde{\phi}_{a}^{\Gamma_{a}},\] (A.37) where we have used partitions of unity to write the integral over \(V\) as a sum of integrals over \(V\cap U_{a}\). So, all of the standard integration techniques, such as _Stokes' theorem_, are equally valid on orbifolds. Now, consider a hyperbolic orbifold Riemann surface \(O\) and let \(K_{O}\) (\(\neq K_{X}\)) denote its holomorphic cotangent V-bundle (or its orbifold canonical bundle). For any \(k,l\in\mathbb{Z}\), an orbifold \((k,l)\)-differential on \(O\) is defined as an element of \(\mathcal{A}^{k,l}(O):=\Gamma(O,K_{O}^{k}\otimes\overline{K}_{O}^{l})\) -- the vector space of smooth global sections of the line V-bundle \(K_{O}^{k}\otimes\overline{K}_{O}^{l}\) (whenever \(k\) or \(l\) is negative, we understand \(K_{O}^{k}:=(TO)^{-k}\) and \(\overline{K}_{O}^{l}:=(\overline{TO})^{-l}\)). For any pair of non-negative integers \(p\) and \(q\), there exists an isomorphism between the space \(\mathcal{E}^{p,q}(O,K_{O}^{k}\otimes\overline{K}_{O}^{l})\) of orbifold differential forms of type \((p,q)\) with coefficients in the line V-bundle \(K_{O}^{k}\otimes\overline{K}_{O}^{l}\) and the complex vector space \(\mathcal{A}^{0,0}(O,K_{O}^{k+p}\otimes\overline{K}_{O}^{l+q})\). Every hyperbolic Riemann orbisurface \(O\) can be realized as an orbifold quotient of the upper half-plane \(\mathbb{H}=\{z\in\mathbb{C}\,|\,\operatorname{Im}z>0\}\) by a discrete subgroup \(\Gamma\) of its (orientation preserving) isometries \(\operatorname{Isom}^{+}(\mathbb{H})\cong\operatorname{PSL}(2,\mathbb{R})\), called a _Fuchsian group_.
Then, using the realization of \(O\) as \([\mathbb{H}/\Gamma]\), one can identify every orbifold \((k,l)\)-differential with a \(\Gamma\)-automorphic form of weight \((2k,2l)\) on \(\mathbb{H}\): An _automorphic form_ of weight \((2k,2l)\) for \(\Gamma\) is a \(\Gamma\)-invariant global section of the line bundle \(K_{\mathbb{H}}^{k}\otimes\overline{K_{\mathbb{H}}}^{l}\). We will denote the space of \(\Gamma\)-automorphic forms of weight \((2k,2l)\) by \(\mathcal{A}^{k,l}(\mathbb{H},\Gamma)\); an arbitrary element \(\phi\) of \(\mathcal{A}^{k,l}(\mathbb{H},\Gamma)\) has the form \(\phi=\phi(z)\,\mathrm{d}z^{k}\,\mathrm{d}\bar{z}^{l}\) where \(\phi(z)\) transforms according to the rule \(\phi(\gamma z)\gamma^{\prime}(z)^{k}\overline{\gamma^{\prime}(z)}^{l}=\phi(z)\) for all \(\gamma\in\Gamma\) and \(z\in\mathbb{H}\). Figure 9: _Rotation symmetric coordinates_. Model cusp, \(\mathcal{C}_{\infty,\epsilon}\), and model cone, \(\mathcal{C}_{m,\epsilon}\), are shown in \(\epsilon\)-neighborhoods of the parabolic and elliptic fixed points (i.e. at height \(\epsilon\) in \(rs\)-coordinates \(w,\tilde{w}\)). The hyperbolic metrics, \(\mathrm{d}s_{\mathrm{cusp}}^{2}\) and \(\mathrm{d}s_{\mathrm{cone}}^{2}\), in rotation symmetric coordinates, \(w\) and \(\tilde{w}\), are also shown in neighborhoods of cusps and cones respectively. ## Appendix B Geometric structures on orbifolds ### Basic definitions and some theorems In [187] Ehresmann studied what he called _locally homogeneous spaces_. More precisely, a locally homogeneous geometry on a manifold \(M\) gives a \((G,\mathbb{X})\)-structure on \(M\) in the following sense described by Ehresmann. Let \(\mathbb{X}\) be a (homogeneous) complex manifold - called the _model space_ - and let \(G\) be a group acting effectively, transitively, and holomorphically on \(\mathbb{X}\). A _holomorphic \((G,\mathbb{X})\)-structure_ on a complex manifold \(M\) is given by an open cover \(\{U_{a}\}_{a\in A}\) of \(M\) with holomorphic charts \(\mathfrak{f}_{a}:U_{a}\to\mathbb{X}\) such that the transition maps \(\mathfrak{f}_{a}\circ\mathfrak{f}_{b}^{-1}:\mathfrak{f}_{b}(U_{a}\cap U_{b})\to\mathfrak{f}_{a}(U_{a}\cap U_{b})\) are given (on each connected component) by the restriction of an element \(\mathrm{g}_{ab}\in G\) (see Figure 10). Note that any geometric feature of \(\mathbb{X}\) which is invariant under the symmetry group \(G\) has an intrinsic meaning on the manifold \(M\) equipped with a \((G,\mathbb{X})\)-structure. There exists a useful _globalization of the coordinate charts_ of a geometric structure in terms of the universal covering space and the fundamental group. The \((G,\mathbb{X})\)-coordinate atlas \(\{(U_{a},\mathfrak{f}_{a})\}_{a\in A}\) is replaced by a universal covering space \(\tilde{M}\) with its group of deck transformations \(\pi_{1}(M)\): The coordinate charts \(\mathfrak{f}_{a}:U_{a}\to\mathbb{X}\) are replaced by a globally defined map \(\mathrm{dev}:\tilde{M}\to\mathbb{X}\) called a _developing map_ (see Figure 11). In addition, the developing map is equivariant with respect to the actions of \(\pi_{1}(M)\): \[\mathrm{dev}\circ\gamma=\mathrm{hol}(\gamma)\circ\mathrm{dev},\] (B.1) Figure 10: _Geometric structure on manifolds_. A geometric \((G,\mathbb{X})\)-structure on a complex manifold \(M\) is given by an atlas of holomorphic charts such that open neighborhoods are biholomorphic to open subsets of \(\mathbb{X}\) and transition maps are given by restrictions of elements of \(G\).
where \(\mathrm{hol}:\pi_{1}(M)\to G\) is the _holonomy representation_ - i.e., the coordinate changes are replaced by the holonomy homomorphism. The resulting _developing pair_\((\operatorname{dev},\operatorname{hol})\) is unique up to composition/conjugation by elements in \(G\), i.e. up to \(\big{(}\operatorname{dev},\operatorname{hol}(\cdot)\big{)}\to\big{(}\operatorname{g}\circ\operatorname{dev},\operatorname{g}\operatorname{hol}(\cdot)\operatorname{g}^{-1}\big{)}\) transformations. This determines the structure. In this section, we will introduce \((G,\mathbb{X})\)-structures on orbifolds. Simply put, a _\((G,\mathbb{X})\)-orbifold_ is locally modeled on \(\mathbb{X}\) modulo finite subgroups of \(G\). We will start by giving a definition of geometric structures on orbifolds based on atlases of charts, as well as using developing maps from the universal orbifold covering. Then, we will introduce and study the deformation spaces of these orbifold \((G,\mathbb{X})\)-structures in analogy with Goldman's work [188; 189; 190] for the manifold case. Some of the most important examples of these geometric structures on orbifolds are provided by projective structures as well as hyperbolic, Euclidean, or spherical structures; we will end this section by studying these specific examples and the relation between them. See [191; 192] for more on orbifold geometric structures. In order to give a precise definition of an orbifold geometric structure modeled by the pair \((G,\mathbb{X})\), we need to introduce the notion of (holomorphic) orbifold \((G,\mathbb{X})\)-charts: Let \(O=(X_{O},\mathcal{U})\) be a complex orbifold and let \(U\subset X_{O}\) be an open subset of \(X_{O}\) with a local model \((\tilde{U},\Gamma)\). Then, a _holomorphic \((G,\mathbb{X})\)-chart_ on \(U\) is defined to be given by a pair \((\tilde{\mathfrak{f}},\mathfrak{C})\) where \(\tilde{\mathfrak{f}}:\tilde{U}\hookrightarrow\mathbb{X}\) gives a holomorphic embedding of the local uniformizing neighborhood \(\tilde{U}\) into the model space \(\mathbb{X}\) (equivalently, \(\tilde{\mathfrak{f}}:\tilde{U}\to\tilde{V}\subset\mathbb{X}\) is a holomorphic isomorphism onto an open subset of \(\mathbb{X}\)) and is considered as a _local \(\mathbb{X}\)-coordinate_ on \(\tilde{V}\), while \(\mathfrak{C}:\Gamma\hookrightarrow G\) is an injective group homomorphism (equivalently, a group isomorphism between the local uniformizing group \(\Gamma\) and a finite subgroup of \(G\)). Figure 11: _Development pair._ **Definition B.1** (Orbifold \((G,\mathbb{X})\)-structures).: Let \(\mathcal{U}:=\big{\{}(U_{a},\tilde{U}_{a},\Gamma_{a},\mathfrak{f}_{a})\big{\}}_{a\in A}\) denote an orbifold atlas that induces an orbifold structure on \(X_{O}\). A _holomorphic orbifold \((G,\mathbb{X})\)-atlas_ on \(X_{O}\) that is _compatible with orbifold atlas_\(\mathcal{U}\) is given by a collection of holomorphic \((G,\mathbb{X})\)-charts \(\left\{(\tilde{\mathfrak{f}}_{a}:\tilde{U}_{a}\hookrightarrow\mathbb{X},\,\mathfrak{C}_{a}:\Gamma_{a}\hookrightarrow G)\right\}_{a\in A}\) such that holomorphic embeddings \(\eta_{ab}:\tilde{U}_{a}\rightarrow\tilde{U}_{b}\) are realized as elements \(\mathrm{g}_{ab}\in G\) and monomorphisms \(\Upsilon_{ab}:\Gamma_{a}\hookrightarrow\Gamma_{b}\) are given by conjugations \(\gamma\mapsto\mathrm{g}_{ab}\circ\gamma\circ\mathrm{g}_{ab}^{-1}\) for all \(\gamma\in\Gamma_{a}\).
The datum of a holomorphic orbifold \((G,\mathbb{X})\)-atlas compatible with the _maximal_ complex orbifold atlas, \(\mathcal{U}_{\mathrm{max}}\), defines a _holomorphic \((G,\mathbb{X})\)-structure_ on complex orbifold \(O=(X_{O},\mathcal{U}_{\mathrm{max}})\). If a complex orbifold \(O\) admits a holomorphic \((G,\mathbb{X})\)-structure, one can always choose a local model \((\tilde{U}_{a},\Gamma_{a})\) for each open set \(U_{a}\subset X_{O}\) where \(\tilde{U}_{a}\subset\mathbb{X}\) and \(\Gamma_{a}<G\); notice that if we require the collection \(\{(\tilde{U}_{a}\subset\mathbb{X},\Gamma_{a}<G)\}\) to be a set of local models for a complex orbifold, the space \(\mathbb{X}\) should itself admit a complex structure and the action of group \(G\) on \(\mathbb{X}\) should be given by holomorphic automorphisms. When there is no risk of confusion, we will say that a maximal orbifold atlas \(\mathcal{U}_{\mathrm{max}}:=\left\{(U_{a},\tilde{U}_{a}\subset\mathbb{X}, \Gamma_{a}<G,\mathrm{f}_{a})\right\}_{a\in A}\) induces a complex orbifold \((G,\mathbb{X})\)-structure on \(X_{O}\). Once again, the definition of complex orbifold \((G,\mathbb{X})\)-structures simplifies considerably for the case of Riemann orbisurfaces (orbifold Riemann surfaces) due to the restricted nature of singular points in one complex dimension: **Definition B.2** (Complex \((G,\mathbb{X})\)-structures on Riemann orbisurfaces).: Let \(O=(X_{O},\mathscr{D})\) be a Riemann orbisurface. A complex \((G,\mathbb{X})\)-structure on the Riemann orbisurface \(O\) is given by a \((G,\mathbb{X})\)-structure on its underlying Riemann surface \(X_{O}\) such that the complex structure that is induced on \(X_{O}\) by the \((G,\mathbb{X})\)-structure coincides with the already existent complex structure on \(X_{O}\). A _holomorphic orbifold \((G,\mathbb{X})\)-map_\(f:O\to O^{\prime}\) is a holomorphic orbifold map \(O\overset{f}{\rightarrow}O^{\prime}\) such that its holomorphic local lift at each point \(x\in X_{O}\) is given by a pair of maps between local models \((\tilde{U}_{a},\Gamma_{x})\overset{(\tilde{f}_{x}\tilde{f}_{x})}{\longrightarrow }(\tilde{U}_{a}^{\prime},\Gamma_{f_{O}(x)}^{\prime})\) where the group homomorphism \(\tilde{f}_{x}:\Gamma_{x}\rightarrow\Gamma_{f_{O}(x)}^{\prime}\) is induced by conjugation \(\gamma_{a}\mapsto\mathrm{g}_{ab}\circ\gamma_{a}\circ\mathrm{g}_{ab}^{-1}\) for all \(\gamma_{a}\in\Gamma_{x}\) and holomorphic \(\tilde{f}_{x}\)-equivariant map \(\tilde{f}_{x}:\tilde{U}_{a}\rightarrow\tilde{U}_{a}^{\prime}\) is given by a restriction of \(\mathrm{g}\in G\). Note that if \(O\) is a complex orbifold and \(f:O\to O^{\prime}\) is a holomorphic orbifold map to another complex orbifold \(O^{\prime}\) equipped with a (holomorphic) \((G,\mathbb{X})\)-structure \(\mathfrak{G}^{\prime}\), we can pull-back the \((G,\mathbb{X})\)-structure \(\mathfrak{G}^{\prime}\) on \(O^{\prime}\) to another \((G,\mathbb{X})\)-structure \(f^{*}(\mathfrak{G}^{\prime})\) on \(O\) such that \(f\) becomes a \((G,\mathbb{X})\)-map. In particular, a complex \((G,\mathbb{X})\)-structure on an orbifold \(O\) induces a complex \((G,\mathbb{X})\)-structure on its covering orbifolds through a pull-back by the covering map. 
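For instance, a hyperbolic Riemann orbisurface \(O\cong[\mathbb{H}/\Gamma]\) carries a natural holomorphic \((\mathrm{PSL}(2,\mathbb{R}),\mathbb{H})\)-structure: one can choose local models \((\tilde{U}_{a}\subset\mathbb{H},\Gamma_{a}<\mathrm{PSL}(2,\mathbb{R}))\) in which each local uniformizing group \(\Gamma_{a}\) is either trivial or a finite cyclic group of elliptic elements fixing the point of \(\tilde{U}_{a}\) lying over a conical point, and all changes of charts are restrictions of elements of \(\mathrm{PSL}(2,\mathbb{R})\).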
**Theorem B.1** (Thurston).: _If \(G\) is a group of biholomorphisms of a complex manifold \(\mathbb{X}\), then every complex \((G,\mathbb{X})\)-orbifold is good._ **Remark B.1**.: If \(G\) is a subgroup of a linear group, then \(O\) is _very good_ by Selberg's lemma provided that \(O\) has a finitely generated fundamental group. In particular, all geometric orbifold Riemann surfaces are very good - i.e. finitely covered by a manifold. Next, we note that the idea of developing map extends to orbifolds with complex geometric structures: **Theorem B.2**.: _Let \(O\) be a \((G,\mathbb{X})\)-orbifold, where \((G,\mathbb{X})\) is a complex analytic geometry. Then, there exists a developing map_ \[\mathrm{dev}:\tilde{O}\to\mathbb{X}\] _defined on the universal covering \(\tilde{O}\) and a holonomy representation \(\mathrm{hol}:\pi_{1}(O)\to G\) such that_ \[\mathrm{dev}\circ\gamma=\mathrm{hol}(\gamma)\circ\mathrm{dev}\] _for any deck transformation \(\gamma\in\pi_{1}(O)\)._ ### Hierarchy of Geometric Structures Often one geometric structure _contains_ or _refines_ another geometry as follows. Suppose that \(G\) and \(G^{\prime}\) act transitively on \(\mathbb{X}\) and \(\mathbb{X}^{\prime}\) respectively, and \(\mathbb{X}\xrightarrow{f}\mathbb{X}^{\prime}\) is a local biholomorphism which is equivariant with respect to a homomorphism \(G\xrightarrow{F}G^{\prime}\) -- i.e. for each \(\mathtt{g}\in G\), the following diagram \[\begin{CD}\mathbb{X}@>{f}>{}>\mathbb{X}^{\prime}\\ @V{\mathtt{g}}V{}V@V{}V{F(\mathtt{g})}V\\ \mathbb{X}@>{f}>{}>\mathbb{X}^{\prime}\end{CD}\] (B.2) commutes. Then (by composition with \(f\) and \(F\)) every \((G,\mathbb{X})\)-structure determines a \((G^{\prime},\mathbb{X}^{\prime})\)-structure. There are many important examples of this correspondence, most of which occur when \(f\) is an embedding. For example, when \(f\) is the identity map and \(G\subset G^{\prime}\) is a subgroup preserving some extra structure on \(\mathbb{X}=\mathbb{X}^{\prime}\), then every \((G,\mathbb{X})\)-structure is a fortiori a \((G^{\prime},\mathbb{X}^{\prime})\)-structure. A more important example for us is the relation between projective and hyperbolic structures: In this case the map \(f:\mathbb{H}\hookrightarrow\mathbb{CP}^{1}\) will be an embedding and \(\mathrm{PSL}(2,\mathbb{R})\) can be viewed as the subgroup of \(\mathrm{PSL}(2,\mathbb{C})\) which leaves the subspace \(\mathbb{H}\subset\mathbb{CP}^{1}\) invariant. Therefore, every hyperbolic structure determines a projective structure. ### Orbifold \(\mathbb{CP}^{1}\)-Structure and Projective Connections Let \(O=(X_{O},\mathscr{D})\) be a hyperbolic orbifold Riemann surface with signature \((g;m_{1},\dots,m_{n_{e}};n_{p})\). A \(\mathbb{CP}^{1}\)_-structure_ or _complex projective structure_ on the Riemann orbisurface \(O\) is an orbifold \((G,\mathbb{X})\)-structure on its underlying Riemann surface \(X_{O}\) with \(\mathbb{X}=\mathbb{CP}^{1}\) and \(G=\mathrm{PSL}(2,\mathbb{C})\) such that the complex structure on \(X_{O}\) induced by the \(\mathbb{CP}^{1}\)-structure coincides with the given complex structure on this Riemann surface (see Appx. B.1 for more details).
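For instance, by the hierarchy of geometric structures discussed above, the hyperbolic structure coming from the uniformization \(O\cong[\mathbb{H}/\Gamma]\) already determines one such \(\mathbb{CP}^{1}\)-structure on \(O\): its developing map is the developing map of the hyperbolic structure composed with the inclusion \(\mathbb{H}\hookrightarrow\mathbb{CP}^{1}\), and its holonomy is the composition \(\pi_{1}(O)\to\Gamma<\mathrm{PSL}(2,\mathbb{R})\subset\mathrm{PSL}(2,\mathbb{C})\). This _Fuchsian_ projective structure is the one provided by Poincare-Koebe uniformization, to which we will return at the end of this subsection.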
Similar to other orbifold \((G,\mathbb{X})\)-structures, a \(\mathbb{CP}^{1}\)-structure on \(O\) can be equivalently described as a _developing pair_\((\mathrm{dev},\mathrm{hol})\) where \(\mathrm{dev}:\tilde{O}\to\mathbb{CP}^{1}\) is the _developing map_ defined on the universal cover \(\tilde{O}\cong\mathbb{H}\) and \(\mathrm{hol}:\pi_{1}(O)\to\mathrm{PSL}(2,\mathbb{C})\) is the _holonomy_ or _monodromy representation_ such that the developing map is a hol-equivariant immersion. Since any hyperbolic Riemann orbisurface is developable, there exists a finite Galois covering \[\varpi:Y\to X_{O}^{\mathrm{punc}}\] (B.3) such that \(\varpi\) is unramified over \(X_{O}^{\mathrm{reg}}=X_{O}^{\mathrm{punc}}\setminus\mathrm{Sing}(O)\), and for each \(x_{i}\in\mathrm{Sing}(O)\), the order of ramification at every point of \(\varpi^{-1}(x_{i})\) is \(\nu(x_{i})=m_{i}\) -- i.e. \(\varpi\) induces a holomorphic orbifold covering map \(Y\to O\). Let us, for the sake of simplicity, take \(O\) to be a compact Riemann orbisurface so that \(Y\) is a closed Riemann surface of genus \(\tilde{g}\); the case with punctures follows from the formal limit \(m\to\infty\). Let \(H\) be the group of deck transformations or the _Galois group_ for \(\varpi\) such that \(O\cong[Y/H]\); we have \(H\subseteq\mathrm{Aut}(Y)\) where \(\mathrm{Aut}(Y)\) denotes the group of holomorphic automorphisms of \(Y\). Now, let us fix a projective structure \(P\) on the closed Riemann surface \(Y\) and consider the convex combination \[P^{H}:=\frac{1}{\#H}\sum_{\mathrm{h}\in H}\mathrm{h}^{*}P,\] (B.4) where \(\mathrm{h}^{*}P\) denotes the pullback of the \(\mathbb{CP}^{1}\)-structure \(P\) by the holomorphic automorphism \(\mathrm{h}\), \(\#H\) is the order of \(H\), and the average is defined using the convex structure of the space \(\mathcal{P}(Y)\) of all projective structures on \(Y\) compatible with its complex structure. Note that this projective structure \(P^{H}\) on \(Y\) is clearly left invariant by the action of \(H\) on \(Y\). We will now construct an orbifold projective structure on \(X_{O}\) using \(P^{H}\) (see [83, 193]): Let \(\big{\{}(\tilde{U}_{a},\tilde{\mathfrak{f}}_{a})\big{\}}_{a\in A}\) be a maximal \(\mathbb{CP}^{1}\)-atlas on \(Y\) where the open subsets \(\tilde{U}_{a}\) of \(Y\) are left invariant by the action of \(H\) on \(Y\) and the \(\mathbb{CP}^{1}\)-coordinate functions \(\tilde{\mathfrak{f}}_{a}:\tilde{U}_{a}\to\tilde{V}_{a}\subset\mathbb{CP}^{1}\) are holomorphic isomorphisms compatible with the projective structure \(P^{H}\). Consider the ramified coverings \[\varpi\circ\tilde{\mathfrak{f}}_{a}^{-1}:\tilde{V}_{a}\to U_{a}\cong\tilde{U}_{a}/H_{a}\subset X_{O},\] (B.5) where \(H_{a}\subset H\) is the Galois group of the restriction \(\varpi|_{\tilde{U}_{a}}\). Then, the collection of ramified coverings \(\big{\{}\varpi\circ\tilde{\mathfrak{f}}_{a}^{-1}\big{\}}_{a\in A}\) combine together to define an orbifold projective structure on \(X_{O}\). Indeed, that they define a \(\mathbb{CP}^{1}\)-structure on \(O\) is an immediate consequence of the facts that \(P^{H}\) is left invariant by the action on \(Y\) of the Galois group \(H\) and \(\varpi\) is ramified exactly over \(\mathrm{Sing}(O)\) with \(m_{i}=\nu(x_{i})\) as the order of ramification over each \(x_{i}\in\mathrm{Sing}(O)\).
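As the simplest instance of (B.4), suppose that \(Y\) admits a holomorphic involution \(\sigma\) with \(O\cong[Y/\langle\sigma\rangle]\), so that \(H=\{\mathrm{id},\sigma\}\); then \[P^{H}=\frac{1}{2}\big{(}P+\sigma^{*}P\big{)},\] which is manifestly \(H\)-invariant: using \((\sigma^{2})^{*}P=P\) and the convex structure of \(\mathcal{P}(Y)\), one has \(\sigma^{*}P^{H}=\frac{1}{2}(\sigma^{*}P+P)=P^{H}\).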
Note that one consequence of this construction is that the space of all \(\mathbb{CP}^{1}\)-structures on \(O\) compatible with its complex structure, \(\mathcal{P}(O)\), is the fixed point locus for the action of \(H\) on \(\mathcal{P}(Y)\) -- i.e. \(\mathcal{P}(O)=\mathcal{P}^{H}(Y)\subset\mathcal{P}(Y)\). It is well known that if \(\big{\{}(\tilde{U}^{\prime}_{b},\tilde{\mathfrak{f}}_{b})\big{\}}_{b\in B}\) is a \(\mathbb{CP}^{1}\)-atlas on a compact Riemann surface \(Y\) that defines a complex projective structure \(P\in\mathcal{P}(Y)\) and \(\big{\{}(\tilde{U}_{a},\tilde{u}_{a}:\tilde{U}_{a}\to\mathbb{C})\big{\}}_{a\in A}\) is any atlas of holomorphic charts on this Riemann surface, the collection of holomorphic functions \(\Big{\{}\mathrm{Sch}\left(\tilde{\mathfrak{f}}^{\prime}_{b};\tilde{u}_{a}\right)\Big{\}}\), defined on overlaps \(\tilde{U}^{\prime}_{b}\cap\tilde{U}_{a}\), give what is known as a "(holomorphic) projective connection" on \(Y\). Let \(\big{\{}(\tilde{U}_{a},\tilde{u}_{a})\big{\}}_{a\in A}\) be any complex-analytic atlas on \(Y\) with local coordinates \(\tilde{u}_{a}:\tilde{U}_{a}\to\mathbb{C}\) and transition functions \(\tilde{u}_{a}=\tilde{g}_{ab}\circ\tilde{u}_{b}\) on \(\tilde{U}_{a}\cap\tilde{U}_{b}\). A _holomorphic projective connection_\(\tilde{R}\) on \(Y\) is in general defined as a collection \(\{\tilde{r}_{a}\}_{a\in A}\) of holomorphic functions \(\tilde{r}_{a}\) supported on \(\tilde{U}_{a}\) that satisfy \[\tilde{r}_{b}=\tilde{r}_{a}\circ\tilde{g}_{ab}\,(\tilde{g}^{\prime}_{ab})^{2}+\mathrm{Sch}\left(\tilde{g}_{ab};\tilde{u}_{b}\right),\] (B.6) on every intersection \(\tilde{U}_{a}\cap\tilde{U}_{b}\). One can conversely show that any projective connection \(\tilde{R}\) on \(Y\) defines a \(\mathbb{CP}^{1}\)-structure \(P\in\mathcal{P}(Y)\) as follows (see e.g. [194, Proposition 3.3]): Let \(\tilde{R}=\{\tilde{r}_{a}\}_{a\in A}\) be a holomorphic projective connection with respect to the complex-analytic atlas \(\big{\{}(\tilde{U}_{a},\tilde{u}_{a})\big{\}}_{a\in A}\). On each open subset \(\tilde{U}_{a}\), let \(\zeta_{a}\) be any solution of the equation \[\mathrm{Sch}\left(\zeta_{a};\tilde{u}_{a}\right)=-\tilde{r}_{a}.\] (B.7) The holomorphic functions \(\zeta_{a}\) have nowhere vanishing derivatives, so that we can assume they are injective up to shrinking the \(\tilde{U}_{a}\)'s. Then, the new coordinates \(\big{\{}\zeta_{a}\circ\tilde{u}_{a}\big{\}}_{a\in A}\) define the same complex structure on \(Y\). In addition, the Schwarzian derivatives \(\operatorname{Sch}\left(\zeta_{a}\circ\tilde{u}_{a};\zeta_{b}\circ\tilde{u}_{b}\right)\) can easily be shown to vanish by using \(\operatorname{Sch}\left(f\circ g;z\right)=\operatorname{Sch}\left(f;g(z)\right)\left(g^{\prime}\right)^{2}+\operatorname{Sch}\left(g;z\right)\). This implies that \(\big{\{}(\tilde{U}_{a},\zeta_{a}\circ\tilde{u}_{a})\big{\}}_{a\in A}\) is an atlas of a complex projective structure. A different choice of atlas or a different collection of solutions \(\zeta_{a}\) would define the same complex projective structure. Therefore, one concludes that the set of holomorphic projective connections \(\tilde{R}\) on a Riemann surface \(Y\) is in bijection with the set of \(\mathbb{CP}^{1}\)-structures on \(Y\).
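As a quick consistency check of (B.6) - assuming the standard normalization \(\mathrm{Sch}\left(f;z\right)=\left(f^{\prime\prime}/f^{\prime}\right)^{\prime}-\frac{1}{2}\left(f^{\prime\prime}/f^{\prime}\right)^{2}\) for the Schwarzian derivative of (2.16) - note that the Schwarzian of any Mobius transformation \(\gamma(z)=(az+b)/(cz+d)\) with \(ad-bc=1\) vanishes: since \(\gamma^{\prime}(z)=(cz+d)^{-2}\), one has \(\gamma^{\prime\prime}/\gamma^{\prime}=-2c/(cz+d)\) and \[\mathrm{Sch}\left(\gamma;z\right)=\frac{2c^{2}}{(cz+d)^{2}}-\frac{1}{2}\left(\frac{2c}{cz+d}\right)^{2}=0.\] Consequently, if the coordinates \(\tilde{u}_{a}\) themselves belong to a \(\mathbb{CP}^{1}\)-atlas, the transition functions \(\tilde{g}_{ab}\) are Mobius, the inhomogeneous term in (B.6) drops out, and the functions \(\tilde{r}_{a}\) transform as the components of an ordinary quadratic differential.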
Similar to the way that we have constructed orbifold \(\mathbb{CP}^{1}\)-structures on \(X_{O}\) using \(H\)-invariant projective structures \(P^{H}\) on the covering Riemann surface \(Y\), we can try to construct projective connections on \(X_{O}\) by starting from \(H\)-invariant projective connections on \(Y\). More concretely, let \(\big{\{}(\tilde{U}_{a},\tilde{u}_{a})\big{\}}_{a\in A}\) be a complex-analytic atlas on \(Y\) and let \(\tilde{R}^{H}\) be a holomorphic projective connection corresponding to the \(H\)-invariant \(\mathbb{CP}^{1}\)-structure \(P^{H}\in\mathcal{P}^{H}(Y)\). The projective connection \(\tilde{R}^{H}\) is given by a collection \(\{\tilde{r}^{H}_{a}\}_{a\in A}\) of holomorphic functions \(\tilde{r}^{H}_{a}\) supported on each open subset \(\tilde{U}_{a}\) that are invariant under the action of the Galois groups \(H_{a}\subset H\) of the restrictions \(\varpi|_{\tilde{U}_{a}}:\tilde{U}_{a}\to U_{a}\cong\tilde{U}_{a}/H_{a}\subset X_{O}\) and satisfy (B.6) on every overlap \(\tilde{U}_{a}\cap\tilde{U}_{b}\). Let \(\tilde{U}_{a}\) be an open subset containing only one ramification point of the covering \(\varpi:Y\to X_{O}\) with multiplicity \(m_{a}\) such that \(H_{a}:=\operatorname{Gal}(\varpi|_{\tilde{U}_{a}})\cong\mathbb{Z}_{m_{a}}\). If \(\tilde{r}^{H}_{a}\) is one of the holomorphic functions defining \(\tilde{R}^{H}\) that is supported on \(\tilde{U}_{a}\) and is left invariant by the Galois group \(\mathbb{Z}_{m_{a}}\) for \(\varpi|_{\tilde{U}_{a}}:\tilde{U}_{a}\to U_{a}\cong\tilde{U}_{a}/\mathbb{Z}_{m_{a}}\subset X_{O}\), then \(\tilde{r}^{H}_{a}\) descends, by the map \(\varpi|_{\tilde{U}_{a}}\), to a meromorphic function with at most a pole of order 2 at the singular point \(x_{a}\in U_{a}\). In other words, \(\tilde{r}^{H}_{a}=(\varpi|_{\tilde{U}_{a}})^{*}r_{a}\), where \(r_{a}\) is a meromorphic function on \(U_{a}\) with a pole only at \(x_{a}\) of order at most two. In fact, if \(x_{a}\in U_{a}\) is a singular point of order \(m_{a}\) and \(u_{a}(x_{a})=0\), the behavior of \(r_{a}\) near this singularity is given by \[r_{a}(u_{a})=\frac{1-1/m_{a}^{2}}{u_{a}^{2}}+\mathcal{O}\big{(}|u_{a}|^{-1}\big{)}\quad\text{as}\quad u_{a}\to 0.\] (B.8) If \(m_{a}\neq\infty\), the monodromy in \(\operatorname{PSL}(2,\mathbb{C})\) around this _regular singularity_ will be given by multiplication with \(e^{\frac{2\pi i}{m_{a}}}\), while if \(m_{a}=\infty\), the monodromy around this singularity will be given by a non-trivial parabolic element. The collection \(\{r_{a}\}_{a\in A}\) of such meromorphic functions will define a _quasi-bounded projective connection_\(R\) on the underlying Riemann surface \(X_{O}\); such quasi-bounded projective connections are in bijection with orbifold \(\mathbb{CP}^{1}\)-structures on this surface (see e.g. [195; 196; 197] for more details). A quasi-bounded projective connection \(R\) naturally determines a second-order linear differential equation on the Riemann surface \(X_{O}\), the Fuchsian differential equation \[\frac{\mathrm{d}^{2}\mathsf{y}_{a}}{\mathrm{d}u_{a}^{2}}+\frac{1}{2}r_{a}\mathsf{y}_{a}=0,\] (B.9) where \(\{\mathsf{y}_{a}\}_{a\in A}\) is understood as a multi-valued meromorphic differential of order \(-1/2\) on \(X_{O}\).
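The Fuchsian equation (B.9) encodes the corresponding \(\mathbb{CP}^{1}\)-structure explicitly: if \(\mathsf{y}_{a}^{1}\) and \(\mathsf{y}_{a}^{2}\) denote two linearly independent solutions of (B.9) on \(U_{a}\), their (multi-valued) ratio \(\mathsf{s}_{a}=\mathsf{y}_{a}^{1}/\mathsf{y}_{a}^{2}\) is a local branch of the developing map. Indeed, with the same normalization of the Schwarzian as above, \(\mathsf{s}_{a}^{\prime}=-W/(\mathsf{y}_{a}^{2})^{2}\) with \(W\) the constant Wronskian, so that \(\mathsf{s}_{a}^{\prime\prime}/\mathsf{s}_{a}^{\prime}=-2(\mathsf{y}_{a}^{2})^{\prime}/\mathsf{y}_{a}^{2}\) and \[\mathrm{Sch}\left(\mathsf{s}_{a};u_{a}\right)=\left(\frac{\mathsf{s}_{a}^{\prime\prime}}{\mathsf{s}_{a}^{\prime}}\right)^{\prime}-\frac{1}{2}\left(\frac{\mathsf{s}_{a}^{\prime\prime}}{\mathsf{s}_{a}^{\prime}}\right)^{2}=-2\,\frac{(\mathsf{y}_{a}^{2})^{\prime\prime}}{\mathsf{y}_{a}^{2}}=r_{a},\] where the last equality uses \((\mathsf{y}_{a}^{2})^{\prime\prime}=-\frac{1}{2}r_{a}\mathsf{y}_{a}^{2}\); the monodromy of the pair \((\mathsf{y}_{a}^{1},\mathsf{y}_{a}^{2})\) then reproduces the \(\mathrm{PSL}(2,\mathbb{C})\) monodromy described above.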
Last but not least, the difference between two projective connections is a meromorphic quadratic differential with only simple poles -- i.e., a collection \(\{q_{a}\}_{a\in A}\) of meromorphic functions on each open subset \(U_{a}\subset X_{O}\) with the transformation law \[q_{b}=q_{a}\circ g_{ab}(g^{\prime}_{ab})^{2},\] (B.10) and the additional condition that \(q_{a}(u_{a})={\cal O}\big{(}|u_{a}|^{-1}\big{)}\) as \(u_{a}\to 0\), if \(x_{a}\in U_{a}\) is a singular point and \(u_{a}(x_{a})=0\). Conversely, we can add a meromorphic quadratic differential to a given quasi-bounded projective connection \(R\) to obtain a new quasi-bounded projective connection. Since we know that each Riemann orbisurface has at least one \(\mathbb{CP}^{1}\)-structure, the one given by Poincare-Koebe uniformization, we arrive at Proposition 2.1 (see [83]). ## Appendix C Asymptotics Near Elliptic and Parabolic Fixed Points In this appendix, we will sketch the derivation of the asymptotics of the Liouville field \(\varphi\) in a neighborhood of each of the parabolic and elliptic points (see section 2 of [63] as well as the proof of lemma 2 in [7] and lemma 4 in [95]). One of the remarkable corollaries of the uniformization theorem is that the orbifold Riemann surface \(O^{\rm reg}\) has a unique metric of constant curvature \(-1\) compatible with the complex structure. It is the projection on \(O\) of the Poincare metric \(({\rm Im}\,z)^{-2}|\,{\rm d}z\,|^{2}\) on \(\mathbb{H}\) and has the form \({\rm d}s^{2}=e^{\varphi(w)}|\,{\rm d}w\,|^{2}\). The condition that the curvature is constant and equal to \(-1\) means that the function \(\varphi\) satisfies Liouville's equation on \(O^{\rm reg}\), \[\partial_{w}\partial_{\bar{w}}\varphi=\frac{1}{2}e^{\varphi}.\] (C.1) The two asymptotic forms of the metric, (A.32) and (A.36), in an \(\epsilon\)-neighborhood of cusps and cones enable us to find the asymptotic form of the Liouville field \(\varphi\) near these parabolic and elliptic fixed-points: * _Asymptotic form of \(\varphi(w)\) near cusps_: \[\varphi(w)=\left\{\begin{aligned} &-2\log|w-w_{i}|-2\log|\log|w-w_{i}||+{\cal O}(1)\quad(w_{i}\neq\infty),& w\to w_{i},\\ &-2\log|w|-2\log\log|w|+{\cal O}(1),& w\to\infty;\end{aligned}\right.\] (C.2) * _Asymptotic form of \(\varphi(w)\) near cones_: \[\varphi(w)=\left\{\begin{aligned} &-2(1-\frac{1}{m_{i}})\log|w-w_{i}|+{\cal O}(1)\quad(w_{i}\neq\infty),& w\to w_{i},\\ &-2(1+\frac{1}{m_{n}})\log|w|+{\cal O}(1)\quad(w_{n}=\infty),& w\to\infty.\end{aligned}\right.\] (C.3) One can also derive these asymptotics by studying the mapping \(J\), called _Klein's Hauptmodul_, as a meromorphic function on \(\mathbb{H}\) which is automorphic with respect to the Fuchsian group \(\Gamma\) -- i.e. \[J(\gamma z)=J(z)\qquad\text{ for }\,z\in\mathbb{H}\,\text{ and }\,\forall\gamma\in\Gamma.\] (C.4) For the sake of simplicity, let us assume that the Fuchsian group \(\Gamma\) has genus \(0\); we can choose a standard system of \(n_{e}\) elliptic generators \(\tau_{1},\ldots,\tau_{n_{e}}\) of orders \(m_{1},\ldots,m_{n_{e}}\) and \(n_{p}\) parabolic generators \(\kappa_{1},\ldots,\kappa_{n_{p}}\) (\(n_{e}+n_{p}=n\)) satisfying the single relation \(\tau_{1}\cdots\tau_{n_{e}}\kappa_{1}\cdots\kappa_{n_{p}}=1\). Let \(z_{1},\ldots,z_{n_{e}}\in\hat{\mathbb{C}}\) and \(z_{n_{e}+1},\ldots,z_{n}\in\mathbb{R}\cup\{\infty\}\) be the fixed points of the elliptic and parabolic generators respectively, which project into \(w_{1},\ldots,w_{n_{e}},w_{n_{e}+1},\ldots,w_{n}\in\hat{\mathbb{C}}\).
We will assume that \(z_{n-2}=0\), \(z_{n-1}=1\), and \(z_{n}=\infty\), which can always be achieved by conjugation with \(\text{PSL}(2,\mathbb{R})\). The elements \(\tau_{1},\ldots,\tau_{n_{e}}\) and \(\kappa_{1},\ldots,\kappa_{n_{p}}\) are then represented in their _matrix form_ as \[\tau_{i}=\begin{pmatrix}z_{i}-\lambda_{i}\bar{z}_{i}&(\lambda_{i}-1)z_{i}\bar{z}_{i}\\ 1-\lambda_{i}&\lambda_{i}z_{i}-\bar{z}_{i}\end{pmatrix}\!,\quad\kappa_{j}=\begin{pmatrix}1+\delta_{n_{e}+j}z_{n_{e}+j}&-\delta_{n_{e}+j}z_{n_{e}+j}^{2}\\ \delta_{n_{e}+j}&1-\delta_{n_{e}+j}z_{n_{e}+j}\end{pmatrix}\!,\quad\kappa_{n_{p}}=\begin{pmatrix}1&\delta_{n}\\ 0&1\end{pmatrix}\!,\] (C.5) with \(i=1,\ldots,n_{e}\) and \(j=1,\ldots,n_{p}-1\). In the above equation, \(\lambda_{1},\ldots,\lambda_{n_{e}}=e^{\frac{2\pi i}{m_{1}}},\ldots,e^{\frac{2\pi i}{m_{n_{e}}}}\) are called the _multipliers_ of \(\tau_{1},\ldots,\tau_{n_{e}}\) and \(\delta_{n_{e}+1},\ldots,\delta_{n}\in\mathbb{R}\) are called the _translation lengths_ of \(\kappa_{1},\ldots,\kappa_{n_{p}}\). Therefore, according to Eq. (C.4) and using (C.5), it is easy to see that in a neighborhood of each elliptic point \(z_{i}\), \(i=1,\ldots,n_{e}\), with ramification index \(m_{i}\), the Hauptmodul can be expanded as \[J(z)=w_{i}+\sum_{k=1}^{\infty}J_{k}^{(i)}\left(\frac{z-z_{i}}{z-\bar{z}_{i}}\right)^{km_{i}},\] (C.6) and in a neighborhood of each of the parabolic points \(z_{i}\), \(i=n_{e}+1,\ldots,n-1\): \[J(z)=w_{i}+\sum_{k=1}^{\infty}J_{k}^{(i)}\exp\!\left(-\frac{2\pi\sqrt{-1}k}{|\delta_{i}|(z-z_{i})}\right)\!.\] (C.7) Similarly, in a neighborhood of the parabolic point \(z_{n}=\infty\), the function \(J(z)\) can be expanded as \[J(z)=\sum_{k=-1}^{\infty}J_{k}^{(n)}\exp\!\left(\frac{2\pi\sqrt{-1}kz}{|\delta_{n}|}\right)\!.\] (C.8) In equations (C.6), (C.7) and (C.8), the coefficients \(J_{1}^{(i)},J_{-1}^{(n)}\neq 0\) because the mapping \(J\) is univalent in any fundamental domain for \(\Gamma\). If we choose elements \(\varsigma_{n_{e}+1},\ldots,\varsigma_{n}\in\text{PSL}(2,\mathbb{R})\) such that \(\varsigma_{n_{e}+j}(\infty)=z_{n_{e}+j}\) and \[\varsigma_{n_{e}+j}^{-1}\;\kappa_{j}\;\varsigma_{n_{e}+j}=\begin{pmatrix}1&\pm 1\\ 0&1\end{pmatrix}\!,\] for \(j=1,\ldots,n_{p}\), we can also rewrite (C.7) and (C.8) in the form \[J(\varsigma_{i}z)=\begin{cases}w_{i}+\sum_{k=1}^{\infty}J_{k}^{(i)}\exp\!\left(2\pi\sqrt{-1}kz\right)\!,&i=n_{e}+1,\ldots,n-1,\\ \sum_{k=-1}^{\infty}J_{k}^{(i)}\exp\!\left(2\pi\sqrt{-1}kz\right)\!,&i=n.\end{cases}\] (C.9) The hyperbolic metric \(e^{\varphi(w)}|\operatorname{d}\!w|^{2}\) on \(O\) is given by \[e^{\varphi(w)}=\frac{\left|J^{-1}(w)^{\prime}\right|^{2}}{\left(\operatorname{Im}J^{-1}(w)\right)^{2}},\] (C.10) and satisfies Liouville's equation (3.51). In order to find the asymptotic behavior of the Liouville field \(\varphi(w)\) near branch points and cusps, we need to study the _multivalued analytic function_\(J^{-1}:O\to\mathbb{H}\), which is a locally univalent linearly polymorphic function on \(O\) (linearly polymorphic means that the branches of this function are connected by linear fractional transformations in \(\Gamma\)). Let us first calculate the expansion of \(J^{-1}(w)\) near elliptic fixed points. To do so, we will rewrite the expansion (C.6) as \[J=w_{i}+\sum_{k=1}^{\infty}J_{k}^{(i)}\left(\mathfrak{u}_{\mathbb{D}}^{m_{i}}\right)^{k},\] (C.11) where \(i=1,\ldots,n_{e}\) and \(\mathfrak{u}_{\mathbb{D}}\equiv(z-z_{i})/(z-\bar{z}_{i})\) is the coordinate on \(\mathbb{D}\) instead of \(\mathbb{H}\).
Formally, by inverting the above power series, we get \[\mathfrak{u}_{\mathbb{D}}=\tilde{J}^{-1}(w)=\left(\frac{1}{J_{1}^{(i)}}\right)^{\frac{1}{m_{i}}}(w-w_{i})^{\frac{1}{m_{i}}}-\frac{J_{2}^{(i)}}{m_{i}\left(J_{1}^{(i)}\right)^{2+\frac{1}{m_{i}}}}(w-w_{i})^{1+\frac{1}{m_{i}}}+\frac{-2m_{i}J_{1}^{(i)}J_{3}^{(i)}+\left(J_{2}^{(i)}\right)^{2}(1+3m_{i})}{2m_{i}^{2}\left(J_{1}^{(i)}\right)^{4+\frac{1}{m_{i}}}}(w-w_{i})^{2+\frac{1}{m_{i}}}+\cdots\] (C.12) Then, by using the definition of \(\mathfrak{u}_{\mathbb{D}}\), we have \[J^{-1}(w)=z_{i}+2\sqrt{-1}\ \mathrm{Im}\,z_{i}\left(\frac{w-w_{i}}{J_{1}^{(i)}}\right)^{\frac{1}{m_{i}}}\left(1+\frac{J_{2}^{(i)}}{m_{i}\left(J_{1}^{(i)}\right)^{2}}(w-w_{i})+\cdots\right).\] (C.13) We can use the same method to find the expansion of \(J^{-1}\) near cusps as well. Near the parabolic points, by rewriting the equations (C.7) and (C.8) as power series in \(\iota_{j\neq n}\equiv\exp\left(-\frac{2\pi\sqrt{-1}}{|\delta_{j}|(z-z_{j})}\right)\) and \(\iota_{n}\equiv\exp\left(\frac{2\pi\sqrt{-1}z}{|\delta_{n}|}\right)\), we get \[J(z)=\begin{cases}w_{j}+\sum_{k=1}^{\infty}J_{k}^{(j)}\iota_{j}^{k},&j=n_{e}+1,\ldots,n-1,\\ \sum_{k=-1}^{\infty}J_{k}^{(j)}\iota_{j}^{k},&j=n.\end{cases}\] As before, we can formally invert the above power series (\(j=n_{e}+1,\ldots,n-1\)) \[\begin{cases}\iota_{j}=\frac{1}{J_{1}^{(j)}}(w-w_{j})-\frac{J_{2}^{(j)}}{\left(J_{1}^{(j)}\right)^{3}}(w-w_{j})^{2}+\frac{2\left(J_{2}^{(j)}\right)^{2}-J_{1}^{(j)}J_{3}^{(j)}}{\left(J_{1}^{(j)}\right)^{5}}(w-w_{j})^{3}+\cdots,&w\to w_{j},\\ \iota_{n}=\frac{J_{-1}^{(n)}}{w}+\frac{J_{-1}^{(n)}J_{0}^{(n)}}{w^{2}}+\frac{J_{-1}^{(n)}\left(\left(J_{0}^{(n)}\right)^{2}-J_{-1}^{(n)}J_{1}^{(n)}\right)}{w^{3}}+\cdots,&w\to\infty,\end{cases}\] and again, using the definitions of \(\iota_{j\neq n}\) and \(\iota_{n}\), we see that (\(j=n_{e}+1,\ldots,n-1\)) \[J^{-1}(w)=\begin{cases}z_{j}-\dfrac{2\pi\sqrt{-1}}{|\delta_{j}|}\left(\log\!\left(\dfrac{w-w_{j}}{J_{1}^{(j)}}\right)-\dfrac{J_{2}^{(j)}}{\left(J_{1}^{(j)}\right)^{2}}(w-w_{j})+\dfrac{\frac{3}{2}\left(J_{2}^{(j)}\right)^{2}-J_{1}^{(j)}J_{3}^{(j)}}{\left(J_{1}^{(j)}\right)^{4}}(w-w_{j})^{2}+\cdots\right)^{-1},&w\to w_{j},\\ \dfrac{|\delta_{n}|}{2\pi\sqrt{-1}}\left(\log\!\left(\dfrac{J_{-1}^{(n)}}{w}\right)+\dfrac{J_{0}^{(n)}}{w}+\dfrac{\frac{1}{2}\left(J_{0}^{(n)}\right)^{2}-J_{-1}^{(n)}J_{1}^{(n)}}{w^{2}}+\cdots\right),&w\to\infty.\end{cases}\] Let us identify the accessory parameters as (see _lemma 1_ of [7] and _lemma 3_ of [95]) \[\begin{cases}c_{i}\equiv-\dfrac{h_{i}J_{2}^{(i)}}{\left(J_{1}^{(i)}\right)^{2}},&i=1,\ldots,n_{e},\\ c_{i}\equiv-\dfrac{J_{2}^{(i)}}{\left(J_{1}^{(i)}\right)^{2}},&i=n_{e}+1,\ldots,n-1,\\ c_{n}\equiv J_{0}^{(n)},&i=n,\end{cases}\] where \(h_{i}/2\equiv(1-1/m_{i}^{2})/2\) is the _conformal weight_ of the _twist operators_ corresponding to branch points [84]. Accordingly, we can summarize the asymptotic behavior of \(J^{-1}(w)\) near conical singularities and cusps (\(i=1,\ldots,n_{e}\), \(j=n_{e}+1,\ldots,n-1\)) as \[J^{-1}(w)=\begin{cases}z_{i}+2\sqrt{-1}\ \text{Im}\,z_{i}\left(\dfrac{w-w_{i}}{J_{1}^{(i)}}\right)^{\frac{1}{m_{i}}}\left(1-\dfrac{c_{i}}{m_{i}h_{i}}(w-w_{i})+\cdots\right),&w\to w_{i},\\ z_{j}-\dfrac{2\pi\sqrt{-1}}{|\delta_{j}|}\left(\log\!\left(\dfrac{w-w_{j}}{J_{1}^{(j)}}\right)-c_{j}(w-w_{j})+\cdots\right)^{-1},&w\to w_{j},\\ \dfrac{|\delta_{n}|}{2\pi\sqrt{-1}}\left(\log\!\left(\dfrac{J_{-1}^{(n)}}{w}\right)+\dfrac{c_{n}}{w}+\cdots\right),&w\to\infty.\end{cases}\] (C.14)
The above expansions are summarized in the following lemma (cf. _lemma 2_ of [7] and _lemma 4_ of [95]): **Lemma C.1**.: 1. _For_ \(i=1,\ldots,n_{e}\) _and_ \(j=n_{e}+1,\ldots,n-1\)_,_ \[\mathrm{Sch}\left(J^{-1};w\right)=\left\{\begin{aligned} &\frac{h_{i}}{2(w-w_{i})^{2}}+\frac{c_{i}}{w-w_{i}}+\mathcal{O}(1)& w\to w_{i},\\ &\frac{1}{2(w-w_{j})^{2}}+\frac{c_{j}}{w-w_{j}}+\mathcal{O}(1)& w\to w_{j},\\ &\frac{1}{2w^{2}}+\frac{c_{n}}{w^{3}}+\mathcal{O}\big{(}|w|^{-4}\big{)},& w\to\infty;\end{aligned}\right.\] 2. _For_ \(i=1,\ldots,n_{e}\) _and_ \(j=n_{e}+1,\ldots,n-1\)_,_ \[\varphi(w)=\left\{\begin{aligned} &-2(1-\frac{1}{m_{i}})\log|w-w_{i}|+\log\frac{4|J_{1}^{(i)}|^{-\frac{2}{m_{i}}}}{m_{i}^{2}}+\mathcal{O}\left(1\right)& w\to w_{i},\\ &-2\log|w-w_{j}|-2\log\left|\log\left|\frac{w-w_{j}}{J_{1}^{(j)}}\right|\right|+\mathcal{O}\left(1\right)& w\to w_{j},\\ &-2\log|w|-2\log\left|\log\left|\frac{w}{J_{-1}^{(n)}}\right|\right|+\mathcal{O}\big{(}|w|^{-1}\big{)},& w\to\infty;\end{aligned}\right.\] 3. _For_ \(i=1,\ldots,n_{e}\) _and_ \(j=n_{e}+1,\ldots,n-1\)_,_ \[\left\{\begin{aligned} &\frac{4|J_{1}^{(i)}|^{-\frac{2}{m_{i}}}}{m_{i}^{2}}=\lim_{w\to w_{i}}e^{\varphi(w)}|w-w_{i}|^{2-\frac{2}{m_{i}}},\\ &\left|J_{1}^{(j)}\right|^{2}=\lim_{w\to w_{j}}\exp\!\left(\log|w-w_{j}|^{2}+\frac{2e^{-\frac{\varphi(w)}{2}}}{|w-w_{j}|}\right),\\ &\left|J_{-1}^{(n)}\right|^{2}=\lim_{w\to\infty}\exp\!\left(\log|w|^{2}-\frac{2e^{-\frac{\varphi(w)}{2}}}{|w|}\right);\end{aligned}\right.\] 4. _For_ \(i=1,\ldots,n_{e}\) _and_ \(j=n_{e}+1,\ldots,n-1\)_,_ \[\partial_{w}\varphi(w)=\left\{\begin{aligned} &-\frac{1-\frac{1}{m_{i}}}{w-w_{i}}+\frac{c_{i}}{1-\frac{1}{m_{i}}}+\mathcal{O}(1)& w\to w_{i},\\ &-\frac{1}{w-w_{j}}\left(1+\left(\log\left|\frac{w-w_{j}}{J_{1}^{(j)}}\right|\right)^{-1}\right)+c_{j}+\mathcal{O}(1)& w\to w_{j},\\ &-\frac{1}{w}\left(1+\left(\log\left|\frac{w}{J_{-1}^{(n)}}\right|\right)^{-1}\right)-\frac{c_{n}}{w^{2}}+\mathcal{O}\bigg{(}\frac{1}{|w|^{2}}\bigg{)},& w\to\infty;\end{aligned}\right.\] 5. _For_ \(i=1,\ldots,n_{e}\) _and_ \(j=n_{e}+1,\ldots,n-1\)_,_ \[\partial_{w}^{2}\varphi(w)=\left\{\begin{aligned} &\frac{1-\frac{1}{m_{i}}}{(w-w_{i})^{2}}+\cdots& w\to w_{i},\\ &\frac{1}{(w-w_{j})^{2}}+\frac{1}{(w-w_{j})^{2}\log\left|\frac{w-w_{j}}{J_{1}^{(j)}}\right|}+\frac{1}{(w-w_{j})^{2}\log^{2}\left|\frac{w-w_{j}}{J_{1}^{(j)}}\right|}+\cdots& w\to w_{j},\\ &\frac{2c_{n}}{w^{3}}+\frac{1}{w^{2}}+\frac{1}{w^{2}\log\left|\frac{w}{J_{-1}^{(n)}}\right|}+\frac{1}{w^{2}\log^{2}\left|\frac{w}{J_{-1}^{(n)}}\right|}+\cdots,& w\to\infty.\end{aligned}\right.\] Proof.: Parts 2-5 follow from (C.10) and (C.14). Part 1 follows from parts 2-5 and the definition of the _Schwarzian derivative_ in (2.16). **Remark C.1**.: In this paper, we sometimes need to study the behavior of various objects in the case when the point at infinity is a conical point, so it is worth spelling that case out as well. In this remark, we assume that the point \(w_{n}\) is a conical point at infinity.
Instead of (C.6) we have: \[J(z)=\sum_{k=-1}^{\infty}J_{k}^{(n)}\left(\mathfrak{u}_{\mathbb{D}}^{m_{n}}\right)^{k}.\] By inverting this expansion, we arrive at \[\mathfrak{u}_{\mathbb{D}}=\left(J_{-1}^{(n)}\right)^{\frac{1}{m_{n}}}\left(\frac{1}{w}\right)^{\frac{1}{m_{n}}}+\frac{\left(J_{-1}^{(n)}\right)^{\frac{1}{m_{n}}}J_{0}^{(n)}}{m_{n}}\left(\frac{1}{w}\right)^{1+\frac{1}{m_{n}}}+\cdots,\] which, according to _lemma 3_ of [95], implies that in the case of a conical point at infinity the accessory parameter becomes \[c_{n}\equiv h_{n}J_{0}^{(n)},\] (C.15) with \(h_{n}\equiv 1-1/m_{n}^{2}\). The next point of interest is part 3 of Lemma C.1, which implies that for conical points \(w_{i}\) that are not at infinity, we have \[\mathfrak{h}_{i}=|J_{1}^{(i)}|^{\frac{2}{m_{i}}}=\frac{4}{m_{i}^{2}}\lim_{w\to w_{i}}e^{-\varphi(w)}|w-w_{i}|^{-2+\frac{2}{m_{i}}}.\] This expression is also subject to a slight change when the conical point is at infinity. Considering \(\mathfrak{u}_{\mathbb{D}}=(z-z_{n})/(z-\bar{z}_{n})\) and \(J^{-1}(w)=z\), so that \(\mathfrak{u}_{\mathbb{D}}=(J^{-1}(w)-z_{n})/(J^{-1}(w)-\bar{z}_{n})\), we find \[J^{-1}(w)=z_{n}+2\sqrt{-1}\operatorname{Im}z_{n}\left(\left(J_{-1}^{(n)}\right)^{\frac{1}{m_{n}}}\left(\frac{1}{w}\right)^{\frac{1}{m_{n}}}+\frac{\left(J_{-1}^{(n)}\right)^{\frac{1}{m_{n}}}J_{0}^{(n)}}{m_{n}}\left(\frac{1}{w}\right)^{1+\frac{1}{m_{n}}}+\cdots\right).\] Now from (C.10) and the equation above we have \[e^{\varphi(w)}=\frac{4}{m_{n}^{2}}\left|J_{-1}^{(n)}\right|^{\frac{2}{m_{n}}}\left(\frac{1}{|w|}\right)^{2\left(1+\frac{1}{m_{n}}\right)}+\cdots.\] Accordingly, \[\mathfrak{h}_{n}=\left|J_{-1}^{(n)}\right|^{\frac{2}{m_{n}}}=\frac{m_{n}^{2}}{4}\lim_{w\to\infty}e^{\varphi(w)}|w|^{2(1+\frac{1}{m_{n}})}.\] (C.16) ### Quadratic and Beltrami Differentials In this appendix, we will study the behavior of quadratic differentials, i.e. cusp forms of weight \(4\), and harmonic Beltrami differentials, i.e. automorphic forms of weight \((-2,2)\), near elliptic and parabolic fixed points and connect them with functions on the Riemann orbisurface \(O\) (see [101, 198] for more details). Let us begin by studying the behavior of a quadratic differential \(q(z)\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) as \(z\to\infty\): The parabolic generator \(\kappa_{n_{p}}\in\Gamma\) with fixed-point \(z_{n}=\infty\) has the following normal form (see (C.5)): \[\kappa_{n_{p}}(z)=z+\delta_{n},\] i.e., it is a translation by \(\delta_{n}\). Therefore, the equation \(q(\kappa_{n_{p}}z)\kappa^{\prime}_{n_{p}}(z)^{2}=q(z)\) satisfied by the quadratic differential \(q(z)\) reduces to \[q(z+\delta_{n})=q(z).\] (C.17) It is then easy to verify that the following Fourier series expansion \[q(z)=\sum_{k=1}^{\infty}q_{k}^{(n)}\exp\biggl{(}\frac{2\pi\sqrt{-1}kz}{|\delta_{n}|}\biggr{)},\] (C.18) satisfies the equation (C.17) - only strictly positive frequencies appear since a cusp form is required to vanish at the cusp - and therefore represents the behavior of \(q(z)\) near \(z_{n}=\infty\). In order to study the behavior of \(q(z)\) near the fixed points of the other parabolic generators \(\kappa_{1},\ldots,\kappa_{n_{p}-1}\), we note that the Mobius transformation \[\mathrm{PSL}(2,\mathbb{R})\ni\varsigma_{n_{e}+j}(z)=\frac{1}{z-z_{n_{e}+j}},\qquad j=1,\ldots,n_{p}-1,\] sends that fixed-point to \(\infty\) and we have \(\varsigma_{n_{e}+j}^{-1}\,\kappa_{j}\,\varsigma_{n_{e}+j}(z)=z+\delta_{n_{e}+j}\).
As a result, the normal form of the parabolic generators \(\kappa_{j}\) for \(j=1,\ldots,n_{p}-1\) is given by \[\frac{1}{\kappa_{j}(z)-z_{n_{e}+j}}=\frac{1}{z-z_{n_{e}+j}}+\delta_{n_{e}+j},\] and we have that \(q(z)\) satisfies \[q\left(z_{n_{e}+j}+\frac{(z-z_{n_{e}+j})}{1+\delta_{n_{e}+j}(z-z_{n_{e}+j})}\right)\frac{1}{(1+\delta_{n_{e}+j}(z-z_{n_{e}+j}))^{4}}=q(z),\] near the parabolic points \(z_{n_{e}+1},\ldots,z_{n-1}\). Therefore, as \(z\to z_{i}\) the quadratic differential has the following Fourier series expansion \[q(z)=\frac{1}{(z-z_{i})^{4}}\sum_{k=1}^{\infty}q_{k}^{(i)}\exp\biggl{(}-\frac{2\pi\sqrt{-1}k}{|\delta_{i}|(z-z_{i})}\biggr{)},\qquad i=n_{e}+1,\ldots,n-1.\] (C.19) To study the behavior of differentials near the elliptic fixed points \(z_{1},\ldots,z_{n_{e}}\), it is easier to first use the unit disk model of the hyperbolic plane. Let \(\tau_{i}\in\Gamma\) be an elliptic generator of order \(m_{i}\) with fixed points \(z_{i}\in\mathbb{H}\) and \(\bar{z}_{i}\in\bar{\mathbb{H}}\). Then, the Mobius transformation \[\mathrm{PSL}(2,\mathbb{R})\ni\varsigma_{i}(z)=\frac{z-z_{i}}{z-\bar{z}_{i}},\qquad i=1,\ldots,n_{e},\] (C.20) sends the points \((z_{i},\bar{z}_{i})\) to \((0,\infty)\) and therefore \(\varsigma_{i}^{-1}\,\tau_{i}\,\varsigma_{i}(z)\) fixes both \(0\) and \(\infty\) and should be given by \(\varsigma_{i}^{-1}\,\tau_{i}\,\varsigma_{i}(z)=\lambda_{i}z\). For elliptic generators, the _multiplier_\(\lambda_{i}\) can be determined to be a primitive \(m_{i}\)-th root of unity, i.e. \(\lambda_{i}=\exp\bigl{(}2\pi\sqrt{-1}/m_{i}\bigr{)}\), via the condition \(\tau_{i}^{m_{i}}=\mathbb{1}\). Therefore, the normal form of the elliptic generators \(\tau_{i}\) of order \(m_{i}\) is given by \[\frac{\tau_{i}(z)-z_{i}}{\tau_{i}(z)-\bar{z}_{i}}=e^{\frac{2\pi\sqrt{-1}}{m_{i}}}\frac{z-z_{i}}{z-\bar{z}_{i}},\qquad i=1,\ldots,n_{e}.\] The Mobius transformations (C.20) give the standard isomorphism \(\varsigma_{i}:\mathbb{H}\to\mathbb{D}\); let us denote the coordinate on \(\mathbb{D}\) by \(\mathfrak{u}_{\mathbb{D}}=\varsigma_{i}(z)\) and the push-forward of the density of the Poincare metric, \(\rho(z)=(\mathrm{Im}\,z)^{-2}\), by \(\rho(\mathfrak{u}_{\mathbb{D}})=4(1-|\mathfrak{u}_{\mathbb{D}}|^{2})^{-2}\). Similarly, if we denote the push-forward of \(q(z)\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) by \(q(\mathfrak{u}_{\mathbb{D}})\in\mathcal{H}^{2,0}(\mathbb{D},\Gamma)\), it satisfies \[q(\lambda_{i}\mathfrak{u}_{\mathbb{D}})\,\lambda_{i}^{2}=q(\mathfrak{u}_{\mathbb{D}}).\] It is easy to see that the solution to the above equation has the following power series form in \(\mathfrak{u}_{\mathbb{D}}\): \[q(\mathfrak{u}_{\mathbb{D}})=\sum_{k=1}^{\infty}q_{k}^{(i)}\mathfrak{u}_{\mathbb{D}}^{km_{i}-2}\qquad\text{as}\quad\mathfrak{u}_{\mathbb{D}}\to 0,\] (C.21) since \(\lambda_{i}^{m_{i}}=1\). We can also find the pullback of \(q(\mathfrak{u}_{\mathbb{D}})\in\mathcal{H}^{2,0}(\mathbb{D},\Gamma)\) to \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) by using \(q(\mathfrak{u}_{\mathbb{D}})\,\mathrm{d}\mathfrak{u}_{\mathbb{D}}{}^{2}=q(z)\,\mathrm{d}z^{2}\) - i.e.
\(q(z)=q(\mathfrak{u}_{\mathbb{D}})\left(\frac{\mathrm{d}\mathfrak{u}_{\mathbb{D}}}{\mathrm{d}z}\right)^{2}\), \[q(z)=-\frac{4(\mathrm{Im}\,z_{i})^{2}}{(z-\bar{z}_{i})^{4}}\sum_{k=1}^{\infty}q_{k}^{(i)}\left(\frac{z-z_{i}}{z-\bar{z}_{i}}\right)^{km_{i}-2}\qquad i=1,\ldots,n_{e},\] (C.22) which satisfies the equation \[q\left(\frac{-z\,z_{i}+\bar{z}_{i}\left(\lambda_{i}z+z_{i}-\lambda_{i}z_{i}\right)}{z(\lambda_{i}-1)-\lambda_{i}z_{i}+\bar{z}_{i}}\right)\,\lambda_{i}^{2}\frac{\left(\frac{-z\,z_{i}+\bar{z}_{i}(\lambda_{i}z+z_{i}-\lambda_{i}z_{i})}{z(\lambda_{i}-1)-\lambda_{i}z_{i}+\bar{z}_{i}}-\bar{z}_{i}\right)^{4}}{(z-\bar{z}_{i})^{4}}=q(z).\] We can summarize the behavior of \(q(z)\in\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) near branch points and cusps as \[q(z)=\left\{\begin{aligned} &-\frac{4(\mathrm{Im}\,z_{i})^{2}}{(z-\bar{z}_{i})^{4}}\,\sum_{k=1}^{\infty}q_{k}^{(i)}\left(\frac{z-z_{i}}{z-\bar{z}_{i}}\right)^{km_{i}-2}&(i=1,\ldots,n_{e}),\ \ \ \ z\to z_{i},\\ &\frac{1}{(z-z_{i})^{4}}\,\sum_{k=1}^{\infty}q_{k}^{(i)}\exp\!\left(-\frac{2\pi\sqrt{-1}k}{|\delta_{i}|(z-z_{i})}\right)&(i=n_{e}+1,\ldots,n-1),\ z\to z_{i},\\ &\sum_{k=1}^{\infty}q_{k}^{(n)}\exp\!\left(\frac{2\pi\sqrt{-1}kz}{|\delta_{n}|}\right)\!,&z\to\infty.\end{aligned}\right.\] (C.23) Using the complex anti-linear isomorphism \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\cong\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) given by \(q(z)\mapsto\mu(z)=\rho(z)^{-1}\,\overline{q(z)}\) (equivalently, \(q(\mathfrak{u}_{\mathbb{D}})\mapsto\mu(\mathfrak{u}_{\mathbb{D}})=\rho(\mathfrak{u}_{\mathbb{D}})^{-1}\,\overline{q(\mathfrak{u}_{\mathbb{D}})}\) induces the isomorphism \(\mathcal{H}^{2,0}(\mathbb{D},\Gamma)\cong\mathcal{H}^{-1,1}(\mathbb{D},\Gamma)\)), we have \[\mu(z)=\left\{\begin{aligned} &-\frac{4(\mathrm{Im}\,z)^{2}(\mathrm{Im}\,z_{i})^{2}}{(\bar{z}-z_{i})^{4}}\,\sum_{k=1}^{\infty}\bar{q}_{k}^{(i)}\left(\frac{\bar{z}-\bar{z}_{i}}{\bar{z}-z_{i}}\right)^{km_{i}-2}&(i=1,\ldots,n_{e}),\ \ \ \ z\to z_{i},\\ &\frac{(\mathrm{Im}\,z)^{2}}{(\bar{z}-z_{i})^{4}}\,\sum_{k=1}^{\infty}\bar{q}_{k}^{(i)}\exp\!\left(\frac{2\pi\sqrt{-1}k}{|\delta_{i}|(\bar{z}-z_{i})}\right)&(i=n_{e}+1,\ldots,n-1),\ z\to z_{i},\\ &(\mathrm{Im}\,z)^{2}\,\sum_{k=1}^{\infty}\bar{q}_{k}^{(n)}\exp\!\left(-\frac{2\pi\sqrt{-1}k\bar{z}}{|\delta_{n}|}\right)\!,&z\to\infty,\end{aligned}\right.\] (C.24) where we have used the fact that \(\bar{z}_{n_{e}+j}=z_{n_{e}+j}\in\mathbb{R}\) for \(j=1,\ldots,n_{p}-1\).
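One can verify directly that \(\mu(z)=\rho(z)^{-1}\,\overline{q(z)}\) is indeed an automorphic form of weight \((-2,2)\): using \(q(\gamma z)\gamma^{\prime}(z)^{2}=q(z)\) together with the \(\mathrm{PSL}(2,\mathbb{R})\)-invariance of the Poincare metric, \(\rho(\gamma z)|\gamma^{\prime}(z)|^{2}=\rho(z)\), one finds \[\mu(\gamma z)=\rho(\gamma z)^{-1}\,\overline{q(\gamma z)}=\rho(z)^{-1}|\gamma^{\prime}(z)|^{2}\,\overline{q(z)}\,\overline{\gamma^{\prime}(z)}^{-2}=\mu(z)\,\gamma^{\prime}(z)\,\overline{\gamma^{\prime}(z)}^{-1},\] i.e. \(\mu(\gamma z)\gamma^{\prime}(z)^{-1}\overline{\gamma^{\prime}(z)}=\mu(z)\), which is precisely the transformation rule for elements of \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\).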
The mapping \[q(z)\mapsto Q(w)=(q\circ J^{-1})(w)\,\left(J^{-1}(w)^{\prime}\right)^{2},\] (C.25) determines a linear isomorphism of the spaces \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) and \(\mathcal{H}^{2,0}(O)\) and the inverse mapping is given by \[Q(w)\mapsto q(z)=(Q\circ J)(z)\,J^{\prime}(z)^{2}.\] (C.26) The Petersson inner product in \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) can be carried over to \(\mathcal{H}^{2,0}(O)\) by setting \(\langle Q_{1},Q_{2}\rangle\stackrel{{\text{def.}}}{{=}}\langle(Q_{1}\circ J)J^{\prime 2},(Q_{2}\circ J)J^{\prime 2}\rangle\) for all \(Q_{1},Q_{2}\in\mathcal{H}^{2,0}(O)\), so that \[\langle Q_{1},Q_{2}\rangle=\iint_{\mathbb{C}}e^{-\varphi(w)}Q_{1}(w)\,\overline{Q_{2}(w)}\,\mathrm{d}^{2}w\,.\] (C.27) It follows from Eq. (C.23) and Eq. (C.14) that \[Q(w)=\sum_{k=1}^{\infty}q_{k}^{(n)}\exp\Biggl{(}\frac{2\pi\sqrt{-1}k}{|\delta_{n}|}\times\frac{|\delta_{n}|}{2\pi\sqrt{-1}}\left(\log\Biggl{(}\frac{J_{-1}^{(n)}}{w}\Biggr{)}+\frac{c_{n}}{w}+\cdots\right)\Biggr{)}\times\left(\frac{|\delta_{n}|}{2\pi\sqrt{-1}}\left(-\frac{1}{w}-\frac{c_{n}}{w^{2}}+\cdots\right)\right)^{2}\] \[\simeq\left(-\frac{|\delta_{n}|^{2}}{4\pi^{2}}\left(\frac{1}{w^{2}}+\frac{2c_{n}}{w^{3}}+\frac{c_{n}^{2}}{w^{4}}+\cdots\right)\right)\sum_{k=1}^{\infty}q_{k}^{(n)}\left(\frac{J_{-1}^{(n)}}{w}\right)^{k}\] \[=-\frac{|\delta_{n}|^{2}q_{1}^{(n)}J_{-1}^{(n)}}{4\pi^{2}}\cdot\frac{1}{w^{3}}-\frac{|\delta_{n}|^{2}J_{-1}^{(n)}\bigl{(}2c_{n}q_{1}^{(n)}+q_{2}^{(n)}J_{-1}^{(n)}\bigr{)}}{4\pi^{2}}\cdot\frac{1}{w^{4}}+\mathcal{O}\bigl{(}|w|^{-5}\bigr{)}\ \ \text{as}\ \ w\to\infty,\] and \[Q(w)=\left(-\frac{2\pi\sqrt{-1}}{|\delta_{i}|}\left(\log\!\left(\frac{w-w_{i}}{J_{1}^{(i)}}\right)-c_{i}(w-w_{i})+\cdots\right)^{-1}\right)^{-4}\times\] \[\times\sum_{k=1}^{\infty}q_{k}^{(i)}\exp\Biggl{(}-\frac{2\pi\sqrt{-1}k}{|\delta_{i}|}\times\left(-\frac{|\delta_{i}|}{2\pi\sqrt{-1}}\right)\left(\log\!\left(\frac{w-w_{i}}{J_{1}^{(i)}}\right)-c_{i}(w-w_{i})+\cdots\right)\Biggr{)}\times\] \[\times\left(-\frac{2\pi\sqrt{-1}}{|\delta_{i}|}\cdot\frac{-1}{(w-w_{i})\log^{2}\!\left(\frac{w-w_{i}}{J_{1}^{(i)}}\right)}+\cdots\right)^{2}\] \[=-\frac{|\delta_{i}|^{2}q_{1}^{(i)}}{4\pi^{2}J_{1}^{(i)}}\cdot\frac{1}{w-w_{i}}+\mathcal{O}\bigl{(}|w-w_{i}|^{-2}\bigr{)},\ \ \ \ (i=n_{e}+1,\ldots,n-1)\ \text{as}\ w\to w_{i}.\] Similarly, near branch points \(w_{1},\ldots,w_{n_{e}}\) we have \[Q(w)=(q\circ\tilde{J}^{-1})(w)\left(\tilde{J}^{-1}(w)^{\prime}\right)^{2},\] (C.28) where \(\tilde{J}^{-1}:O\to\mathbb{D}\) is the inverse of Klein's Hauptmodul in the unit disk model of the hyperbolic plane.
Using equations (C.12) and (C.21), we get \[Q(w)=\sum_{k=1}^{\infty}q_{k}^{(i)}\left(\left(\frac{1}{J_{1}^{(i)}}\right)^{\frac{1}{m_{i}}}(w-w_{i})^{\frac{1}{m_{i}}}-\frac{J_{2}^{(i)}}{m_{i}\left(J_{1}^{(i)}\right)^{2+\frac{1}{m_{i}}}}(w-w_{i})^{1+\frac{1}{m_{i}}}+\cdots\right)^{km_{i}-2}\times\] \[\times\left(m_{i}^{-1}\left(\frac{1}{J_{1}^{(i)}}\right)^{\frac{1}{m_{i}}}(w-w_{i})^{-1+\frac{1}{m_{i}}}-\Bigl(1+\frac{1}{m_{i}}\Bigr)\frac{J_{2}^{(i)}}{m_{i}\left(J_{1}^{(i)}\right)^{2+\frac{1}{m_{i}}}}(w-w_{i})^{\frac{1}{m_{i}}}+\cdots\right)^{2}\] \[=\frac{q_{1}^{(i)}}{m_{i}^{2}J_{1}^{(i)}}\cdot\frac{1}{w-w_{i}}+\mathcal{O}(1)\qquad(i=1,\ldots,n_{e}),\ \text{as}\ \ w\to w_{i}.\] We can summarize the behavior of \(Q(w)\in\mathcal{H}^{2,0}(O)\) near conical singularities and punctures as follows \[Q(w)=\left\{\begin{aligned} &\frac{q_{1}^{(i)}}{m_{i}^{2}J_{1}^{(i)}}\cdot\frac{1}{w-w_{i}}+\mathcal{O}(1)&&(i=1,\ldots,n_{e}),\quad w\to w_{i},\\ &-\frac{|\delta_{i}|^{2}q_{1}^{(i)}}{4\pi^{2}J_{1}^{(i)}}\cdot\frac{1}{w-w_{i}}+\mathcal{O}\bigl(|w-w_{i}|^{-2}\bigr),&&(i=n_{e}+1,\ldots,n-1),\quad w\to w_{i},\\ &-\frac{|\delta_{n}|^{2}q_{1}^{(n)}J_{-1}^{(n)}}{4\pi^{2}}\cdot\frac{1}{w^{3}}+\mathcal{O}\bigl(|w|^{-4}\bigr)&&w\to\infty.\end{aligned}\right.\] (C.29) Finally, using the complex anti-linear isomorphism \(\mathcal{H}^{2,0}(O)\cong\mathcal{H}^{-1,1}(O)\) given by \(Q(w)\mapsto M(w)=e^{-\varphi(w)}\,\overline{Q(w)}\), we have \[M(w)=\left\{\begin{aligned} &\frac{\bar{q}_{1}^{(i)}}{4\bar{J}_{1}^{(i)}}\cdot|w-w_{i}|^{1-\frac{2}{m_{i}}}+\mathcal{O}(|w-w_{i}|)&&(i=1,\ldots,n_{e}),\quad w\to w_{i},\\ &-\frac{|\delta_{i}|^{2}\bar{q}_{1}^{(i)}}{4\pi^{2}\bar{J}_{1}^{(i)}}\cdot|w-w_{i}|\log^{2}|w-w_{i}|+\mathcal{O}\bigl(\log^{2}|w-w_{i}|\bigr)\\ &\qquad\qquad(i=n_{e}+1,\ldots,n-1),\quad w\to w_{i},\\ &-\frac{|\delta_{n}|^{2}\bar{q}_{1}^{(n)}\bar{J}_{-1}^{(n)}}{4\pi^{2}}\cdot\frac{\log^{2}|w|}{|w|}+\mathcal{O}\bigl(|w|^{-2}\log^{2}|w|\bigr)&&w\to\infty.\end{aligned}\right.\] (C.30) Using the definition (3.59) as well as equations (C.14) and (C.24), one can independently check the above asymptotic behaviors for \(M(w)\). List of symbols in the main text \begin{tabular}{|l|l|} \hline \(g\) & Genus of a corresponding surface, or rank of a Schottky group \\ \(n_{e}\) & Number of elliptic fixed points or branch points \\ \(n_{p}\) & Number of parabolic fixed points or punctures \\ \(n\) & Number of all marked points, namely \(n_{e}+n_{p}\) \\ \(m_{i}\) & Orders of the marked points \\ \(\mathbf{m}\) & \(n\)-tuple of all branching indices for an orbifold, i.e. \((m_{1},\ldots,m_{n})\) \\ \(h_{i}/2\) & Conformal weights corresponding to the marked points, i.e.
\(1-\frac{1}{m_{i}^{2}}\) \\ \(\hat{\mathbb{N}}^{>1}\) & Natural numbers with the inclusion of \(\infty\) and omission of \(1\) \\ D & Unit disk in \(\mathbb{C}\) \\ \(X\) & Riemann surface without singularities \\ \(O\) & Orbifold Riemann surface \\ \(O^{\mu}\) & Orbifold Riemann surface deformed by \(\mu\) \\ \(X_{O}\) & Underlying Riemann surface of \(O\) \\ \(X_{O}^{\rm reg}\) & Regular locus of \(O\) \\ \(\hat{X}_{O}\) & Compactified underlying Riemann surface \\ \(K\) & Kleinian group \\ \(\Gamma\) & Fuchsian group \\ \(\Gamma^{\mu}\) & Fuchsian group deformed by \(\mu\) \\ \(\Sigma\) & Schottky group \\ \(N\) & Smallest normal subgroup of \(\Gamma\) containing \(\{\alpha_{1},\ldots,\alpha_{g},\tau_{1},\ldots,\tau_{n_{e}},\kappa_{1},\ldots, \kappa_{n_{p}}\}\) \\ \(\pi_{1}(O,x_{*})\) & Fundamental group of an orbifold based at \(x_{*}\) \\ \(\mathbb{Z}_{m}\) & Cyclic group of order \(m\) \\ \({\rm Aut}_{*}(\Gamma)\) & Group of proper automorphisms of \(\Gamma\) \\ \({\rm Inn}(\Gamma)\) & Group of inner automorphisms \\ \({\rm MCG}(O)\) & Mapping class group, namely \({\rm Homeo}^{+}(O)/\,{\rm Homeo}^{+}_{\rm id}(O)\) \\ \({\rm Mod}(\Gamma)\) & Teichmuller modular group of \(\Gamma\), i.e. \({\rm Aut}_{*}(\Gamma)/\,{\rm Inn}(\Gamma)={\rm Out}^{+}(\Gamma)\) \\ \({\rm Homeo}^{+}(O)\) & Group of orientation preserving homeomorphisms of \(O\) in the category of orbifolds, which has \({\rm Homeo}^{+}_{\rm id}(O)\) as its identity component \\ \({\rm Out}^{+}(\Gamma)\) & Group of outer automorphisms of \(\Gamma\simeq\pi_{1}(O)\) \\ \({\rm MCG}_{0}(O)\) & Group of pure mapping classes of \(O\) \\ \({\rm Mod}_{0}(\Gamma)\) & Group of pure mapping classes of \(\Gamma\) \\ \({\rm Symm}\,({\mathfrak{s}}_{i})\) & Symmetric group associated with the stratum of order \(m_{i}\) \\ \({\rm Symm}\,({\mathfrak{s}})\) & Product symmetric group \({\rm Symm}\,({\mathfrak{s}}_{2})\times{\rm Symm}\,({\mathfrak{s}}_{3})\times \cdots\times{\rm Symm}\,({\mathfrak{s}}_{\infty})\) \\ deck & Group of covering or deck transformations \\ \(H_{1}\) & First homology group \\ \(\Omega\) & Region of discontinuity of Schottky group \(\Sigma\) \\ \(\Omega_{0}\) & Region of discontinuity with pre-images of cusps subtracted \\ \(\hat{\bar{\Omega}}\) & \((\Omega_{0},\widetilde{\mathscr{D}})\) \\ \(\Omega^{\rm reg}\) & \(\Omega_{0}\backslash{\rm Supp}(\widetilde{\mathscr{D}})\) \\ \hline \end{tabular} \begin{tabular}{|l|l|} \(\Lambda\) & Limit set of a Kleinian group, i.e. 
\(\hat{\mathbb{C}}\backslash\Omega\) \\ \hline \(\mathcal{F}\) & Fundamental domain of a Fuchsian group \\ \(\mathcal{D}\) & Fundamental domain of a Schottky group \\ \(\mathcal{D}_{0}\) & \(\mathcal{D}\cap\Omega_{0}\) \\ \(\hat{\mathcal{D}}\) & \((\mathcal{D}_{0},\widetilde{\mathscr{D}}\big|_{\mathcal{D}})\) \\ \(\hat{\mathcal{D}}_{\epsilon}\) & Regularized singular fundamental domain of a Schottky group defined by \(\hat{\mathcal{D}}\backslash\bigcup_{i=1}^{n}D_{i}^{\epsilon}\) \\ \(\gamma\) & An element of a Fuchsian group \\ \(\sigma\) & An element of a Schottky group \\ \(\eta\) & An element of a symmetric group \\ \(\mathsf{A}_{i}\) & Homotopy class of loops around a handle \\ \(\mathsf{B}_{i}\) & Homotopy class of loops around a hole \\ \(\mathsf{C}_{i}\) & Homotopy class of loops around a branch point \\ \(\mathsf{P}_{i}\) & Homotopy class of loops around a puncture \\ \(\alpha_{i},\beta_{i}\) & Hyperbolic generators of a Fuchsian group \\ \(\tau_{i}\) & Elliptic generator of a Fuchsian group \\ \(\kappa_{i}\) & Parabolic generator of a Fuchsian group \\ \(L_{i}\) & Generator of a Schottky group \\ \(\tilde{a}_{i},\tilde{b}_{i}\) & Attracting and repelling fixed points of the loxodromic element \(L_{i}\) \\ \(\lambda_{i}\) & Multiplier of the loxodromic element \(L_{i}\) \\ \(\mathsf{a}_{i}\) & A retrosection of a Riemann surface \\ \(f_{\eta}\) & 1-cocycle associated with an element \(\eta\in\operatorname{Symm}\left(\mathsf{s}\right)\) \\ \(\operatorname{Sing}(O)\) & Set of all singular points for an orbifold \(O\) \\ \(\operatorname{Sing}_{m}(O)\) & Set of singular points of order \(m\) for an orbifold \(O\), i.e. \(\nu^{-1}(m)\) \\ \(\operatorname{Sing}_{{}_{\lambda}}(O)\) & Set of singular points of finite order for an orbifold \(O\), i.e. \(\bigsqcup_{m\neq\infty}\operatorname{Sing}_{m}(O)\) \\ \(\mathsf{s}_{m}\) & Cardinality of the stratum of singular points of order \(m\in\hat{\mathbb{N}}^{>1}\) \\ \(\mathsf{s}\) & Signature type of an orbifold \(O\), which is the unordered set of cardinalities of all strata, i.e.
\(\left\{\mathsf{s}_{m}\right\}_{m\in\hat{\mathbb{N}}^{>1}}\) \\ \(\nu\) & Branching function which assigns to each singular point its corresponding branching order \\ \(x_{i}\) & Marked points on the Riemannian orbifold \\ \(z_{i}^{\epsilon},z_{i}^{p}\) & Elliptic and Parabolic fixed points of a Fuchsian group, respectively \\ \(x_{i}^{\epsilon},z_{i}^{p}\) & Images of the elliptic and Parabolic fixed points of a Fuchsian group, under the projection \(\mathbb{H}\to O=[\mathbb{H}/\Gamma]\), respectively \\ \(z\) & Coordinates on the upper half-plane \\ \(w\) & Global coordinates on \(\Omega\) \\ \(w_{i}\) & Singular points on an orbifold, namely branch points for \(i=1,\cdots,n_{e}\), and cusps \\ & for \(i=n_{e}+1,\cdots,n\) \\ \(t\) & Bers coordinates \\ \(\varepsilon\) & Small complex parameter for variation \\ \(\epsilon\) & Small real parameter for regularization \\ \hline \end{tabular} \begin{tabular}{|l|l|} \hline \(\mathscr{C}\mathscr{M}_{\mathbf{\alpha}}(X)\) & Space of all smooth conformal metrics \(e^{\psi(u,\bar{u})}|du|^{2}\) on \(X\backslash\{x_{1},\ldots,x_{n}\}\) which have \\ & conical singularities of angles \(2\pi(1-\alpha_{i})\) at the insertion points \\ \(\mathscr{C}\mathscr{M}(O)\) & Space of singular conformal metrics on \(X_{O}\) representing \(\mathscr{D}\) \\ \(\mathscr{D}\) & Branch divisor \\ \(\widetilde{\mathscr{D}}\) & Branch divisor corresponding to the lift of the original \(\mathscr{D}\) under the Schottky group \\ \(U_{a}\) & Open subset in \(X_{O}\) \\ \(u_{a}\) & Coordinate function on an orbifold chart \\ \(u_{\mathbb{D}}\) & Coordinate on unit disk \\ \(g_{ab}\) & Transition function between two orbifold coordinate charts, \(U_{a}\) and \(U_{b}\) \\ \(\mathcal{C}^{\infty}\) & Smooth functions \\ \(\varphi,\psi\) & (Classical) Liouville field \\ \(\mathscr{S}_{\mathbf{m}}\) & Liouville action functional for non-zero genus \\ \(S_{\mathbf{m}}\) & Liouville action functional for zero genus \\ \(\mathbb{F}_{O}\) & Free energy defined through the action functional on an orbifold \\ \(\Delta_{0}\) & Laplace operator \\ \(\pi_{\Gamma}\) & Orbifold covering map between \(\mathbb{H}\) and an orbifold \(O\) that provides it with the Fuchsian global coordinates \\ \(\pi_{\Sigma}\) & Orbifold covering map between \(\Omega\) or \(\mathring{\Omega}\) and an orbifold \(O\) which restricted to \(\Omega^{\text{reg}}\) gives the Schottky global coordinates \\ \(J\) & Depending on the context, the orbifold covering map between \(\mathbb{H}\) and \(\Omega\) or \(\mathring{\Omega}\), namely Klein's Hauptmodul, or orbifold covering map between \(\mathbb{H}\) and \(O\) \\ j & The epimorphism \(\Gamma\to\Sigma\) which maps \(\beta_{i}\)s to \(L_{i}\)s and all \(\alpha_{i}\)s, \(\kappa_{j}\)s, and \(\tau_{k}\)s to \(\mathbb{1}\) \\ \(J_{\mu}\) & Orbifold covering map between \(\mathbb{H}\) and an underlying Riemann surface \(O^{\mu}\) \\ \(J_{k}^{(i)}\) & The \(k^{th}\) order coefficient of \(J\)'s expansion around the \(i^{th}\) marked point \\ h\({}_{i}\) & Smooth functions on \(\mathcal{M}_{0,n}\) defined by \(\left|J_{1}^{(i)}\right|^{\frac{2}{m_{i}}}\) for \(i=1,\ldots,n_{e}\), \(\left|J_{1}^{(i)}\right|^{2}\) for \(i=n_{e}+1,\ldots,n-1\), and \(\left|J_{-1}^{(n)}\right|^{2}\) for \(i=n\) \\ \(\varLambda\) & Complex anti-linear mapping between \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) and \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) defined through the Bergman integral \\ \(\mathfrak{S}_{g}\) & Schottky space of genus \(g\), i.e. 
the set of equivalence classes of representations \([\varrho_{\mu}]:\Sigma\to\text{PSL}(2,\mathbb{C})\), \(\mu\in\mathfrak{D}(\Sigma)\). \\ \(\mathfrak{S}_{g,n}(\mathbf{m})\) & Generalized Schottky space of genus \(g\) and \(n\) marked points, corresponding to \(\mathbf{m}=(m_{1},\ldots,m_{n})\) \\ \(\mathcal{T}_{g,n}(\mathbf{m})\) & Teichmuller space of marked Riemann orbisurfaces of genus \(g>1\), corresponding to \(\mathbf{m}=(m_{1},\ldots,m_{n})\) \\ \(\mathcal{T}(\Gamma)\) & Teichmuller space defined as the set of equivalence classes of representations \([\varrho_{\mu}]:\Gamma\to\text{PSL}(2,\mathbb{R})\), \(\mu\in\mathfrak{D}(\Gamma)\) \\ \(\mathscr{P}_{g,n}(\mathbf{m})\) & Affine bundle \(\mathscr{P}_{g,n}(\mathbf{m})\to\mathcal{T}_{g,n}(\mathbf{m})\), that the Fuchsian projective connection \(\text{Sch}(\pi_{\Gamma}^{-1})\) gives a canonical section for it or \(\mathscr{P}_{g,n}(\mathbf{m})\to\mathfrak{S}_{g,n}(\mathbf{m})\) which has the Schottky projective connection \(\text{Sch}(\pi_{\Sigma}^{-1})\) as a canonical section \\ \hline \end{tabular} \begin{tabular}{|l|l|} \hline \(\mathfrak{M}_{g,n}(\mathbf{m})\) & Moduli space of Riemannian orbifold with signature \((g;m_{1},\ldots,m_{n_{e}};n_{p})\) that is \\ & isomorphic to \(\mathcal{T}(\Gamma)/\operatorname{Mod}(\Gamma)\) \\ \(\mathcal{M}_{g,n}\) & Moduli space of smooth complex algebraic curves of genus \(g\) with \(n\) labeled points, i.e. \(\mathcal{T}(\Gamma)/\operatorname{Mod}_{0}(\Gamma)\) \\ \(R\) & Projective connection \\ \(r_{a}\) & Elements of a projective connection \\ \(\operatorname{Sch}\left(f;z\right)\) & Schwartzian derivative of \(f\) with respect to \(z\) \\ \(\mathcal{L}_{\mu},\mathcal{L}_{\bar{\mu}}\) & Lie derivative in holomorphic and anti-holomorphic tangential directions, \(\mu\) and \(\bar{\mu}\) \\ \(Q\) & Quadratic differential \\ \(q_{a}\) & Holomorphic functions constructing a quadratic differential \\ \(\mu,\mu^{\prime}\) & Beltrami differentials for a Fuchsian group \\ \(M\) & Functions defined by \(\left(\mu\circ J^{-1}\right)\frac{\overline{(J^{-1})^{\prime}}}{(J^{-1})^{ \prime}}\), which construct the analogue of the Beltrami equation for \(F^{\mu}\) \\ \(\mu_{i}\) & Basis element for harmonic differentials in \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) \\ \(M_{i}\) & Basis element for harmonic differentials in \(\mathcal{H}^{-1,1}(O)\) \\ \(q_{i}\) & Basis element for quadratic differentials in \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) \\ \(R_{i}\) & Linearly independent elements that generate the space \(\mathcal{H}^{2,0}(O)\) \\ \(Q_{i}\) & Basis element for quadratic differentials in \(\mathcal{H}^{2,0}(O)\) biorthogonal to \(R_{i}\) \\ \(P_{i}\) & Basis element for quadratic differentials in \(\mathcal{H}^{2,0}(\hat{\Omega},\Sigma)\) \\ \(\mathcal{E}_{i}\) & Terms multiplying to conformal weights in the energy-momentum tensor expansion for zero genus \\ \(\mathscr{E}_{i}\) & Terms multiplying to conformal weights in the energy-momentum tensor expansion for non-zero genus \\ \(v_{i}\) & Normalized basis of the space of holomorphic 1-forms - abelian differentials of the first kind \\ \(\phi\) & An element of \(\mathcal{A}^{k,\ell}(\mathbb{H},\Gamma)\) \\ \(\mathsf{R}(w)\) & Projection of the automorphic form \(\operatorname{Sch}\left(J^{-1};w\right)\) of weight four for the Schottky \\ & group to the subspace \(\mathcal{H}^{2,0}(\hat{\Omega},\Sigma)\cong T^{*}_{\pi\circ\Phi(0)}\mathfrak{S} _{g,n}(\mathbf{m})\) \\ b\({}_{i}\) & Coefficients of the projection \(\mathsf{R}(w)\) \\ 
Proj\({}_{\mathcal{H}^{k,\ell}}\) & Projection operator onto \(\mathcal{H}^{k,\ell}(\mathbb{H},\Gamma)\) \\ \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) & Complex Banach space of the Beltrami differentials for \(\Gamma\) \\ \(\mathcal{A}^{-1,1}(\Omega,\Sigma)\) & Complex Banach space of the Beltrami differentials for \(\Sigma\) \\ \(\mathfrak{D}(\Gamma)\) & Open ball of radius 1 in \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) \\ \(\mathfrak{D}(\Sigma)\) & Open ball of radius 1 in \(\mathcal{A}^{-1,1}(\Omega,\Sigma)\) \\ \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) & Space of cusp forms of weight 4 for the group \(\Gamma\)-equivalently, meromorphic (2,0)-tensors/quadratic differentials on the Riemann orbisurface \(O\) \\ \(\mathcal{H}^{2,0}(\hat{\Omega},\Sigma)\) & Meromorphic quadratic differential for Schottky group \(\Sigma\) \\ \(\mathcal{H}^{k,0}(\mathbb{H},\Gamma)\) & cusp forms of weight \(2k\) for \(\Gamma\) \\ \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) & Harmonic Beltrami differentials, that are a subspace \(\Lambda^{*}\left(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\right)\) of \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) \\ \hline \end{tabular} \begin{tabular}{|l|l|} \hline \({\cal H}^{-1,1}(\hat{\Omega},\Sigma)\) & Space of harmonic Beltrami differentials with respect to the hyperbolic metric \\ & on \(\hat{\Omega}\) \\ \({\cal H}^{2,0}(O)\) & Space of quadratic differentials on an orbifold \\ \({\cal H}^{-1,1}(O)\) & Space of harmonic differentials on an orbifold \\ \({\cal A}^{k,\ell}(\mathbb{H},\Gamma)\) & Smooth family of automorphic forms of weight \((2k,2\ell)\), or \((k,\ell)\)-tensors on an orbifold \(O\) \\ \(\Phi\) & Mapping from \({\mathfrak{D}}(\Gamma)\) to \({\cal T}_{g,n}(\mathbf{m})\) \\ \(\Psi\) & Mapping from \({\cal T}_{0,n}(\mathbf{m})\) to \(\mathbb{C}^{n-3}\), defined by \((\Psi\circ\Phi)(\mu)=(w_{1}^{\mu},\ldots,w_{n-3}^{\mu})\in\mathbb{C}^{n-3}\) \\ \(\pi\) & Mapping from \({\cal T}_{g,n}(\mathbf{m})\) to \({\mathfrak{S}}_{g,n}(\mathbf{m})\) \\ \({\cal N}(\mathbb{H},\Gamma)\) & Kernel of the differential \({\rm d}\Phi\) at the point \(0\in{\mathfrak{D}}(\Gamma)\) \\ \({\cal P}(O)\) & Space of all \(\mathbb{CP}^{1}\)-structures on a complex orbifold \(O\) \\ \(T_{\varphi},\bar{T}_{\varphi}\) & \((2,0)\) and \((0,2)\) components of the classical energy-momentum tensor on an orbifold \(O\) \\ \(T_{a},\bar{T}_{a}\) & Components of the classical energy-momentum tensor on each chart \(U_{a}\) \\ \({\cal Q}\) & Difference between the Fuchsian and Schottky projective connections, i.e. \({\rm Sch}(\pi_{\Gamma}^{-1})-{\rm Sch}(\pi_{\Sigma}^{-1})\) \\ \(\omega_{{}_{\rm WP}}\) & Symplectic form of the Weil-Petersson metric \\ \(\omega_{{}_{\rm TZ},i}^{\rm cusp}\) & Symplectic form of \(i^{th}\)-cuspidal Takhtajan-Zograf metric \\ \(\omega_{{}_{\rm TZ},i}^{\rm ell}\) & Symplectic form of \(i^{th}\)-elliptic Takhtajan-Zograf metric \\ \(\langle\cdot,\cdot\rangle_{{}_{\rm WP}}\) & Petersson inner product on \(T_{[\mu]}{\cal T}_{g,n}(\mathbf{m})\cong{\cal H}^{-1,1}(\mathbb{H},\Gamma^{\mu})\) \\ \(\langle\cdot,\cdot\rangle_{{}_{\rm cusp}}^{\rm cusp}\) & \(i^{th}\)-cuspidal Takhtajan-Zograf (TZ) inner product \\ \(\langle\cdot,\cdot\rangle_{{}_{\rm cusp}}^{\rm cusp}\) & The invariant inner product under \({\rm Mod}(\Gamma)\), i.e. 
\(\langle\cdot,\cdot\rangle^{\rm cusp}=\langle\cdot,\cdot\rangle_{1}^{\rm cusp }+\cdots+\langle\cdot,\cdot\rangle_{n_{p}}^{\rm cusp}\) \\ \(\langle\cdot,\cdot\rangle_{i}^{\rm ell}\) & \(i^{th}\)-elliptic Takhtajan-Zograf (TZ) inner product \\ \(\left(||\cdot||_{1}^{\rm Quill}\right)^{2}\) & Quillen metric on \(\lambda_{1}\) \\ \(||\cdot||_{k}^{\rm Quill}\) & Quillen norm on \(\lambda_{k}\) \\ \(\langle\cdot,\cdot\rangle_{{}_{Sch}}\) & Hermitian metric on \(\lambda_{Sch}\) \\ \(\lambda_{H}\) & Hodge line bundle over \({\mathfrak{M}}_{g,n}\) \\ \(\lambda_{0,n}(\mathfrak{s})\) & Holomorphic line bundle over \({\mathfrak{M}}_{0,n}\) \\ \(\lambda_{Sch}\) & Holomorphic \(\mathbb{Q}\)-line bundle over \({\mathfrak{M}}_{g,n}\) \\ \(\lambda_{1}\) & Hodge line bundle, i.e. the determinant line bundle associated with the Cauchy-Riemann operator \(\bar{\partial}_{1}\) \\ \(\lambda_{k}\) & Determinant line bundle associated with the Cauchy-Riemann operator \(\bar{\partial}_{k}\) \\ \(\lambda_{{}_{Hod}}\) & Hodge line bundle \((\lambda_{{}_{Hod}};\langle\cdot,\cdot\rangle_{{}_{Quil}})\) \\ \(f^{\mu}\) & Solution of Beltrami equation corresponding to a Beltrami differential \(\mu\) \\ \(F^{\mu}\) & Quasi-conformal mapping that satisfies \(F^{\mu}\circ J=J_{\mu}\circ f^{\mu}\) \\ \(\dot{f}^{\mu}_{+},\dot{f}^{\mu}_{-}\) & The derivatives \(\left.\left(\frac{\partial}{\partial\varepsilon}f^{\varepsilon\mu}\right) \right|_{\varepsilon=0}\), \(\left.\left(\frac{\partial}{\partial\bar{\varepsilon}}f^{\varepsilon\mu}\right) \right|_{\varepsilon=0}\), respectively \\ hol & Holonomy representation \\ \(\varrho_{\mu i}\) & Representations of Fuchsian or Schottky group corresponding to a Beltrami differential \(\mu_{i}\) \\ \(\rho(z)\) & Density of a hyperbolic metric on \(\mathbb{H}\) \\ \(E_{i}(z,s)\) & Eisenstein-Mass series associated with the cusp \(z_{n_{e}+i}\) \\ \hline \end{tabular} \begin{tabular}{|l|l|} \hline H & Hermitian metric defined on the holomorphic line bundle \(\lambda_{0,n}(\mathbf{s})\) defined by \(\mathsf{h}_{1}^{m_{1}h_{1}}\cdots\mathsf{h}_{n_{e}}^{m_{n_{e}}h_{n_{e}}}\mathsf{ h}_{n_{e}+l}\cdots\mathsf{h}_{n-1}\mathsf{h}_{n}^{-1}\) for zero genus and, \(\mathsf{h}_{1}^{m_{1}h_{1}}\cdots\mathsf{h}_{n_{e}}^{m_{n_{e}}h_{n_{e}}} \mathsf{h}_{n_{e}+1}\cdots\mathsf{h}_{n}\) for non-zero genus \\ \(c_{i}\) & Accessory parameters \\ \(C_{i}^{\epsilon}\) & Regularization circles around branch points and punctures \\ \(C_{i},C_{i}^{\prime}\) & Jordan curves constructing the boundary of the Schottky fundamental domain \\ \(\mathsf{D}_{i}^{\epsilon}\) & Disks of radius \(\epsilon\) around branch points and punctures \\ \(O_{\epsilon}\) & The region outside the regularizing disks, i.e. 
\(\mathbb{C}\backslash\bigcup_{i=1}^{n-1}\left\{w\,\Big{|}\,|w-w_{i}|<\epsilon \right\}\cup\left\{w\,\Big{|}\,|w|>\epsilon^{-1}\right\}\) \\ \(\jmath\) & Holomorphic fibration between \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) and \(\mathfrak{S}_{g}\) whose fibers are configuration spaces of \(n\) labeled points \\ \(\mathscr{F}_{n}\left(\hat{\mathbb{C}}\right)\) & Configuration space of complex \(n\)-tuples in \(\hat{\mathbb{C}}\) \\ \(\mathscr{L}_{i}\) & \(i^{th}\) relative dualizing sheaf on \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) or the \(i^{th}\) tautological line bundle \\ \(\mathscr{L}\) & \(\mathbb{Q}\)-line bundle defined by \(\bigotimes_{i=1}^{n}\mathscr{L}_{i}^{\mathsf{h}_{i}}\) \\ \(\mathsf{c}_{1}\) & First Chern class \\ \(c_{1}(E,\nabla)\) & First Chern form of a vector bundle \(E\) \\ \(\theta_{L_{k}^{-1}}\) & Boundary 1-form of Takhtajan-Zograf action \\ \(l_{k}\) & Left-hand lower element in the matrix representation of the generator \(L_{k}\in\mathrm{PSL}(2,\mathbb{C})\) for \(k=2,\ldots,g\) \\ \(\mathrm{deg}(\mathcal{D})\) & Degree of a divisor \\ \([\cdot/\cdot]\) & Quotient as an analytic orbifold/stack \\ \(\boldsymbol{\tau}\) & Period matrix \\ \(\det^{\prime}\Delta_{0}\) & Zeta-function regularized determinant of the Laplace operator in the hyperbolic \\ & metric \(\exp(\varphi)|dw|^{2}\) acting on functions \\ \(\mathfrak{F}_{g}\) & The function from \(\mathfrak{S}_{g}\) to \(\mathbb{C}\) given by \(\mathfrak{F}_{\mathfrak{g}}=\prod_{\{\gamma\}}\prod_{k=0}^{\infty}\left(1- \ q_{\gamma}^{1+k}\right)\) \\ \(q_{\gamma}\) & Multiplier of \(\gamma\in\Gamma\) \\ \(\mathfrak{F}_{g,n}(\boldsymbol{m})\) & Generalization of \(\mathfrak{F}_{g}\) to \(\mathfrak{S}_{g,n}(\boldsymbol{m})\) \\ \(B_{2}(x)\) & Second Bernoulli polynomial \\ \(Z_{\mathrm{Sz}}(s,\Gamma,\mathfrak{U})\) & Selberg zeta function \\ \(Z^{1-\mathrm{loop}}_{\mathrm{gravity}}\) & One-loop partition function of 3-dimensional gravity \\ \(V_{\alpha_{i}},V_{m_{i}}\) & Liouville vertex operators with charges \(\alpha_{i}\) \\ \(\boldsymbol{T}(w)\) & Conformal energy momentum tensor \\ \(\boldsymbol{h}_{m_{i}}\) & Conformal dimensions of vertex operators \(V_{m_{i}}\) \\ \(h_{\mathrm{cl}}(m_{i})\) & Classical limit of \(\boldsymbol{h}_{m_{i}}\) \\ \hline \end{tabular} List of symbols in the appendices \begin{tabular}{|l|l|} \hline \(\mathfrak{n}\) & Dimension or rank of different objects \\ \(g\) & genus of an orbifold Riemann surface \\ \(n_{e}\) & Number of elliptic fixed points or branch points \\ \(n_{p}\) & Number of parabolic fixed points or punctures \\ \(n\) & Number of all marked points, namely \(n_{e}+n_{p}\) \\ \(m_{i}\) & Orders of the marked points \\ \([\cdot/\cdot]\) & Quotient as an analytic orbifold/stack \\ \(X\) & Analytic subvariety \\ \((|X|,\mathscr{O}_{X})\) & Ringed space or complex analytic space \\ \(|X|\) & Underlying topological space of an analytic space \(X\) \\ \(X_{\text{red}}\) & Reduction of an analytic space \\ \(E\) & Vector bundles of different kind \\ \(\bar{E_{1}}\) & Complex conjugate of a vector bundle \\ \(\mathsf{M}_{r\times r}(\mathbb{C})\) & Space of complex \(r\times r\) matrices \\ \(L\) & Complex line bundle \\ \(\mathscr{L}\) & Holomorphic line bundle \\ \(\mathcal{K}_{X}\) & Canonical line bundle, i.e. \(\mathfrak{n}\)-th exterior power \(\bigwedge^{\mathfrak{n}}T^{*}_{(1,0)}X\) \\ \(\mathcal{K}_{O}\) & Canonical probidivisor \\ \(\mathcal{K}_{X}^{-1}\) & Anticanonical line bundle, i.e. 
The dual or inverse line bundle of \(\mathcal{K}_{X}\) \\ \(\mathscr{L}(\mathcal{D})\) & Holomorphic line bundle corresponding to a divisor \\ \(O\) & Complex analytic orbifold \\ \(X_{O}\) & Underlying complex analytic space for an orbifold \\ \(X_{O}^{\text{reg}}\) & Orbifold regular locus or the principal stratum \\ \(\mathbb{X}\) & A complex model space \\ \(f_{1},\ldots,f_{k}\) & A system of local defining functions for an analytic subvariety \\ \(\text{Reg}(X)\) & Set of all regular points of a subvariety \(X\) \\ \(\text{Sing}(X)\) & Singular locus or the set of singular points of a subvariety \(X\), i.e. \(X\backslash\text{Reg}(X)\) \\ \(\Delta(\boldsymbol{\epsilon})\) & A small polydisc \\ \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\) & Ring of all holomorphic functions on \(\Delta(\boldsymbol{\epsilon})\) \\ \(\mathfrak{O}_{X}\) & Ring of holomorphic functions on a subvariety \(X\), i.e. \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}/\mathfrak{I}(X)\) \\ \(\mathfrak{O}_{U}\) & Ring of holomorphic functions in some open subset \(U\subset\mathbb{C}^{\mathfrak{n}}\) containing the origin \\ \(\mathfrak{O}_{U,0}\) & Ring of germs of holomorphic functions at the origin \\ \(\mathfrak{O}_{X,0}\) & Ring of germs of holomorphic functions on the subvariety \(X\), defined by \(\mathfrak{O}_{U,0}/\mathfrak{I}([X]_{0})\) \\ \(\mathfrak{I}(X)\) & Ideal of the subvariety \(X\), i.e. ideal of all functions in the ring \(\mathfrak{O}_{\Delta(\boldsymbol{\epsilon})}\) vanishing on \(X\) \\ \(\mathfrak{I}\) & Defining ideal of a subvariety \(X\), i.e. ideal formed by the set of defining functions of \(X\) \\ \(\sqrt{\mathfrak{I}}\) & Radical ideal of a subvariety \(X\), i.e. \(\{f:f^{k}\in\mathfrak{I}\ \text{for some}\ k\in\mathbb{N}\}\) \\ \hline \end{tabular} \begin{tabular}{|l|l|} \hline \(\mathfrak{I}([X]_{0})\) & Ideal canonically associated to a germ \([X]_{0}\) of an analytic subvariety at the origin, \\ & defined as the ideal of germs of all analytic functions vanishing on the subvariety \\ & \(X\) representing the germ \([X]_{0}\) \\ \(\mathfrak{m}_{X,x}\) & Maximal ideal of a ringed space \(X\) \\ \(\mathscr{J}\) & Coherent ideal used in the definition of the closed complex analytic subspace \\ \([X]_{0}\) & Germ of analytic subvariety \(X\) at \(0\) in \(\mathbb{C}^{\mathfrak{n}}\) \\ \([f]_{0}\) & Germ of holomorphic function \(f\) at the origin \\ \([X(\mathfrak{I})]_{0}\) & Locus of the ideal \(\mathfrak{I}\), i.e.
a germ of an analytic subvariety at the origin in \(\mathbb{C}^{\mathfrak{n}}\) \\ & canonically associated to an ideal \(\mathfrak{I}\subseteq\mathfrak{O}_{\mathbb{C}^{\mathfrak{n}},0}\) \\ \(\mathscr{O}_{\mathbb{C}^{\mathfrak{n}}},\mathscr{O}_{U}\) & Sheaf of germs of holomorphic functions of \(\mathfrak{n}\) complex variables and its restriction to \(U\subset\mathbb{C}^{\mathfrak{n}}\), respectively \\ \(\mathscr{I}(X)\) & Sheaf of ideals of the analytic subvariety \(X\) \\ \(\mathscr{O}_{X}\) & Sheaf of germs of holomorphic functions on the subvariety \(X\), i.e. \(\mathscr{O}_{U}/\mathscr{I}(X)\) \\ \(\mathscr{I}\) & Coherent sheaf of \(\mathscr{O}_{U}\)-ideals \\ \(\mathscr{E}_{M}\) & Sheaf of germs of \(\mathcal{C}^{\infty}\) complex functions on a complex manifold \(M\) \\ \(\mathscr{E}_{M}^{r}\) & Direct sum sheaf defined by \(\underbrace{\mathscr{E}_{M}\oplus\cdots\oplus\mathscr{E}_{M}}_{r}\) \\ \(\mathscr{E}(E)\) & Sheaf of germs of \(\mathcal{C}^{\infty}\)-sections of a vector bundle \(E\) \\ \(\mathscr{E}_{M}^{(k,l)}(E)\) & Sheaf of germs of smooth sections of \(\bigwedge^{k,l}T^{*}M\otimes E\) \\ \(\mathscr{E}_{M}^{*}\) & Multiplicative sheaf of invertible \(\mathcal{C}^{\infty}\) complex functions on a complex manifold \(M\) \\ \(\mathscr{O}_{X}(E)\) & Analytic sheaf on \(X\) of germs of holomorphic sections in \(E\) \\ \(\mathscr{O}_{X}^{*}\) & Sheaf of nowhere-vanishing holomorphic functions, i.e. sheaf of invertible elements in \(\mathscr{O}_{X}\) \\ \(\mathscr{M}_{X}\) & Sheaf of meromorphic functions on \(X\) \\ \(\mathscr{M}_{X}^{*}\) & Subsheaf of not-identically-zero meromorphic functions on \(X\) \\ \(\mathcal{F}_{O}\) & Orbisheaf \\ \(\tilde{\mathcal{F}}_{a}\) & Sheaves that construct an orbisheaf \(\mathcal{F}_{O}\) \\ \(\mathscr{O}_{O}\) & Structure orbisheaf of an orbifold \(O\) \\ \(\mathcal{F}_{X}\) & Sheaves on \(X_{O}\) coming from invariant local sections of orbisheaves \(\mathcal{F}_{O}\) \\ \(\Omega^{\mathfrak{n}}_{O}\) & Canonical orbisheaf of a complex orbifold \(O\) of complex dimension \(\mathfrak{n}\) \\ \((|f|,f^{*})\) & Morphism of \(\mathbb{C}\)-ringed spaces \\ \(g^{*}_{ab}\) & Gluing isomorphisms in an analytic atlas \\ \(g^{\alpha\beta}\) & Transition matrix of a vector bundle \(E\) \\ \(\psi_{\alpha}\) & Trivialization of a vector bundle \(E\) on an open set \(V_{\alpha}\) \\ \(\pi\) & The projection from \(E\) to the base space in the definition of a vector bundle \\ pr & The projection \(V_{\alpha}\times\mathbb{C}^{r}\to V_{\alpha}\) in the definition of a vector bundle \\ \(\delta\) & Connecting homomorphism for a holomorphic line bundle \(\mathscr{L}\) \\ \(\{u^{\alpha\beta}\}\) & System of transition functions for the holomorphic line bundle \(\mathscr{L}(\mathcal{D})\) \\ \(\varpi\) & Surjective analytic map invariant under \(\Gamma\) \\ \(|\varpi|\) & Quotient map sending \(y\) to its left orbit \(\Gamma(y)\) \\ \(\eta_{ab}\) & Change of charts for an orbifold \\ \hline \end{tabular} \begin{tabular}{|l|l|} \hline \(\Upsilon_{ab}\) & Monomorphism between \(\Gamma_{a}\) and \(\Gamma_{b}\) in orbifolds \\ \(f_{\rm orb}\) & Analytic orbifold automorphism \\ \(\varpi_{\rm orb}\) & Orbifold Galois covering \\ \(\pi_{\rm orb}\) & Orbifold projection map in the definition of orbibundles \\ \(\widehat{\eta}_{ab}\) & Holomorphic bundle isomorphism in the definition of orbibundles \\ \(\widehat{\Upsilon}_{a}\) & Monomorphism in the definition of orbibundles \\ \(f_{a}\) & Transition maps for a \((G,\mathbb{X})\)-structure \\ dev & Developing map \\
hol & Holonomy representation \\ \(\mathfrak{C}\) & Injective group homomorphism between \(\Gamma\) and \(G\) in a \((G,\mathbb{X})\)-structure \\ \(J\) & Klein's Hauptmodule, i.e. a meromorphic function on \(\mathbb{H}\) which is automorphic \\ & with respect to the Fuchsian group \(\Gamma\) \\ \(s\) & Smooth complex section of vector bundle \(E\) or Holomorphic section of a holomorphic vector bundle \(E\) \\ \(\boldsymbol{s}\) & \(k\)-frame of vector bundle \(E\), i.e. a collection \((s_{1},\ldots,s_{k})\) of \(k\) sections \(s_{i}\) of vector bundle \(E\) on \(V\) linearly independent at each point in \(V\) \\ \({\rm Hom}(E_{1},E_{2})\) & Homomorphism of vector bundles \(E_{1}\) and \(E_{2}\) \\ \(\bigwedge^{k}E\) & k-th exterior power of a vector bundle \\ \(\mathcal{A}^{k}(V,E)\) & Vector space of \(\mathcal{C}^{\infty}\) sections of \(\big{(}\bigwedge^{k}T^{*}M\big{)}\otimes E\) on \(V\subset M\), which are called \\ & differential forms on \(V\) with values in the vector bundle \(E\) \\ \(\mathcal{A}^{(k,l)}(E)\) & Vector space of smooth sections of this sheaf are \((k,l)\)-forms with values in \(E\) \\ \(\nabla\) & Connection of a vector bundle \(E\) or \(\mathcal{A}^{(k,l)}(E)\) \\ \(\{\nabla_{a}\}\) & \(\Gamma_{a}\)-equivariant Hermitian connections supported on each local uniformizing \\ & neighborhood \(\tilde{U}_{a}\) such that \(\nabla_{a}\)s are compatible with changes of charts \\ \(\Theta\) & Curvature of a connection \(\nabla\), i.e. \(\nabla\circ\nabla\) \\ \(A=(A_{ij})\) & Connection matrix with respect to a frame \\ \(\theta=(\theta_{ij})\) & Curvature matrix with respect to a frame \\ \(\mathsf{g}=(\mathfrak{g}_{ij})\) & Gauge transformation matrix \\ \(h\) & Hermitian metric on a complex vector bundle \(E\) \\ h & Hermitian metric on a complex orbifold \\ \(\tilde{\mathfrak{h}}_{a}^{\Gamma_{a}}\) & \(\Gamma_{a}\)-invariant (local) Hermitian metrics used in the definition of h \\ \(c_{i}(E,\nabla)\) & i-th Chern form of a vector bundle \(E\) \\ \(c_{i}(E)\) & i-th Chern class of a vector bundle \(E\) \\ Pic\((X)\) & Picard group of \(X\), i.e. the group \(H^{1}(X,\mathscr{O}_{X}^{*})\) which is the group of holomorphic \\ & line bundles on the analytic space \(X\) with group multiplication being the tensor \\ & product, and the inverse bundle being the dual bundle \\ Pic\({}^{0}(X)\) & Kernel of the connecting homomorphism \(\delta\) for a holomorphic line bundle \(\mathscr{L}\) \\ Div\((X)\) & Divisor group, i.e. the group constructed via the formal sum of Weil divisors \\ \({}_{H^{0}(X,\mathscr{M}_{X}/\mathscr{O}_{X}^{*})}\) & Abelian group of Cartier divisors on \(X\) \\ Cl\((X)\) & Divisor class group of Weil divisors modulo linear equivalence \\ CaCl\((X)\) & Group of Cartier divisor classes, i.e. Cartier divisors modulo principal divisors \\ \(\Gamma\) & A finite subgroup of the group \({\rm Aut}(X)\) of analytic automorphisms of \(X\) \\ \hline \end{tabular} \begin{tabular}{|l|l|} \(\Gamma_{y}\) & Isotropy subgroup or stabilizer subgroup of \(y\) \\ \(\Gamma(y)\) & Orbit of a point \(y\) \\ Fix(\(\gamma\)) & Fixpoint set \\ \(T\) & Thin subset \\ \(R_{\varpi}\) & Ramification locus of an analytic covering \\ \(B_{\varpi}\) & Branching locus of an analytic covering, i.e. \(\varpi(R_{\varpi})\) \\ Aut(\(\varpi\)) & Group of covering transformations or deck transformations, i.e. group of all automorphisms of an analytic covering \\ \({}_{\rm Gal(\gamma/X),\,Gal(\varpi)}\) & Galois group of an analytic covering, i.e. 
Aut(\(\varpi\)) \\ \(\pi_{1}\) & Fundamental group \\ Isom\({}^{+}\) & Group of orientation preserving isometries \\ \({\cal D}\) & Divisor \\ Div(\(M\)) & Weil or Cartier divisors on a smooth complex manifold \(M\) \\ \({\cal D}^{\rm reg}\) & Weil divisor defined on \({\rm Reg}(X)\) \\ \(\phi\) & Principal Cartier divisor or orbifold \(k\)-form \\ \({\cal R}_{i}\) & Prime divisors or irreducible hypersurfaces \\ \({\mathscr{R}}_{\varpi}\) & Ramification divisor of an analytic covering \\ \({\mathscr{B}}_{\varpi}\) & Branching divisor of an analytic covering \\ \({\mathscr{D}}\) & Branch divisor of an orbifold \\ \(H_{i}\) & Irreducible analytic hypersurface \\ Supp(\({\cal D}\)) & Support of a Weil divisor, i.e. \(\bigcup_{i}H_{i}\) \\ mult\({}_{H_{i}}({\cal D})\) & Multiplicity of a Weil divisor \({\cal D}\), i.e. the coefficient \(a_{i}\) in its definition \\ deg(\({\cal D}\)) & Degree of a Weil divisor, i.e. \(\sum_{i}{\rm mult}_{H_{i}}({\cal D})=\sum_{i}a_{i}\) \\ \(Z_{f}\) & Zero set of a holomorphic function \(f\) \\ ord(\(f\)) & Order of vanishing of a holomorphic function \(f\) \\ \([{\cal D}]\) & Linear system of divisors defined by \({\cal D}\), i.e. the set of all divisors on \(X\) that are \\ & linearly equivalent to \({\cal D}\) \\ deg(\(\varpi\)) & Degree of an analytic covering \\ \# & Order of a group \\ \(\nu_{\varpi}\) & Branching function of a covering \\ \(m_{i}\) & Ramification indices of a covering \(\varpi\) along irreducible hypersurfaces \({\cal R}_{i}\) \\ \((X_{O},{\mathscr{D}})\) & Log pair \\ \(({\tt D},{\tt D},{\tt Z}_{m_{i}},{\rm f}_{i})\) & Charts on an orbifold Riemann surface \\ \(m_{i}\) & Branching index of an orbifold Riemann surface \\ \(U_{a}\) & Open subset in \(X_{O}\) in the definition of orbifold \\ \(\tilde{U}_{a}\) & Open subset in \(\mathbb{C}^{\mathfrak{n}}\) in the definition of orbifold \\ \(\Gamma_{a}\) & Subgroup of \({\rm GL}({\mathfrak{n}},{\mathbb{C}})\) in the definition of orbifold \\ \({\rm f}_{a}\) & Folding map in the definition of orbifold \\ \({\cal U}\) & Orbifold atlas \\ \({\cal U}_{\rm max}\) & Maximal orbifold atlas \\ \([{\cal U}]\) & Equivalence class of analytic orbifold atlases on \(X_{O}\) \\ \(\Gamma_{x}\) & Isotropy group or the local group \\ \hline \end{tabular} \begin{tabular}{|l|l|} \hline \(m_{x}\) & Order of the local isotropy group \\ \(\mathsf{A}_{i}\) & Homotopy class of loops around a handle \\ \(\mathsf{B}_{i}\) & Homotopy class of loops around a hole \\ \(\mathsf{C}_{i}\) & Homotopy class of loops around a branch point \\ \(\mathsf{P}_{i}\) & Homotopy class of loops around a puncture \\ \(F\) & Number of faces \\ \(E\) & Number of edges \\ \(V\) & Number of vertices \\ \(\tilde{\phi}_{a}^{\Gamma_{a}}\) & \(\Gamma_{a}\)-invariant complex \(k\)-forms used in the definition of the orbifold \(k\)-form \\ \(\mathcal{E}^{p,q}(O)\) & Vector space of all orbifold \((p,q)\)-forms on an orbifold \(O\) \\ \(\mathcal{A}^{p,q}(O,K_{O}^{k}\otimes\overline{K}_{O}^{l})\) & Vector space of smooth differential forms of type \((p,q)\) on \(O\) with values in \(K_{O}^{k}\otimes\overline{K}_{O}^{l}\) \\ \(\mathcal{E}^{k,l}(\mathbb{H},\Gamma)\) & Hilbert space of automorphic forms of weight \((2k,2l)\) with the natural scalar product \(\langle\phi_{1},\phi_{2}\rangle=\int_{X}\phi_{1}\overline{\phi_{2}}\rho^{-k-\ell+1}\) \\ \(\mathcal{H}^{k,\ell}(X)\) & Space of harmonic \((k,\ell)\)-differentials that are square-integrable with respect to the hyperbolic metric on \(X=\mathbb{H}/\Gamma\) \\ \(\mathcal{P}\) & Complex structure \\ \(P\)
& Projective structure \\ \(P^{\mu}\) & Projective structure fixed by the convex combination \(\frac{1}{\#H}\sum_{\mathrm{h}\in H}\mathrm{h}^{*}P\) \\ Sch & Schwarzian derivative \\ \(q_{a}\) & Elements of a quadratic differential \\ \(\phi\) & Liouville field \\ \(\alpha_{i},\beta_{i}\) & Hyperbolic generators of a Fuchsian group \\ \(\tau_{i}\) & Elliptic generator of a Fuchsian group \\ \(\kappa_{i}\) & Parabolic generator of a Fuchsian group \\ \(\lambda_{i}\) & Multipliers of elliptic generators of a Fuchsian group \\ \(\delta_{i}\) & Translation length of parabolic generators of a Fuchsian group \\ \(J_{k}^{(i)}\) & The \(k^{th}\) order coefficient of \(J\)'s expansion around the \(i^{th}\) marked point \\ \(c_{i}\) & Accessory parameters \\ \(h_{i}\) & Conformal weight corresponding to the order of a marked point \(m_{i}\) \\ \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) & Space of cusp forms of weight 4 for the group \(\Gamma\)-equivalently, meromorphic (2,0)-tensors/quadratic differentials on the Riemann orbisurface \(O\) \\ \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) & Harmonic Beltrami differentials, that are a subspace \(\Lambda^{*}\left(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\right)\) of \(\mathcal{A}^{-1,1}(\mathbb{H},\Gamma)\) \\ \(\mathcal{H}^{2,0}(O)\) & Space of quadratic differentials on an orbifold \\ \(\mathcal{H}^{-1,1}(O)\) & Space of harmonic differentials on an orbifold \\ \(q(z)\) & An element of \(\mathcal{H}^{2,0}(\mathbb{H},\Gamma)\) \\ \(\mu(z)\) & An element of \(\mathcal{H}^{-1,1}(\mathbb{H},\Gamma)\) \\ \(Q(w)\) & An element of \(\mathcal{H}^{2,0}(O)\) \\ \(M(w)\) & An element of \(\mathcal{H}^{-1,1}(O)\) \\ \hline \end{tabular}
2310.10259
Leveraging heterogeneous spillover effects in maximizing contextual bandit rewards
Recommender systems relying on contextual multi-armed bandits continuously improve relevant item recommendations by taking into account the contextual information. The objective of these bandit algorithms is to learn the best arm (i.e., best item to recommend) for each user and thus maximize the cumulative rewards from user engagement with the recommendations. However, current approaches ignore potential spillover between interacting users, where the action of one user can impact the actions and rewards of other users. Moreover, spillover may vary for different people based on their preferences and the closeness of ties to other users. This leads to heterogeneity in the spillover effects, i.e., the extent to which the action of one user can impact the action of another. Here, we propose a framework that allows contextual multi-armed bandits to account for such heterogeneous spillovers when choosing the best arm for each user. By experimenting on several real-world datasets using prominent linear and non-linear contextual bandit algorithms, we observe that our proposed method leads to significantly higher rewards than existing solutions that ignore spillover.
Ahmed Sayeed Faruk, Elena Zheleva
2023-10-16T10:34:41Z
http://arxiv.org/abs/2310.10259v1
# Leveraging heterogeneous spillover effects in maximizing contextual bandit rewards ###### Abstract Recommender systems relying on contextual multi-armed bandits continuously improve relevant item recommendations by taking into account the contextual information. The objective of these bandit algorithms is to learn the best arm (i.e., best item to recommend) for each user and thus maximize the cumulative rewards from user engagement with the recommendations. However, current approaches ignore potential spillover between interacting users, where the action of one user can impact the actions and rewards of other users. Moreover, spillover may vary for different people based on their preferences and the closeness of ties to other users. This leads to heterogeneity in the spillover effects, i.e., the extent to which the action of one user can impact the action of another. Here, we propose a framework that allows contextual multi-armed bandits to account for such heterogeneous spillovers when choosing the best arm for each user. By experimenting on several real-world datasets using prominent linear and non-linear contextual bandit algorithms, we observe that our proposed method leads to significantly higher rewards than existing solutions that ignore spillover. ## Introduction Contextual bandit algorithms leverage user attributes and actions to improve recommendations. Different users may respond differently to the recommendation (or treatment) received, which leads to heterogeneity in responses (e.g., clicks or purchases). Contextual bandit algorithms deal with this heterogeneity by providing meaningful recommendations at an individual level and thus maximizing rewards [14, 1, 15]. Rewards can vary based on the application where the recommendations occur. Some reward examples include revenue from recommended products in e-commerce applications and clicks on recommended user-generated content in online social networks. When contextual bandit recommendations occur in social networks, they can spread from one user to another, and overall rewards can be based on both direct recommendations and spillover. Network spillover refers to the phenomenon where the actions of one individual have an impact on the actions of others within the network. Spillover can happen in different ways, such as through the transmission of behaviors or attitudes or through the spread of information, ideas, or influence. For example, if an individual in a social network starts to exercise regularly, they may influence their immediate friends and followers to also start exercising, which can spread further throughout the network, leading to a broader adoption of the behavior. Understanding the effect of spillover is important in many fields, including marketing, economics, public health, psychology, and social policy. The impact of spillover can be either positive or negative and may occur intentionally or unintentionally. For example, in an education setting, offering incentives on course completion may spill over to the untreated population through knowledge transmission paths [17, 18]. Similarly, exposure to an advertisement in social networks can spill over from exposed users to non-exposed users when the ad is shared or discussed over virtual connections [16]. Different users may respond differently not only to the treatment received but also to the spillover from their network contacts, which leads to heterogeneity in spillover.
To illustrate the concepts in this paper, let's consider the following toy example: an advertisement company targets social network users with two types of ads, ads on politics and ads on sports. User A is one of the targeted users who posts about workouts and engages with fitness-related content regularly. Therefore, when a sport-related ad about downloading a fitness app comes to the news feed of A, A shares the ad link in their social media account. Due to sharing, the ad link appears in the news feed of two friends of A, namely B and C. B and C are not targeted users, but B is also interested in fitness and downloads the app from A's shared link, while C is not interested in sports and does not download it. Due to the shared interest, it is more likely for A to influence B to download the fitness app than to influence C, which reflects spillover heterogeneity. Rewards (i.e., downloads) occur both based on treatment (A's download) and based on spillover (B's download). Current research on contextual bandit recommendations [14, 15, 16, 17] does not take into account the possibility of leveraging spillover to optimize rewards. Moreover, to the best of our knowledge, there are no studies dealing with the heterogeneity of spillover to maximize rewards during recommendation with contextual bandits. Thus, in this paper we set out to examine whether heterogeneous spillover effects can be utilized to maximize the overall rewards during contextual bandit recommendation in social networks. **Present work.** In this paper, we present a contextual bandit algorithm, \(NetCB\), that leverages heterogeneous spillover and neighborhood knowledge for maximizing rewards in networks. The algorithm incorporates neighborhood knowledge to improve bandit performance and sometimes goes against the bandit recommendation by providing sub-optimal recommendations to some potential users, which can increase spillover and thus long-term rewards in networks. Our algorithm is flexible enough to use any state-of-the-art contextual multi-armed bandit (MAB) as a subroutine. **Key idea and highlights.** To summarize, this paper makes the following contributions: * We define a novel problem setup for contextual bandit recommendation to maximize network rewards by introducing a dynamic heterogeneous spillover model. * We formulate a methodology which recommends when to go against the contextual bandit recommendation and leverage spillover to increase long-term network rewards. * Our experiments on real-world and semi-synthetic attributed network datasets show the effectiveness of our method when compared to state-of-the-art contextual bandit algorithms. ## Related Work **Recommendations with contextual bandits.** Reinforcement learning methods, MABs [10, 11, 12] and Markov Decision Processes (MDPs) [2], are widely used to recommend items to users. LinUCB [13, 14] and Contextual Thompson Sampling (CTS) [1] algorithms assume a linear relationship between the expected reward of an action and its context. NeuralBandit1 [15], NeuralUCB [16], NeuralTS [17], and NPR [18] use neural networks to remove the constraint of linearity. However, they do not consider neighborhood information and the potential of spillover effects in maximizing rewards during recommendation. **Contextual bandit for networks.** The only works that consider contextual bandits for networks are in the context of influence maximization [20, 19, 1, 10].
The influence maximization problem aims at maximizing rewards in a social network with diffusion probabilities by finding the users for the initial treatment. Contextual bandits can provide an adaptive approach for selecting the most influential users to maximize influence on social networks. However, influence maximization is different from our work since we focus on the recommendation decisions, not on the choice of nodes. **Spillover.** Spillover happens in many modern A/B test settings such as social networks and online marketplaces [1, 16, 17, 18], and exposure mapping has been proposed for addressing it. Yuan & Altenburger [21] propose a machine learning approach to automatically characterize network spillover conditions based on both the local network structures and the treatment assignments among users in the network neighborhood. Bargagli-Stoffi et al. [17] and Yuan et al. [20] propose tree-based methods for the estimation of heterogeneous effects in randomized experiments. All these works focus on spillover to avoid biased estimates of the treatment effect, whereas our work focuses on leveraging the heterogeneity of network spillover to maximize rewards. ## Problem Description **Data model.** We define an attributed network graph \(G=(V,E)\), consisting of a set of \(n\) nodes \(V=\{v_{1},v_{2},\ldots,v_{n}\}\) and a set of edges \(E=\{e_{ij},1\leq i,j\leq n\}\), where \(e_{ij}\) denotes that there is an edge from node \(v_{i}\in V\) to node \(v_{j}\in V\). The set of adjacent nodes of \(v_{i}\) is denoted with \(\mathcal{N}_{i}\), where \(\mathcal{N}_{i}=\{v_{j}:v_{j}\in V,e_{ij}\in E\}\). We define \(\mathcal{N}_{i}\) as the neighborhood of node \(v_{i}\), and each node \(v_{j}\in\mathcal{N}_{i}\) is a neighbor of node \(v_{i}\). Each node in the network belongs to one latent preference from a set of \(l\) possible latent preferences (or classes), \(C=\{c_{1},c_{2},\ldots,c_{l}\}\). We let \(X_{i}\) denote the pre-treatment \(d\)-dimensional feature vector of node \(v_{i}\) and \(x_{i}\in C\) denote its latent preference type. We consider a \(2l\)-dimensional dynamic neighborhood feature vector for each node of the network, and denote the neighborhood feature vector of node \(v_{i}\) with \(X_{\mathcal{N}_{i}}\). The indicator \(y_{i}\in\{0,1\}\) refers to the activation status of node \(v_{i}\), where \(y_{i}=1\) refers to an active node \(v_{i}\) and \(y_{i}=0\) refers to an inactive node \(v_{i}\). The contextual bandit contains a set of arms \(\mathcal{A}=\{c_{1},c_{2},\ldots,c_{l}\}\). We denote the latent preference type of node \(v_{i}\) predicted (or recommended) by the contextual bandit with \(arm_{i}\in\mathcal{A}\), and the assigned treatment class of node \(v_{i}\) with \(t_{i}\in\{c_{1},c_{2},\ldots,c_{l}\}\). **Node activation.** The nodes in the network can be activated in two ways: through direct treatment and through spillover. _Direct treatment_ refers to treating a particular node in the network. The corresponding outcome without considering the spillover effects from its adjacent nodes is defined as the _direct treatment effect_. The average direct treatment effect \(DTE\) can be estimated as below: \[DTE(V)\leftarrow\underset{v_{i}\in V}{\mathbb{E}}[y_{i}\,|\,t_{i}\neq\emptyset]-\underset{v_{i}\in V}{\mathbb{E}}[y_{i}\,|\,t_{i}=\emptyset]\] Spillover occurs when the outcome of one node impacts the outcome of another node. The spillover effect refers to the outcome of a particular node in the network due to the impact of its active adjacent nodes.
Let \(\mathcal{N}_{i}.\pi\) denote the vector of activation statuses of node \(v_{i}\)'s neighborhood \(\mathcal{N}_{i}\). The average spillover effect \(SE(V)\) can be estimated as below: \[SE(V)\leftarrow\underset{v_{i}\in V}{\mathbb{E}}[y_{i}|t_{i}=t,\mathcal{N}_{i}.\pi]-\underset{v_{i}\in V}{\mathbb{E}}[y_{i}|t_{i}=t,\mathcal{N}_{i}.\pi=\emptyset]\] If a treated node becomes active, it can activate other inactive adjacent nodes in its neighborhood via spillover. Similarly, if a treated node remains inactive after the direct treatment, future active adjacent nodes in its neighborhood can activate the inactive treated node via spillover. We call the latter _reverse spillover_, whereas the former is _forward spillover_. **Direct treatment effect.** The effect of the direct treatment can vary across different groups based on individual characteristics or other factors. Therefore, we design two types of direct treatment based on the assigned treatment class and latent preference of the nodes. We define the direct treatment of node \(v_{i}\) as _aligned_ when its assigned treatment class \(t_{i}\) is the same as its latent preference \(x_{i}\). Similarly, we define the direct treatment of node \(v_{i}\) as _misaligned_ when its assigned treatment class \(t_{i}\) is different from its latent preference \(x_{i}\). We denote the probability of activating a particular node due to the aligned and misaligned direct treatment with \(p_{a}\) and \(p_{m}\), respectively, which can be formulated as follows: \[p_{a}\gets P(y_{i}=1|t_{i}=x_{i}) \tag{1}\] \[p_{m}\gets P(y_{i}=1|t_{i}\neq x_{i}) \tag{2}\] **Spillover effect.** To model spillover over the network, we use the independent cascade model (ICM) [1], where each of an inactive node's active neighbors has a probabilistic and independent chance to activate the node via spillover. This resembles the spread of COVID-19, where each social interaction may trigger an infection. We define an active node as a _source node_ if it impacts the outcome of its adjacent nodes. Nodes that get affected by the source node are considered _recipient nodes_. For example, if node \(v_{j}\) gets activated due to the spillover from active node \(v_{i}\), then \(v_{i}\) is the source node and \(v_{j}\) is the recipient node. The spillover can be heterogeneous; we assume it depends on the latent preference types of the nodes and the assigned treatment class of the source node. A spillover is considered _aligned_ when the treatment assignment class of an activated source node is the same as the latent preference type of the recipient node. Similarly, a spillover is considered _misaligned_ when the treatment assignment class of a source node is different from the latent preference type of the recipient node. We denote the spillover probability with \(p_{sr}\), where \(s\in\{a,m\}\) refers to whether the treatment of the source node is aligned or misaligned, and \(r\in\{a,m\}\) refers to whether the spillover is aligned or misaligned.
We formulate four types of heterogeneous spillover probabilities, where \(v_{i}\) is the source node and \(v_{j}\) is the recipient node: \[p_{aa}\gets P(y_{j}=1|t_{i}=x_{i},t_{i}=x_{j}) \tag{3}\] \[p_{am}\gets P(y_{j}=1|t_{i}=x_{i},t_{i}\neq x_{j}) \tag{4}\] \[p_{ma}\gets P(y_{j}=1|t_{i}\neq x_{i},t_{i}=x_{j}) \tag{5}\] \[p_{mm}\gets P(y_{j}=1|t_{i}\neq x_{i},t_{i}\neq x_{j}) \tag{6}\] The first two spillover probabilities, \(p_{aa}\) and \(p_{ma}\), refer to aligned spillover probabilities from a source node that is activated due to aligned and misaligned treatment, respectively. Similarly, \(p_{am}\) and \(p_{mm}\) refer to misaligned spillover probabilities from a source node that is activated due to aligned and misaligned treatment, respectively (a code sketch of this activation process is given below). An example of \(p_{a}\), \(p_{m}\), \(p_{aa}\), \(p_{am}\), \(p_{ma}\), and \(p_{mm}\) is shown in Figure 1, where a toy network contains six nodes \(V=\{v_{1},v_{2},v_{3},v_{4},v_{5},v_{6}\}\) and two possible preferences \(C=\{P,S\}\) (e.g., Politics and Sports). Here, \(x_{1}=P\), \(x_{2}=S\), \(x_{3}=P\), \(x_{4}=S\), \(x_{5}=P\), \(x_{6}=P\). A detailed description of Figure 1 is included in Appendix A1. **A contextual bandit setup.** We consider a stochastic \(l\)-armed contextual bandit setup with a total number of \(T\) rounds. At round \(i\in[T]\), an inactive node \(v_{i}\) arrives and the agent observes the context consisting of the feature vectors \(\{X_{i,arm},X_{\mathcal{N}_{i},arm}\,|\,arm\in\mathcal{A}\}\). The feature vector \(X_{i}\), along with the dynamic neighborhood feature vector \(X_{\mathcal{N}_{i}}\), represents the information of node \(v_{i}\) and will be referred to as the context. The agent selects an action \(arm_{i}\) and receives a payoff \(PO(v_{i},arm_{i})\). The action of an arm refers to assigning a treatment class to a node for direct treatment. The payoff is calculated by counting the total number of newly activated nodes due to the recommended arm's action, i.e., assigning a treatment class \(t_{i}\) to the arrival node \(v_{i}\), and the corresponding spillover effect on its adjacent nodes. The total \(T\)-trial payoff of the bandit is defined as \(\sum_{i=1}^{T}PO(v_{i},arm_{i})\). Similarly, we define the optimal expected \(T\)-trial payoff as \(\mathbf{E}[\sum_{i=1}^{T}PO(v_{i},arm_{i}^{*})]\), where \(arm_{i}^{*}\) is the arm with maximum expected payoff at trial \(i\). Therefore, the \(T\)-trial cumulative regret of the bandit learning can be formulated as \(Regret\leftarrow\sum_{i=1}^{T}(PO(v_{i},arm_{i}^{*})-PO(v_{i},arm_{i}))\). The goal of the contextual bandit is to learn an arm-selection strategy so that the expected regret is minimized. The following assumption is made about payoff generation for any round \(i\): \[PO(v_{i},arm_{i})=F(X_{i,arm_{i}},X_{\mathcal{N}_{i},arm_{i}})+\xi_{i}\] \(F\) can be a linear or non-linear function which needs to be learned, and \(\xi_{i}\) is the noise. The bandit learns the parameters for each arm with the arrival of nodes. Figure 1: Treatment-dependent heterogeneity in network spillover. ### Types of rewards in networks A reward refers to the activation of an inactive node due to direct treatment or spillover. When a node is activated, a reward of \(1\) is incurred; otherwise, the reward is \(0\).
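Concretely, activations (and hence rewards) arise from the process in Eqs. (1)-(6). The following is a minimal sketch of direct treatment and one independent-cascade spillover step; this is our own illustrative code, not the authors' implementation, and the probability values and all function/field names (`treat`, `cascade`, `node["x"]`, etc.) are assumptions for illustration only.

```python
import random

P_DIRECT = {"a": 0.5, "m": 0.1}                    # p_a, p_m (illustrative values)
P_SPILL = {("a", "a"): 0.4, ("a", "m"): 0.05,      # p_aa, p_am (illustrative values)
           ("m", "a"): 0.2, ("m", "m"): 0.02}      # p_ma, p_mm

def alignment(treatment, preference):
    """'a' if the assigned class matches the latent preference, else 'm'."""
    return "a" if treatment == preference else "m"

def treat(node, treatment_class, rng=random):
    """Directly treat a node; it activates with p_a or p_m (Eqs. 1-2)."""
    s = alignment(treatment_class, node["x"])      # node["x"]: latent preference
    node["t"] = treatment_class
    node["y"] = int(rng.random() < P_DIRECT[s])
    return bool(node["y"])

def cascade(source, neighbors, rng=random):
    """One ICM step: an active source tries each inactive neighbor once,
    independently, with probability p_sr (Eqs. 3-6)."""
    newly_active = []
    if not source["y"]:
        return newly_active
    s = alignment(source["t"], source["x"])        # source treatment alignment
    for nb in neighbors:
        if nb["y"]:
            continue
        r = alignment(source["t"], nb["x"])        # spillover alignment
        if rng.random() < P_SPILL[(s, r)]:
            nb["y"] = 1
            newly_active.append(nb)
    return newly_active

# Example: treat v1 (latent preference "P") with an aligned ad, then cascade.
v1 = {"x": "P", "t": None, "y": 0}
friends = [{"x": "P", "t": None, "y": 0}, {"x": "S", "t": None, "y": 0}]
if treat(v1, "P"):
    activated = cascade(v1, friends)
```

Applying `cascade` repeatedly to newly activated nodes until no further activation occurs yields the forward spillover counted in the payoff \(PO(v_{i},arm_{i})\) and in the reward types defined next.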
**Total rewards.** The total rewards in the network, \(R_{total}\), refers to the total number of activated nodes due to direct treatment effects and spillover effects, i.e., \(R_{total}\leftarrow\sum_{v_{i}\in V}y_{i}\).

**Spillover rewards.** The spillover rewards in the network, \(R_{se}\), refers to the total number of activated nodes due to spillover, i.e., \(R_{se}\leftarrow\sum_{v_{i}\in V,arm_{i}=\emptyset}y_{i}\).

**Direct treatment rewards.** The direct treatment rewards in the network, \(R_{dte}\), refers to the total number of activated nodes due to the direct treatment effects, i.e., \(R_{dte}\gets R_{total}-R_{se}\).

**Bandit accuracy.** The bandit accuracy, \(B_{acc}\), refers to the ratio of the total number of aligned direct treatment recommendations to the total number of direct treatment recommendations during contextual bandit learning, i.e., \(B_{acc}=\frac{\sum_{v_{i}\in V}\mathds{1}[arm_{i}=x_{i}]}{\sum_{v_{i}\in V}\mathds{1}[arm_{i}\neq\emptyset]}\).

Now, we are ready to define the problem we address in this paper. The goal of this paper is to develop an algorithm which can maximize the total rewards, or equivalently minimize the total regret, during bandit online learning.

**Problem:**_Maximizing network rewards with contextual bandits_: Given an attributed network graph \(G=(V,E)\), a set of attributes \(X\) associated with each node, and a set of arms \(\mathcal{A}\) associated with the treatment classes, design an algorithm to recommend a treatment class to the inactive node \(v_{i}\) for direct treatment at round \(i\in[T]\) such that: 1. The total number of aligned treatments, \(\sum_{v_{i}\in V}\mathds{1}[arm_{i}=x_{i}]\), is maximized, and therefore the bandit accuracy, \(B_{acc}\), and the direct treatment rewards, \(R_{dte}\), are maximized. 2. The total spillover probability in the network, \(\sum_{s,r\in\{a,m\}}p_{sr}\), is maximized, and therefore the spillover rewards, \(R_{se}\), are maximized.

## Algorithm for maximizing rewards with contextual bandits

We design an algorithm that recommends the best treatment for a node and its neighborhood by considering its neighborhood knowledge and heterogeneous spillover effects to maximize rewards in the whole network. It incorporates an existing contextual MAB algorithm as a subroutine within the contextual bandit setup we described while recommending a direct treatment assignment to a node. We account for dynamic neighborhood knowledge to accelerate the learning rate and leverage heterogeneous spillover to maximize spillover effects. We call this algorithm \(NetCB\); its two main components are described below.

### \(B_{acc}\) maximization to accelerate learning rate

The bandit accuracy, \(B_{acc}\), is maximized by aggregating the static node features, \(X_{i}\), and the dynamic neighborhood features, \(X_{\mathcal{N}_{i}}\), for node \(v_{i}\).
The first \(l\) dimensions of \(X_{\mathcal{N}_{i}}\) refer to the percentages of adjacent nodes of node \(v_{i}\) getting a direct treatment assignment with each element of \(C\), which is formulated as follows for \(k\in[1,l]\): \[X_{\mathcal{N}_{i}}[k]=\frac{\sum_{v_{j}\in\mathcal{N}_{i}}\mathds{1}[t_{j}=c _{k}\wedge arm_{j}\neq\emptyset]}{\sum_{v_{j}\in\mathcal{N}_{i}}\mathds{1}[t_{j }\neq\emptyset]}\] The remaining \(l\) dimensions refer to the percentages of adjacent nodes of node \(v_{i}\) that remain inactive after getting a direct treatment assignment with each element of \(C\), which is formulated as follows for \(k\in[1,l]\): \[X_{\mathcal{N}_{i}}[l+k]=\frac{\sum_{v_{j}\in\mathcal{N}_{i}}\mathds{1}[y_{j}=0 \wedge t_{j}=c_{k}\wedge arm_{j}\neq\emptyset]}{\sum_{v_{j}\in\mathcal{N}_{i}} \mathds{1}[t_{j}=c_{k}]}\] The neighborhood feature of a node may change over time with the arrival of new nodes, and thus \(X_{\mathcal{N}_{i}}\) is dynamic in all of our experiments, where \(\sum_{k=1}^{l}X_{\mathcal{N}_{i}}[k]=\sum_{k=l+1}^{2l}X_{\mathcal{N}_{i}}[k]=1.0\) unless \(\sum_{v_{j}\in\mathcal{N}_{i}}\mathds{1}[t_{j}\neq\emptyset]=0\).

### Spillover maximization

Given the treatment recommendation for a node, the \(NetCB\) algorithm estimates the expected rewards by observing the recommended treatments of the directly treated but inactive adjacent nodes. To estimate spillover, the treatment class recommended by the bandit is assumed to be the true latent preference of the node and of its directly treated adjacent nodes. The expected reward due to the direct treatment effect of the recommended arm, \(arm_{i}\), on the node and the spillover effects on its adjacent nodes is denoted by \(E[R_{i}\,|\,arm_{i}]\), which is estimated as follows: \[E[R_{i}\,|\,arm_{i}]=p_{a}+p_{a}\sum_{v_{j}\in\mathcal{N}_{i}}\big(p_{aa}\, \mathds{1}[arm_{i}=arm_{j}\wedge y_{j}=0]+p_{am}\,\mathds{1}[arm_{i}\neq arm_{j} \wedge y_{j}=0]\big) \tag{7}\] The expected reward due to the direct treatment effect of each alternate arm, \(arm_{k}\in\mathcal{A}\,(arm_{k}\neq arm_{i})\), on the node \(v_{i}\) and the spillover effects on its adjacent nodes is denoted by \(E[R_{i}\,|\,arm_{k}]\), which is estimated as follows: \[E[R_{i}\,|\,arm_{k}]=p_{m}+p_{m}\sum_{v_{j}\in\mathcal{N}_{i}}\big(p_{ma}\, \mathds{1}[arm_{k}=arm_{j}\wedge y_{j}=0]+p_{mm}\,\mathds{1}[arm_{k}\neq arm_{j} \wedge y_{j}=0]\big) \tag{8}\] We denote the alternate arm with the highest expected reward for node \(v_{i}\) by \(\overline{arm_{i}}\), i.e., \(\overline{arm_{i}}=\operatorname*{arg\,max}_{arm_{k}}E[R_{i}\,|\,arm_{k}]\). \(NetCB\) does not assign a treatment class to node \(v_{i}\) using the bandit recommendation, \(arm_{i}\), when the expected reward of \(\overline{arm_{i}}\) is higher than that of \(arm_{i}\). Instead, the algorithm treats the node \(v_{i}\) with \(t_{i}=\overline{arm_{i}}\) but does not update the arm parameters for the corresponding observed payoffs, so that the bandit's arm estimates are not corrupted by these off-policy treatments. We defer the illustration of Algorithm 1 to Appendix A2. The activation status of the node \(v_{i}\) along with its adjacent nodes may change due to the direct treatment effect of assigning a treatment class and the corresponding spillover effect from \(v_{i}\). The contextual bandit learns the parameters of the arms, \(\mathcal{A}\), with the arrival of nodes and generalizes the expected payoff from one node to another, and thus learns to recommend the best direct treatment more quickly for new nodes.
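Below is a minimal sketch, under the same illustrative naming as before, of how Eqs. (7)-(8) and the override rule could be implemented. It assumes, as stated above, that the bandit's recommended arm of each directly treated neighbor stands in for that neighbor's latent preference; this is a sketch, not the paper's actual implementation.

```python
def expected_reward(arm, rec, neighbors, arm_of, y, probs):
    """Estimate E[R_i | arm] as in Eqs. (7)-(8). `rec` is the bandit's
    recommendation for the node (assumed equal to its latent preference);
    arm_of[j] is the arm assigned to neighbor j (None if untreated)."""
    if arm == rec:   # recommended arm: source assumed aligned -> Eq. (7)
        p_dir, p_same, p_diff = probs['a'], probs['aa'], probs['am']
    else:            # alternate arm: source assumed misaligned -> Eq. (8)
        p_dir, p_same, p_diff = probs['m'], probs['ma'], probs['mm']
    spill = sum((p_same if arm == arm_of[j] else p_diff)
                for j in neighbors if arm_of[j] is not None and y[j] == 0)
    return p_dir + p_dir * spill

def netcb_decide(rec, arms, neighbors, arm_of, y, probs):
    """Return (treatment, update_bandit): override the recommendation when
    some alternate arm has a higher expected reward; in that case the
    bandit's arm parameters are not updated with the observed payoff."""
    alt = max((a for a in arms if a != rec),
              key=lambda a: expected_reward(a, rec, neighbors, arm_of, y, probs))
    if expected_reward(alt, rec, neighbors, arm_of, y, probs) > \
       expected_reward(rec, rec, neighbors, arm_of, y, probs):
        return alt, False   # treat with the alternate arm, skip bandit update
    return rec, True
```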
## Experiments

In this section, we study the effectiveness of our proposed method using both real-world and semi-synthetic datasets in terms of \(R_{total}\), \(R_{dte}\), \(R_{se}\), \(B_{acc}\), and \(Regret\), against several baseline contextual bandit methods.

### Data representation

**Real-world datasets.** We consider four real-world attributed network datasets. BlogCatalog1 is a network of social interactions among the bloggers listed on the BlogCatalog website. This dataset contains \(5,196\) nodes, \(343,486\) edges, \(8,189\) attributes, and \(6\) labels. The labels represent topic categories inferred through the metadata of the blogger interests. Flickr1 is a benchmark social network dataset which contains \(7,575\) nodes, \(479,476\) edges, \(12,047\) attributes, and \(9\) labels. Each node in this network represents a user, each edge represents a following relationship, and the labels represent the interest groups of the users. The Hateful dataset is sampled from the Hateful Users on Twitter dataset [17]; the sample contains \(3,218\) nodes, \(9,620\) edges, \(1,036\) attributes, and \(2\) labels. Each sample is classified as either "hateful" or "normal." Pubmed1 is a citation network dataset where each node represents a scientific publication about diabetes and each directed edge represents a citation. This dataset contains \(19,717\) nodes, \(44,338\) edges, \(500\) attributes, and \(3\) labels. Each publication is classified into one of the \(3\) labels.

Footnote 1: All datasets available at [https://www4.comp.polyu.edu.hk/~jiemshi/datasets.html](https://www4.comp.polyu.edu.hk/~jiemshi/datasets.html)

**Semi-synthetic datasets.** We generate semi-synthetic datasets by varying the homophily of the networks, where homophily refers to the proportion of edges that connect two nodes with the same latent preference. The homophily scores of the Blogcatalog, Flickr, Hateful, and Pubmed networks are \(0.4\), \(0.23\), \(0.73\), and \(0.8\), respectively. We shift these scores to \(0.88\), \(0.88\), \(0.58\), and \(0.30\), respectively. We defer this illustration to Appendix A3.

**Static features and latent preferences of nodes.** The pre-treatment node feature vector, \(X_{i}\), is \(d\)-dimensional and static in all experiments. To reduce the computational complexity of the bandit algorithm, we reduce the dimension of \(X_{i}\) from \(d\) to \(500\) with truncated SVD [1] for all datasets except Pubmed. Thus, \(d=300\) for the Pubmed dataset but \(d=500\) for all other datasets. The latent preference of a node in the network is based on its label, and \(l\) refers to the number of labels in the dataset. To incorporate dynamic neighborhood knowledge during bandit recommendation, we aggregate the \(d\)-dimensional static node features with the \(2l\)-dimensional dynamic neighborhood features and consider the result as a \((d+2l)\)-dimensional feature vector for each node in the network.

### Main algorithms and baselines

**LinUCB.** This is the main baseline that considers the node features during recommendation [10]. The method ignores neighborhood knowledge and does not leverage heterogeneous spillover during treatment recommendation. The payoff of the directly treated node is calculated based on whether or not it is activated due to the treatment. The method assumes a linear dependency between the expected payoff of an action and the context.
**NeuralUCB.** This method also considers the node features during treatment recommendation but incorporates a neural network into the upper confidence bound (UCB) algorithm to capture more complex patterns in the data [20]. The payoff is calculated based on whether the directly treated node is activated or not. This method can capture non-linear relationships and more complex dependencies among actions, features, and rewards.

**NeuralTS.** This method considers the features of the node during treatment recommendation but leverages the Thompson Sampling (TS) strategy [21]. The payoff is calculated based on whether the directly treated node is activated or not. The method also uses a neural network to model the reward probabilities of actions, which can handle unknown or changing reward distributions.

**LinUCB + S.** This method is similar to \(LinUCB\) but leverages spillover during direct treatment recommendation to a node.

**LinUCB + S + O.** This method is similar to \(LinUCB+S\) but uses an oracle while estimating expected rewards using Equations (7) and (8). The oracle provides the true latent preferences of the adjacent directly treated nodes.

**NetCB.** This is our proposed method that accounts for neighborhood knowledge and leverages heterogeneous spillover during direct treatment recommendation to a node. The payoff of a directly treated node is calculated as described in our contextual bandit setup.

**NetCB - S.** This method is similar to \(NetCB\) but does not leverage spillover during treatment recommendation to a node.

**NetCB + O.** This method is similar to our method, \(NetCB\), but uses an oracle while estimating expected rewards using Equations (7) and (8). The oracle provides the true latent preferences of the adjacent directly treated nodes.

### Experimental setup

We experiment with a single-hop spillover setup in the networks, where nodes activated due to the spillover effect cannot spread spillover to their adjacent nodes. Therefore, spillover cannot cascade over the network. We use existing contextual bandit MAB algorithms, e.g., LinUCB Li et al. (2010), NeuralUCB Zhou et al. (2020), and NeuralTS Zhang et al. (2020), as subroutines for \(NetCB-S\), \(NetCB\), and \(NetCB+O\). In all of our experiments, we set \(p_{mm}=0\) and \(p_{am}=0\). We set the hyper-parameter for the confidence bound, \(\alpha=2.0\), when using LinUCB Li et al. (2010) as a subroutine. Before the start of a bandit experiment, we set \(arm_{i}=\emptyset\), \(t_{i}=\emptyset\), and \(y_{i}=0\) for all \(v_{i}\in V\).

**Experiment 1: Understanding the impact of activation probability on bandit accuracy, \(B_{acc}\).** To understand how \(B_{acc}\) changes with the increase in the difference between \(p_{a}\) and \(p_{m}\), we run \(LinUCB\) with several \((p_{m},p_{a})\) combinations, i.e., \((0.1,0.7)\), \((0.3,0.7)\), and \((0.5,0.7)\), in real-world networks. We run \(25\) experiments with \(p_{aa}=0.3\), \(p_{ma}=0.3\), \(p_{am}=0.0\), and \(p_{mm}=0.0\) to incorporate heterogeneous spillover. The cumulative average of the bandit accuracy, \(B_{acc}\), is reported in Figure 2.

**Experiment 2: Understanding the impact of neighborhood knowledge and leveraging spillover on the bandit learning regret, \(Regret\).** The difference in activation rates (\(p_{m}\) and \(p_{a}\)) plays a role in how well the bandit can learn to distinguish between different types of nodes. The higher the difference, the better the bandit accuracy, as shown in Figure 2.
However, neighborhood information helps the bandit learn to distinguish among them, regardless of the difference in activation rates for aligned and misaligned treatment. In real-world scenarios, \(p_{m}\) and \(p_{a}\) are unknown, and therefore it is very important to incorporate neighborhood information. To understand whether dynamic neighborhood feature aggregation helps the bandit to learn faster, we run \(LinUCB\), \(NetCB-S\), \(LinUCB+S\), \(NetCB\), \(LinUCB+S+O\), and \(NetCB+O\) in both real-world and semi-synthetic networks with \(p_{a}=0.7\) and \(p_{m}=0.5\). We use LinUCB Li et al. (2010) as the subroutine for \(NetCB-S\) and \(NetCB\). We run \(25\) experiments by setting \(p_{aa}=0.1\), \(p_{ma}=0.1\) and \(p_{aa}=0.3\), \(p_{ma}=0.3\) while keeping \(p_{am}=0.0\), \(p_{mm}=0.0\). We compare \(LinUCB\) vs \(NetCB-S\) and \(LinUCB+S\) vs \(NetCB\) to investigate the effect of neighborhood knowledge. To investigate whether going against the bandit recommendation helps to increase spillover, we compare \(LinUCB\) vs \(LinUCB+S\) vs \(LinUCB+S+O\) and \(NetCB-S\) vs \(NetCB\) vs \(NetCB+O\). The cumulative average of the regret, \(Regret\), is reported in Figure 3. Tables 1-2 in the appendix include the bandit accuracy, regrets, and all types of rewards at the end of the bandit experiments.

**Experiment 3: Understanding how the contextual bandit choice as a subroutine impacts the bandit accuracy, \(B_{acc}\).** We want to investigate the impact of the choice of the underlying contextual bandit algorithm used as a subroutine, and of its hyper-parameter(s), on the learning performance. Therefore, we run \(NetCB\) with three contextual bandit subroutines, LinUCB Li et al. (2010), NeuralUCB Zhou et al. (2020), and NeuralTS Zhang et al. (2020), to understand the impact of bandit choice on \(B_{acc}\), and denote the resulting variants \(NetCB_{1}\), \(NetCB_{2}\), and \(NetCB_{3}\), respectively. We compare \(NetCB_{1}\) vs \(LinUCB\), \(NetCB_{2}\) vs \(NeuralUCB\), and \(NetCB_{3}\) vs \(NeuralTS\). We set the hyper-parameter for the confidence bound, \(\alpha=2.0\), in LinUCB. For both NeuralUCB and NeuralTS, we set the regularization parameter, \(\lambda=0.01\), the neural network width, \(m=20\), and the exploration variance, \(\nu=1.0\). We set \(p_{a}=0.7\), \(p_{m}=0.5\), \(p_{aa}=0.3\), \(p_{ma}=0.3\), \(p_{am}=0.0\), and \(p_{mm}=0.0\) for all experiments and run \(25\) experiments. The cumulative averages of the bandit accuracy, \(B_{acc}\), for all methods are reported in Figure 4. We defer additional experiments to Appendix A4.

## Results

### Impact of varying direct treatment effects on bandit accuracy, \(B_{acc}\).

With the increase in the difference between \(p_{a}\) and \(p_{m}\), the learning rate, or bandit accuracy, increases in all networks, as shown in Figure 2. The bandit can better differentiate among different types of nodes with a higher difference between \(p_{a}\) and \(p_{m}\). For example, when \(p_{a}=0.7\) and \(p_{m}=0.1\), the bandit achieves around \(21.73\%\), \(26.97\%\), \(78.48\%\), and \(77.48\%\) accuracy at the end of the experiment in the Flickr, Blogcatalog, Pubmed, and Hateful networks, respectively. However, the bandit achieves \(13.60\%\), \(19.47\%\), \(54.11\%\), and \(61.32\%\) accuracy in the Flickr, Blogcatalog, Pubmed, and Hateful networks, respectively, when \(p_{a}=0.7\) and \(p_{m}=0.5\).
The cumulative bandit accuracy, \(B_{acc}\), becomes stable and does not change much after directly treating around \(10-15\%\) of the nodes in the networks, as shown in Figure 2. The bandit accuracy tends to decrease with an increase in the total number of labels, \(l\), in the networks. For example, the Pubmed network with \(l=3\) and the Hateful network with \(l=2\) achieve very high accuracy, while the Flickr network with \(l=9\) achieves the lowest accuracy at the end of the experiments, as shown in Figure 2.

### Impact of neighborhood knowledge on \(Regret\).

The aggregation of dynamic neighborhood knowledge reduces the bandit regret, \(Regret\), compared to the corresponding baseline, especially in highly homophilic networks, as shown in Figure 3. For example, in the case of lower spillover (\(p_{aa}=0.1\), \(p_{ma}=0.1\)), the \(Regret\) of \(NetCB-S\) decreases by \(13.79\%\), \(16.58\%\), \(11.50\%\), and \(4.92\%\) compared to \(LinUCB\) in the semi-synthetic Flickr (Homophily: 0.88), semi-synthetic Blogcatalog (Homophily: 0.88), Pubmed (Homophily: 0.80), and Hateful (Homophily: 0.73) networks, respectively. These decreases for \(NetCB\) are \(11.52\%\), \(22.18\%\), \(14.29\%\), and \(3.81\%\), respectively, for these networks compared to \(LinUCB+S\). In the case of higher spillover (\(p_{aa}=0.3\), \(p_{ma}=0.3\)), aggregating neighborhood knowledge helps to decrease the \(Regret\) more than in the lower spillover counterparts. The impact of aggregating dynamic neighborhood knowledge is smaller in less homophilic networks than in highly homophilic networks, as shown in Figure 3. To summarise, dynamic neighborhood information proves very helpful for achieving lower regrets in highly homophilic networks, especially when the spillover is high.

### Impact of leveraging spillover on \(Regret\).

Our proposed strategy to leverage spillover helps to decrease the \(Regret\), as shown in Figure 3 for \(LinUCB+S+O\) and \(NetCB+O\) compared to the baselines. For example, in the case of higher spillover (\(p_{aa}=0.3\), \(p_{ma}=0.3\)), the \(Regret\) decreases by \(37.22\%\), \(41.77\%\), \(7.90\%\), and \(3.49\%\) in \(LinUCB+S+O\) compared to \(LinUCB\) in the semi-synthetic Flickr (Homophily: 0.88), semi-synthetic Blogcatalog (Homophily: 0.88), Pubmed (Homophily: 0.80), and Hateful (Homophily: 0.73) networks, respectively. These decreases for \(NetCB+O\) are \(35.04\%\), \(24.32\%\), \(0.53\%\), and \(2.47\%\), respectively, compared to \(NetCB-S\) for these networks. To summarise, \(NetCB+O\) helps to decrease \(Regret\), especially in highly homophilic networks, but the performance tends to decay in imbalanced networks (e.g., Hateful). Our proposed strategy to maximize spillover depends on the bandit accuracy, \(B_{acc}\); therefore, \(NetCB\) does not help much on the datasets we used. However, given access to reliable latent preferences of the directly treated adjacent inactive nodes, the \(Regret\) of \(LinUCB+S\) and \(NetCB\) can be reduced to the levels of \(LinUCB+S+O\) and \(NetCB+O\), respectively.

Figure 4: Comparison of cumulative bandit accuracy, \(B_{acc}\), between LinUCB vs \(NetCB_{1}\), NeuralUCB vs \(NetCB_{2}\), and NeuralTS vs \(NetCB_{3}\) by setting \(p_{m}=0.5\), \(p_{a}=0.7\), \(p_{ma}=0.3\), \(p_{aa}=0.3\), \(p_{mm}=0.0\), and \(p_{am}=0.0\) in Experiment 3.

Figure 3: Comparison of cumulative regrets, \(Regret\), in real-world and semi-synthetic networks by setting \(p_{m}=0.5\), \(p_{a}=0.7\), \(p_{mm}=0.0\), and \(p_{am}=0.0\) in Experiment 2.
Figure 2: Comparison of cumulative bandit accuracy, \(B_{acc}\), of \(LinUCB\) by varying activation probabilities while keeping \(p_{ma}=0.3\), \(p_{aa}=0.3\), \(p_{mm}=0.0\), and \(p_{am}=0.0\) in Experiment 1.

### Impact of bandit algorithm choice on learning performance.

The bandit accuracy, \(B_{acc}\), of \(NetCB_{1}\) and \(NetCB_{3}\) in all networks is higher in Figure 4 than that of the corresponding baseline methods, \(LinUCB\) and \(NeuralTS\), respectively. However, the performance of \(NetCB_{2}\) tends to rely on the choice and tuning of the hyperparameters of its NeuralUCB [20] subroutine. The neural network fails to learn and converge in \(NeuralUCB\) for the Flickr and Blogcatalog datasets, and in \(NetCB_{2}\) for the Pubmed dataset. We defer the results of additional experiments to Appendix A5.

## Conclusion and future work

We present \(NetCB\) to leverage neighborhood knowledge and the potential of heterogeneous spillover to maximize rewards in bandit online learning. We demonstrated four types of spillover and some of their potential effects on contextual bandits for networks with a single-hop spillover setup. Our experiments on real-world and semi-synthetic networks show that neighborhood knowledge helps to increase the learning rate of the bandit in highly homophilic networks. Overriding the bandit recommendation based on expected rewards helps to increase network spillover when the learning rate of the bandit is good. One possible extension of our research is to design a multi-hop spillover setup for contextual bandits in networks and understand the impact of spillover on learning performance. Future work includes understanding the performance of contextual bandits under temporal features, deriving regret bounds for \(NetCB\), and learning the values of the treatment-dependent heterogeneous spillover probabilities in networks.
2307.02292
Measurement-induced phase transitions in the toric code
We show how distinct phases of matter can be generated by performing random single-qubit measurements on a subsystem of toric code. Using a parton construction, such measurements map to random Gaussian tensor networks, and in particular, random Pauli measurements map to a classical loop model in which watermelon correlators precisely determine measurement-induced entanglement. Measuring all but a 1d boundary of qubits realizes hybrid circuits involving unitary gates and projective measurements in 1+1 dimensions. We find that varying the probabilities of different Pauli measurements can drive transitions in the un-measured boundary between phases with different orders and entanglement scaling, corresponding to short and long loop phases in the classical model. Furthermore, by utilizing single-site boundary unitaries conditioned on the bulk measurement outcomes, we generate mixed state ordered phases and transitions that can be experimentally diagnosed via linear observables. This demonstrates how parton constructions provide a natural framework for measurement-based quantum computing setups to produce and manipulate phases of matter.
Amir-Reza Negari, Subhayan Sahu, Timothy H. Hsieh
2023-07-05T13:44:18Z
http://arxiv.org/abs/2307.02292v2
# Measurement-induced phase transitions in the toric code ###### Abstract We show how distinct phases of matter can be generated by performing random single-qubit measurements on a subsystem of toric code. Using a parton construction, such measurements map to random Gaussian tensor networks, and in particular, random Pauli measurements map to a classical loop model in which watermelon correlators precisely determine measurement-induced entanglement. Measuring all but a 1d boundary of qubits realizes hybrid circuits involving unitary gates and projective measurements in 1+1 dimensions. We find that varying the probabilities of different Pauli measurements can drive transitions in the un-measured boundary between phases with different orders and entanglement scaling, corresponding to short and long loop phases in the classical model. Furthermore, by utilizing single-site boundary unitaries conditioned on the bulk measurement outcomes, we generate mixed state ordered phases and transitions that can be experimentally diagnosed via linear observables. This demonstrates how parton constructions provide a natural framework for measurement-based quantum computing setups to produce and manipulate phases of matter.

###### Contents

* I Introduction
* II Setup
  * II.1 Toric code/Plaquette model
  * II.2 Measurement setup
  * II.3 Stabilizers and measurement
* III Completely packed loop model
* IV Measurement-induced entanglement between two distant regions
  * IV.1 MIE between two un-measured qubits
  * IV.2 MIE between two un-measured boundaries
* V Measurement-induced phase transition in the Boundary
  * V.1 Measured toric code as a \(1+1\)d hybrid circuit
  * V.2 Bipartite entanglement in the un-measured boundary
* VI Long-range order in the boundary state
  * VI.1 Spin glass order parameter
  * VI.2 Linear order parameter from adaptive circuits
* VII General on-site measurements
  * VII.1 Tensor network representation of parton construction
  * VII.2 Measured toric code as a Gaussian tensor network
  * VII.3 Measured toric code as a Gaussian hybrid circuit
  * VII.4 Phase diagram of boundary MIE after general on-site measurement
* VIII Discussion
  * VIII.1 Acknowledgments
* A Loop patterns and quantum states in physical Hilbert space
  * A.1 Single un-measured boundary
  * A.2 Two un-measured boundaries
* B Relation between Majorana partons and Jordan-Wigner fermions
  * A.3 Jordan-Wigner transformation of the boundary after bulk measurements

## I Introduction

Investigating the quantum phases of matter that can be dynamically generated in a quantum processor using measurement, classical feedback, and local unitaries has been a fruitful area of research. There has been significant interest in using such hybrid circuits to manipulate entanglement patterns, starting with the observation of a measurement-induced entanglement transition from volume law to area law as the frequency of measurements is varied [1; 2; 3; 4; 5; 6; 7; 8; 9] (for a review, see [10; 11]). Even without any unitary gates, random measurements of multi-site operators can lead to not only various entanglement patterns but also distinct long-range orders, such as symmetry-protected topological order, spin-glass order, and intrinsic topological order [12; 13; 14; 15]. These orders can undergo phase transitions by adjusting the probabilities of competing measurements. A different context in which measurements take center stage is measurement-based quantum computation (MBQC) [16].
The MBQC approach involves starting with an entangled "resource state", such as the 2D cluster state, and sequentially performing single-site measurements on the majority of the qubits, where the measurement basis can depend on the outcomes of previous measurements. This results in the remaining un-measured qubits being directed towards a specific entangled state that encodes the outcome of a deterministic quantum computation. For example, measurements on a 2d resource state effectively realize a computation on the 1d boundary of the system, and the other dimension corresponds to the "time" direction of the computation. In this work, we ask the question: starting with an entangled resource state, can MBQC-type protocols lead to robust quantum phases of matter and transitions between them? We explore this question for the toric code ground state, which is an exactly solvable model of \(\mathbb{Z}_{2}\) topological order in two dimensions [17]. We find that by tuning the probabilities of measuring single-site Pauli \(X\), \(Y\), or \(Z\) in the toric code bulk, we can realize distinct phases in an un-measured boundary. These measurement-induced phases are characterized by the presence or absence of spin-glass order parameters and their entanglement scaling (area law vs. logarithmic scaling). As is the case for MBQC, the bulk measurements in the toric code effectively realize dynamics for a one-dimensional boundary, and in this case the effective dynamics are that of a hybrid circuit involving both unitaries and measurements. Thanks to the underlying entanglement of the toric code state, only single-qubit measurements are required to effectively realize non-trivial hybrid circuits involving two-qubit operations. Liu et al. [18] performed a related study on the 2D cluster state, an MBQC resource state which enables universal quantum computation, and discovered an entanglement transition from area to volume law in boundary qubits induced by measuring the bulk qubits. In contrast, in our setup with the toric code, we find transitions in both entanglement (albeit without volume law) and other order parameters. A recent study [19] also considered an MBQC setup on the 2d cluster state and found evidence of distinct area law entanglement phases on the 1d boundary. One advantage of our setup is that we can understand such transitions analytically by relating the entanglement properties of the qubits to certain correlation functions of a corresponding 2D classical loop model with crossings. Such a model has short and long loop phases, which exactly correspond to the area law and the logarithmic scaling of entanglement in the 1d boundary. The summary of these results is presented in Fig. 1. As is the case with hybrid circuits without feedback (measurement outcomes are not used to inform future operations), the transition between quantum phases is only apparent in quantities nonlinear in the ensemble of quantum trajectories. In our examples, these nonlinear quantities can be spin-glass order parameters or entanglement measures. However, the spin glass order can be converted into ferromagnetic order (a linear observable) via feedback: we show that one layer of single-qubit unitaries, conditioned on the bulk measurement outcomes, can be applied to the boundary state to ensure that the resulting density matrix averaged over trajectories has long-range order that is observable.
This constitutes a nontrivial quantum channel on a 2d array producing a long-range entangled mixed state in 1d; as in [20], it relies on measurement and unitary feedback, though the resulting mixed state is likely difficult to generate using only operations on the 1d system. This setup can be readily generalized from Pauli \(X,Y,Z\) measurements to arbitrary single-site projective measurements. We find that such measurements in \(2+0\)d map in general to Gaussian fermionic hybrid circuits in \(1+1\)d. This mapping allows us to import the results about entanglement phases generated by such circuits (for example [21; 22; 23]) onto the measurement-induced entanglement on the boundary state of the toric code. Even in the general on-site measurement setup, the phases with area law and logarithmic scaling of entanglement persist, albeit with distinct phase boundaries and transitions.

The structure of the paper is as follows: in section II, we introduce the measurement setup for the toric code. We then map the stabilizer configurations after measurements to the completely packed loop model with crossings (CPLC) and summarize relevant results in section III. In section IV, we relate specific order parameters in the loop model to entanglement induced by measurements between different regions. In section V, we explain how the mapping leads to distinct entanglement patterns in un-measured boundary qubits and we demonstrate how the setup can be mapped to a 1+1D hybrid circuit. In section VI, we show how the presence or absence of a certain spin glass order distinguishes the two phases. Furthermore, we describe a simple adaptive protocol that modifies the boundary state and enables identification of the two phases based on linear order parameters of the state. In section VII we analyze general single-qubit measurements (beyond Pauli) on the toric code and map the resulting states to Gaussian tensor networks and Gaussian hybrid circuits. This section also contains a tensor network representation of the toric code ground state via parton construction, which may be of independent interest. Finally, in section VIII we conclude with a discussion of our results, including relations to the underlying sign structure and MBQC universality of the resource state.

Figure 1: (a) The bulk of a toric code ground state is measured in random Pauli bases (denoted by different colors), which induces correlations between the unmeasured 1d boundary qubits (in dashed box). (b) Depending on the relative frequencies of different Pauli measurements, the 1d boundary can have different entanglement scaling (area law vs. logarithmic scaling) and also different orders (spin-glass vs. paramagnetic). Such phases and transitions are analyzed by mapping to a 2d classical loop model.

## II Setup

### Toric code/Plaquette model

The toric code is a lattice model of spin-1/2 degrees of freedom on the edges of a square lattice [17] which consists of commuting terms in its Hamiltonian called stabilizer operators. The toric code has two types of stabilizer operators: star (s) and plaquette (p) operators, \[H_{T}=-\sum_{s}\prod_{j\in s}X_{j}-\sum_{p}\prod_{j\in p}Z_{j} \tag{1}\] A closely related model is Wen's 2D plaquette model [24], where the spin-1/2 degrees of freedom are located at the vertices of a square lattice, and the Hamiltonian consists of only one type of 4-body stabilizer for every star \(s\) and plaquette \(p\), on the \(45^{\circ}\)-rotated lattice (see Fig. 2a):
\[H_{W}=-\sum_{a\in p,s}X_{a+\hat{y}}Z_{a+\hat{x}}X_{a-\hat{y}}Z_{a-\hat{x}} \tag{2}\] These two models can be transformed into each other using a single layer of local Hadamard gates arranged on one (say, B) of the two sub-lattices (A and B) of the square lattice in the plaquette model. These gates interchange \(X\leftrightarrow Z\) on the B sub-lattice, which interchanges the plaquette model and the toric code, as depicted in Fig. 2a. On a torus defined by identifying the boundaries along the \(x,y\) directions as marked in Fig. 2b, the toric code has four degenerate ground states, labeled by the \(\pm 1\) eigenvalues of the logical operators \(O_{1},O_{2}\), which are strings of respectively Pauli-Z and Pauli-X operators depicted in Fig. 2b. Any eigenstate \(|G\rangle\) of the toric code admits an exact free fermion parton construction [24; 25] defined as follows. The Hilbert space of each qubit on site \(i\) can be enlarged into that of four Majorana fermions \(\gamma_{i,s}=\{\gamma_{i,1},\gamma_{i,2},\gamma_{i,3},\gamma_{i,4}\}\) along the edges connected to \(i\), followed by a projection onto the original qubit Hilbert space. Consider the free fermion state \(\ket{\psi}_{\text{free}}\) such that \(i\gamma_{i,s}\gamma_{j,s^{\prime}}=1\) when \((i,s),(j,s^{\prime})\) are on the same edge. To return to the qubit Hilbert space, we must project the Majorana state to the \(+1\) sector of the operator \(D_{j}=\gamma_{j,1}\gamma_{j,2}\gamma_{j,3}\gamma_{j,4}\): \[\ket{G}=\prod_{j}\left(\frac{1+D_{j}}{2}\right)\ket{\psi}_{\text{free}}. \tag{3}\] Note that the initial free fermion state of the two Majorana modes on neighboring vertices can be oriented in two different ways, depending on whether we take the \(+1\) eigenstate of \(\pm i\gamma_{i,s}\gamma_{j,s^{\prime}}\). Different orientations (which are marked by \(s\to s^{\prime}\) to indicate \(+i\gamma_{s}\gamma_{s^{\prime}}\) in Fig. 2b) determine the particular eigenstate up to a global phase. In particular, the ground space of the toric code on a torus is 4-dimensional; one representative ground state is described in our convention by the orientation shown in Fig. 2b, where the same orientation is taken along all \(45^{\circ}\) lattice lines. Different logical sectors (choices of \(O_{1},O_{2}=\pm 1\)) of the plaquette model ground space can be represented by flipping all the orientations of the links \(ij\) along the non-trivial loops. We will focus our attention on the ground state defined by the orientation shown in Fig. 2b, which corresponds to \(O_{1},O_{2}=+1\).

Figure 2: (a) Toric code stabilizers can be converted to Wen plaquette stabilizers via staggered Hadamard gates. The Wen plaquette admits a parton construction in which each qubit is split into four Majorana fermions subject to a constraint (represented by the circle in the right subfigure). (b) A ground state of the plaquette model can be constructed by projecting a free fermion state consisting of Majorana dimers into the physical qubit Hilbert space.

### Measurement setup

First we consider the case of measuring a subset of qubits \(M\) in the toric code in either \(Z\), \(Y\), or \(X\) bases, with respective probabilities \((1-q)(1-p)\), \(p\), and \(q(1-p)\), which we call the \((p,q)\) measurement protocol. Our objective is to analyze the entanglement structure and order in the remaining (un-measured) qubits \(M^{c}\) after the measurements on \(M\) are performed. The target quantities of interest are averaged over all realizations of both measurement configurations and their outcomes.
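As a concrete illustration, one realization of the \((p,q)\) protocol's measurement configuration can be sampled as in the short sketch below (our own illustrative code, not from the paper):

```python
import numpy as np

def sample_bases(shape, p, q, rng=None):
    """Draw a Pauli basis per measured qubit: Y with prob. p, X with prob.
    (1-p)q, and Z with prob. (1-p)(1-q), as in the (p, q) protocol."""
    rng = rng or np.random.default_rng()
    is_y = rng.random(shape) < p
    is_x = rng.random(shape) < q   # only relevant where the qubit is not Y
    return np.where(is_y, 'Y', np.where(is_x, 'X', 'Z'))

bases = sample_bases((16, 16), p=0.2, q=0.5)  # one measurement configuration
```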
In a later section, we will generalize to the case of measuring along any direction in the Bloch sphere. Due to the equivalence between the toric code and plaquette models via a Hadamard transformation on one sub-lattice, the \((p,q)\) scheme for the toric code is equivalent to the \((p,q)\) scheme on the A sublattice and the \((p,1-q)\) scheme on the B sublattice for the plaquette model. In the plaquette model, Pauli operators on site \(j\) correspond to Majorana fermion bilinear operators \[\begin{split} X_{j}&=i\gamma_{j,1}\gamma_{j,2}=i \gamma_{j,4}\gamma_{j,3}\\ Y_{j}&=i\gamma_{j,2}\gamma_{j,3}=i\gamma_{j,4}\gamma _{j,1}\\ Z_{j}&=i\gamma_{j,1}\gamma_{j,3}=i\gamma_{j,2}\gamma _{j,4},\end{split} \tag{4}\] where the right equalities follow from the physical Hilbert space condition (\(D_{j}=1\)).

### Stabilizers and measurement

We first provide a brief overview of the Majorana stabilizer formalism specialized to our setting. The set of stabilizer generators \(\mathcal{G}\) is a set of products of Majorana fermions which are independent and mutually commute with each other. This set generates the stabilizer group \(\mathcal{S}\). In a Hilbert space of dimension \(2^{N}\), a set \(\mathcal{G}\) with exactly \(N\) generators uniquely defines the common eigenvector \(\ket{\psi}\) of any operator generated by \(\mathcal{G}\), such that \(s\ket{\psi}=\ket{\psi}\ \forall s\in\mathcal{S}\). If we measure the state \(\ket{\psi}\) with an operator \(P\) which is a product of Majorana fermions, the resulting state is still a Majorana stabilizer state and can be updated efficiently [26; 27]. There are two cases to consider. If \(P\) commutes with all the stabilizer generators \(g\in\mathcal{G}\), the measurement will not have any effect on the state, and the measurement outcome can be inferred from the sign of the operator in \(\mathcal{S}\), i.e., whether \(\pm P\in\mathcal{S}\). If \(P\) anti-commutes with some of the stabilizer generators, the measurement outcome is \(\pm 1\) with equal probability. We also have to modify the set of generators \(\mathcal{G}\): first we select one of the anti-commuting generators, denoted \(g_{0}\), and multiply \(g_{0}\) with the remaining anti-commuting generators. Next, we replace \(g_{0}\) in \(\mathcal{G}\) by either \(\pm P\) depending on the measurement outcome, so that the new stabilizer set becomes \(\{\pm P\}\cup\{g_{0}g_{i}|\ \forall i\neq 0,g_{i}\text{ anti-commutes with }P\}\cup\{g_{i}|\ g_{i}\text{ commutes with }P\}\). We apply this formalism to the ground state \(\ket{G}\), which is the projected free fermion state \(\ket{\psi}_{\text{free}}\), stabilized by two-point Majorana fermion operators: \(i\gamma_{j,s}\gamma_{i,s^{\prime}}\). Our goal is to measure Pauli operators corresponding to different two-point Majorana operators on every site. Since these are physical qubit operators, they commute with the projection operator, and hence we can first consider their effect on the free fermion state before applying the projection operator at the end. We graphically track the free fermion state updates by connecting Majorana fermions with a line when they form a stabilizer operator together.
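This bookkeeping is straightforward to implement. The sketch below (our own illustrative code, with signs deliberately ignored, as we also do in the following sections) stores the dimer state as a perfect matching and updates it under the measurement of a Majorana bilinear \(i\gamma_{a}\gamma_{b}\), anticipating the update rule described next:

```python
def measure_bilinear(pair, a, b):
    """Sign-free update of a Majorana dimer state under measurement of the
    bilinear i*gamma_a*gamma_b. `pair` maps each Majorana mode to its
    current partner (a perfect matching) and is updated in place."""
    if pair[a] == b:              # case (a): already a dimer, state unchanged
        return pair
    k, l = pair[a], pair[b]       # case (b): former partners of a and b
    pair[a], pair[b] = b, a       # the measured pair becomes a new dimer...
    pair[k], pair[l] = l, k       # ...and their former partners pair up
    return pair

# Example: modes paired as (0,1)(2,3); measuring i*g1*g2 re-pairs to (1,2)(0,3).
pair = {0: 1, 1: 0, 2: 3, 3: 2}
print(measure_bilinear(pair, 1, 2))  # {0: 3, 1: 2, 2: 1, 3: 0}
```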
When \(i\gamma_{j}\gamma_{i}\) is measured, there are two possible cases: (a) if there is already a connection between \(\gamma_{i}\) and \(\gamma_{j}\) in the initial state, no further updates are required, and (b) if these two Majorana fermions are connected to other Majoranas (e.g., \(\gamma_{i}\) is connected to \(\gamma_{k}\) and \(\gamma_{l}\) is connected to \(\gamma_{j}\)), the update will connect \(\gamma_{j}\) to \(\gamma_{i}\), and the other Majoranas will be connected accordingly (e.g., \(\gamma_{l}\) to \(\gamma_{k}\)), as shown in Fig. 3a. The signs of stabilizers and measurement outcomes can be tracked and updated by using arrows on Majorana pairings, as illustrated in Fig. 3a. However, the signs will not be important when computing entanglement quantities or spin glass order parameters in the case of \(X,Y,Z\), i.e. stabilizer, measurements. In the next sections, we will suppress the arrow notation for signs and return to the task of sign-tracking when discussing the linear order parameter in Sec. VI and on-site measurements in general directions in Sec. VII.

## III Completely packed loop model

Measuring Pauli operators on each site generates three different patterns of pairings (Fig. 3b), and measuring all qubits tiles these patterns and results in different configurations of loops on a square lattice. On the two different sub-lattices of the square lattice, the factors \(q\) and \(1-q\) must be swapped, to reflect the staggered measurement scheme of the plaquette model. Consider a configuration of measurements or tilings, \(\mathcal{C}=(N_{x},N_{y},N_{z})\), where \(N_{x},N_{y},N_{z}\) are the total numbers of \(X\), \(Y\), and \(Z\) measurements performed. Such a configuration has probability \(W_{\mathcal{C}}=p^{N_{y}}[(1-p)q]^{N_{x}}[(1-p)(1-q)]^{N_{z}}\), and the partition function is \(Z=\sum_{\mathcal{C}}W_{\mathcal{C}}\). The model and partition function are known as the completely packed loop model with crossings (CPLC), whose properties have been extensively studied in [28]. We will now review its important properties relevant to the questions addressed in this work. In [28] the authors found that the phase diagram consists of a short loop phase and a long loop "Goldstone" phase, which are separated by a phase transition (see Fig. 4). This model can be described by the replica limit \(n\to 1\) of a \(\mathbb{Z}_{2}\) lattice gauge theory coupled to an \(O(n)\) matter field. Its continuum description is a sigma model, which is massive in the short loop regime and massless in the Goldstone phase [28]. We focus on two order parameters which distinguish the phases. First, we consider the _watermelon correlation functions_ \(G_{k}(i,j)\), which denote the probability that \(k\) distinct strands connect points \(i\) and \(j\), where \(k\) is even for the CPLC model. For instance, \(G_{4}\) is the probability that two nodes are connected by four distinct strands. Using renormalization group (RG) techniques on the sigma model, [28] found that in the Goldstone phase \[G_{k}(i,j)\sim\frac{C_{0}}{\ln\left(d_{ij}/r_{0}\right)^{k(k-1)}} \tag{5}\] where \(d_{ij}\) is the distance between \(i,j\) and \(C_{0},r_{0}\) are non-universal constants. In the short-loop phase, on the other hand, the watermelon correlators decay as \(G_{k}(i,j)\sim e^{-d_{ij}/\xi}\), with correlation length \(\xi\). Next, we consider the _spanning number_ defined for a CPLC model on a cylinder, with two circular open boundaries.
The spanning number counts the number of strands that connect the upper and lower boundaries. [28] found that in the Goldstone phase, the average spanning number scales with system size \(L\) as \[n_{s}\approx\frac{1}{2\pi}\left(\ln\frac{L}{L_{0}}+\ln\ln\frac{L}{L_{0}} \right), \tag{6}\] whereas it asymptotes to \(0\) in the short loop phase. To explore the entanglement properties of the toric code after measurements and their connection to the phase transitions in the loop model, we need to leave some qubits un-measured, as measuring all qubits results in a trivial pure product state. Three scenarios are considered (see the schematic description in Fig. 4): (I) _Measuring all but two qubits in the bulk._ In Sec. IV.1 we show that the entanglement induced between the two un-measured qubits is directly related to the watermelon correlation function. (II) _Measuring all but two boundaries._ In Sec. IV.2, we observe that the induced entanglement between the two boundaries of the cylinder is directly related to the "spanning number" order parameter discussed in this section. Accordingly, in the short loop phase, the entanglement is asymptotically zero, while in the Goldstone phase, it exhibits logarithmic scaling with the system size. (III) _Measuring all but a single boundary._ We show in Sec. V that in this case the entanglement between contiguous bipartitions of the un-measured boundary exhibits a phase transition between area law and logarithmic law, reflecting the underlying loop model configurations.

Figure 3: (a) The update process for measuring the Majorana bilinear \(i\gamma_{j}\gamma_{i}\) for a free fermion state which is a tensor product of Majorana dimers. The \(+1\) refers to the measurement outcome, which sets the arrow direction in the final state. If the measurement outcome is \(-1\), the final arrows need to be reversed. (b) Updated pairings from measuring \(X\), \(Y\), and \(Z\) Pauli operators on each site of the plaquette model. The updated pairings need to be tiled to form the global dimer state. We have neglected the sign tracking of the dimer pairs, which depends on the measurement outcomes.

Figure 4: The schematic phase diagram of the completely packed loop model with crossings (CPLC) [28]. The diagram distinguishes between different phases based on three order parameters, highlighted in the panel below. These order parameters correspond to loop configurations that connect points marked by red dots. The first order parameter (I) quantifies the probability of four distinct strands connecting two red points on a torus, referred to as the "watermelon correlation function." The second order parameter (II) measures the expected number of strands connecting the top and bottom boundaries on a cylinder, known as the "spanning number." The third order parameter (III) measures the expected number of strands connecting two partitions of the top boundary on a cylinder with a fixed boundary condition at the bottom. All three quantities govern measurement-induced entanglement in the toric code state.

## IV Measurement-induced entanglement between two distant regions

In this section, we establish a connection between the average measurement-induced entanglement (MIE) of two distant unmeasured regions and various order parameters within the CPLC model. The MIE has been related to the underlying sign structure of the measured wavefunction in [29], and we will comment more on this in the concluding discussion.

### MIE between two un-measured qubits

We first demonstrate that the measurement-induced entanglement (MIE) between two un-measured qubits at sites \(i\) and \(j\) is equivalent to the watermelon correlation \(G_{4}(i,j)\). Recall that measuring a qubit specifies a given pairing for the four Majoranas associated with the qubit. After all pairings at all sites except for \(i,j\) are specified,
5) in which Majorana stabilizer strands terminate at the two vertices \(i,\ j\). (Any closed loop not coincident with \(i\) and \(j\) will not contribute any entanglement.) Denote a stabilizer strand connecting Majoranas \(\gamma_{i,s}\) and \(\gamma_{i^{\prime},s^{\prime}}\) as \((i_{s}i_{s^{\prime}}^{\prime})\). We suppress the sign information of the stabilizer in this notation. The three classes of configurations are (a) Each strand ends on Majoranas on the same vertex, i.e. we have \(2\ (i_{s}i_{s^{\prime}})\) and \(2\ (j_{s},j_{s^{\prime}})\) pairings. (b) Two strands end on the same vertex and two strands end on different vertices, i.e. there are \(1\ (i_{s}i_{s^{\prime}})\), \(2\ (i_{s},j_{s^{\prime}})\), and \(1\ (j_{s},j_{s^{\prime}})\) pairings. (c) All four strands terminate in different vertices, i.e. there \(4\ (i_{s},j_{s^{\prime}})\) pairings. Once we impose the local projection operators \(D_{i}=(i_{1}i_{2}i_{3}i_{4}),\ D_{j}=(j_{1}j_{2}j_{3}j_{4})\), the stabilizer generators need to be updated. Furthermore, to compute the entanglement between the two unmeasured qubits, we choose a canonical gauge for the stabilizers [4; 30] in which the stabilizers restricted to any subsystem are independent; in this canonical form, the entanglement across a bipartition is proportional to the number of stabilizers shared between both parties. We show examples of the stabilizer update into its canonical form in Fig. 5. In the right column in Fig. 5, we show the stabilizer generators obtained from these patterns of strands, set in their canonical form such that the stabilizer generators restricted to either \(i\) or \(j\) are independent. As can be seen, the number of such independent _connecting_ stabilizers are \(0,0,\) and \(2\) respectively, in the three types of Majorana pairings \(a,b,\) and \(c\). Thus, only configuration \(c\) contributes one bit of entanglement, and the average measurement-induced entanglement (MIE) generated between any two un-measured qubits on \(i\) and \(j\) is exactly given by the probability of \(4\) distinct loops connecting \(i\) and \(j\) in the CPLC model, i.e. the watermelon correlation function defined in the previous sec Figure 4: The schematic phase diagram of the completely packed loop model with crossings (CPLC) [28]. The diagram distinguishes between different phases based on three order parameters, highlighted in the panel below. These order parameters correspond to loop configurations that connect points marked by red dots. The first order parameter (I) quantifies the probability of four distinct strands connecting two red points on a torus, referred to as the “watermelon correlation function.” The second order parameter (II) measures the expected number of strands connecting the top and bottom boundaries on a cylinder, known as the “spanning number.” The third order parameter (III) measures the expected number of strands connecting two partitions of the top boundary on a cylinder with a fixed boundary condition at the bottom. All three quantities govern measurement-induced entanglement in the toric code state. Figure 5: Three possible pairings of unmeasured qubits \(i,j\) after all other qubits are measured. Left: stabilizer strands prior to physical qubit Hilbert space projections. Right side: stabilizers after projections, in canonical form. Note, in writing the stabilizers as Pauli strings, we have assumed that the sites \(i,j\) are in the same sub-lattice. 
\[\langle S_{\text{MIE}}(i,j)\rangle=G_{4}^{\text{CPLC}}(i,j)\,\ln 2. \tag{7}\] Hence it follows from the results of [28], as quoted in Eq. 5, that the averaged MIE is long-ranged in the Goldstone phase and short-ranged in the short loop phase.

### MIE between two un-measured boundaries

Now we consider the toric code on a cylinder and explore the effects of bulk measurements on the boundary chains of qubits; in particular, we focus on the scenario where both circular boundaries of the toric code are left unmeasured. For the purposes of this section, the exact boundary conditions do not matter, so we defer a discussion of the exact boundary conditions and the exact mapping of this scenario to the loop model to the next section. Here we just quote the final result: under the Majorana loop mapping, the average entanglement between these two boundaries can be directly mapped to the "spanning number" in the loop model, as illustrated in Figure 4II. This can already be motivated from the discussion in the earlier sub-section, where we showed that the entanglement between two remote regions of the toric code corresponds to open strands connecting the regions in the CPLC model. However, we must also transform the stabilizers to their canonical forms in order to directly count their entanglement contribution. In Appendix A.2 we show that in this geometry, if there are \(n\geq 2\) such strands connecting the top and bottom boundaries in the loop model, we get \(n-2\) independent Majorana stabilizer generators in their canonical form connecting the top and bottom boundaries. The average \(n\) is just the spanning number of the loop model, so we get the following correspondence, \[\langle S_{\text{MIE}}\rangle=\frac{(n_{s}-2)\ln 2}{2}. \tag{8}\] This correspondence holds true only for \(n_{s}\geq 2\); otherwise, \(S_{\text{MIE}}=0\). As noted in Eq. 6, the average spanning number \(n_{s}\) scales logarithmically with \(L\) in the Goldstone phase and asymptotes to zero in the short loop phase.

## V Measurement-induced phase transition in the Boundary

In this section, we investigate the occurrence of measurement-induced phase transitions in a 1D chain of qubits on the boundary of a toric code state, where the remaining qubits are measured in random local Pauli bases (as depicted in Fig. 1). Many of the findings in this section can be generalized to different topologies (such as torus, cylinder, or plane) and partitioning schemes of the unmeasured 1D chain. We begin by examining a toric code implemented on a cylinder with two open circular boundaries. The boundary stabilizers are truncated, resulting in two types of boundary conditions: "rough" or "smooth". The rough condition arises when the plaquette stabilizers are truncated, while the smooth condition occurs when the star stabilizers are truncated. In Figure 6, the truncated toric code represented in the parton picture exhibits a simple form, where no distinction between rough and smooth is evident. Consequently, it is unnecessary to specify the type of boundary condition for the entanglement analysis. Moreover, one can prepare such a surface code state by measuring the toric code state on a torus along a horizontal line on one of the sub-lattices in a fixed basis; subsequently, a single layer of local unitary updates is performed based on the measurement outcomes.
For concreteness, we will hereafter assume the pair of rough and smooth boundary conditions shown in Fig. 6. In this section, we show that the boundary state after bulk measurements is precisely the state generated by a hybrid circuit in \(1+1\)d composed of measurements and unitaries. While this identification is general, for the particular case of the toric code, the corresponding hybrid circuit is a \(1+1\)d Ising symmetric circuit [31; 12]. This mapping is motivated by MBQC, whereby a circuit can be effectively realized by single-site measurements on a resource state. However, the toric code is not a universal resource state [32]; thus, single-site measurements on the toric code cannot represent all circuits.

Figure 6: Boundary conditions of the parton ground state of the toric code on a cylinder. The rough and smooth boundary conditions on the two open edges of the cylinder are shown, along with the respective truncated stabilizers on the boundaries. The \(x\) direction is taken to have periodic boundary conditions.

### Measured toric code as a \(1+1\)d hybrid circuit

Starting from the parton representation of the toric code, measuring each qubit modifies the pairings of four neighboring Majoranas, as described in Sec. II. These pairings can be interpreted as world lines of two Majoranas undergoing circuit operations. In particular, the corresponding circuit consists of the following gates acting on two neighboring Majorana fermions: a swap gate, an identity gate, and a fermion parity measurement, as illustrated in the bottom of Fig. 7. Thus, starting from an \(N\times N\) toric code state on a cylinder, the bulk measurements realize a depth-\(N\) free fermion circuit on \(2N\) Majorana fermions (Fig. 7 middle). Note that for a fixed configuration of measurement bases, different measurement outcomes correspond to hybrid circuits in \(1+1\)d differing only in the signs of the Majorana stabilizers. However, as noted earlier, the observables we are interested in (entanglement and spin-glass order) are not sensitive to these signs and only depend on the worldline connectivity. Thus, for these observables, the hybrid circuits for a given measurement basis configuration are equivalent, regardless of the outcomes. After all the projections onto the physical qubit Hilbert space are imposed, the Majorana hybrid circuit maps via Jordan-Wigner transformation to a depth-\(N\) hybrid circuit of local unitaries and local measurements on a one-dimensional system of \(N\) qubits. This mapping is shown explicitly in Fig. 7. We note that this mapping is somewhat subtle, as the parton construction and Jordan-Wigner transformation are two different mappings from one qubit to respectively four and two Majorana fermions. Briefly, the reason why the claimed mapping works is that at the top boundary, the top two of the four Majoranas per parton decomposition are always paired in a nearest-neighbor dimer state (Fig. 7 left), so the physical qubit state is solely determined by the bottom two Majoranas per site via the standard Jordan-Wigner mapping (see Appendix B for details). The Majorana fermion parity measurements realize the measurement of either the neighboring \(Z_{k}Z_{k+1}\) or the on-site \(X_{k}\) in the qubit circuit, depending on the sub-lattice on which the measurements are performed: \[M_{1}=i\gamma_{k,2}\gamma_{k+1,1}=Z_{k}Z_{k+1},\qquad M_{2}=i\gamma_{k,1}\gamma_{k,2}=X_{k}. \tag{9}\]
Similarly, the Majorana swap gate implements either a two-qubit unitary \(U_{1}\) or an on-site unitary \(U_{2}\): \[U_{1}:Y_{k}I_{k+1}\leftrightarrow X_{k}Z_{k+1}\,,\qquad U_{2}:Z_{k}\leftrightarrow Y_{k}\,. \tag{10}\] These circuit operations preserve an Ising \(\mathbb{Z}_{2}\) symmetry \(\prod_{i}X_{i}\). The origin of this Ising symmetry of the hybrid circuit is the fact that the \(\prod_{i}X_{i}\) string operator, supported on the rough boundary, takes a definite value for the initial surface code state, and any bulk measurement away from the boundary commutes with this boundary operator. The qubit circuit corresponding to the \((p,q)\) measurement protocol on the toric code (TC) state can be explicitly defined as follows (see also Table 1): (1) At odd time steps, perform one of the following operations on each pair of neighboring qubits: the 2-qubit unitary \(U_{1}\) (TC measurement along \(Y\)) with probability \(p\), the identity operation with probability \((1-p)q\) (TC measurement along \(X\)), and the \(M_{1}\) measurement with probability \((1-p)(1-q)\) (TC measurement along \(Z\)). (2) At even time steps, perform one of the following operations on each qubit: the on-site unitary \(U_{2}\) with probability \(p\) (TC measurement along \(Y\)), the on-site measurement \(M_{2}\) with probability \((1-p)q\) (TC measurement along \(X\)), and the identity operation with probability \((1-p)(1-q)\) (TC measurement along \(Z\)). Figure 7: Pauli-\(X/Y/Z\) measurements on the bulk qubits, with the boundary chain of qubits left un-measured (left panel), correspond to a hybrid circuit with measurement and unitary gates. The measurement model is equivalent to a staggered measurement protocol on the Wen plaquette model. The measurements generate Majorana pairing patterns that can be interpreted as world lines of Majorana fermions undergoing a free fermion hybrid circuit (middle panel). By Jordan-Wigner transformation, the same circuit can be identified with an Ising symmetric measurement and unitary circuit on qubits (right panel). ### Bipartite entanglement in the un-measured boundary By identifying the bulk Majorana pairings with the classical loop model as in Sec. III, we can establish a direct mapping between the entanglement across a bipartition of the boundary state and a specific quantity depicted schematically in Figure 4III within the loop model. This quantity, which counts the number of strands connecting the two parts of the boundary chain, diagnoses the long-range correlations in the Goldstone phase of the loop model. We also show in Appendix A.1 that for the one-boundary setup, there is a one-to-one correspondence between the Majorana strands and the canonical stabilizer generators after implementing projections. Specifically, a configuration with \(n\) open strands connecting the two parts of the boundary corresponds to a quantum state with \(n\) canonical stabilizer generators connecting the two parts, thereby contributing \(\frac{n\ln 2}{2}\) units of entanglement. The bipartite entanglement between two parts of the boundary thus acts as an order parameter for the phase transition in the loop configurations of the CPLC model. In the Goldstone phase of CPLC, [31] found that the entanglement of a contiguous sub-region \(A\) of the qubit chain scales logarithmically, with a correction. Therefore, the entanglement of a contiguous subregion \(A\) of the un-measured boundary chain of the toric code also satisfies the same entanglement scaling in the Goldstone phase, \[\langle S_{A}\rangle\approx\frac{\ln(2)}{2}\left(\#\,\ln|A|+\frac{1}{4\pi}(\ln|A|)^{2}\right), \tag{11}\] while it obeys an area law in the short loop phase.
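As a quick check of the functional form in Eq. (11), here is a small sketch (not from the paper); the unspecified \(O(1)\) coefficient written \(\#\) above is treated as a free fit parameter `c1`.

```python
import numpy as np

def goldstone_entanglement(A, c1, prefactor=np.log(2) / 2):
    """Evaluate the Goldstone-phase scaling form of Eq. (11).

    `c1` is an assumed stand-in for the unspecified O(1) coefficient
    multiplying the linear-in-log term; it would be fit to data."""
    logA = np.log(A)
    return prefactor * (c1 * logA + logA**2 / (4 * np.pi))

# The (ln|A|)^2 term eventually dominates the linear-in-log term:
for A in (10, 100, 1000):
    print(A, goldstone_entanglement(A, c1=1.0))
```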
## VI Long-range order in the boundary state ### Spin glass order parameter As we showed in the previous section, the bulk measurements performed on the toric code can be mapped to the Ising-symmetric hybrid circuits studied by Sang et al. [31; 12]. Given only \(ZZ\) measurements, the steady state is a "random GHZ" state characterized by a random spin configuration superposed with the flipped configuration. This is also known as a spin glass state, and the spin glass order is captured by the Edwards-Anderson order parameter: \[O=\frac{1}{L}\sum_{i,j}^{L}\langle\psi|Z_{i}Z_{j}|\psi\rangle^{2}\,. \tag{12}\] For spin glass order, \(O\sim L\), whereas for paramagnetic order, \(O\sim O(1)\). [12] studied the phase diagram of hybrid circuits involving \(ZZ\) and \(X\) measurements and random Ising-symmetric Clifford unitaries, and a stable spin glass phase was found. The toric code measurements map to a subset of symmetric Clifford unitaries, namely the free fermion operations defined above, and for this restricted class we provide a fermionic perspective on the spin glass order parameter and the extent of the spin glass phase. The central object in the spin glass order parameter is \(Z_{i}Z_{j}\), which for a stabilizer state can take three values (\(\pm 1\) or \(0\)). It maps via Jordan-Wigner transformation to \[Z_{i}Z_{j}=i\gamma_{i,2}\left(\prod_{k=i+1}^{k=j-1}i\gamma_{k,1}\gamma_{k,2}\right)\gamma_{j,1}\,. \tag{13}\] This string of Majoranas is nonzero if and only if all Majoranas within the interval \((i,j)\) are paired amongst themselves. (If any Majorana within the interval is paired with one outside, that dimer will anti-commute with the above string and render its expectation value zero.) In the short loop phase, configurations are composed of loops with a characteristic size \(\xi\), which is independent of \(L\). Hence, the probability that all Majoranas within an interval \((i,j)\) are paired up is independent of \(|i-j|\) for \(|i-j|\gg\xi\). For \(q<1/2\), this probability is a nonzero \(O(1)\) number independent of \(|i-j|\), and thus the spin-glass order parameter is extensive in the \(q<1/2\) short loop phase. ### Linear order parameter from adaptive circuits Due to the equal probabilities of \(Z_{i}Z_{j}\) having opposite signs \(\pm 1\), its average value is zero, making it necessary to use nonlinear order parameters such as the Edwards-Anderson order parameter defined in Eq. 12. However, in experimental setups, measuring nonlinear order parameters is generally challenging and requires post-selection of the measurement outcomes. \begin{table} \begin{tabular}{c|c|c|c} Sublattice (timesteps) & Measurement & Probability & Qubit Circuit \\ \hline \hline & \(Y\) (\(Y\)) & \(p\) & \(U_{1}\) \\ B (odd) & \(X\) (\(Z\)) & \((1-p)q\) & Identity \\ & \(Z\) (\(X\)) & \((1-p)(1-q)\) & \(M_{1}\) \\ \hline \hline & \(Y\) (\(Y\)) & \(p\) & \(U_{2}\) \\ A (even) & \(X\) (\(X\)) & \((1-p)q\) & \(M_{2}\) \\ & \(Z\) (\(Z\)) & \((1-p)(1-q)\) & Identity \\ \hline \end{tabular} \end{table} Table 1: Circuit mapping of the \((p,q)\) measurement model on the toric code (TC). For completeness, the corresponding staggered measurement protocol for the Wen plaquette (WP) model is indicated in parentheses.
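To make the protocol of Table 1 concrete, the following minimal sketch (not from the paper) samples one layer of circuit operations with the stated probabilities; `p`, `q`, and the layer length `L` are free parameters.

```python
import random

def sample_layer(p, q, odd, L):
    """Sample one layer of the (p, q) qubit circuit of Table 1.

    Odd layers act on neighbouring pairs, even layers on single sites;
    the weights follow the toric-code measurement frequencies."""
    if odd:
        choices, weights = ["U1", "I", "M1"], [p, (1 - p) * q, (1 - p) * (1 - q)]
    else:
        choices, weights = ["U2", "M2", "I"], [p, (1 - p) * q, (1 - p) * (1 - q)]
    return [random.choices(choices, weights)[0] for _ in range(L)]

random.seed(0)
print(sample_layer(p=0.3, q=0.5, odd=True, L=4))
```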
In practice it is more feasible to access expectation values of operators, like \(\mathrm{Tr}(\rho O)\), which are linear in the density matrix \(\rho\) of the ensemble of all measurement trajectories. Here we detail an efficient protocol for converting the (nonlinear) spin glass order into a linear order parameter. Our protocol employs an adaptive strategy that applies a layer of local unitaries on the boundary conditioned on the bulk measurement outcomes. The main objective of this adaptive unitary layer is to transform all the \(\pm\) values of the \(Z_{i}Z_{j}\) stabilizer generators into positive values, thus eliminating the issue of sign cancellation and converting the spin glass order into long-range ferromagnetic order, which is linear in the state and can be accessed experimentally. The protocol consists of two main parts: (1) identify the \(Z_{i}Z_{j}\) stabilizers and determine their signs, and (2) obtain single-site unitaries that can correct the negative signs to positive signs. Note that one approach for identifying and correcting the sign of \(Z_{i}Z_{j}\) operators in the stabilizer group is to simulate the entire evolution classically using the stabilizer formalism, with the knowledge of all the \(O(L^{2})\) bulk measurement outcomes. However, we propose a simpler algorithm that only requires access to the directions and outcomes of measurements within a correlation length \(O(\xi)\) from the boundary, i.e., \(O(L\xi)\) measurements. The algorithm is as follows: **A. \(Z_{i}Z_{j}\) generator graph construction:** We construct a graph whose vertices are the boundary qubit sites \(i\) and which has an edge \(ij\) if \(\pm Z_{i}Z_{j}\) is in the stabilizer group. By knowing the positions of the \(X,Y,Z\) measurements, we can obtain the corresponding configuration of Majorana strands. If for any \(i<j\), \(\gamma_{i,2}\) is paired up with \(\gamma_{j,1}\), we draw an edge \(ij\) in the graph. As per Eq. 13, this pairing implies a \(Z_{i}Z_{j}\) stabilizer if and only if all the Majorana fermions in the interval \((i,j)\) are paired up internally. This can be checked for all the Majorana fermions in \((i,j)\) from the loop configuration. If indeed there are strands that exit the interval \((i,j)\), then we erase the edge \(ij\) in the graph, as this does not correspond to a \(Z_{i}Z_{j}\) stabilizer. For the special case \(p=0\) without loop crossings, this second step is not necessary, as all the Majorana strands must be nested in this case. Note that the graph is a tree, as the Majorana strands are all independent generators. We then obtain the sign of the \(Z_{i}Z_{j}\) stabilizers by tracking the sign of the arrows along the strand \(\gamma_{i,2}\gamma_{j,1}\) and all the intermediate strands \(\gamma_{k,1}\gamma_{k^{\prime},2}\) for \(k,k^{\prime}\in(i,j)\). This requires us to keep track of only \(O(L\xi)\) measurement outcomes, where \(\xi\) is the correlation length corresponding to the loop size in the underlying CPLC model. If \(Z_{i}Z_{j}=-1\), we color the edge \(ij\). **B. Correcting the sign:** From the previous step, we have a colored graph. Flipping a spin \(k\) (acting with the unitary \(X_{k}\)) changes the signs of all edges adjacent to the node \(k\). We can correct any negative sign on an edge by flipping all nodes on one side of the edge (see Fig. 8c). Repeating this process for each edge allows us to flip the sign of every edge individually.
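A minimal sketch of step B follows (not the authors' code): because flipping all nodes on one side of a tree edge toggles only that edge's sign (interior edges are toggled twice), each negative edge can be corrected independently and the required \(X\) flips accumulated mod 2. The edge/sign encoding and site labels are hypothetical.

```python
from collections import defaultdict, deque

def correction_layer(edges, signs, n):
    """Sites where applying X makes every Z_iZ_j stabilizer sign positive.

    `edges` are tree edges (u, v) on n boundary sites; `signs` holds the
    corresponding +1/-1 stabilizer signs. Flips accumulate mod 2."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parity = [0] * n
    for (u, v), s in zip(edges, signs):
        if s == -1:
            # BFS the component containing v with the edge (u, v) cut
            side, queue = {v}, deque([v])
            while queue:
                w = queue.popleft()
                for x in adj[w]:
                    if x not in side and not (w == v and x == u):
                        side.add(x)
                        queue.append(x)
            for k in side:
                parity[k] ^= 1
    return [k for k in range(n) if parity[k]]

# Toy path graph 0-1-2-3 with one negative edge (1, 2):
print(correction_layer([(0, 1), (1, 2), (2, 3)], [+1, -1, +1], 4))  # [2, 3]
```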
To correct all the edges with a minimal number of operations, a search algorithm within the tree can be performed, which has a polynomial complexity with respect to the system size. Given access to the measurement protocol and outcomes, this algorithm can be executed by a classical computer to determine the necessary adaptive unitary protocol, resulting in all trajectories having non-negative \(Z_{i}Z_{j}\) correlations. The resulting ensemble of trajectories \(\rho\) has long-range ferromagnetic order \(\langle Z_{i}Z_{j}\rangle_{\rho}\) in the original spin glass phase. ## VII General on-site measurements We now consider the effect of measurements along an arbitrary direction \(n_{x}X+n_{y}Y+n_{z}Z\) on the toric code. These map to statistical models that go beyond the scope of the previously discussed CPLC model (Section III). In the following, we will demonstrate the representation of the measured toric code using Gaussian tensor networks (GTN) and the realization of Gaussian hybrid circuits within the virtual space. Furthermore, we will establish a connection with well-studied Gaussian hybrid circuits to showcase the robustness of the boundary MIE phase diagram obtained from the CPLC model in the general measurement case. Figure 8: (a) To convert any (nonlinear) spin glass long-range order of the boundary state into (linear) ferromagnetic long-range order, a layer of on-site unitaries conditioned on bulk measurement outcomes can be applied to the boundary. (b) The classical processing for the adaptive protocol involves using the distribution of Majorana strands to construct a graph in which an edge between nodes (qubits) \(i,j\) represents the existence of a \(Z_{i}Z_{j}\) stabilizer. The edges are colored depending on whether the sign of the stabilizer is \(\pm 1\), which can be computed from the measurement outcomes along the strands. (c) The negative signs in the tree graph can be flipped by applying \(X_{k}\) on all nodes on one side of the negative edge (e.g. those encircled). ### Tensor network representation of parton construction The parton representation of the toric code state can be reinterpreted as a two-dimensional tensor network composed of local tensors \(\ket{T}\). These tensors consist of a single physical leg representing a qubit and four virtual legs representing the parton Majorana fermions. To construct a tensor network, contractions between spins or contractions between Majorana fermions are allowed. A contraction between spins involves projecting the two legs onto a maximally entangled state, and a contraction between Majorana fermions sharing an edge involves projecting the two Majorana fermions onto the eigenstate of the \(i\gamma_{i}\gamma_{j}\) operator with eigenvalue \(+1\). The latter assumes an orientation for each virtual bond, which must be specified. We now describe the tensor network representation of the projected parton states of the toric code. First, we introduce the projection tensor that maps four Majorana fermions to one spin: \[P=\frac{\ket{G_{1}}\ket{\uparrow}+\ket{G_{2}}\ket{\downarrow}}{\sqrt{2}}\,. \tag{14}\] Here \(\ket{G_{1}}\) and \(\ket{G_{2}}\) are two fermionic stabilizer states of the four Majoranas, where \(\ket{G_{1}}\) is stabilized by \(i\gamma_{1}\gamma_{3}\), \(i\gamma_{2}\gamma_{4}\) and \(\ket{G_{2}}=i\gamma_{1}\gamma_{2}\ket{G_{1}}\). The orientation for bonds between tensors is specified in Fig. 9(a), and corresponds to the orientation of Majorana pairs described in Section II.
### Measured toric code as a Gaussian tensor network Measuring a qubit in the \(\vec{n}\) axis and obtaining outcome \(\pm 1\) corresponds to contracting the physical leg of a tensor with the qubit state \(\ket{\psi_{\pm\vec{n}}}\), the eigenstate of the operator \(\vec{n}\cdot\vec{\sigma}=n_{x}X+n_{y}Y+n_{z}Z\) with eigenvalue \(\pm 1\). After contraction, the resulting projection tensor takes the form of a Gaussian fermionic tensor on the remaining Majorana legs: \[\ket{F_{\vec{n}}}=\frac{\ket{G_{1}}\langle\psi_{\vec{n}}|\uparrow\rangle+\ket{G_{2}}\langle\psi_{\vec{n}}|\downarrow\rangle}{\sqrt{2}}=\frac{e^{i\phi}\cos\frac{\theta}{2}\ket{G_{1}}+\sin\frac{\theta}{2}\ket{G_{2}}}{\sqrt{2}} \tag{15}\] Here, \(\theta\) and \(\phi\) represent the spherical coordinates of the unit vector \(\vec{n}=(n_{x},n_{y},n_{z})=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\). The fermionic operators \(F^{1}\) and \(F^{2}\) which stabilize \(\ket{F_{\vec{n}}}\) (meaning that \(F^{1,2}\ket{F_{\vec{n}}}=\ket{F_{\vec{n}}}\)) are: \[\begin{split}F^{1}&=i\gamma_{1}\gamma_{3}\cos\theta+i\gamma_{1}\gamma_{2}\sin\theta\cos\phi+i\gamma_{1}\gamma_{4}\sin\theta\sin\phi\\ F^{2}&=i\gamma_{2}\gamma_{4}\cos\theta+i\gamma_{4}\gamma_{3}\sin\theta\cos\phi+i\gamma_{3}\gamma_{2}\sin\theta\sin\phi\,.\end{split}\] The independence, commutation, and unit-square properties of the fermionic operators \(F^{1}\) and \(F^{2}\) can be straightforwardly demonstrated. These two fermionic operators uniquely define \(\ket{F_{\vec{n}}}\) as a Gaussian state/tensor, which is in fact the most general form for four Majorana fermions. The covariance matrix of the state (which is defined as \(\Gamma_{ij}=\langle\frac{i}{2}[\gamma_{i},\gamma_{j}]\rangle\)) in the \((\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4})\) basis is: \[\Gamma=\frac{1}{2}\begin{bmatrix}0&\sin\theta\cos\phi&\cos\theta&\sin\theta\sin\phi\\ -\sin\theta\cos\phi&0&-\sin\theta\sin\phi&\cos\theta\\ -\cos\theta&\sin\theta\sin\phi&0&-\sin\theta\cos\phi\\ -\sin\theta\sin\phi&-\cos\theta&\sin\theta\cos\phi&0\end{bmatrix}\] In summary, each qubit measurement in the \(\vec{n}\) direction results in a Gaussian tensor supported on the virtual legs. In the setup where all but a top boundary of qubits are measured, all degrees of freedom except the top boundary of qubits are contracted (Fig. 10). We show in Appendix C that the Jordan-Wigner transformation of this boundary qubit state precisely yields the Gaussian tensor network state in which the boundary qubit projections and top row of Majorana contractions are removed (Fig. 10 right). Figure 9: (a) The parton representation of the toric code state can be interpreted as a local tensor with 1 'physical' qubit leg and 4 'virtual' Majorana legs, along with an orientation for contraction of the virtual indices. In this case, the orientation corresponding to a toric code groundstate is specified. (b) Measuring the parton state, or equivalently, contracting the physical index with a state vector in a general direction, leads to a Gaussian state on the 4 'virtual' Majorana degrees of freedom. This Gaussian state can be mapped to a Gaussian operation on 2 Majorana fermions, and can be directly identified with a non-unitary operator.
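A quick numerical sanity check of the covariance matrix above (a sketch, not from the paper): for a pure Gaussian state of this form, \(\Gamma\) is antisymmetric and \(2\Gamma\) is orthogonal, which can be verified directly for any \((\theta,\phi)\).

```python
import numpy as np

def covariance(theta, phi):
    """Covariance matrix of |F_n> in the (g1, g2, g3, g4) basis, as printed
    above. For this pure Gaussian state, 2*Gamma is antisymmetric and
    orthogonal."""
    sc = np.sin(theta) * np.cos(phi)
    ss = np.sin(theta) * np.sin(phi)
    c = np.cos(theta)
    return 0.5 * np.array([
        [0,   sc,  c,   ss],
        [-sc, 0,  -ss,  c],
        [-c,  ss,  0,  -sc],
        [-ss, -c,  sc,  0],
    ])

G = covariance(0.7, 1.3)
assert np.allclose(G, -G.T)                          # antisymmetry
assert np.allclose((2 * G) @ (2 * G).T, np.eye(4))   # purity check
```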
Consequently, performing measurements in the bulk of the toric code and employing a Jordan-Wigner transformation at the boundary results in a remaining state described by a Gaussian tensor network. In the measured toric code, each quantum trajectory can be represented by \(\{\vec{n}_{i}\}\), where \(\vec{n}_{i}\) denotes the measurement axis and outcome when measuring the qubit at site \(i\) along axis \(\vec{n}\). The probability of each trajectory arises from two sources. Firstly, there is the classical probability set by the protocol, denoted as \(w\left(\vec{n}\right)\). This is the probability of selecting measurement axis \(\vec{n}\), which is equivalent to selecting measurement axis \(-\vec{n}\). Therefore, the probability distribution \(w\) must satisfy \(w\left(\vec{n}\right)=w\left(-\vec{n}\right)\) and is determined in advance, without knowing the actual measurement outcomes sampled from the Born rule. If the measurement bases are chosen independently from site to site, the probability of a given set of measurement bases is \(w\left(\{\vec{n}_{i}\}\right)=\prod_{i}w\left(\vec{n}_{i}\right)\). Secondly, the Born rule for the measurement outcome determines the probability \(P(\vec{n})\) of the sign associated with \(\vec{n}\) based on the measurement. The Born probability \(P(\{\vec{n}_{i}\})\) of each trajectory \(\{\vec{n}_{i}\}\) is the norm of the wave function represented by the tensor network. The combined probability for a particular quantum trajectory is thus \(w(\{\vec{n}_{i}\})P(\{\vec{n}_{i}\})\). This notation can be used to represent the \(\left(p,q\right)\) measurement protocol for \(Z\), \(Y\), or \(X\) measurements discussed in previous sections. The weights associated with these measurements are \(w\left(Z\right)=w\left(-Z\right)=(1-q)(1-p)\), \(w\left(Y\right)=w\left(-Y\right)=p\), \(w\left(X\right)=w\left(-X\right)=q(1-p)\), and based on the stabilizer formalism, the Born probability for each plus and minus sign is equal to \(P(\vec{n})=\frac{1}{2}\). Consequently, \(\sum_{\vec{n}}w\left(\vec{n}\right)P(\vec{n})=1\). ### Measured toric code as a Gaussian hybrid circuit The Gaussian tensors in the virtual space can be understood as Gaussian operations that perform non-unitary transformations between the lower and upper legs of the virtual space, as illustrated in Figure 9(b). Building upon this observation, we establish a correspondence between a specific class of non-unitary Gaussian circuits in 1+1D and the random Gaussian tensor networks in 2D discussed in the previous section. These Gaussian circuits are characterized by a set of operators \(\{K_{\vec{n}}\}\) and a probability distribution \(\tilde{w}(\vec{n})\) associated with the operations. The operators \(K_{\vec{n}}\) act on the neighboring Majorana modes \(\gamma_{i}\) and \(\gamma_{i+1}\) within the virtual space, resulting in non-unitary Gaussian circuits [21; 22]. More explicitly, these operators can be expressed as: \[K_{\vec{n}}=\left(1-n_{x}^{2}\right)^{1/4}e^{-i\alpha(\vec{n})\gamma_{i}\gamma_{i+1}} \tag{16}\] where \(\alpha(\vec{n})\) is defined by \[e^{-2\mathrm{Re}[\alpha(\vec{n})]}=\left(\frac{1+n_{x}}{1-n_{x}}\right)^{1/2};\qquad e^{i2\mathrm{Im}[\alpha(\vec{n})]}=\frac{in_{y}+n_{z}}{\left(n_{y}^{2}+n_{z}^{2}\right)^{1/2}} \tag{17}\] It is worth noting that when \(n_{x}\) is zero, the operations are unitary. However, when \(n_{x}\) is \(\pm 1\), the operations correspond to fermion parity measurements.
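Equation (17) can be inverted in closed form; a minimal sketch (not from the paper) of the resulting \(\alpha(\vec{n})\), which also checks the unitary limit \(n_{x}=0\):

```python
import numpy as np

def alpha(n):
    """Solve Eq. (17) for alpha given a unit vector n = (nx, ny, nz).

    Requires |nx| < 1; nx = +-1 is the projective parity measurement."""
    nx, ny, nz = n
    re = -0.25 * np.log((1 + nx) / (1 - nx))
    im = 0.5 * np.arctan2(ny, nz)  # angle of (i*ny + nz)/sqrt(ny^2+nz^2)
    return re + 1j * im

# nx = 0 gives Re(alpha) = 0, i.e. a unitary K_n, as stated above:
print(alpha((0.0, 0.6, 0.8)))       # purely imaginary
print(alpha((0.5, 0.5, 2**-0.5)))   # weak parity measurement
```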
For general \(\vec{n}\) with \(n_{x}\neq 0\), the non-unitary operation \(K_{\vec{n}}\) is a weak measurement of the fermion parity. The space-time geometry of the monitored Gaussian circuit acting on a Majorana chain is depicted on the right-hand side of Fig. 10, and each operation is randomly chosen from the set of operations \(K_{\vec{n}}\). The probability of each trajectory is determined by two sources: the classical distribution \(\tilde{w}(\vec{n}_{i})\) and the Born probability \(\left\langle\psi\right|K_{\vec{n}_{i}}^{\dagger}K_{\vec{n}_{i}}\left|\psi\right\rangle\), where \(\left|\psi\right\rangle\) is the normalized wave function of the 1D fermion chain before the action of the operator \(K_{\vec{n}_{i}}\), and the updated wave function after the action is given by \(\frac{K_{\vec{n}_{i}}\left|\psi\right\rangle}{\left\|K_{\vec{n}_{i}}\left|\psi\right\rangle\right\|}\). Note that the classical probability should satisfy \(\tilde{w}((n_{x},n_{y},n_{z}))=\tilde{w}((-n_{x},n_{y},n_{z}))\) so as not to force any bias on the measurement outcomes. The first step in establishing the correspondence is to relate the operator representation of the tensor \(\left|F_{\vec{n}}\right\rangle\) in the tensor network to the set of operators in the Gaussian circuit. By performing explicit contractions of the tensor network, it can be shown that the operator representation of the tensor \(\left|F_{\vec{n}}\right\rangle\) is \(\frac{K_{\vec{n}}}{\sqrt{2}}\). This implies that the final state of the hybrid circuit and the tensor network, given the same set of directions, are equal up to a normalization factor. However, this normalization factor is only relevant for the probability of the trajectory. This mapping establishes a direct correspondence between the measurement directions and outcomes (labeled by \(\vec{n}\)) and a family of operators in the circuit representation parameterized by \(\vec{n}\) (Fig. 10). Figure 10: Bulk measurements of general on-site operators on the toric code ground-state can be mapped to a Gaussian circuit on a chain of the virtual Majorana degrees of freedom. The qubit state supported on the top boundary on the left hand side maps under Jordan-Wigner transformation to the Gaussian state supported on the top boundary on the right hand side (see Appendix C for derivation). The second step is to establish a connection between \(\tilde{w}(\vec{n})\) and \(w(\vec{n})\) such that identical space-time configurations in both setups have the same probability. The probability of a specific trajectory in the tensor network can be expressed as \(P_{\{\vec{n}_{i}\}}=\prod_{i}w\left(\vec{n}_{i}\right)\,\big{\|}\prod_{i}\frac{K_{\vec{n}_{i}}}{\sqrt{2}}\left|\psi_{0}\right\rangle\big{\|}^{2}\), with \(\left|\psi_{0}\right\rangle\) being the initial state, i.e., the lower boundary of the toric code. By recursively defining \(\left|\psi_{t}\right\rangle=\frac{K_{\vec{n}_{t}}\left|\psi_{t-1}\right\rangle}{\left\|K_{\vec{n}_{t}}\left|\psi_{t-1}\right\rangle\right\|}\), the probability can be written as \(P_{\{\vec{n}_{i}\}}=\prod_{t}\frac{w(\vec{n}_{t})}{2}\left\|K_{\vec{n}_{t}}\left|\psi_{t-1}\right\rangle\right\|^{2}\), which establishes the correspondence with the trajectory probability in the circuit representation.
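The recursive rewriting above translates directly into code. A minimal sketch (not from the paper), with generic matrices standing in for the \(K_{\vec{n}}\) operators and assuming no step has zero Born probability:

```python
import numpy as np

def trajectory_probability(kraus_ops, psi0, w=None):
    """Accumulate P = prod_t (w_t/2) * ||K_t |psi_{t-1}>||^2 while keeping
    the state normalized, following the recursion in the text.

    `kraus_ops` is a list of matrices standing in for the K_n operators;
    `w` holds the per-step protocol weights (all 1 if omitted)."""
    psi = psi0 / np.linalg.norm(psi0)
    prob = 1.0
    if w is None:
        w = [1.0] * len(kraus_ops)
    for K, wt in zip(kraus_ops, w):
        phi = K @ psi
        norm2 = np.vdot(phi, phi).real
        prob *= (wt / 2) * norm2
        psi = phi / np.sqrt(norm2)   # assumes norm2 > 0
    return prob, psi
```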
The \(w(\vec{n})\) ensemble of measurements on the toric code is thus the same as the \(\tilde{w}(\vec{n})=\frac{w(\vec{n})}{2}\) ensemble of non-unitary circuits on Majorana fermions, where the Gaussian operations \(K_{\vec{n}}\) are sampled with protocol probability \(\tilde{w}(\vec{n})\) (with the restriction \(\tilde{w}\left(\vec{n}\right)=\tilde{w}\left(-\vec{n}\right)\)). This establishes the mapping between the measurement protocol on the toric code and a specific class of non-unitary Gaussian circuits on the virtual space. However, it is important to note that the integration domains for \(\tilde{w}(\vec{n})\) and \(w(\vec{n})\) are different. While \(w(\vec{n})\) is defined over a hemisphere due to its axis-based definition, \(\tilde{w}(\vec{n})\) is defined over the entire sphere. ### Phase diagram of boundary MIE after general on-site measurement Following the circuit mapping that connects different measurement protocols on the toric code with Gaussian fermionic circuits, we can readily use results on the entanglement phase diagram of non-unitary Gaussian circuits to infer the phase diagram of the MIE in the boundary state after measuring the bulk of the toric code state on a cylinder along any general directions. The corresponding circuit problem has been extensively studied both numerically and analytically recently; see, e.g., [21; 22; 23]. Corresponding to any Gaussian circuit ensemble that satisfies the condition \(\tilde{w}\left(\vec{n}\right)=\tilde{w}\left(-\vec{n}\right)\), we can find the bulk measurement protocol on the toric code that realizes that Gaussian circuit, via Eq. 17. These works have found the entanglement phase diagram to consist of regions of area law and critical logarithmic scaling separated by phase transitions, and we infer that the boundary MIE phase diagram also behaves similarly. This demonstrates that the MIE phase diagram we obtained by mapping a specific measurement protocol (along the \(X\), \(Y\), or \(Z\) directions) to the CPLC model is qualitatively robust to modifications of the measurement protocol to general on-site measurements. However, the specific phase boundaries and the nature of the phase transition between the area law and critical phases vary between different ensembles, as discussed in [22]. ## VIII Discussion We explored the use of the measurement-based quantum computing (MBQC) setup for generating and manipulating quantum phases of matter. Specifically, we focused on the toric code, a topologically ordered state, and mapped the effects of random Pauli measurements to a classical loop model, allowing for an analytical understanding of measurement-induced entanglement. Additionally, we mapped general on-site measurements to Gaussian tensor networks and hybrid circuits. We found that the entanglement pattern imprinted on the un-measured qubits following measurement of the bulk of the toric code groundstate undergoes a phase transition that reflects the transition in the corresponding classical loop model. When a boundary chain of qubits is left un-measured, the boundary state can have either area law or logarithmic scaling of entanglement entropy, depending on the relative \(X,Y,Z\) Pauli measurement frequencies. Additionally, we found that these states can also be distinguished by a spin-glass order parameter. This allowed us to devise an adaptive protocol conditioned on the measurement results, which can steer the boundary state into a ferromagnetically ordered state, and this can be efficiently probed in experiments.
Because of the relative simplicity of bulk single-site measurements and the fact that toric code states have already been realized in quantum hardware [33; 34], the setup described in this work is experimentally relevant. Our MBQC-based setup provides a way to simulate \(d+1\)-spacetime-dimensional hybrid circuits by one layer of local measurements on a \(d+1\)-dimensional entangled resource state. In quantum devices with limited coherence times, this setup may provide a promising practical route towards simulating such hybrid circuits. Our work illustrates how parton constructions can be leveraged in MBQC schemes, and it is worth exploring generalizations, especially in higher-dimensional resource states. For example, fracton orders admit Majorana parton descriptions [35; 36], and one can consider the effect of measurements on such states. Another noteworthy example is the Levin-Wen 3D plaquette model [37], which serves as a natural generalization of the projection of a free parton state prepared on the edges. One can analyze the effect of measurements by a very similar mapping to a loop model in three dimensions. Furthermore, considering higher-dimensional un-measured manifolds might lead to more complex entanglement structures. It is also interesting to investigate the relation between measurement-induced entanglement (MIE) in random bases and other aspects of wavefunction complexity. For example, MIE after measurements in a fixed basis can diagnose the sign structure in that particular basis [29], and randomizing the measurement bases may partially probe the robustness of the sign structure to local unitary transformations ("intrinsic sign structure"). There may also be connections between the universality of the resource state in MBQC and the entanglement pattern induced by measurement. For example, a similar bulk measurement protocol on cluster states, which are universal MBQC resources, leads to a phase transition between area and volume-law states [18]. This can be contrasted with the toric code case (which is not a resource for universal MBQC) in this work, where any subregion of the boundary state has at most logarithmic scaling of entanglement entropy. ### Acknowledgments We thank Xiao Chen, Paul Herringer, Peter Lu, Adam Nahum, and Beni Yoshida for useful discussions and Amin Moharramipour for carefully reading our manuscript. This work was supported by the Perimeter Institute for Theoretical Physics (PI), the Natural Sciences and Engineering Research Council of Canada (NSERC), and an Ontario Early Researcher Award. Research at PI is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities.
2302.07534
Reliable optimization of arbitrary functions over quantum measurements
As the connection between the classical and quantum worlds, quantum measurements play a unique role in the era of quantum information processing. Given an arbitrary function of quantum measurements, obtaining its optimal value is a basic yet important problem in various applications. Typical examples include, but are not limited to, optimizing the likelihood functions in quantum measurement tomography, searching for the Bell parameters in Bell-test experiments, and calculating the capacities of quantum channels. In this work, we propose reliable algorithms for optimizing arbitrary functions over the space of quantum measurements by combining the so-called Gilbert's algorithm for convex optimization with certain gradient algorithms. With extensive applications, we demonstrate the efficacy of our algorithms with both convex and nonconvex functions.
Jing Luo, Jiangwei Shang
2023-02-15T09:07:15Z
http://arxiv.org/abs/2302.07534v1
# Reliable optimization of arbitrary functions over quantum measurements ###### Abstract As the connection between the classical and quantum worlds, quantum measurements play a unique role in the era of quantum information processing. Given an arbitrary function of quantum measurements, obtaining its optimal value is a basic yet important problem in various applications. Typical examples include, but are not limited to, optimizing the likelihood functions in quantum measurement tomography, searching for the Bell parameters in Bell-test experiments, and calculating the capacities of quantum channels. In this work, we propose reliable algorithms for optimizing arbitrary functions over the space of quantum measurements by combining the so-called Gilbert's algorithm for convex optimization with certain gradient algorithms. With extensive applications, we demonstrate the efficacy of our algorithms with both convex and nonconvex functions. quantum measurement; Gilbert's algorithm; convex optimization; nonconvex optimization ## 1 Introduction In quantum information science, numerous complex mathematical problems remain to be solved. Since the set of quantum states as well as the set of quantum measurements form convex sets, various important tasks in this field, such as the calculation of ground state energy, violation of the Bell inequality, and the detection and quantification of quantum entanglement [1, 2], conform to the framework of convex optimization theory. The primary tool in convex optimization is semidefinite programming (SDP) [3, 4], which can be used to derive relaxed constraints and provide accurate solutions for a large number of computationally challenging tasks. However, serious drawbacks also exist for SDP, including its slow computation speed and low accuracy. For instance, SDP can only compute up to four qubits in quantum state tomography (QST), while improved superfast algorithms [5] can quickly go up to eleven qubits with a higher precision. Consequently, developing more efficient algorithms in convex optimization is becoming more and more crucial as quantum technologies rapidly advance. Recently, an efficient convex optimization algorithm [6] was proposed by Brierley _et al._ based on the so-called Gilbert's algorithm [7]. Concurrently, Ref. [8] used Gilbert's algorithm to investigate whether nonlocal relationships can be distinguished in polynomial time. In Ref. [9], Gilbert's algorithm was employed as a tool to satisfy certain constraints, based on which two reliable convex optimization schemes over the quantum state space were proposed. In addition, some nonconvex optimization algorithms have also been brought out for QST; for instance, the one in Ref. [10] is faster and more accurate as compared to previous approaches. One notices that all these studies concern only optimization over the quantum state space; optimization over the quantum measurement space has rarely been considered. In fact, various important and meaningful problems related to quantum measurements exist in convex optimization, including, for example, searching for the Bell parameters in Bell-test experiments [11], optimizing the correlation of quantum measurements under different measurement settings [12, 13, 14, 15], and maximizing the likelihood functions in quantum measurement tomography. Meanwhile, characterization of quantum measurements forms the basis for quantum state tomography [16, 17, 18] and quantum process tomography [19, 20, 21].
Therefore, convex optimization over the quantum measurement space stands as an independent yet important problem in quantum information theory. However, the space of quantum measurements is much more complex as compared to the quantum state space, since it is possible to produce an infinite variety of different measurement outcomes as long as the probabilities for these outcomes sum to one. Recently, Ref. [22] proposed a method to optimize over the measurement space based on SDP, but it fails to solve complex tasks due to the intrinsic problems with SDP. Worst of all, nonconvex functions [23] easily appear in the space of quantum measurements. Unlike with convex functions, local optima might be found during the process of optimization; hence, nonconvex optimization is regarded as more difficult than convex optimization. In this work, we propose two reliable algorithms for optimizing arbitrary functions over the space of quantum measurements by combining the so-called Gilbert's algorithm for convex optimization with the direct-gradient (DG) algorithm as well as the accelerated projected gradient (APG) algorithm. With extensive applications, we demonstrate the efficacy of our algorithms with both convex and nonconvex functions. This work is organized as follows. In Sec. 2, we propose two reliable algorithms for optimizing over the quantum measurement space by combining Gilbert's algorithm with the DG and APG algorithms, respectively. The universality of our method is demonstrated by several examples with both convex and nonconvex functions in Sec. 3. The last Sec. 4 is the summary. **2. Function optimization** In the quantum state space \(\mathcal{Q}\), an arbitrary state \(\rho\) should satisfy the conditions \[\rho\geq 0\,, \tag{1}\] \[\text{tr}(\rho)=1\,. \tag{2}\] Given a smaller convex subset \(\mathcal{C}\subset\mathcal{Q}\), Gilbert's algorithm can be used to approximately find the closest state \(\rho^{\mathcal{C}}\in\mathcal{C}\) with respect to \(\rho\) [9]. In general, for an arbitrary matrix \(M\) in the matrix space \(\mathcal{M}\), we employ Gilbert's algorithm to search for the closest quantum state \(\rho^{\mathcal{Q}}\in\mathcal{Q}\) with respect to \(M\). Throughout this work, we denote this operation using Gilbert's algorithm as \[\rho^{\mathcal{Q}}\equiv S\big{(}M\big{)}\,. \tag{3}\] Given experimental data, it is critical to identify the measurement settings that are most compatible with the data. Here, we consider the quantum measurement space \(\Omega\) as the set of all positive operator-valued measures (POVMs). A quantum measurement device is characterized by a set of operators \(\{\Pi_{l}\}\), which have to satisfy the two constraints \[\Pi_{l}\geq 0\,, \tag{4}\] \[\sum_{l=1}^{L}\Pi_{l}=\mathbb{I}\,, \tag{5}\] where \(L\) is the total number of operators in the set. Denote by \(\mathcal{F}\big{[}\{\Pi_{l}\}\big{]}\) a function defined over the quantum measurement space \(\Omega\). We assume that \(\mathcal{F}\big{[}\{\Pi_{l}\}\big{]}\) is differentiable with the gradient \(\nabla\mathcal{F}\big{[}\{\Pi_{l}\}\big{]}\equiv G\big{[}\{\Pi_{l}\}\big{]}\). The objective is to optimize \(\mathcal{F}\big{[}\{\Pi_{l}\}\big{]}\) over the entire quantum measurement space, i.e., \[\text{optimize}\quad\mathcal{F}\big{[}\{\Pi_{l}\}\big{]}\,, \tag{6a}\] \[\text{s.t.}\quad\{\Pi_{l}\}\in\Omega\,. \tag{6b}\]
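To fix ideas about the projection \(S(M)\) of Eq. (3), here is a minimal sketch. It is not Gilbert's algorithm (which is iterative and only approximates the projection); instead it computes the exact Euclidean projection of a Hermitian matrix onto the density-matrix set via eigenvalue simplex projection, as a stand-in with the same interface.

```python
import numpy as np

def project_to_states(M):
    """Stand-in for S(M): Euclidean projection of a Hermitian matrix onto
    the set of density matrices (unit-trace PSD), by projecting the
    eigenvalues onto the probability simplex. The paper instead runs
    Gilbert's algorithm, which approximates this projection."""
    M = (M + M.conj().T) / 2
    w, V = np.linalg.eigh(M)
    u = np.sort(w)[::-1]                      # eigenvalues, descending
    css = np.cumsum(u)
    j = np.nonzero(u + (1 - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (1 - css[j]) / (j + 1)
    x = np.clip(w + tau, 0, None)             # shifted, clipped spectrum
    return V @ np.diag(x) @ V.conj().T

rho = project_to_states(np.array([[1.2, 0.3], [0.3, -0.1]]))
print(np.trace(rho).real, np.linalg.eigvalsh(rho))  # trace 1, eigenvalues >= 0
```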
A simple gradient method is very likely to take \(\{\Pi_{l}\}\) outside of the quantum measurement space; hence we employ Gilbert's algorithm to guarantee the condition in Eq. (4). In addition, we rewrite the POVM as \(\{\Pi_{l}\}=\{\Pi_{1},\Pi_{2},\ldots,\Pi_{L-1},\mathbb{I}-\sum_{l=1}^{L-1}\Pi_{l}\}\) to satisfy the condition in Eq. (5). Then, the structure of the optimization proceeds as follows. Taking a to-be-minimized objective function as an example, for the \((k+1)\)th iteration, first update the first \((L-1)\) measurement operators with the DG scheme to get \[\Pi_{l,k+1}=\Pi_{l,k}-\epsilon G\big{(}\Pi_{l,k}\big{)}\equiv\mathrm{DG}\big{[}\Pi_{l,k},G\big{(}\Pi_{l,k}\big{)},\epsilon\big{]}\,. \tag{7}\] Here, \(\epsilon\) represents the step size of the update, which can be any positive value, and \(k\) is the number of iterations. Second, normalize the measurement operators \(\Pi_{l,k+1}\) as density matrices \(\rho_{l,k+1}\), such that \[\rho_{l,k+1}=\frac{\Pi_{l,k+1}}{\text{tr}(\Pi_{l,k+1})}\,, \tag{8}\] which could be nonphysical. Third, use Gilbert's algorithm to project \(\rho_{l,k+1}\) back to the quantum state space \(\mathcal{Q}\), i.e., \(\rho_{l,k+1}\rightarrow\rho_{l,k+1}^{\mathcal{Q}}=S(\rho_{l,k+1})\). Finally, reconstruct the physical measurement operators as \[\big{\{}\Pi_{l,k+1}^{\mathcal{Q}}=\rho_{l,k+1}^{\mathcal{Q}}\,t_{l,k+1}\big{\}}_{l=1}^{L-1}\,, \tag{9}\] \[\Pi_{L,k+1}^{\mathcal{Q}}=\mathbb{I}-\sum_{l=1}^{L-1}\Pi_{l,k+1}^{\mathcal{Q}}\,, \tag{10}\] where the parameters \(t_{l}\) are obtained, with the \(\rho_{l,k+1}^{\mathcal{Q}}\) fixed, as \(\{t_{l,k+1}\}_{l=1}^{L-1}=\operatorname{argmin}\mathcal{F}\big{[}\{\rho_{l,k+1}^{\mathcal{Q}}\,t_{l,k+1}\}_{l=1}^{L-1}\big{]}\). Here, to ensure that the first \((L-1)\) measurement operators satisfy the condition in Eq. (4), only \(t_{l,k+1}\geq 0\) is required, since \(\rho_{l,k+1}^{\mathcal{Q}}\geq 0\) is guaranteed by using Gilbert's algorithm. Meanwhile, in order to ensure that the last element of the new POVM satisfies the condition in Eq. (4), let \[\Pi_{L,k+1}^{\mathcal{Q}}=\mathbb{I}-\sum_{l=1}^{L-1}\big{(}\rho_{l,k+1}^{\mathcal{Q}}\,t_{l,k+1}\big{)}\geq 0\,. \tag{11}\] Hence, we get a new POVM \(\{\Pi_{l,k+1}^{\mathcal{Q}}\}\) that satisfies the condition in Eq. (6b) after each iteration. Whenever the difference between the values of adjacent iterations is less than a certain threshold, the iteration stops and the optimal POVM is obtained. Otherwise, the iteration continues, with the step size controlled by a step factor \(\beta\): when \(\mathcal{F}_{k}<\mathcal{F}_{k-1}\), the step size is appropriately selected; when \(\mathcal{F}_{k}>\mathcal{F}_{k-1}\), the step size was chosen too large, and the step factor \(\beta\) is used to shrink it. See the DG algorithm in Algorithm 1. However, the DG algorithm has some disadvantages, such as slow optimization speed and low accuracy. For faster convergence, one can choose the APG algorithm [5; 24], which adjusts the direction of the gradient at each step and thereby improves the convergence speed.
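A minimal sketch (not the authors' code) of the projected update common to both schemes, i.e., Eqs. (7)-(11); `project` stands in for the Gilbert-algorithm projection \(S(\cdot)\) and `rescale` for the inner argmin over the weights \(t_{l}\), both placeholders to be supplied:

```python
import numpy as np

def dg_step(Pis, grads, eps, project, rescale):
    """One DG-style iteration over the first L-1 POVM elements.

    `project` is a placeholder for S(.) (e.g. Gilbert's algorithm) and
    `rescale` a placeholder returning the optimal trace weights t_l."""
    states = []
    for Pi, G in zip(Pis, grads):
        Pi = Pi - eps * G                     # gradient step, Eq. (7)
        rho = Pi / np.trace(Pi)               # normalize, Eq. (8)
        states.append(project(rho))           # back to the state space
    t = rescale(states)                       # inner argmin over t_l
    povm = [r * tl for r, tl in zip(states, t)]   # Eq. (9)
    d = povm[0].shape[0]
    povm.append(np.eye(d) - sum(povm))        # last element, Eq. (10)
    return povm
```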
In simple terms, the APG algorithm introduces a companion operator \(E_{l,k}=\Pi_{l,k}+\frac{\theta_{k-1}-1}{\theta_{k}}\big{(}\Pi_{l,k}-\Pi_{l,k-1}\big{)}\), which provides the momentum of the previous step controlled by the parameter \(\theta\), to update the measurement operators \(\Pi_{l,k}=E_{l,k-1}-\epsilon G\big{(}E_{l,k-1}\big{)}\). See the specific procedure in Algorithm 2.

```
Algorithm 1: DG algorithm
Input: eps > 0, 0 < beta < 1; choose any {Pi_{l,0}}_{l=1}^{L-1} in Omega; F_0 = F[{Pi_{l,0}}].
Output: {Pi_l}.
for k = 1, 2, ... do
    for l = 1, ..., L-1 do
        Update Pi_{l,k} = DG[Pi_{l,k-1}, G(Pi_{l,k-1}), eps].
        Calculate rho_{l,k} and rho^Q_{l,k} = S(rho_{l,k}).
    end for
    Obtain {t_{l,k}}_{l=1}^{L-1} = argmin F_k; calculate {Pi^Q_{l,k}} and F_k = F[{Pi^Q_{l,k}}].
    Stop if the termination criterion is met.
    if F_k > F_{k-1} then
        Reset eps = beta * eps and {Pi_{l,k}} = {Pi^Q_{l,k-1}}.
    end if
end for
```

```
Algorithm 2: APG algorithm
Input: eps > 0, 0 < beta < 1; choose any {Pi_{l,0}}_{l=1}^{L-1} in Omega;
       {E_{l,0}} = {Pi_{l,0}}, theta_0 = 1, F_0 = F[{Pi_{l,0}}].
Output: {Pi_l}.
for k = 1, 2, ... do
    for l = 1, ..., L-1 do
        Update Pi_{l,k} = E_{l,k-1} - eps * G(E_{l,k-1}).
        Calculate rho_{l,k} and rho^Q_{l,k} = S(rho_{l,k}).
    end for
    Obtain {t_{l,k}}_{l=1}^{L-1} = argmin F_k; calculate {Pi^Q_{l,k}} and F_k = F[{Pi^Q_{l,k}}].
    Stop if the termination criterion is met.
    if F_k > F_{k-1} then
        Reset eps = beta * eps, {Pi_{l,k}} = {Pi^Q_{l,k-1}}, {E_{l,k}} = {Pi_{l,k}}, theta_k = 1.
    else
        Set theta_k = (1 + sqrt(1 + 4 * theta_{k-1}^2)) / 2.
        Update {E_{l,k}} = {Pi_{l,k} + ((theta_{k-1} - 1) / theta_k) * (Pi_{l,k} - Pi_{l,k-1})}.
    end if
end for
```

## 3 Applications In this section, we demonstrate the efficacy of our algorithms by optimizing arbitrary convex as well as nonconvex functions over the space of quantum measurements. ### Convex functions In quantum measurement tomography [25; 26; 27], a set of known probe states \(\rho_{m}\) is measured to provide the information needed to reconstruct an unknown POVM \(\{\Pi_{l}\}\). The probability that the device responds to the quantum state \(\rho_{m}\) by producing the outcome \(\Pi_{l}\) is given by \[p_{lm}=\text{tr}\big{(}\rho_{m}\Pi_{l}\big{)}\,. \tag{12}\] Typically, the linear inversion method [28] can be used to get the ideal POVM, but nonphysical results are likely to be obtained. Then, the maximum likelihood estimation (MLE) [29] was proposed to reconstruct a POVM that satisfies all the conditions. However, MLE fails to return any meaningful results when the target POVM is of low rank, which is quite typical, especially in higher-dimensional spaces.
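For the simulations that follow, synthetic click statistics are generated from Eq. (12). A hedged sketch (not from the paper) of such a data generator, with placeholder probe states, POVM, and shot count:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_frequencies(states, povm, shots):
    """Sample outcome frequencies f[l, m] from p_lm = tr(rho_m Pi_l),
    Eq. (12), via multinomial sampling; a toy data generator."""
    F = np.zeros((len(povm), len(states)))
    for m, rho in enumerate(states):
        p = np.array([np.trace(rho @ Pi).real for Pi in povm])
        counts = rng.multinomial(shots, p / p.sum())
        F[:, m] = counts / shots
    return F
```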
These problems can be avoided by using our algorithms. To estimate the operators \(\{\Pi_{l}\}\), we maximize the likelihood function \[\mathcal{L}\big{[}\{\Pi_{l}\}\big{]}=\prod_{l=1}^{L}\prod_{m=1}^{M}\big{[}\text{tr}\big{(}\rho_{m}\Pi_{l}\big{)}\big{]}^{f_{lm}}\,, \tag{13}\] where \(M\) is the number of different input states \(\rho_{m}\), and \[f_{lm}=\frac{n_{lm}}{n}\,, \tag{14}\] with \(n_{lm}\) denoting the number of \(l\)th outcomes when measuring the \(m\)th state \(\rho_{m}\), and \(n\) representing the total number of measured input states. One can see that \(\mathcal{L}\big{[}\{\Pi_{l}\}\big{]}\) is not strictly concave, while the log-likelihood \(\ln\mathcal{L}\big{[}\{\Pi_{l}\}\big{]}\) is. Here, we minimize the negative log-likelihood function \(\mathcal{F}\big{[}\{\Pi_{l}\}\big{]}=-\ln\mathcal{L}\big{[}\{\Pi_{l}\}\big{]}\) with \[\ln\mathcal{L}\big{[}\{\Pi_{l}\}\big{]}=\sum_{l=1}^{L}\sum_{m=1}^{M}f_{lm}\ln p_{lm}\,. \tag{15}\] To satisfy the condition in Eq. (5), rewrite the objective function as \[\ln\mathcal{L}\big{[}\{\Pi_{l}\}\big{]}=\sum_{l=1}^{L-1}\sum_{m=1}^{M}f_{lm}\ln\big{[}\text{tr}\big{(}\rho_{m}\Pi_{l}\big{)}\big{]}+\sum_{m=1}^{M}f_{Lm}\ln\Big{\{}\text{tr}\Big{[}\rho_{m}\Big{(}\mathbb{I}-\sum_{l=1}^{L-1}\Pi_{l}\Big{)}\Big{]}\Big{\}}\,. \tag{16}\] The gradient of \(\ln\mathcal{L}\big{[}\{\Pi_{l}\}\big{]}\) with respect to \(\Pi_{l}\) is \[\nabla\ln\mathcal{L}\big{(}\Pi_{l}\big{)}=\sum_{m=1}^{M}\Bigg{[}\Bigg{(}\frac{f_{lm}}{p_{lm}}-\frac{f_{Lm}}{1-\sum_{l^{\prime}=1}^{L-1}p_{l^{\prime}m}}\Bigg{)}\rho_{m}\Bigg{]}\,. \tag{17}\] For numerical simulations, we mainly consider Pauli measurements, which are the most commonly-used measurements in quantum information processing. The cases of one qubit, one qutrit, two qubits, and two qutrits are used for the experimental setup, respectively. Specifically, the setups of these four scenarios are described below. #### 3.1.1 One qubit For one qubit, we take the eigenstates of \(\sigma_{z}\) and the superposition states \(\frac{1}{\sqrt{2}}\big{(}\left|0_{z}\right\rangle\pm\left|1_{z}\right\rangle\big{)}\) and \(\frac{1}{\sqrt{2}}\big{(}\left|0_{z}\right\rangle\pm i\left|1_{z}\right\rangle\big{)}\) as the input states. In the measurement setup, we select the projection of the spin along the \(x\) axis, i.e., \[\Pi_{1}=\left|0_{x}\right\rangle\!\left\langle 0_{x}\right|;\quad\Pi_{2}=\left|1_{x}\right\rangle\!\left\langle 1_{x}\right|. \tag{18}\] #### 3.1.2 One qutrit For one qutrit, we use 12 different input states: the three eigenstates of \(\sigma_{z}\), \(\left|-1_{z}\right\rangle\), \(\left|0_{z}\right\rangle\) and \(\left|1_{z}\right\rangle\), and nine superposition states \(\frac{1}{\sqrt{2}}\big{(}\left|-1_{z}\right\rangle+e^{i\phi_{j}}\left|0_{z}\right\rangle\big{)}\), \(\frac{1}{\sqrt{2}}\big{(}\left|0_{z}\right\rangle+e^{i\phi_{j}}\left|1_{z}\right\rangle\big{)}\) and \(\frac{1}{\sqrt{2}}\big{(}\left|-1_{z}\right\rangle+e^{i\phi_{j}}\left|1_{z}\right\rangle\big{)}\), where \(j=1,2,3\); and \(\phi_{1}=0\), \(\phi_{2}=\frac{\pi}{2}\), and \(\phi_{3}=\pi\). The device measures the projection of the spin along the \(x\) axis, and the POVM elements are the projectors \[\Pi_{1}=\left|-1_{x}\right\rangle\!\left\langle-1_{x}\right|;\quad\Pi_{2}=\left|0_{x}\right\rangle\!\left\langle 0_{x}\right|;\quad\Pi_{3}=\left|1_{x}\right\rangle\!\left\langle 1_{x}\right|. \tag{19}\]
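A minimal sketch (not the authors' code) of the objective of Eq. (15) and the gradient of Eq. (16) with respect to the first \(L-1\) elements, Eq. (17); it assumes a frequency array `f[l, m]` as produced by the earlier toy generator:

```python
import numpy as np

def neg_log_likelihood(povm, states, f):
    """-ln L of Eq. (15) for frequencies f[l, m] and probe states rho_m."""
    p = np.array([[np.trace(r @ Pi).real for r in states] for Pi in povm])
    return -np.sum(f * np.log(p))

def grad_log_likelihood(povm, states, f):
    """Gradient of Eq. (16) w.r.t. each of the first L-1 elements,
    Eq. (17); negate it when minimizing -ln L."""
    L = len(povm)
    pL = np.array([np.trace(r @ povm[-1]).real for r in states])
    grads = []
    for l in range(L - 1):
        pl = np.array([np.trace(r @ povm[l]).real for r in states])
        g = sum((f[l, m] / pl[m] - f[L - 1, m] / pL[m]) * states[m]
                for m in range(len(states)))
        grads.append(g)
    return grads
```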
#### 3.1.3 Two qubits In the case of two qubits, we take the tensor products of the four eigenstates of the two Pauli-\(Z\) operators, \(\left|0_{z}0_{z}\right\rangle\), \(\left|1_{z}1_{z}\right\rangle\), \(\left|0_{z}1_{z}\right\rangle\), \(\left|1_{z}0_{z}\right\rangle\), and the superposition states \(\frac{1}{\sqrt{2}}\big{(}\left|0_{z}0_{z}\right\rangle+e^{i\phi_{j}}\left|0_{z}1_{z}\right\rangle\big{)}\), \(\frac{1}{\sqrt{2}}\big{(}\left|0_{z}0_{z}\right\rangle+e^{i\phi_{j}}\left|1_{z}0_{z}\right\rangle\big{)}\), \(\frac{1}{\sqrt{2}}\big{(}\left|0_{z}0_{z}\right\rangle+e^{i\phi_{j}}\left|1_{z}1_{z}\right\rangle\big{)}\), \(\frac{1}{\sqrt{2}}\big{(}\left|0_{z}1_{z}\right\rangle+e^{i\phi_{j}}\left|1_{z}0_{z}\right\rangle\big{)}\), \(\frac{1}{\sqrt{2}}\big{(}\left|0_{z}1_{z}\right\rangle+e^{i\phi_{j}}\left|1_{z}1_{z}\right\rangle\big{)}\), \(\frac{1}{\sqrt{2}}\big{(}\left|1_{z}0_{z}\right\rangle+e^{i\phi_{j}}\left|1_{z}1_{z}\right\rangle\big{)}\) as the probe states, where \(j=1,2,3\); \(\phi_{1}=0\), \(\phi_{2}=\frac{\pi}{2}\), and \(\phi_{3}=\pi\). Then, we choose the following POVM for the experimental simulation: \[\begin{split}\Pi_{1}&=|0_{x}0_{x}\rangle\langle 0_{x}0_{x}|\,;\quad\Pi_{2}=|0_{x}1_{x}\rangle\langle 0_{x}1_{x}|\,;\\ \Pi_{3}&=|1_{x}0_{x}\rangle\langle 1_{x}0_{x}|\,;\quad\Pi_{4}=|1_{x}1_{x}\rangle\langle 1_{x}1_{x}|\,.\end{split} \tag{20}\] #### 3.1.4 Two qutrits Finally, for the case of two qutrits, we perform a numerical simulation of the Stern-Gerlach apparatus measuring two particles with spin-1. We assume 45 different input states: \(|1_{z}-1_{z}\rangle\), \(|-1_{z}0_{z}\rangle\), \(|-1_{z}1_{z}\rangle\), \(|0_{z}-1_{z}\rangle\), \(|0_{z}0_{z}\rangle\), \(|0_{z}1_{z}\rangle\), \(|1_{z}0_{z}\rangle\), \(|1_{z}1_{z}\rangle\), \(|-1_{z}-1_{z}\rangle\), and 36 superposition states. In the simulation, the device measures the projection of the spin along the \(x\) axis, and the POVM elements are the projectors \[\begin{split}\Pi_{1}&=|0_{x}1_{x}\rangle\langle 0_{x}1_{x}|\,;\quad\Pi_{2}=|0_{x}-1_{x}\rangle\langle 0_{x}-1_{x}|\,;\\ \Pi_{3}&=|1_{x}0_{x}\rangle\langle 1_{x}0_{x}|\,;\quad\Pi_{4}=|1_{x}-1_{x}\rangle\langle 1_{x}-1_{x}|\,;\\ \Pi_{5}&=|0_{x}0_{x}\rangle\langle 0_{x}0_{x}|\,;\quad\Pi_{6}=|-1_{x}0_{x}\rangle\langle-1_{x}0_{x}|\,;\\ \Pi_{7}&=|1_{x}1_{x}\rangle\langle 1_{x}1_{x}|\,;\quad\Pi_{8}=|-1_{x}1_{x}\rangle\langle-1_{x}1_{x}|\,;\\ \Pi_{9}&=|-1_{x}-1_{x}\rangle\langle-1_{x}-1_{x}|\,.\end{split} \tag{21}\] For each case of simulation, the number of measurements for each probe state is 300, \(10^{5}\), \(10^{5}\), and \(5\times 10^{5}\), respectively. Then, according to the frequencies obtained from the simulated data, we use our algorithm to reconstruct the POVM. The fidelity between different POVM elements is defined via the fidelity between two states \(\sigma\) and \(\rho\), i.e., \[F(\sigma,\rho)\coloneqq\Big{(}\text{tr}\sqrt{\sqrt{\sigma}\rho\sqrt{\sigma}}\Big{)}^{2}=F\bigg{(}\frac{\Pi_{i}}{\text{tr}(\Pi_{i})}\,,\frac{\Pi_{j}}{\text{tr}(\Pi_{j})}\bigg{)}\,. \tag{22}\]
For numerical simulations, we choose 50 probe states: \[\frac{1}{2}\!\left(\mathbb{I}+\sigma_{z}\right),\quad\frac{1}{2}\!\left( \mathbb{I}-\sigma_{z}\right),\quad\frac{1}{2}\!\left(\mathbb{I}+\sin\frac{i \pi}{4}\cos\frac{n\pi}{8}\sigma_{x}+\sin\frac{i\pi}{4}\sin\frac{n\pi}{8} \sigma_{y}+\cos\frac{i\pi}{4}\sigma_{z}\right), \tag{29}\] where \(i=1,2,\cdots,6\); \(n=1,2,\cdots,8\). And we use the two-dimensional SIC POVM as the measurement device, and each state is measured 200 times. The APG algorithm is used to optimize the objective function. First, select any set of POVM operators in the measurement space, use Eqs. (26) and (27) to obtain the initial values \(N_{k}^{+}\) and \(\mathbf{a}_{k}\) respectively. Similarly, we calculate the gradient of the objective function in Eq. (28a). The gradient of the objective function is given by \[\mathcal{F}\!\left(\mathbf{a}\right)=\sum_{m}2\!\left(1-\mathbf{f_{m}}-\mathbf{a}\right)^{ T}\!N^{+}\!\left(\mathbf{f_{m}}-\mathbf{a}\right)\!\left(\left(N^{+}\right)^{T}\!\mathbf{f_{m}} +N^{+}\!\mathbf{f_{m}}\!-\!\left[N^{+}+\left(N^{+}\right)^{T}\right]\!\mathbf{a} \right), \tag{30}\] Figure 2: For different cases of the quantum measurement tomography, fidelities of the measurements obtained by the DG algorithm vary with the number of iteration steps. In general, the fidelity of each POVM element saturates to the maximum very quickly. \[\delta\mathcal{F}\big{(}N^{+}\big{)}=\sum_{m}-2\big{(}1-\mathbf{f_{m}}-\mathbf{a}\big{)}^{T }N^{+}\big{(}\mathbf{f_{m}}-\mathbf{a}\big{)}^{2}\big{(}\mathbf{f_{m}}-\mathbf{a}\big{)}^{T}\,. \tag{31}\] The values of \(N_{k+1}\) and \(\mathbf{a_{k+1}}\) are obtained by iterating over \(N_{k}\) and \(\mathbf{a_{k}}\) using gradient descent, then \(\mathbf{b}_{l,k+1}\) is obtained by decomposing \(N_{k+1}\). In the experiment, we specify that the reference frame, i.e., the vector \(\mathbf{b}_{1}\) is parallel to the \(z\) direction of the Bloch sphere, and set the \(xz\) plane of the Bloch sphere as the plane determined by the vectors \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\). This is equivalent to \(b_{1,x}=b_{1,y}=b_{2,y}=0\). Then, \(\big{\{}\Pi_{l,k+1}\big{\}}_{l=1}^{L-1}\) can be obtained by using Eq. (25), which is the update for \(\big{\{}\Pi_{l,k}\big{\}}_{l=1}^{L-1}\). The fidelity of each POVM element can approach 1 in a very small number of iteration steps; see Fig. 4. Then the fidelities of the measurements are compared with the ones reported in Ref. [23], demonstrating that the performance of our algorithm is slightly better; see Fig. 5. ## 4 Summary We have proposed two reliable algorithms for optimizing arbitrary functions over the quantum measurement space. For demonstration, we have shown several examples on the convex function of quantum measurement tomography with different dimensions as well as nonconvex function of one qubit in quantum detector self-characterization tomography. Surprisingly, our method does not encounter the problem of rank deficiency. Compared with SDP, our method can be easily applied to higher-dimensional cases as well as to optimize nonconvex functions. Moreover, our method reports better results as compared to previous approaches. For future work, we will consider the optimization over the joint space of quantum states and quantum measurements, for tasks such as calculating the capacity of quantum channels. Figure 3: For different cases of the quantum measurement tomography, fidelities of the measurements obtained by the APG algorithm vary with the number of iteration steps. 
In general, the fidelity of each POVM element saturates to the maximum very quickly. **Funding:** This work has been supported by the National Natural Science Foundation of China (Grants No. 11805010, No. 12175014, and No. 92265115). **Acknowledgments:** We thank Ye-Chao Liu for fruitful discussions. **Conflicts of Interest:** The authors declare no conflicts of interest.
2302.08607
Adaptive Axonal Delays in feedforward spiking neural networks for accurate spoken word recognition
Spiking neural networks (SNN) are a promising research avenue for building accurate and efficient automatic speech recognition systems. Recent advances in audio-to-spike encoding and training algorithms enable SNN to be applied in practical tasks. Biologically-inspired SNN communicates using sparse asynchronous events. Therefore, spike-timing is critical to SNN performance. In this aspect, most works focus on training synaptic weights and few have considered delays in event transmission, namely axonal delay. In this work, we consider a learnable axonal delay capped at a maximum value, which can be adapted according to the axonal delay distribution in each network layer. We show that our proposed method achieves the best classification results reported on the SHD dataset (92.45%) and NTIDIGITS dataset (95.09%). Our work illustrates the potential of training axonal delays for tasks with complex temporal structures.
Pengfei Sun, Ehsan Eqlimi, Yansong Chua, Paul Devos, Dick Botteldooren
2023-02-16T22:19:04Z
http://arxiv.org/abs/2302.08607v1
# Adaptive Axonal Delays in Feedforward Spiking Neural Networks for Accurate Spoken Word Recognition ###### Abstract Spiking neural networks (SNN) are a promising research avenue for building accurate and efficient automatic speech recognition systems. Recent advances in audio-to-spike encoding and training algorithms enable SNN to be applied in practical tasks. Biologically-inspired SNN communicates using sparse asynchronous events. Therefore, spike-timing is critical to SNN performance. In this aspect, most works focus on training synaptic weights and few have considered delays in event transmission, namely axonal delay. In this work, we consider a learnable axonal delay capped at a maximum value, which can be adapted according to the axonal delay distribution in each network layer. We show that our proposed method achieves the best classification results reported on the SHD dataset (92.45%) and NTIDIGITS dataset (95.09%). Our work illustrates the potential of training axonal delays for tasks with complex temporal structures. Pengfei Sun\({}^{1}\), Ehsan Eqlimi\({}^{1}\), Yansong Chua\({}^{2}\), Paul Devos\({}^{1}\), Dick Botteldooren\({}^{1}\) \({}^{1}\)WAVES Research Group, Ghent University, Belgium \({}^{2}\)China Nanhu Academy of Electronics and Information Technology, China Footnote †: This research is supported by the Research Foundation - Flanders under grant number GOA0220N and the Flemish Government under the ”Onderzoeksprogramma Artificiele Intelligentie (AI) Vlaanderen”. This research is also supported by the National Key Research and Development Program of China (Grant No. 2021ZD0200300).
The membrane potential of a spiking neuron is modeled as \[u_{i}^{l}(t)=\sum_{j}W_{ij}^{l-1}\,(\epsilon*s_{j}^{l-1})(t)+(\nu*s_{i}^{l})(t), \tag{1}\] where \(W_{ij}^{l-1}\) indicates the synaptic weight from neuron \(j\) to neuron \(i\) at layer \(l-1\) and \(u_{i}^{l}\) refers to the membrane potential of neuron \(i\) in layer \(l\), while \(s_{j}^{l-1}\) is the incoming spike pattern from the preceding neuron \(j\). In this experiment, we use the response signal \(a(t)=(\epsilon*s^{l-1})(t)\) to describe the response of neurons by convolving input spikes \(s^{l-1}(t)\) with the response kernel \(\epsilon\), where \(\epsilon(t)=\frac{t}{\tau_{s}}\exp(1-\frac{t}{\tau_{s}})\Theta(t)\). Here, \(\Theta(t)\) represents the Heaviside step function. Likewise, the refractory signal can be described as \((\nu*s^{l})(t)\), where \(\nu(t)=-2\theta_{u}\,\frac{t}{\tau_{r}}\exp(1-\frac{t}{\tau_{r}})\Theta(t)\). Here, the parameters \(\tau_{s}\) and \(\tau_{r}\) are the time constants of the corresponding kernels. An output spike is generated whenever \(u_{i}^{l}\) surpasses the pre-defined threshold \(\theta_{u}\). This spike-generation process can be formulated as \[s_{i}^{l}(t)=\Theta(u_{i}^{l}(t)-\theta_{u}) \tag{2}\] ### Axonal delay module and adaptive delay caps In Fig. 1, the axonal delay module and an adaptive training scheduler for delay caps are shown. The axonal delay is part of spike transmission, and we formulate it such that it can be jointly learned with an optimization algorithm: \[s_{d}^{l}(\hat{t})=\delta(t-d^{l})*s^{l}(t) \tag{3}\] In layer \(l\), \(d^{l}\) represents the set of delays \(\{d_{1},d_{2},\ldots,d_{n}\}\) subject to the constraint that \(d_{1}<d_{2}<\cdots<d_{n}\leq\theta_{d}\). Meanwhile, \(s_{d}^{l}(\hat{t})\) denotes the spike trains output by the delay module at a shifted time \(\hat{t}\). From the optimization point of view, constraining the delay value can facilitate learning.
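To make Eqs. (1)-(3) concrete, the following is a minimal discrete-time NumPy sketch of one feedforward layer with per-neuron axonal delays. It omits the refractory term of Eq. (1) for brevity, and it is only an illustration under these assumptions, not the SLAYER-PyTorch implementation used in our experiments.

```python
import numpy as np

def srm_kernel(tau, T, dt=1.0):
    """Response kernel eps(t) = (t/tau) exp(1 - t/tau) Theta(t) on a time grid."""
    t = np.arange(T) * dt
    return (t / tau) * np.exp(1.0 - t / tau)

def layer_forward(W, spikes_in, delays, theta_u, tau_s, theta_d):
    """One feedforward SRM layer with per-neuron axonal delays.

    W         : (n_out, n_in) synaptic weights
    spikes_in : (n_in, T) binary spike trains
    delays    : (n_out,) integer axonal delays in time steps
    """
    n_out, T = W.shape[0], spikes_in.shape[1]
    eps = srm_kernel(tau_s, T)
    # Response signals a_j(t) = (eps * s_j)(t); refractory term of Eq. (1) omitted.
    a = np.array([np.convolve(s, eps)[:T] for s in spikes_in])
    u = W @ a                                # membrane potentials u_i(t)
    s_out = (u >= theta_u).astype(float)     # Eq. (2): threshold crossing
    # Axonal delay, Eq. (3): shift each train by its clipped delay, cf. Eq. (4).
    d = np.clip(delays, 0, theta_d).astype(int)
    s_delayed = np.zeros_like(s_out)
    for i in range(n_out):
        if d[i] < T:
            s_delayed[i, d[i]:] = s_out[i, :T - d[i]]
    return s_delayed
```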
In the adaptive training scheduler, we compute the fraction of delayed neurons relative to the total number of neurons within a sliding window [24], so as to optimize training. Consider a sliding window of size \(m\): when the fraction of delayed neurons within this window exceeds the pre-defined cap fraction \(\alpha_{\theta}\), the sliding window right-shifts by 1 and the delay cap \(\theta_{d}\) also increases by 1. Pseudo-code of the proposed adaptive training scheduler is presented in Algorithm 1 (a code sketch of one scheduler step is given at the end of this section). During training (in the second while loop), the delay will be clipped as follows \[d=\max(0,\min(d,\theta_{d})) \tag{4}\] where the cap \(\theta_{d}\) and the delays \(d\) are adaptively adjusted according to our scheduler. ## 3 Experimental Setup ### Datasets We evaluate our proposed methods on the SHD [25] and NTIDIGITS [26] datasets. These datasets consist of spike patterns produced from the original audio signals by artificial cochlear models [25, 26] and have high temporal complexity [27]. The SHD dataset consists of 10,420 utterances of varying durations (0.24\(s\) to 1.17\(s\)) by 12 speakers. It contains a total of 20 digit classes, from '0' to '9' in both English and German. We adopt the same data preprocessing method as in [28] and the same train/validation/test split of the dataset as the official one [25]. The NTIDIGITS dataset is another event-based spoken recognition task. The original TIDIGITS is processed by the 64-channel CochleaAMS1b sensor and its output is recorded. In total, there are 11 spoken digits (the digits '0' to '9' in the English language and 'oh'). We follow the same train and test split as used in [26]. Figure 1: Illustration of how the adaptive delay caps are determined and the axonal delays adjusted. The generated spikes \(s^{l}(t)\) will be shifted in time by \(d_{i}\) and then output as spike trains \(s_{d}^{l}(\hat{t})\) in the axonal delay module. The adaptive scheduler will adjust the delay cap accordingly. The delay value may be the same across neurons, such as the top two neurons with the same delay value \(d_{1}\). The layer can be a traditional convolutional layer, dense layer, or recurrent layer. ### Implementation details In our experiment, we use the Adam optimizer to jointly update the synaptic weights and axonal delays with a constant learning rate of 0.1 and a minibatch size of 128. The initial delay caps are set to 64 for SHD and 128 for NTIDIGITS, respectively. The number of pre-training epochs \(x\) is set to 40 and the simulation time step is 1 \(ms\) for both datasets. Table 1 lists the parameters we used in our experiments. We train our SNN on the SLAYER-PyTorch framework [8]. This recently introduced GPU-accelerated software is publicly available and has proven to be effective for training SNN. For both tasks, we use an SNN with two hidden fully-connected layers, with 128 hidden units for SHD and 256 neurons for NTIDIGITS. The total number of model parameters is approximately 0.11M for the SHD dataset and 0.08M for the NTIDIGITS dataset. ## 4 Results ### Ablation Study of Different Cap Fraction and Sliding Window Size The window size and cap fraction are important parameters for obtaining a reasonable delay cap. To evaluate their influence, we design an ablation study that covers 5 different window sizes and 3 different fraction parameters. The impact of these parameters on the SHD and NTIDIGITS datasets is shown in Fig. 2.
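As a concrete reading of the adaptive scheduler of Section 2.2, the following sketch shows one possible NumPy implementation of a single cap-update and clipping step. The exact window bookkeeping of Algorithm 1 may differ, so this should be read as an assumption-laden illustration rather than the reference implementation.

```python
import numpy as np

def update_delay_cap(delays, theta_d, m, alpha):
    """One scheduler step: raise the cap theta_d when the fraction of neurons
    whose delays fall in the top sliding window of width m exceeds alpha.
    (Assumed interpretation of Algorithm 1.)"""
    frac = np.mean(delays >= theta_d - m)  # fraction of neurons in the window
    if frac > alpha:
        theta_d += 1                       # the window right-shifts with the cap
    return theta_d

def clip_delays(delays, theta_d):
    """Eq. (4): keep the learned delays within [0, theta_d]."""
    return np.maximum(0, np.minimum(delays, theta_d))
```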
For the SHD dataset, we observe that a small cap fraction consistently yields good results, and the best results are obtained by keeping the number of neurons carrying the two largest delays within 5% of the total number of neurons, i.e., a window size of 2 and a cap fraction of 5%. For the NTIDIGITS dataset, in contrast, a bigger window size and cap fraction are more helpful, and the accuracy keeps increasing as the window size grows, except for the combination of window size 5 and cap fraction 10%. ### SHD The performance provided by the feed-forward SNN, the recurrent spiking neural network (RSNN) with adaptive firing threshold, the heterogeneous RSNN, the RSNN with temporal attention, and our proposed method is given in Table 2. As can be seen from Table 2, our proposed method achieves an accuracy of 92.45%, which is the best performance reported for this task. \begin{table} \begin{tabular}{c c c c c c} \hline **Dataset** & \(\tau_{s}\) & \(\tau_{r}\) & **Initial** \(\theta_{d}\) & \(\theta_{u}\) & \(T_{steps}\) \\ \hline SHD & 1 & 1 & 64 & 10 & 150 \\ NTIDIGITS & 5 & 5 & 128 & 10 & 150 \\ \hline \end{tabular} \end{table} Table 1: Detailed parameter settings for the different datasets. Figure 2: Ablation study of different sliding window sizes \(m\) and cap fractions \(\alpha_{\theta}\) based on (Top) SHD and (Bottom) NTIDIGITS. The x-axis indicates the sliding window size, and the y-axis refers to the accuracy. There are 3 different cap fractions (5%, 10%, and 15%) and 5 different window sizes (1, 2, 3, 4, and 5) used in our experiment. More importantly, it can be observed that the RSNN outperforms the feed-forward SNN, which implies that the task is inherently dynamic. However, our results show that a feed-forward SNN without recurrent connections, but with an adaptive axonal delay module, can achieve flexible short-term memory and better accuracy using fewer parameters. ### NTIDIGITS When ANNs (RNN and Phased-LSTM) are applied to this digit recognition task, they achieve accuracies of 90.90% and 91.25%, respectively (see Table 2). However, these networks cannot fully exploit the advantages of sparse event-based information and have to rely on an event synthesis algorithm, effectively losing the advantage gained when processing information embedded in spike-timing. Using our adaptive axonal delay module, we achieve the best performance of 95.09%; compared with Zhang et al. [30], which directly uses spike-train level features, our model improves performance by 1.4% while using only 23% of the parameters. ### Effect of the Delay Value As shown in Table 3, the delay cap is important to the performance of the model. For both audio classification tasks, the performance is competitive even without limiting the range of delay values, demonstrating the effectiveness of the axonal delay module. However, it is still beneficial to limit the delay range, and our experiments show that an appropriate delay cap improves the classification ability of the model. Taking the SHD dataset as an example, our adaptive training scheduler can determine the optimal delay distribution, and this distribution enables the model to achieve the best performance. For other combinations of delay caps, the obtained accuracy drops, which may indicate that each network structure has an optimal delay cap; however, it is difficult to find such caps manually. Our method provides an adaptive training scheduler to search for these optimal parameters. It is worth noting that the NTIDIGITS dataset also conforms to this phenomenon.
## 5 Conclusions In this paper, we integrate a learnable axonal delay module into the spiking neuron model and introduce an adaptive training scheduler that adjusts the axonal delay caps in each network layer. Compared to previous work that adopts a static delay cap, our proposed method significantly improves the classification capability without extra parameters. Furthermore, our adaptive scheduler can be easily integrated into the existing delay module and adaptively determines the optimal delay distribution of the network. We achieve the best performance on the SHD (92.45%) and NTIDIGITS (95.09%) datasets with the fewest parameters. These results suggest that a neuron axonal delay with an adaptive delay cap can be used to model a lightweight, flexible short-term memory module, so as to achieve an accurate and efficient spoken word recognition system. We conjecture that the axonal delay mechanism introduces a form of short-term memory without increasing the number of trainable parameters. For certain datasets in ASR, where 1) information is organized in short sequences without need for long-term memory, and 2) data is limited in size and hence prone to overfitting, the axonal delay mechanism may work best in combination with a small feed-forward SNN. Our experiments seem to agree with the above and further confirm the great potential of using spike timing as part of the solution to an ASR problem. Furthermore, the use of spike-based losses [31] can expedite decision-making, thereby reducing the impact of additional latency even more. \begin{table} \begin{tabular}{c c c c} \hline **Dataset** & **Method** & **Params** & **Accuracy** \\ \hline \multirow{6}{*}{SHD} & Feed-forward SNN [25] & 0.11 MB & \(48.6\pm 0.9\%\) \\ & RSNN [25] & 1.79 MB & \(83.2\pm 1.3\%\) \\ & RSNN with Adaption [28] & 0.14 MB & \(84.4\%\) \\ & Heterogeneous RSNN [20] & 0.11 MB & \(82.7\pm 0.8\%\) \\ & SNN with Time Attention [29] & 0.12 MB & \(91.08\%\) \\ & **This work (m=2, \(\alpha_{\theta}\) = 5\%)** & **0.11 MB** & \(\mathbf{92.45\%}\) \\ \hline \multirow{5}{*}{NTIDIGITS} & GRU-RNN [26]\({}^{\dagger}\) & 0.11 MB & \(90.90\%\) \\ & Phased-LSTM [26]\({}^{\dagger}\) & 0.61 MB & \(91.25\%\) \\ & ST-RSBP [30] & 0.35 MB & \(93.63\pm 0.27\%\) \\ & SNN with Axonal delay [23] & 0.08 MB & \(94.45\%\) \\ & **This work (m=4, \(\alpha_{\theta}\) = 10\%)** & **0.08 MB** & \(\mathbf{95.09\%}\) \\ \hline \end{tabular} \({}^{\dagger}\) Non-SNN implementation. \end{table} Table 2: Comparison with the state-of-the-art in terms of network size and accuracy. \begin{table} \begin{tabular}{c c c c c} \hline **Dataset** & **Method** & \((\theta_{d_{1}},\theta_{d_{2}})\) & **Params** & **Accuracy** \\ \hline \multirow{5}{*}{SHD} & Manual & (0, 0) & 108,820 & 67.05\% \\ & Manual & (64, 64) & 109,076 & 86.84\% \\ & Manual & (128, 128) & 109,076 & 87.24\% \\ & Adaptive & (107, 175) & 109,076 & \(\mathbf{92.45\%}\) \\ & Manual & (\(+\infty,+\infty\)) & 109,076 & 84.99\% \\ \hline \multirow{4}{*}{NTIDIGITS} & Manual & (0, 0) & 85,259 & 78.86\% \\ & Manual & (128, 128) & 85,771 & 94.19\% \\ & Adaptive & (215, 215) & 85,771 & \(\mathbf{95.09\%}\) \\ & Manual & (\(+\infty,+\infty\)) & 85,771 & 93.83\% \\ \hline \end{tabular} \end{table} Table 3: Ablation studies for different delay cap methods and the effect of the delay cap \(\theta_{d}\) in the axonal delay module. \(\theta_{d_{i}}\) indicates the delay cap of the \(i^{th}\) layer.
’Manual’ refers to using a static delay cap, while ’Adaptive’ refers to using our proposed adaptive scheduler.
2301.11606
Reciprocity laws for $(\varphi_L,Γ_L)$-modules over Lubin-Tate extensions
In the Lubin-Tate setting we study pairings for analytic $(\varphi_L,\Gamma_L)$-modules and prove an abstract reciprocity law which then implies a relation between the analogue of Perrin-Riou's Big Exponential map as developed by Berger and Fourquaux and a $p$-adic regulator map whose construction relies on the theory of Kisin-Ren modules generalising the concept of Wach modules to the Lubin-Tate situation.
Peter Schneider, Otmar Venjakob
2023-01-27T09:15:28Z
http://arxiv.org/abs/2301.11606v1
# Reciprocity laws for \((\varphi_{L},\Gamma_{L})\)-modules over Lubin-Tate extensions ###### Abstract In the Lubin-Tate setting we study pairings for analytic \((\varphi_{L},\Gamma_{L})\)-modules and prove an abstract reciprocity law which then implies a relation between the analogue of Perrin-Riou's Big Exponential map as developed by Berger and Fourquaux and a \(p\)-adic regulator map whose construction relies on the theory of Kisin-Ren modules generalising the concept of Wach modules to the Lubin-Tate situation. ###### Contents
* 1 Introduction
* 2 Notation
* 3 Wach-modules a la Kisin-Ren
  * 3.1 Wach-modules
  * 3.2 The determinant of the crystalline comparison isomorphism
  * 3.3 Non-negative Hodge-Tate weights
* 4 \((\varphi_{L},\Gamma_{L})\)-modules over the Robba ring
  * 4.1 Robba rings of character varieties
    * 4.1.1 The additive character variety and its Robba ring
    * 4.1.2 The multiplicative character variety and its Robba ring
    * 4.1.3 Twisting
    * 4.1.4 The LT-isomorphism, part 1
  * 4.2 Consequences of Serre duality
    * 4.2.1 Cohomology with compact support
    * 4.2.2 Serre duality for Stein spaces
    * 4.2.3 Duality for boundary sections
  * 4.3 \((\varphi_{L},\Gamma_{L})\)-modules
    * 4.3.1 The usual Robba ring
    * 4.3.2 The LT-isomorphism, part 2
    * 4.3.3 \(\varphi_{L}\)-modules
    * 4.3.4 The Robba ring of a group
    * 4.3.5 Locally \(\mathbb{Q}_{p}\)-analytic versus locally \(L\)-analytic distribution algebras
    * 4.3.6 \((\varphi,\Gamma)\)-modules
    * 4.3.7 The structure of \(M^{\psi_{M}=0}\)
    * 4.3.8 Descent
    * 4.3.9 The Mellin transform and twists
  * 4.4 Explicit elements
  * 4.5 Pairings
    * 4.5.1 The residuum pairing for modules
    * 4.5.2 The duality pairing \(<,>_{\Gamma_{L}}\) for the group Robba ring
    * 4.5.3 A residuum identity and an alternative description of \(<,>_{\Gamma_{L}}\)
    * 4.5.4 The Iwasawa pairing for \((\varphi_{L},\Gamma_{L})\)-modules over the Robba ring
    * 4.5.5 The abstract reciprocity formula
* 5 Application
  * 5.1 The regulator map
    * 5.1.1 The basic example
  * 5.2 Relation to Berger's and Fourquaux' big exponential map
    * 5.2.1 Some homological algebra
    * 5.2.2 Koszul complexes
    * 5.2.3 Continuous and analytic cohomology
    * 5.2.4 The interpolation formula for the regulator map
* A Cup products and local Tate duality
* B Iwasawa cohomology and descent ## 1 Introduction Classically, explicit reciprocity laws or formulas usually mean an explicit computation of Hilbert symbols or (local) cup products using, e.g., differential forms, (Coleman) power series, etc., and many manifestations of this idea exist in the literature, due to Artin-Hasse, Iwasawa, Wiles, Kolyvagin, Vostokov, Bruckner, Coleman, Sen, de Shalit, Fesenko, Bloch-Kato, Benois... In the same spirit, Perrin-Riou's reciprocity law gives an explicit calculation of the Iwasawa cohomology pairing in terms of big exponential and regulator maps for crystalline representations of \(G_{\mathbb{Q}_{p}}\); more precisely, the latter maps are adjoint to each other when also involving the crystalline duality pairing after base change to the distribution algebra corresponding to the cyclotomic situation.
The motivating question for this article is to investigate what happens if one replaces the cyclotomic \(\mathbb{Z}_{p}\)-extension by a Lubin-Tate extension \(L_{\infty}\) over some finite extension \(L\) of \(\mathbb{Q}_{p}\), with Galois group \(\Gamma_{L}=G(L_{\infty}/L)\) and Lubin-Tate character \(\chi_{LT}:G_{L}\to o_{L}^{\times}\), which all arise from a Lubin-Tate formal group attached to a prime element \(\pi_{L}\) of the ring of integers \(o_{L}\) of \(L\); by \(q\) we denote the cardinality of the residue field \(o_{L}/o_{L}\pi_{L}\). We try to extend the cyclotomic picture sketched above to the Lubin-Tate case, at least for \(L\)_-analytic_ crystalline representations of the absolute Galois group \(G_{L}\) of \(L\). As pointed out in [16] already, the character \(\tau:=\chi_{cyc}\cdot\chi_{LT}^{-1}\) plays a crucial role. To this end we study \((\varphi_{L},\Gamma_{L})\)-modules over different Robba rings with coefficients in suitable complete intermediate fields \(L\subseteq K\subseteq\mathbb{C}_{p}.\) The starting point is the theory of Schneider and Teitelbaum: In [ST2] they introduced the rigid analytic group variety \(\mathfrak{X}\) over \(L,\) which parametrizes the locally \(L\)-analytic characters of \(o_{L},\) and similarly \(\mathfrak{X}^{\times}\cong\mathfrak{X}_{\Gamma_{L}}\) for the locally \(L\)-analytic groups \(o_{L}^{\times}\cong\Gamma_{L},\) the isomorphisms being induced by (the inverse of) \(\chi_{LT}.\) Under the assumption that the period \(\Omega\) of the dual of the fixed Lubin-Tate group belongs to \(K\) they establish an isomorphism \(\kappa:\mathbf{B}_{K}\cong\mathfrak{X}_{K}\) of rigid analytic varieties over \(K,\) called the Lubin-Tate isomorphism, where \(\mathbf{B}\) denotes the rigid analytic open unit disk and the index \(K\) indicates base change to \(K.\) In sections 4.1, 4.3.1, 4.3.4 we recall or introduce the Robba rings \(\mathcal{R}_{K}(\mathfrak{Y})\) for all the above varieties \(\mathfrak{Y}.\) We call \(\mathcal{R}_{K}(\Gamma_{L}):=\mathcal{R}_{K}(\mathfrak{X}_{\Gamma_{L}})\) also the Robba group ring, as we can consider it as an extension of the locally \(L\)-analytic distribution algebra \(D(\Gamma_{L},K)\) with coefficients in \(K\), as follows: The Fourier isomorphism \(D(o_{L},K)\cong\mathcal{O}_{K}(\mathfrak{X})\) onto the ring of holomorphic functions on \(\mathfrak{X}\) induces the Mellin transform, i.e., a topological isomorphism between \(D(\Gamma_{L},K)\cong D(o_{L}^{\times},K)\) and the \(D(o_{L}^{\times},K)\)-submodule \((\mathcal{O}_{K}(\mathfrak{X}))^{\psi_{L}^{\mathfrak{X}}=0}\) of \(\mathcal{O}_{K}(\mathfrak{X})\) on which the \(\psi_{L}^{\mathfrak{X}}\)-operator - to be recalled in section 4.1.1 - acts as zero. As a special case of the following theorem we extend the Mellin transform to an isomorphism of \(\mathcal{R}_{K}(\Gamma_{L})\) and \((\mathcal{R}_{K}(\mathfrak{X}))^{\psi_{L}=0}.\) **Theorem 1** (Theorem 4.3.23).: _If \(M\) denotes a \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-module over \(\mathcal{R}_{K}(\mathfrak{X})\) for any complete intermediate field \(L\subseteq K\subseteq\mathbb{C}_{p}\), then \(M^{\psi_{L}=0}\) is a free \(\mathcal{R}_{K}(\Gamma_{L})\)-module of rank \(\operatorname{rk}_{\mathcal{R}_{K}(\mathfrak{X})}M.\)_ For \(\mathbf{B}\) instead of \(\mathfrak{X}\) an analogous statement holds, if \(K\) contains \(\Omega;\) technically, this is the case we prove first (see Theorem 4.3.21) and which then, after involving the Lubin-Tate isomorphism, descends to the Theorem.
Under this condition on \(K\) we may illustrate that, via Fourier theory and the Lubin-Tate isomorphism, the locally \(L\)-analytic distribution algebra \(D(o_{L},K)\) becomes isomorphic to the subring \(\mathcal{O}_{K}(\mathbf{B})\subseteq\mathcal{R}_{K}(\mathbf{B})\) consisting of those functions which converge on the full open unit disk, while the functions in \(\mathcal{R}_{K}(\mathbf{B})\) in general only converge on some annulus \(r\leq|Z|<1\) for some radius \(0<r<1.\) This isomorphism induces the Mellin transform, i.e., a topological isomorphism between \(D(o_{L}^{\times},K)\) and the \(D(o_{L}^{\times},K)\)-submodule \((\mathcal{O}_{K}(\mathbf{B}))^{\psi_{L}=0}\) of \(\mathcal{O}_{K}(\mathbf{B})\) on which the \(\psi_{L}\)-operator - up to a scalar a left inverse of the Lubin-Tate \(\varphi_{L}\)-operator - acts as zero. A second ingredient is Serre duality on the above rigid analytic varieties \(\mathfrak{Y},\) which induces - as developed in this generality in section 4.2 - a residue pairing \[\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{Y})}\times\mathcal{R}_{L}(\mathfrak{Y})\to K\] in 4.2.7 for the differentials \(\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{Y})}\) and also a pairing \[<\,,\,>_{\mathfrak{Y}}:\mathcal{R}_{K}(\mathfrak{Y})\times\mathcal{R}_{K}(\mathfrak{Y})\longrightarrow K,\] see (47), (51). For \(\mathfrak{Y}=\mathfrak{X}_{\Gamma_{L}}\) the latter induces topological isomorphisms \[\operatorname{Hom}_{K,cts}(\mathcal{R}_{K}(\Gamma_{L}),K)\cong\mathcal{R}_{K}(\Gamma_{L})\ \ \ \text{and}\ \ \operatorname{Hom}_{K,cts}(\mathcal{R}_{K}(\Gamma_{L})/D(\Gamma_{L},K),K)\cong D(\Gamma_{L},K).\] For an \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}:=\mathcal{R}_{K}(\mathfrak{Y})\) with \(\mathfrak{Y}\) either equal to \(\mathfrak{X}\) or \(\mathbf{B},\) we finally use these isomorphisms to define on the one hand the two Iwasawa pairings \[\{\,,\,\}_{M,Iw}^{0}:\check{M}^{\psi_{L}=0}\times M^{\psi_{L}=0}\to\mathcal{R}_{K}(\Gamma_{L})\] \[\{\,,\,\}_{M,Iw}:\check{M}^{\psi_{L}=\frac{q}{\pi_{L}}}\times M^{\psi_{L}=1}\to D(\Gamma_{L},K),\] where \(\check{M}:=\operatorname{Hom}_{\mathcal{R}}(M,\Omega^{1}_{\mathcal{R}})\). They are linked by a commutative diagram [not reproduced here]. Now assume that \(M\) arises as \(D^{\dagger}_{rig}(W)\) under Berger's equivalence of categories, if \(\mathfrak{Y}=\mathbf{B}\), and as \(D^{\dagger}_{rig}(W)_{\mathfrak{X}}\) under the equivalence from [BSX], if \(\mathfrak{Y}=\mathfrak{X}\), (see Thm. 4.5.28) from an \(L\)-analytic, crystalline representation \(W\) of \(G_{L}\), whence \(\check{M}\cong D^{\dagger}_{rig}(W^{*}(\chi_{LT}))\) and \(\check{M}\cong D^{\dagger}_{rig}(W^{*}(\chi_{LT}))_{\mathfrak{X}}\), respectively. Then, on the other hand, we obtain the pairing \[[\,,\,]_{D_{cris,L}(W)}:\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(W^{*}(\chi_{LT}))\times\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(W)\to\mathcal{R}_{K}(\Gamma_{L})\] by base extension of the usual crystalline duality pairing - if \(\mathfrak{Y}=\mathbf{B}\) assuming \(\Omega\in K\) -, see (138).
The work of Kisin-Ren and Berger-Schneider-Xie, respectively, provides comparison isomorphisms \[\operatorname{comp}_{M}:M[\frac{1}{t_{\mathfrak{Y}}}]\cong\mathcal{R}[\frac{1 }{t_{\mathfrak{Y}}}]\otimes_{L}D_{cris,L}(W)\] and \[\operatorname{comp}_{\check{M}}:\check{M}[\frac{1}{t_{\mathfrak{Y}}}]\cong \mathcal{R}[\frac{1}{t_{\mathfrak{Y}}}]\otimes_{L}D_{cris,L}(W^{*}(\chi_{LT})).\] Here \(t_{\mathbf{B}}:=t_{LT}:=\log_{LT}(Z)\in\mathcal{R}\) denotes the Lubin-Tate period which stems from the Lubin-Tate logarithm while \(t_{\mathfrak{X}}=\log_{\mathfrak{X}}\) as defined before Remark 4.2.9. The Lubin-Tate character \(\chi_{LT}\) induces isomorphism \(\Gamma_{L}\xrightarrow{\cong}o_{L}^{\times}\) as well as \(Lie(\Gamma_{L})\xrightarrow{\cong}L\), and we let \(\nabla\in Lie(\Gamma_{L})\) be the preimage of \(1\). Then the abstract reciprocity law we prove is the following statement. **Theorem 2** (Theorem 4.5.32).: _For all \(x\in\check{M}^{\psi_{L}=0}\) and \(y\in M^{\psi_{L}=0}\), for which the crystalline pairing is defined via the comparison isomorphism, it holds_ \[\frac{q-1}{q}\{\nabla x,y\}_{M,Iw}^{0}=[x,y]_{D_{cris,L}(W)},\] _if \(\mathfrak{Y}=\mathfrak{X}\), while the analogous statement for \(\mathfrak{Y}=\mathbf{B}\) holds upon assuming \(\Omega\in K\)._ As explained in more detail at the beginning of section 4.5 the proof of this abstract reciprocity law is mainly based on the insight, how the residue maps of \(\mathfrak{X}\) and \(\mathfrak{X}^{\times}\) and hence their associated pairings \(<\,\ >_{\mathfrak{X}}\) and \(<\,\ >_{\mathfrak{X}^{\times}}\) are related to each other by Theorem 4.5.12 in subsection 4.5.3. As an application for \(\mathfrak{Y}=\mathbf{B}\) we show in section 5 the adjointness of big exponential and regulator maps. Recall that Berger and Fourquaux have constructed for \(V\) an \(L\)-analytic representation of \(G_{L}\) and an integer \(h\geqslant 1\) such that * \(\operatorname{Fil}^{-h}D_{cris,L}(V)=D_{cris,L}(V)\) and * \(D_{cris,L}(V)^{\varphi_{L}=\pi_{L}^{-h}}=0\) a _big exponential map_ a la Perrin-Riou \[\Omega_{V,h}:(\mathcal{O}_{K}(\mathbf{B}))^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V) \to D_{\mathrm{rig}}^{\dagger}(V)^{\psi_{L}=\frac{q}{\pi_{L}}},\] which up to comparison isomorphism is for \(h=1\) given by \(f=(1-\varphi_{L})x\mapsto\nabla x\) and which interpolates Bloch-Kato exponential maps \(exp_{L,V(\chi_{LT}^{c})}\). On the other hand, based on an extension of the work of Kisin and Ren in the first section, we construct for a lattice \(T\subseteq V,\) such that \(V(\tau^{-1})\) is \(L\)-analytic and crystalline and such that \(V\) does not have any quotient isomorphic to \(L(\tau),\) a _regulator map_ a la Loeffler and Zerbes \[\mathcal{L}_{V}^{0}:H_{Iw}^{1}(L_{\infty}/L,T)\cong D_{LT}(T(\tau^{-1}))^{\psi _{L}=1}\to(\mathcal{O}_{K}(\mathbf{B}))^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V( \tau^{-1}))\] as applying the operator \[1-\frac{\pi_{L}}{q}\varphi_{L}\] up to comparison isomorphism. Then we derive from the abstract version above with \(W=V(\tau^{-1})\) the following reciprocity formula **Theorem 3** (Theorem 5.2.1).: _Assume that \(V^{*}(1)\) is \(L\)-analytic. 
If \(\operatorname{Fil}^{-1}D_{cris,L}(V^{*}(1))=D_{cris,L}(V^{*}(1))\) and \(D_{cris,L}(V^{*}(1))^{\varphi_{L}=\pi_{L}^{-1}}=D_{cris,L}(V^{*}(1))^{\varphi_{L}=1}=0\), then the following diagram commutes:_ \[D_{\mathrm{rig}}^{\dagger}(V^{*}(1))^{\psi_{L}=\frac{q}{\pi_{L}}}\times D(V(\tau^{-1}))^{\psi_{L}=1}\xrightarrow{\ \{\,,\,\}_{Iw}\ }D(\Gamma_{L},\mathbb{C}_{p})\] [remainder of the diagram, with vertical maps involving \(\Omega_{V^{*}(1),1}\) and \(\mathcal{L}_{V}^{0}\), omitted]. ## 2 Notation Let \(\mathbb{Q}_{p}\subseteq L\subset\mathbb{C}_{p}\) be a field of finite degree \(d\) over \(\mathbb{Q}_{p}\), \(o_{L}\) the ring of integers of \(L\), \(\pi_{L}\in o_{L}\) a fixed prime element, \(k_{L}=o_{L}/\pi_{L}o_{L}\) the residue field, \(q:=|k_{L}|\) and \(e\) the absolute ramification index of \(L\). We always use the absolute value \(|\ |\) on \(\mathbb{C}_{p}\) which is normalized by \(|\pi_{L}|=q^{-1}\). We **warn** the reader, though, that we will repeatedly use the references [BSX], [FX], [Laz], [Sc1], [Sc2], [ST], and [ST2], in which the absolute value is normalized differently from this paper by \(|p|=p^{-1}\). Our absolute value is the \(d\)th power of the one in these references. The transcription of certain formulas to our convention will usually be done silently. We fix a Lubin-Tate formal \(o_{L}\)-module \(LT=LT_{\pi_{L}}\) over \(o_{L}\) corresponding to the prime element \(\pi_{L}\). We always identify \(LT\) with the open unit disk around zero, which gives us a global coordinate \(Z\) on \(LT\). The \(o_{L}\)-action then is given by formal power series \([a](Z)\in o_{L}[[Z]]\). For simplicity the formal group law will be denoted by \(+_{LT}\). The power series \(\frac{\partial(X+_{LT}Y)}{\partial Y}_{|(X,Y)=(Z,0)}\) is a unit in \(o_{L}[[Z]]\) and we let \(g_{LT}(Z)\) denote its inverse. Then \(g_{LT}(Z)dZ\) is, up to scalars, the unique invariant differential form on \(LT\) ([Haz, §5.8]). We also let \[\log_{LT}(Z)=Z+\ldots \tag{1}\] denote the unique formal power series in \(L[[Z]]\) whose formal derivative is \(g_{LT}\). This \(\log_{LT}\) is the logarithm of \(LT\) ([Lan, 8.6]). In particular, \(g_{LT}dZ=d\log_{LT}\). The invariant derivation \(\partial_{\rm inv}\) corresponding to the form \(d\log_{LT}\) is determined by \[f^{\prime}dZ=df=\partial_{\rm inv}(f)d\log_{LT}=\partial_{\rm inv}(f)g_{LT}dZ\] and hence is given by \[\partial_{\rm inv}(f)=g_{LT}^{-1}f^{\prime}. \tag{2}\] For any \(a\in o_{L}\) we have \[\log_{LT}([a](Z))=a\cdot\log_{LT}(Z)\qquad\mbox{and hence}\qquad ag_{LT}(Z)=g_{LT}([a](Z))\cdot[a]^{\prime}(Z) \tag{3}\] ([Lan, 8.6 Lemma 2]). Let \(T_{\pi}\) be the Tate module of \(LT\). Then \(T_{\pi}\) is a free \(o_{L}\)-module of rank one, say with generator \(\eta\), and the action of \(G_{L}:={\rm Gal}(\overline{L}/L)\) on \(T_{\pi}\) is given by a continuous character \(\chi_{LT}:G_{L}\longrightarrow o_{L}^{\times}\). Let \(T_{\pi}^{\prime}\) denote the Tate module of the \(p\)-divisible group Cartier dual to \(LT\) with period \(\Omega\) (depending on the choice of a generator of \(T_{\pi}^{\prime}\)), which again is a free \(o_{L}\)-module of rank one. The Galois action on \(T_{\pi}^{\prime}\cong T_{\pi}^{*}(1)\) is given by the continuous character \(\tau:=\chi_{cyc}\cdot\chi_{LT}^{-1}\), where \(\chi_{cyc}\) is the cyclotomic character.
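For orientation, we record how these objects specialize in the classical cyclotomic case; this is a standard example and not specific to the present setting. For \(L=\mathbb{Q}_{p}\), \(\pi_{L}=p\), and \(LT=\widehat{\mathbb{G}}_{m}\) the multiplicative formal group, one has \([a](Z)=(1+Z)^{a}-1\) and the invariant differential form is \(d\log_{LT}=\frac{dZ}{1+Z}\), so that \[\log_{LT}(Z)=\log(1+Z)\,,\qquad g_{LT}(Z)=\frac{1}{1+Z}\,,\qquad\partial_{\rm inv}(f)=(1+Z)\,f^{\prime}\,,\] and (3) reduces to the familiar identity \(\log((1+Z)^{a})=a\cdot\log(1+Z)\). Moreover \(\chi_{LT}=\chi_{cyc}\), so the character \(\tau=\chi_{cyc}\cdot\chi_{LT}^{-1}\) is trivial in this case.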
For \(n\geq 0\) we let \(L_{n}/L\) denote the extension (in \(\mathbb{C}_{p}\)) generated by the \(\pi_{L}^{n}\)-torsion points of \(LT\), and we put \(L_{\infty}:=\bigcup_{n}L_{n}\). The extension \(L_{\infty}/L\) is Galois. We let \(\Gamma_{L}:={\rm Gal}(L_{\infty}/L)\) and \(H_{L}:={\rm Gal}(\overline{L}/L_{\infty})\). The Lubin-Tate character \(\chi_{LT}\) induces an isomorphism \(\Gamma_{L}\stackrel{{\cong}}{{\longrightarrow}}o_{L}^{\times}\). Henceforth we use the same notation as in [SV15]. In particular, the ring endomorphisms induced by sending \(Z\) to \([\pi_{L}](Z)\) are called \(\varphi_{L}\) where applicable; e.g. for the ring \(\mathscr{A}_{L}\) defined to be the \(\pi_{L}\)-adic completion of \(o_{L}[[Z]][Z^{-1}]\) or \(\mathscr{B}_{L}:=\mathscr{A}_{L}[\pi_{L}^{-1}]\) which denotes the field of fractions of \(\mathscr{A}_{L}\). Recall that we also have introduced the unique additive endomorphism \(\psi_{L}\) of \(\mathscr{B}_{L}\) (and then \(\mathscr{A}_{L}\)) which satisfies \[\varphi_{L}\circ\psi_{L}=\pi_{L}^{-1}\cdot trace_{\mathscr{B}_{L}/\varphi_{L}( \mathscr{B}_{L})}\.\] Moreover, projection formula \[\psi_{L}(\varphi_{L}(f_{1})f_{2})=f_{1}\psi_{L}(f_{2})\qquad\text{for any $f_{i}\in\mathscr{B}_{L}$}\] as well as the formula \[\psi_{L}\circ\varphi_{L}=\frac{q}{\pi_{L}}\cdot\mathrm{id}\] hold. An etale \((\varphi_{L},\Gamma_{L})\)-module \(M\) comes with a Frobenius operator \(\varphi_{M}\) and an induced operator denoted by \(\psi_{M}\). Let \(\mathbf{\widehat{E}}^{+}:=\varprojlim o_{\mathbb{C}_{p}}/po_{\mathbb{C}_{p}}\) with the transition maps being given by the Frobenius \(\varphi(a)=a^{p}\). We may also identify \(\mathbf{\widehat{E}}^{+}\) with \(\varprojlim o_{\mathbb{C}_{p}}/\pi_{L}o_{\mathbb{C}_{p}}\) with the transition maps being given by the \(q\)-Frobenius \(\varphi_{q}(a)=a^{q}\). Recall that \(\mathbf{\widehat{E}}^{+}\) is a complete valuation ring with residue field \(\overline{\mathbb{F}_{p}}\) and its field of fractions \(\mathbf{\widehat{E}}=\varprojlim\mathbb{C}_{p}\) being algebraically closed of characteristic \(p\). Let \(\mathfrak{m}_{\mathbf{\widehat{E}}}\) denote the maximal ideal in \(\mathbf{\widehat{E}}^{+}\). The \(q\)-Frobenius \(\varphi_{q}\) first extends by functoriality to the rings of the Witt vectors \(W(\mathbf{\widehat{E}}^{+})\subseteq W(\mathbf{\widehat{E}})\) and then \(o_{L}\)-linearly to \(W(\mathbf{\widehat{E}}^{+})_{L}:=W(\mathbf{\widehat{E}}^{+})\otimes_{o_{L_{0}} }o_{L}\subseteq W(\mathbf{\widehat{E}})_{L}:=W(\mathbf{\widehat{E}})\otimes_ {o_{L_{0}}}o_{L}\), where \(L_{0}\) is the maximal unramified subextension of \(L\). The Galois group \(G_{L}\) obviously acts on \(\mathbf{\widehat{E}}\) and \(W(\mathbf{\widehat{E}})_{L}\) by automorphisms commuting with \(\varphi_{q}\). This \(G_{L}\)-action is continuous for the weak topology on \(W(\mathbf{\widehat{E}})_{L}\) (cf. [GAL, Lemma 1.5.3]). Sometimes we omit the index \(q,L\), or \(M\) from the Frobenius operator, but we always write \(\varphi_{p}\) when dealing with the \(p\)-Frobenius. ## 3 Wach-modules a la Kisin-Ren ### Wach-modules In this section we recall the theory of Wach-modules a la Kisin-Ren [KR] (with the simplifying assumption that - in their notation - \(K=L\), \(m=1\) etc.). By sending \(Z\) to \(\omega_{LT}\in W(\widetilde{\mathbf{E}}^{+})_{L}\) (see directly after [12, Lem. 
4.1]) we obtain a \(G_{L}\)-equivariant, Frobenius compatible embedding of rings \[o_{L}[[Z]]\longrightarrow W(\widetilde{\mathbf{E}}^{+})_{L}\] whose image we call \(\mathbf{A}_{L}^{+}\); it is a subring of \(\mathbf{A}_{L}\) (the image of \(\mathscr{A}_{L}\) in \(W(\widetilde{\mathbf{E}})_{L}\)). The latter ring is a complete discrete valuation ring with prime element \(\pi_{L}\) and residue field the image \(\mathbf{E}_{L}\) of \(k_{L}((Z))\hookrightarrow\widetilde{\mathbf{E}}\) sending \(Z\) to \(\omega:=\omega_{LT}\mod\pi_{L}\). We form the maximal integral unramified extension (= strict Henselization) \(\mathbf{A}_{L}^{nr}\) of \(\mathbf{A}_{L}\) inside \(W(\widetilde{\mathbf{E}})_{L}\). Its \(p\)-adic completion \(\mathbf{A}\) is still contained in \(W(\widetilde{\mathbf{E}})_{L}\). Note that \(\mathbf{A}\) is a complete discrete valuation ring with prime element \(\pi_{L}\) and residue field the separable algebraic closure \(\mathbf{E}_{L}^{sep}\) of \(\mathbf{E}_{L}\) in \(\widetilde{\mathbf{E}}\). By the functoriality properties of strict Henselizations the \(q\)-Frobenius \(\varphi_{q}\) preserves \(\mathbf{A}\). According to [KR, Lemma 1.4] the \(G_{L}\)-action on \(W(\widetilde{\mathbf{E}})_{L}\) respects \(\mathbf{A}\) and induces an isomorphism \(H_{L}=\ker(\chi_{LT})\xrightarrow{\cong}\mathrm{Aut}^{cont}(\mathbf{A}/\mathbf{A}_{L})\). We set \(\mathbf{A}^{+}:=\mathbf{A}\cap W(\widetilde{\mathbf{E}}^{+})_{L}\). Set \(Q:=\frac{[\pi_{L}](\omega_{LT})}{\omega_{LT}}\in\mathbf{A}_{L}^{+}\), which satisfies per definitionem \(\varphi_{L}(\omega_{LT})=Q\cdot\omega_{LT}\). Following [KR] we write \(\mathcal{O}=\mathcal{O}_{L}(\mathbf{B})\) for the ring of rigid analytic functions on the open unit disk \(\mathbf{B}\) over \(L\), or equivalently the ring of power series in \(Z\) over \(L\) converging in \(\mathbf{B}\). Via sending \(\omega_{LT}\) to \(Z\) we view \(\mathbf{A}_{L}^{+}\) as a subring of \(\mathcal{O}\). We denote by \(\mathrm{Mod}_{\mathcal{O}}^{\varphi_{L},\Gamma_{L},an}\) the category consisting of finitely generated free \(\mathcal{O}\)-modules \(\mathcal{M}\) together with the following data: 1. an isomorphism \(1\otimes\varphi_{\mathcal{M}}:(\varphi_{L}^{*}\mathcal{M})[\frac{1}{Q}]\cong\mathcal{M}[\frac{1}{Q}]\);\({}^{1}\) 2. a semi-linear \(\Gamma_{L}\)-action on \(\mathcal{M}\), commuting with \(\varphi_{\mathcal{M}}\) and such that the induced action on \(D(\mathcal{M}):=\mathcal{M}/\omega_{LT}\mathcal{M}\) is trivial. Footnote 1: By \(\varphi_{L}^{*}\mathcal{M}\) we understand the module \(\mathcal{O}\otimes_{\mathcal{O},\varphi_{L}}\mathcal{M}\). We note that, since \(\mathcal{M}/\omega_{LT}\mathcal{M}=\mathcal{M}[\frac{1}{Q}]/\omega_{LT}\mathcal{M}[\frac{1}{Q}]\), the map \(\varphi_{\mathcal{M}}\) induces an \(L\)-linear endomorphism of \(D(\mathcal{M})\), which we denote by \(\varphi_{D(\mathcal{M})}\). As a consequence of (1) it, in fact, is an automorphism. The \(\Gamma_{L}\)-action on \(\mathcal{M}\) is differentiable ([BSX, Lemma 3.4.13])\({}^{2}\), and the corresponding derived action of \(\mathrm{Lie}(\Gamma_{L})\) is \(L\)-bilinear ([BSX, Remark 3.4.15])\({}^{3}\). Footnote 2: Note that the statements in (loc. cit.) are all over the character variety; but by the introduction to §3.4 they are also valid over the open unit ball - with even easier proofs.
Footnote 3: In [KR] being \(L\)-analytic is an extra condition in the definition of \(\mathrm{Mod}_{\mathcal{O}}^{\varphi_{L},\Gamma_{L},an}\), which by this remark is automatically satisfied, whence the corresponding categories, with and without the superscript ‘an’ in [KR] coincide! Similarly, we denote by \(\mathrm{Mod}_{\mathbf{A}_{L}^{+}}^{\varphi_{L},\Gamma_{L},an}\) the category consisting of finitely generated free \(\mathbf{A}_{L}^{+}\)-modules \(N\) together with the following data: 1. an isomorphism \(1\otimes\varphi_{N}:(\varphi_{L}^{*}N)[\frac{1}{Q}]\cong N[\frac{1}{Q}]\). 4 Footnote 4: By \(\varphi_{L}^{*}N\) we understand the module \(\mathbf{A}_{L}^{+}\otimes_{\mathbf{A}_{L}^{+},\varphi_{L}}N\), and formally \(\varphi_{N}\) is a map from \(N\) to \(N[\frac{1}{Q}]\). 2. a semi-linear \(\Gamma_{L}\)-action on \(N\), commuting with \(\varphi_{N}\) and such that the induced action on \(N/\omega_{LT}N\) is trivial. The map \(\varphi_{N}\) induces an \(L\)-linear automorphism of \(D(N):=N[\frac{1}{p}]/\omega_{LT}N[\frac{1}{p}]\) denoted by \(\varphi_{D(N)}\). Obviously we have the base extension functor \(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}-:\mathrm{Mod}_{\mathbf{A}_{L}^{+}}^{ \varphi_{L},\Gamma_{L},an}\longrightarrow\mathrm{Mod}_{\mathcal{O}}^{\varphi_ {L},\Gamma_{L},an}\). It satisfies \[D(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N)=D(N). \tag{4}\] We write \(\mathrm{Mod}_{\mathcal{O}}^{\varphi_{L},\Gamma_{L},0}\) for the full subcategory of \(\mathrm{Mod}_{\mathcal{O}}^{\varphi_{L},\Gamma_{L},an}\) consisting of all \(\mathcal{M}\) such that \(\mathcal{R}\otimes_{\mathcal{O}}\mathcal{M}\) is pure of slope \(0\). Here \(\mathcal{R}\) denotes the Robba ring. By \(\mathrm{Mod}_{L}^{F,\varphi_{q}}\) we denote the category of finite dimensional \(L\)-vector spaces \(D\) equipped with an \(L\)-linear automorphism \(\varphi_{q}:D\xrightarrow{\cong}D\) and a decreasing, separated, and exhaustive filtration, indexed by \(\mathbb{Z}\), by \(L\)-subspaces. In \(\mathrm{Mod}_{L}^{F,\varphi_{q}}\) we have the full subcategory \(\mathrm{Mod}_{L}^{F,\varphi_{q},wa}\) of weakly admissible objects. For \(D\) in \(\mathrm{Mod}_{L}^{F,\varphi_{q},wa}\) let \(V_{L}(D)=\mathrm{Fil}^{0}(B_{cris,L}\otimes_{L}D)^{\varphi_{q}=1}\) where, as usual, \(B_{cris,L}:=B_{cris}\otimes_{L_{0}}L\). In order to formulate the crystalline comparison theorem in this context we also consider the category \(\mathrm{Mod}_{L_{0}\otimes\mathbb{Q}_{p}L}^{F,\varphi_{q}}\) of finitely generated free \(L_{0}\otimes_{\mathbb{Q}_{p}}L\)-modules \(\mathfrak{D}\) equipped with a \((\varphi_{p}\otimes\mathrm{id})\)-linear automorphism \(\varphi_{q}:\mathfrak{D}\xrightarrow{\cong}\mathfrak{D}\) and a decreasing, separated, and exhaustive filtration on \(\mathfrak{D}_{L}:=\mathfrak{D}\otimes_{L_{0}}L\), indexed by \(\mathbb{Z}\), by \(L\otimes_{\mathbb{Q}_{p}}L\)-submodules. 
For \(\mathfrak{D}\) in \(\mathrm{Mod}_{L_{0}\otimes\mathbb{Q}_{p}L}^{F,\varphi_{q}}\) we define, as usual, \[V(\mathfrak{D}):=(B_{cris}\otimes_{L_{0}}\mathfrak{D})^{\varphi=1}\cap \mathrm{Fil}^{0}(B_{dR}\otimes_{L}\mathfrak{D}_{L})\.\] Let \(\mathrm{Rep}_{o_{L},f}(G_{L})\) denote the category of finitely generated free \(o_{L}\)-modules equipped with a continuous linear \(G_{L}\)-action and \(\mathrm{Rep}_{o_{L},f}^{cris,an}(G_{L})\) the full subcategory of those \(T\) which are free over \(o_{L}\) and such that the representation \(V:=L\otimes_{o_{L}}T\) is crystalline and _analytic_, i.e., satisfying that, if \(D_{dR}(T):=(T\otimes_{\mathbb{Z}_{p}}B_{dR})^{G_{L}}\), the filtration on \(D_{dR}(T)_{\mathfrak{m}}\) is trivial for each maximal ideal \(\mathfrak{m}\) of \(L\otimes_{\mathbb{Q}_{p}}L\) which does not correspond to the identity \(\mathrm{id}:L\to L\). Correspondingly we let \(\mathrm{Rep}_{L}^{cris}(G_{L})\), resp. \(\mathrm{Rep}_{L}^{cris,an}(G_{L})\), denote the category of continuous \(G_{L}\)-representations in finite dimensional \(L\)-vector spaces which are crystalline, resp. crystalline and analytic. The base extension functor \(L\otimes_{o_{L}}-\) induces an equivalence of categories \(\mathrm{Rep}_{o_{L},f}^{cris,an}(G_{L})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p} \xrightarrow{\cong}\mathrm{Rep}_{L}^{cris,an}(G_{L})\). Here applying \(\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\) to a \(\mathbb{Z}_{p}\)-linear category means applying this functor to the Hom-modules. For \(V\) in \(\mathrm{Rep}_{L}^{cris,an}(G_{L})\) we set \(D_{cris,L}(V):=(B_{cris,L}\otimes_{L}V)^{G_{L}}=(B_{cris}\otimes_{L_{0}}V)^{G_{L}}\) and \(D_{cris}(V):=(B_{cris}\otimes_{\mathbb{Q}_{p}}V)^{G_{L}}\). The usual crystalline comparison theorem says that \(D_{cris}\) and \(V\) are equivalences of categories between \(\mathrm{Rep}_{L}^{cris}(G_{L})\) and the subcategory of weakly admissible objects in \(\mathrm{Mod}_{L_{0}\otimes_{\mathbb{Q}_{p}}L}^{F,\varphi_{q}}\). **Lemma 3.1.1**.: _([15, Lemma 5.3] and subsequent discussion, or [17, Cor. 3.3.1]) There is a fully faithful \(\otimes\)-functor_ \[\begin{split}\tilde{\ \ \ }:\mathrm{Mod}_{L}^{F,\varphi_{q}}& \longrightarrow\mathrm{Mod}_{L_{0}\otimes_{\mathbb{Q}_{p}}L}^{F, \varphi}\\ D&\longmapsto\tilde{D}:=L_{0}\otimes_{\mathbb{Q}_{p }}D\,\end{split}\] _whose essential image consists of all analytic objects, i.e., those for which the filtration on the non-identity components is trivial. A quasi-inverse functor from the essential image is given by sending \(\mathfrak{D}\) to the base extension \(L\otimes_{L_{0}\otimes\mathbb{Q}_{p}L}\mathfrak{D}\) for the multiplication map \(L_{0}\otimes_{\mathbb{Q}_{p}}L\to L\)._ Lemma 3.1.1 implies that \[D_{cris,L}(V)\tilde{\ \ }\cong D_{cris}(V)\qquad\text{for any $V$ in $\mathrm{Rep}_{L}^{cris,an}(G_{L})$}. \tag{5}\] We denote by \(\mathfrak{M}^{et}(\mathbf{A}_{L})\) the category of etale \((\varphi_{q},\Gamma_{L})\)-modules over \(\mathbf{A}_{L}\) (cf. [15, Def. 3.7]) and by \(\mathfrak{M}^{et}_{f}(\mathbf{A}_{L})\) the full subcategory consisting of those objects, which are finitely generated free as \(\mathbf{A}_{L}\)-module. For \(M\) in \(\mathfrak{M}^{et}_{f}(\mathbf{A}_{L})\), resp. for \(T\) in \(\operatorname{Rep}_{o_{L},f}(G_{L})\), we put \(V(M):=(\mathbf{A}\otimes_{\mathbf{A}_{L}}M)^{\varphi_{q}\otimes\varphi_{M}=1}\), resp. \(D_{LT}(T):=(\mathbf{A}\otimes_{o_{L}}T)^{\ker(\chi_{LT})}\). 
Having defined all of the relevant categories (and most of the functors) we now contemplate the following diagram of functors: The arrows without decoration are the obvious natural ones. The following pairs of functors are quasi-inverse \(\otimes\)-equivalences of \(\otimes\)-categories: * \((D_{LT},V)\) by [11, Thm. 1.6]; * \((D_{cris,L},V_{L})\) by the crystalline comparison theorem ([12, Rem. 3.6.7]) and Lemma 3.1.1; * \((D,\mathcal{M})\) by [11, Prop. 2.2.6] (or [2, Thm. 3.4.16]) and [11, Cor. 2.4.4], to which we also refer for the definition of the functor \(\mathcal{M}\). In particular, all functors in the above diagram are \(\otimes\)-functors. The second arrow in the left column, resp. the left arrow in the upper horizontal row, is an equivalence of categories by [11, Cor. 2.4.2], resp. by [11, Cor. 3.3.8]. The lower square and the upper triangle are commutative for trivial reasons. We list a few additional properties of these functors. **Remark 3.1.2**.: 1. _For any_ \(M\) _in_ \(\mathfrak{M}^{et}_{f}(\mathbf{A}_{L})\) _the inclusion_ \(V(M)\subseteq\mathbf{A}\otimes_{\mathbf{A}_{L}}M\) _extends to an isomorphism_ (6) _which is compatible with the_ \(\varphi_{q}\)_- and_ \(\Gamma_{L}\)_-actions on both sides._ 2. _The functors_ \(D_{LT}\)_,_ \(V\)_, and_ \(V(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}-)\) _respect exact sequences (of abelian groups)._ _._ 3. _([BSX, Prop. 3.4.14]) For any_ \(\mathcal{M}\) _in_ \(\mathrm{Mod}_{\mathcal{O}}^{\omega_{L},T_{L},an}\) _the projection map_ \(\mathcal{M}[\frac{\omega_{LT}}{t_{LT}}]\longrightarrow D(\mathcal{M})\) _restricts to an isomorphism_ \(\mathcal{M}[\frac{\omega_{LT}}{t_{LT}}]^{\Gamma_{L}}\stackrel{{ \cong}}{{\longrightarrow}}D(\mathcal{M})\) _such that the diagram_ \[\begin{CD}\mathcal{M}[\frac{\omega_{LT}}{t_{LT}}]^{\Gamma_{L}} \stackrel{{\cong}}{{\longrightarrow}}D(\mathcal{M})\\ @V{\varphi_{\mathcal{M}}}V{}V@V{}V{\varphi_{D(\mathcal{M})}}V\\ \mathcal{M}[\frac{\omega_{LT}}{t_{LT}}]^{\Gamma_{L}}\stackrel{{ \cong}}{{\longrightarrow}}D(\mathcal{M})\end{CD}\] _is commutative; moreover,_ \(\mathcal{M}[\frac{\omega_{LT}}{t_{LT}}]=\mathcal{O}[\frac{\omega_{LT}}{t_{LT}} ]\otimes_{L}\mathcal{M}[\frac{\omega_{LT}}{t_{LT}}]^{\Gamma_{L}}\cong \mathcal{O}[\frac{\omega_{LT}}{t_{LT}}]\otimes_{L}D(\mathcal{M})\)_._ Now we recall that \(A_{cris}\) is the \(p\)-adic completion of a divided power envelope of \(W(\widetilde{\mathbf{E}}^{+})\) and let \(A_{cris,L}:=A_{cris}\otimes_{L_{0}}L\). The inclusion \(W(\widetilde{\mathbf{E}}^{+})\subseteq A_{cris}\) induces an embedding \(\mathbf{A}_{L}^{+}\subseteq W(\widetilde{\mathbf{E}}^{+})_{L}\subseteq A_{cris,L}\). We observe that \(t_{LT}=\log_{LT}(\omega_{LT})\) belongs to \(B_{cris,L}^{\times}\). Indeed, by [10, SSIII.2] we know that \(\varphi_{p}(B_{max})\subseteq B_{cris}\subseteq B_{max}\), whence we obtain \[\varphi_{q}(B_{max}\otimes_{L_{0}}L)\subseteq B_{cris,L}\subseteq B_{max} \otimes_{L_{0}}L,\] where the definition of \(B_{max}\) can be found in (loc. cit.). By [10, Prop. 9.10, Lem. 9.17,SS9.7]\(t_{LT}\) and \(\omega_{LT}\) are invertible in \(B_{max,L}\subseteq B_{max}\otimes_{L_{0}}L\) (This reference assumes that the power series \([\pi_{L}](Z)\) is a polynomial. But, by some additional convergence considerations, the results can be seen to hold in general (cf. [GAL, SS2.1] for more details)). Hence, by the above inclusions and using that \(\varphi_{q}(t_{LT})=\pi_{L}t_{LT}\), we see that \(t_{LT}\) is a unit \(B_{cris,L}\). 
In particular, we have an inclusion \(A_{cris,L}[\frac{1}{\pi_{L}},\frac{1}{t_{LT}}]\subseteq B_{cris,L}\). Moreover, since \(\varphi_{q}(\omega_{LT})=Q\omega_{LT}\) is invertible in \(\varphi_{q}(B_{max}\otimes_{L_{0}}L)\), the elements \(\omega_{LT}\) and \(Q\) are units in \(B_{cris,L}\) as well. In particular, we have an inclusion \[\mathbf{A}^{+}[\tfrac{1}{\omega_{LT}}]\subseteq B_{cris,L}. \tag{7}\]

Next we shall recall in Lemma 3.1.4 below that the above inclusion \(\mathbf{A}_{L}^{+}\subseteq A_{cris,L}\) extends to a (continuous) ring homomorphism \[\mathcal{O}\to A_{cris,L}[\tfrac{1}{\pi_{L}}]\subseteq B_{cris,L}. \tag{8}\]

For \(\alpha\in\widetilde{\mathbf{E}}^{+}\cong\mathrm{proj}\lim_{n}o_{\mathbb{C}_{p}}\) we denote by \(\alpha^{(0)}\) as usual its zero-component.

**Lemma 3.1.3**.: _The following diagram of \(o_{L_{0}}\)-modules is commutative_ \[\begin{CD}W(\widetilde{\mathbf{E}}^{+})@>{u}>>W(\widetilde{\mathbf{E}}^{+})_{L}\\ @V{\Theta}VV@VV{\Theta_{L}}V\\ o_{\mathbb{C}_{p}}@=o_{\mathbb{C}_{p}}\end{CD} \tag{9}\] _where \(J:=\ker(\Theta)\), \(\Theta(\sum_{n\geq 0}[\alpha_{n}]p^{n})=\sum_{n\geq 0}\alpha_{n}^{(0)}p^{n}\) and similarly \(\Theta_{L}(\sum_{n\geq 0}[\alpha_{n}]\pi_{L}^{n})=\sum_{n\geq 0}\alpha_{n}^{(0)}\pi_{L}^{n}\), while \(u\) denotes the canonical map as defined in [FF, Lem. 1.2.3], which sends Teichmüller lifts \([\alpha]\) with respect to \(W(\widetilde{\mathbf{E}}^{+})\) to the Teichmüller lift \([\alpha]\) with respect to \(W(\widetilde{\mathbf{E}}^{+})_{L}\)._

Proof.: First of all we recall from [GAL, Lem. 1.6.1] that \(\Theta\) and \(\Theta_{L}\) are continuous and show that also \(u\) is continuous, each time with respect to the weak topology, for which a fundamental system of open neighbourhoods of \(0\) consists of \[U_{\mathfrak{a},m}:=\{(b_{0},b_{1},\ldots)\in W(\widetilde{\mathbf{E}}^{+})\,|\,b_{0},\ldots,b_{m-1}\in\mathfrak{a}\}=\sum_{i=0}^{m-1}V_{p}^{i}([\mathfrak{a}])+p^{m}W(\widetilde{\mathbf{E}}^{+})\] and similarly \(U_{\mathfrak{a},m}^{L}:=\{(b_{0},b_{1},\ldots)\in W(\widetilde{\mathbf{E}}^{+})_{L}\,|\,b_{0},\ldots,b_{m-1}\in\mathfrak{a}\}\) for open ideals \(\mathfrak{a}\) of \(\widetilde{\mathbf{E}}^{+}\) and \(m\geq 0\); see §1.5 in (loc. cit.). By \(o_{L_{0}}\)-linearity, we see that \(u(p^{m}W(\widetilde{\mathbf{E}}^{+}))\subseteq p^{m}W(\widetilde{\mathbf{E}}^{+})_{L}\). Using the relation \[u(V_{p}x)=\frac{p}{\pi_{L}}V_{\pi_{L}}(u(F^{f-1}x))\] from [FF, Lem. 1.2.3]5, where \(V_{?}\) denotes the Verschiebung, one easily concludes that

Footnote 5: Note the typos in (loc. cit.) where \(u(V_{p}x)=\frac{p}{\pi_{L}}V_{\pi_{L}}(F^{f-1}u(x))\) is stated. Moreover, one has the relation \(u(F^{f}x)=Fu(x)\).

\[u(V_{p}^{i}([b]))=(\frac{p}{\pi_{L}})^{i}V_{\pi_{L}}^{i}([b^{p^{i(f-1)}}]),\] whence \(u(U_{\mathfrak{a},m})\subseteq U_{\mathfrak{a},m}^{L}\) and continuity of \(u\) follows. Since the commutativity is clear on Teichmüller lifts and on \(p\) by \(o_{L_{0}}\)-linearity, and these generate a dense subring, the result follows by continuity.

The following lemma generalizes parts from [PR, Prop. 1.5.2].

**Lemma 3.1.4**.: _Sending \(f=\sum_{n\geq 0}a_{n}Z^{n}\) to \(f(\omega_{LT})\) induces a continuous map_ \[\mathcal{O}\to A_{cris,L}[\frac{1}{\pi_{L}}],\] _where the source carries the Fréchet topology while the target is a topological \(o_{L_{0}}\)-module, whose topology is uniquely determined by requiring that \(A_{cris,L}\) is open, i.e., the system \(p^{m}A_{cris,L}\) with \(m\geq 0\) forms a basis of open neighbourhoods of \(0\)._

Proof.: First of all, the relation \(J^{p}\subseteq pA_{cris}\) from [PR, §1.4.1, bottom of p. 96] (note that \(J^{p}\subseteq W_{p}(R)\) regarding the notation in (loc. cit.)
for the last object) implies easily by flat base change \[J^{p}_{L}\subseteq pA_{cris,L} \tag{10}\] with \(J_{L}:=J\otimes_{o_{L_{0}}}o_{L}\). By [GAL, Lem. 2.1.12] we know that \(\omega_{LT}\) belongs to \(\ker(\Theta_{L})\). Now we claim that there exists a natural number \(r^{\prime}\) such that \(\omega_{LT}^{r^{\prime}}\) lies in \(W_{1}:=J_{L}+pW(\widetilde{\mathbf{E}}^{+})_{L}\), whence for \(r:=pr^{\prime}\) we have \(\omega_{LT}^{r}\in W_{p}\) with \(W_{m}:=W_{1}^{m}\) for all \(m\geq 0\). To this aim note that diagram (9) induces the following commutative diagram with exact lines \[\begin{CD}0@>>>J_{L}@>>>W(\widetilde{\mathbf{E}}^{+})\otimes_{o_{L_{0}}}o_{L}@>{\Theta\otimes\operatorname{id}}>>o_{\mathbb{C}_{p}}\otimes_{o_{L_{0}}}o_{L}@>>>0\\ @.@VVV@VV{\cong}V@VV{\mu}V\\ 0@>>>\ker(\Theta_{L})@>>>W(\widetilde{\mathbf{E}}^{+})_{L}@>{\Theta_{L}}>>o_{\mathbb{C}_{p}}@>>>0\end{CD} \tag{11}\] where the map \(\mu\) is induced by sending \(a\otimes b\) to \(ab\) and a reference for the middle vertical isomorphism is [GAL, Prop. 1.1.26]. By the snake lemma the cokernel of the left vertical map is isomorphic to \[\begin{split}\ker(\mu)&\subseteq\ker\left((o_{\mathbb{C}_{p}}\otimes_{o_{L_{0}}}o_{L})/p(o_{\mathbb{C}_{p}}\otimes_{o_{L_{0}}}o_{L})\to\bar{k}\right)\\ &=\ker\left(o_{\mathbb{C}_{p}}/po_{\mathbb{C}_{p}}\otimes_{k}o_{L}/po_{L}\to o_{\mathbb{C}_{p}}/\mathfrak{m}_{\mathbb{C}_{p}}\otimes_{k}o_{L}/\pi_{L}o_{L}\right)\\ &=\mathfrak{m}_{\mathbb{C}_{p}}\otimes_{k}o_{L}/po_{L}+o_{\mathbb{C}_{p}}/po_{\mathbb{C}_{p}}\otimes_{k}\pi_{L}o_{L}/po_{L}\end{split}\] and thus consists of nilpotent elements, whence the claim follows. Here \(\mathfrak{m}_{\mathbb{C}_{p}}\) denotes the maximal ideal of \(o_{\mathbb{C}_{p}}\).

Now let \(f=\sum_{n\geqslant 0}a_{n}Z^{n}\) satisfy that \(|a_{n}|\rho^{n}\) tends to zero for all \(\rho<1\). Writing \(n=q_{n}r+r_{n}\) with \(0\leqslant r_{n}<r\), we have \[a_{n}\omega_{LT}^{n}=a_{n}\omega_{LT}^{r_{n}}(\omega_{LT}^{r})^{q_{n}}\in a_{n}W_{pq_{n}}\subseteq a_{n}p^{q_{n}}A_{cris,L},\] where the last inclusion follows from (10). But \(|a_{n}p^{q_{n}}|\leqslant|a_{n}|_{p}p^{1-\frac{n}{r}}\) tends to \(0\) for \(n\to\infty\). Thus the series \(\sum_{n\geqslant 0}a_{n}\omega_{LT}^{n}\) converges in \(A_{cris,L}[\frac{1}{\pi_{L}}]\). Moreover, since one has \(\sup_{n}|a_{n}p^{-1+\frac{n}{r}}|\leqslant p\|f\|_{\rho}\) for the usual norms \(\|\cdot\|_{\rho}\) if \(1>\rho>p^{-\frac{1}{r}}\), we obtain for any \(m\) that \[\{f\,|\ \|f\|_{\rho}<p^{-m-1}\}\subseteq\{f\in\mathcal{O}\,|\,f(\omega_{LT})\in p^{m}A_{cris,L}\},\] whence the latter set, which is the preimage of \(p^{m}A_{cris,L}\), is open. This implies continuity.

**Lemma 3.1.5**.: _The big square in the middle of the above diagram of functors is a commutative square of \(\otimes\)-functors (up to a natural isomorphism of \(\otimes\)-functors)._

Proof.: We have to establish a natural isomorphism \[L\otimes_{o_{L}}V(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N)\cong V_{L}(D(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N))\qquad\text{for any $N$ in $\operatorname{Mod}_{\mathbf{A}_{L}^{+}}^{\varphi_{L},\Gamma_{L},an}$}. \tag{12}\] In fact, we shall prove the dual statement, i.e., using (4), that \[(L\otimes_{o_{L}}V(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N))^{*}\cong V_{L}(D(N))^{*}, \tag{13}\] where \({}^{*}\) indicates the \(L\)-dual.
From the canonical isomorphisms \[\begin{split}\operatorname{Hom}_{\mathbf{A}_{L},\varphi_{q}}(M,\mathbf{A})&\cong\operatorname{Hom}_{\mathbf{A},\varphi_{q}}(\mathbf{A}\otimes_{\mathbf{A}_{L}}M,\mathbf{A})\\ &\cong\operatorname{Hom}_{\mathbf{A},\varphi_{q}}(\mathbf{A}\otimes_{o_{L}}V(M),\mathbf{A})\\ &\cong\operatorname{Hom}_{o_{L}}(V(M),\mathbf{A}^{\varphi_{q}=1})\\ &\cong\operatorname{Hom}_{o_{L}}(V(M),o_{L}),\end{split}\] where we used (6) for the second isomorphism and write \(M\) for \(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N\), we conclude that the left hand side of (13) is canonically isomorphic to \(\operatorname{Hom}_{\mathbf{A}_{L},\varphi_{q}}(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N,\mathbf{A})\otimes_{o_{L}}L\). Let \(\mathbf{A}^{+}:=\mathbf{A}\cap W(\widetilde{\mathbf{E}}^{+})_{L}\). On the one hand, by [KR, Lem. 3.2.1], base extension induces an isomorphism \[\operatorname{Hom}_{\mathbf{A}_{L}^{+},\varphi_{q}}(N,\mathbf{A}^{+}[\tfrac{1}{\omega_{LT}}])\xrightarrow{\simeq}\operatorname{Hom}_{\mathbf{A}_{L},\varphi_{q}}(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N,\mathbf{A})\.\] On the other hand, in [KR, Prop. 3.2.3] they construct a natural isomorphism \[\operatorname{Hom}_{\mathbf{A}_{L}^{+},\varphi_{q}}(N,\mathbf{A}^{+}[\tfrac{1}{\omega_{LT}}])\otimes_{o_{L}}L\xrightarrow{\cong}\operatorname{Hom}_{L,\varphi_{q},\operatorname{Fil}}((N/\omega_{LT}N)[\tfrac{1}{p}],B_{cris,L}). \tag{14}\] 6 Therefore, the left hand side of (13) becomes naturally isomorphic to

Footnote 6: \(\otimes_{o_{L}}L\) is missing in the reference!

\[\operatorname{Hom}_{L,\varphi_{q},\operatorname{Fil}}(D(N),B_{cris,L})\cong V_{L}(D(N)^{*}), \tag{15}\] where the last isomorphism is straightforward. Thus the proof of (13) is reduced to the canonical identity \[V_{L}(D(N)^{*})\cong V_{L}(D(N))^{*}. \tag{16}\] This can be proved in the same way as in [F1, Rem. 3.4.5 (iii), Rem. 3.6.7]: Since \(V_{L}\) is a rigid \(\otimes\)-functor, it preserves inner Hom-objects, in particular duals.

In order to see that (12) is compatible with tensor products note that base change, taking \(L\)-duals or applying comparison isomorphisms are \(\otimes\)-compatible. Thus the claim is reduced to the tensor compatibility of the isomorphism (14), the construction of which we therefore recall from [KR]. It is induced by a natural map \[\operatorname{Hom}_{\mathbf{A}_{L}^{+}}(N,\mathbf{A}^{+}[\tfrac{1}{\omega_{LT}}])\otimes_{o_{L}}L\longrightarrow\operatorname{Hom}_{L}((N/\omega_{LT}N)[\tfrac{1}{p}],B_{cris,L})\] which comes about as follows. Let \(f\in\operatorname{Hom}_{\mathbf{A}_{L}^{+}}(N,\mathbf{A}^{+}[\tfrac{1}{\omega_{LT}}])\). By composing \(f\) with the inclusion (7) we obtain \(f_{1}:N\to B_{cris,L}\). By base extension to \(\mathcal{O}\) via (8) and then localization in \(Q\) the map \(f_{1}\) gives rise to a map \(f_{2}:(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N)[\tfrac{1}{Q}]\to B_{cris,L}\).
This one we precompose with the isomorphism \(1\otimes\varphi_{N}\) to obtain \[f_{3}:(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+},\varphi_{L}}N)[\tfrac{1}{Q}]\xrightarrow{\cong}(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N)[\tfrac{1}{Q}]\to B_{cris,L}\.\] Now we observe the inclusions \[(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+},\varphi_{L}}N)[\tfrac{1}{Q}]\subseteq(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+},\varphi_{L}}N)[\tfrac{\omega_{LT}}{t_{LT}}]\supseteq(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+},\varphi_{L}}N)[\varphi_{L}(\tfrac{\omega_{LT}}{t_{LT}})]=\mathcal{O}\otimes_{\mathcal{O},\varphi_{L}}\left((\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N)[\tfrac{\omega_{LT}}{t_{LT}}]\right)\,.\] They differ only by elements which are invertible in \(B_{cris,L}\). Therefore giving the map \(f_{3}\) is equivalent to giving a map \(f_{4}:\mathcal{O}\otimes_{\mathcal{O},\varphi_{L}}\left((\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N)[\tfrac{\omega_{LT}}{t_{LT}}]\right)\to B_{cris,L}\). Finally we use Remark 3.1.2.iii, which gives the map \[\xi:(N/\omega_{LT}N)[\tfrac{1}{p}]\xrightarrow{\cong}\left((\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N)[\tfrac{\omega_{LT}}{t_{LT}}]\right)^{\Gamma_{L}}\hookrightarrow(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N)[\tfrac{\omega_{LT}}{t_{LT}}]\.\] By precomposing \(f_{4}\) with \(1\otimes\xi\) we at last arrive at a map \(f_{5}:(N/\omega_{LT}N)[\tfrac{1}{p}]\to B_{cris,L}\). From this description the compatibility with tensor products is easily checked.

Suppose that \(N\) is in \(\operatorname{Mod}_{\mathbf{A}_{L}^{+}}^{\varphi_{L},\Gamma_{L},an}\) and put \(T:=V(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N)\) in \(\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\). Then, by Remark 3.1.2.iii and Lemma 3.1.5, we have a natural isomorphism of \(\otimes\)-functors \[\operatorname{comp}:\mathcal{O}[\tfrac{\omega_{LT}}{t_{LT}}]\otimes_{\mathbf{A}_{L}^{+}}N\xrightarrow{\cong}\mathcal{O}[\tfrac{\omega_{LT}}{t_{LT}}]\otimes_{L}D_{cris,L}(L\otimes_{o_{L}}T) \tag{17}\] which is compatible with the diagonal \(\varphi\)'s on both sides.

In the proof of [KR, Cor. 3.3.8] it is shown that, for any \(T\) in \(\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\), there exists an \(\mathbf{A}_{L}^{+}\)-submodule \(\mathfrak{M}\subseteq D_{LT}(T)\) which

* (N1) lies in \(\operatorname{Mod}_{\mathbf{A}_{L}^{+}}^{\varphi_{L},\Gamma_{L},an}\) with \(\varphi_{\mathfrak{M}}\) and the \(\Gamma_{L}\)-action on \(\mathfrak{M}\) being induced by the \((\varphi_{q},\Gamma_{L})\)-structure of \(D_{LT}(T)\), and
* (N2) satisfies \(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}\mathfrak{M}=D_{LT}(T)\).

Note that property (N2) implies that \(\mathfrak{M}\) is _p-saturated in \(D_{LT}(T)\)_, i.e., \(\mathfrak{M}[\frac{1}{p}]\cap D_{LT}(T)=\mathfrak{M}\), since \(\mathbf{A}_{L}^{+}\) is obviously \(p\)-saturated in \(\mathbf{A}_{L}\). We once and for all pick such an \(N(T):=\mathfrak{M}\). This defines a functor \[N:\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\longrightarrow\operatorname{Mod}_{\mathbf{A}_{L}^{+}}^{\varphi_{L},\Gamma_{L},an}\] which is quasi-inverse to the upper left horizontal arrow in the above big diagram. Note that \(N\) is in a unique way a \(\otimes\)-functor by [Sa, I.4.4.2.1].

**Remark 3.1.6**.: _For \(T\) in \(\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\) and \(N:=N(T)\) in \(\operatorname{Mod}_{\mathbf{A}_{L}^{+}}^{\varphi_{L},\Gamma_{L},an}\) we have:_
1. _If_ \(L\otimes_{o_{L}}T\) _is a positive_7 _analytic crystalline representation, then_ \(N\) _is stable under_ \(\varphi_{N}\)_;_

Footnote 7: i.e., the Hodge-Tate weights are non-positive, i.e., \(gr^{j}D_{dR}(T)\neq 0\) implies that \(j\geq 0\).

2. _If the Hodge-Tate weights of_ \(L\otimes_{o_{L}}T\) _are all_ \(\geq 0\)_, then we have_ \(N\subseteq\mathbf{A}_{L}^{+}\cdot\varphi_{N}(N)\)_, where the latter means the_ \(\mathbf{A}_{L}^{+}\)_-span generated by_ \(\varphi_{N}(N)\)_._

Proof.: The corresponding assertions for \(\mathcal{M}:=\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N\) are contained in [BSX, Cor. 3.4.9]. Let \(n_{1},\ldots,n_{d}\) be an \(\mathbf{A}_{L}^{+}\)-basis of \(N\). For (i) we have to show that \(\varphi_{N}(n_{j})\in N\) for any \(1\leq j\leq d\). Writing \(\varphi_{N}(n_{j})=\sum_{i=1}^{d}f_{ij}n_{i}\) we know from the definition of the category \(\operatorname{Mod}_{\mathbf{A}_{L}^{+}}^{\varphi_{L},\Gamma_{L},an}\) that \(f_{ij}\in\mathbf{A}_{L}^{+}[\frac{1}{Q}]\) and from the above observation that \(f_{ij}\in\mathcal{O}\). This reduces us to showing that \(o_{L}[[Z]][\frac{1}{Q}]\cap\mathcal{O}\subseteq o_{L}[[Z]]\). Suppose therefore that \(Q^{r}h=f\) for some \(r\geq 1\), \(h\in\mathcal{O}\), and \(f\in o_{L}[[Z]]\). The finitely many zeros of \(Q\in o_{L}[[Z]]\), which are the nonzero \(\pi_{L}\)-torsion points of the Lubin-Tate formal group, all lie in the open unit disk. By Weierstrass preparation it follows that \(Q\) must divide \(f\) already in \(o_{L}[[Z]]\). Hence \(h\in o_{L}[[Z]]\).

For (ii) we have to show that \(n_{j}=\sum_{i=1}^{d}f_{ij}\varphi_{N}(n_{i})\), for any \(1\leq j\leq d\), with \(f_{ij}\in\mathbf{A}_{L}^{+}\). For the same reasons as in the proof of (i) we have \(n_{j}=\sum_{i=1}^{d}f_{ij}^{\prime}\varphi_{N}(n_{i})=\sum_{i=1}^{d}f_{ij}^{\prime\prime}\varphi_{N}(n_{i})\) with \(f_{ij}^{\prime}\in\mathbf{A}_{L}^{+}[\frac{1}{Q}]\) and \(f_{ij}^{\prime\prime}\in\mathcal{O}\). Then \(\sum_{i=1}^{d}(f_{ij}^{\prime}-f_{ij}^{\prime\prime})\varphi_{N}(n_{i})=0\). But, again by the definition of the category \(\operatorname{Mod}_{\mathbf{A}_{L}^{+}}^{\varphi_{L},\Gamma_{L},an}\), the \(\varphi_{N}(n_{i})\) are linearly independent over \(\mathbf{A}_{L}^{+}[\frac{1}{Q}]\) and hence over \(\mathcal{O}[\frac{1}{Q}]\). It follows that \(f_{ij}^{\prime}=f_{ij}^{\prime\prime}\in\mathbf{A}_{L}^{+}\).

First we further investigate any \(T\) in \(\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\) whose **Hodge-Tate weights are all**\(\leq 0\), i.e., which is positive. For this purpose we need the ring \(\mathbf{A}^{+}=\mathbf{A}\cap W(\widetilde{\mathbf{E}}^{+})_{L}\). One has the following general fact.

**Lemma 3.1.7**.: _Let \(F\) be any nonarchimedean valued field which contains \(o_{L}/\pi_{L}o_{L}\), and let \(o_{F}\) denote its ring of integers; we have:_

1. _Let_ \(\alpha\in W(F)_{L}\) _be any element; if the_ \(W(o_{F})_{L}\)_-submodule of_ \(W(F)_{L}\) _generated by_ \(\{\varphi_{q}^{i}(\alpha)\}_{i\geq 0}\) _is finitely generated, then_ \(\alpha\in W(o_{F})_{L}\)_._
2. _Let_ \(X\) _be a finitely generated free_ \(o_{L}\)_-module, and let_ \(M\) _be a finitely generated_ \(W(o_{F})_{L}\)_-submodule of_ \(W(F)_{L}\otimes_{o_{L}}X\)_; if_ \(M\) _is_ \(\varphi_{q}\otimes\mathrm{id}\)_-invariant, then_ \(M\subseteq W(o_{F})_{L}\otimes_{o_{L}}X\)_._

Proof.: i.
This is a simple explicit calculation as given, for example, in the proof of [Co1, Lem. III.5].

ii. This is a straightforward consequence of i.

**Proposition 3.1.8**.: _For positive \(T\) in \(\mathrm{Rep}^{cris,an}_{o_{L},f}(G_{L})\) we have_ \[N(T)\subseteq D^{+}_{LT}(T):=(\mathbf{A}^{+}\otimes_{o_{L}}T)^{\ker(\chi_{LT})}\,\] _and \(N(T)\) is \(p\)-saturated in \(D^{+}_{LT}(T)\)._

Proof.: By Remark 3.1.6.i the \(\mathbf{A}^{+}_{L}\)-submodule \(N(T)\) of \(W(\widetilde{\mathbf{E}})_{L}\otimes_{o_{L}}T\) is \(\varphi_{q}\otimes\mathrm{id}\)-invariant (and finitely generated). Hence we may apply Lemma 3.1.7.ii to \(M:=W(\widetilde{\mathbf{E}}^{+})_{L}\cdot N(T)\) and obtain that \(N(T)\subseteq(W(\widetilde{\mathbf{E}}^{+})_{L}\otimes_{o_{L}}T)\cap(\mathbf{A}\otimes_{o_{L}}T)^{\ker(\chi_{LT})}=D^{+}_{LT}(T)\). Since \(N(T)\) is even \(p\)-saturated in \(D_{LT}(T)\), the same holds with respect to the smaller \(D^{+}_{LT}(T)\).

**Corollary 3.1.9**.: _For positive \(T\) in \(\mathrm{Rep}^{cris,an}_{o_{L},f}(G_{L})\) the \(\mathbf{A}^{+}_{L}\)-module \(D^{+}_{LT}(T)\) is free of the same rank as \(N(T)\)._

Proof.: By the argument in the proof of [Co1, Lem. III.3] the \(\mathbf{A}^{+}_{L}\)-module \(D^{+}_{LT}(T)\) is always free of rank less than or equal to the rank of \(N(T)\). The equality of the ranks in the positive case then is a consequence of Prop. 3.1.8.

Next we relate \(N(T)\) to the construction in [Be, Prop. II.1.1].

**Proposition 3.1.10**.: _Suppose that \(T\) in \(\mathrm{Rep}^{cris,an}_{o_{L},f}(G_{L})\) is positive. For \(N:=N(T)\) we then have:_

1. \(N\) _is the unique_ \(\mathbf{A}^{+}_{L}\)_-submodule of_ \(D_{LT}(T)\) _which satisfies (N1) and (N2)._
2. \(N\) _is also the unique_ \(\mathbf{A}^{+}_{L}\)_-submodule of_ \(D^{+}_{LT}(T)\) _which satisfies:_
   1. (a) \(N\) _is free of rank equal to the rank of_ \(D^{+}_{LT}(T)\)_;_
   2. (b) \(N\) _is_ \(\Gamma_{L}\)_-invariant;_
   3. (c) _the induced_ \(\Gamma_{L}\)_-action on_ \(N/\omega_{LT}N\) _is trivial;_
   4. (d) \(\omega^{r}_{LT}D^{+}_{LT}(T)\subseteq N\) _for some_ \(r\geq 0\)_._

Proof.: Let \(P=P(\mathbf{A}^{+}_{L})\) denote the set of height one prime ideals of \(\mathbf{A}^{+}_{L}\). It contains the prime ideal \(\mathfrak{p}_{0}:=(\omega_{LT})\).

_Step 1:_ We show the existence of a unique \(\mathbf{A}^{+}_{L}\)-submodule \(N^{\prime}\) of \(D^{+}_{LT}(T)\) which satisfies (a) - (d), and we show that this \(N^{\prime}\) is \(\varphi_{q}\)-invariant.

_Existence:_ We begin by observing that the \(\mathbf{A}^{+}_{L}\)-submodule \(N:=N(T)\) of \(D^{+}_{LT}(T)\) has the properties (a), (b), and (c), but possibly not (d). In particular, the quotient \(D^{+}_{LT}(T)/N\) is an \(\mathbf{A}^{+}_{L}\)-torsion module. Hence the localizations \(N_{\mathfrak{p}}=D^{+}_{LT}(T)_{\mathfrak{p}}\) coincide for all but finitely many \(\mathfrak{p}\in P\). By [B-CA, VII.4.3 Thm. 3] there exists a unique intermediate \(\mathbf{A}^{+}_{L}\)-module \(N\subseteq N^{\prime}\subseteq D^{+}_{LT}(T)\) which is finitely generated and reflexive and such that \(N^{\prime}_{\mathfrak{p}_{0}}=N_{\mathfrak{p}_{0}}\) and \(N^{\prime}_{\mathfrak{p}}=D^{+}_{LT}(T)_{\mathfrak{p}}\) for any \(\mathfrak{p}\in P\backslash\{\mathfrak{p}_{0}\}\). Since \(\mathbf{A}^{+}_{L}\) is a two-dimensional regular local ring the finitely generated reflexive module \(N^{\prime}\) is actually free, and then, of course, must have the same rank as \(N\) and \(D^{+}_{LT}(T)\).
We also have \(N^{\prime}=\bigcap_{\mathfrak{p}}N^{\prime}_{\mathfrak{p}}=N_{\mathfrak{p}_{0}}\cap\bigcap_{\mathfrak{p}\neq\mathfrak{p}_{0}}D^{+}_{LT}(T)_{\mathfrak{p}}\). Since \(\mathfrak{p}_{0}\) is preserved by \(\varphi_{D^{+}_{LT}(T)}\) and \(\Gamma_{L}\) it follows that \(N^{\prime}\) is \(\varphi_{D^{+}_{LT}(T)}\)- and \(\Gamma_{L}\)-invariant. Next the identities \[L\otimes_{o_{L}}N/\omega_{LT}N=N_{\mathfrak{p}_{0}}/\omega_{LT}N_{\mathfrak{p}_{0}}=N^{\prime}_{\mathfrak{p}_{0}}/\omega_{LT}N^{\prime}_{\mathfrak{p}_{0}}=L\otimes_{o_{L}}N^{\prime}/\omega_{LT}N^{\prime}\supseteq N^{\prime}/\omega_{LT}N^{\prime}\] show that the induced \(\Gamma_{L}\)-action on \(N^{\prime}/\omega_{LT}N^{\prime}\) is trivial. By using [B-CA, VII.4.4 Thm. 5] we obtain, for some \(m_{1},\ldots,m_{d}\geqslant 0\), a homomorphism of \({\bf A}^{+}_{L}\)-modules \(D^{+}_{LT}(T)/N^{\prime}\longrightarrow\oplus_{i=1}^{d}{\bf A}^{+}_{L}/\mathfrak{p}_{0}^{m_{i}}{\bf A}^{+}_{L}\) whose kernel is finite. Any finite \({\bf A}^{+}_{L}\)-module is annihilated by a power of the maximal ideal in \({\bf A}^{+}_{L}\). We see that \(D^{+}_{LT}(T)/N^{\prime}\) is annihilated by a power of \(\mathfrak{p}_{0}\), which proves (d).

_Uniqueness:_ Observing that \(\gamma(\omega_{LT})=[\chi_{LT}(\gamma)](\omega_{LT})\) for any \(\gamma\in\Gamma_{L}\) ([GAL, Lem. 2.1.15]) this is exactly the same computation as in the uniqueness part of the proof of [Be, Prop. II.1.1].

_Step 2:_ We show that \(N^{\prime}\) is \(p\)-saturated in \(D^{+}_{LT}(T)\). By construction we have \((N^{\prime})_{(\pi_{L})}=D^{+}_{LT}(T)_{(\pi_{L})}\). This implies that the \(p\)-torsion in the quotient \(D^{+}_{LT}(T)/N^{\prime}\) is finite. On the other hand, both modules, \(N^{\prime}\) and \(D^{+}_{LT}(T)\), are free of the same rank. Hence the finitely generated \({\bf A}^{+}_{L}\)-module \(D^{+}_{LT}(T)/N^{\prime}\) has projective dimension \(\leqslant 1\) and therefore has no nonzero finite submodule (cf. [NSW, Prop. 5.5.3(iv)]).

_Step 3:_ We show that \(N^{\prime}=N\). Since both \(N\) and \(N^{\prime}\) are \(p\)-saturated in \(D^{+}_{LT}(T)\) it suffices to show that the free \({\bf B}^{+}_{L}\)-modules \(N(V):=N[\frac{1}{p}]\) and \(N^{\prime}(V):=N^{\prime}[\frac{1}{p}]\) over the principal ideal domain \({\bf B}^{+}_{L}:={\bf A}^{+}_{L}[\frac{1}{p}]\) coincide. As they are both \(\Gamma_{L}\)-invariant, so is the annihilator ideal \(I:=ann_{{\bf B}^{+}_{L}}(N^{\prime}(V)/N(V))\). Hence, by a standard argument as in [Be, Lem. 1.3.2], the ideal \(I\) is generated by an element \(f\) of the form \(\omega_{LT}^{\alpha_{0}}\prod_{n=1}^{s}\varphi_{L}^{n-1}(Q)^{\alpha_{n}}\) with certain \(\alpha_{n}\geqslant 0\), \(0\leqslant n\leqslant s\), for some (minimal) \(s\geqslant 0\). Since \(N(V)_{(\omega_{LT})}=N^{\prime}(V)_{(\omega_{LT})}\) by the construction of \(N^{\prime}\), it follows that \(\alpha_{0}=0\). Assuming that \(M:=N^{\prime}(V)/N(V)\neq 0\) we conclude that \(s\geqslant 1\) (with \(\alpha_{s}\geqslant 1\)), i.e., that, with \(\mathfrak{p}_{n}:=(\varphi_{L}^{n-1}(Q))\), we have \(M_{\mathfrak{p}_{s}}\neq 0\) while \(M_{\mathfrak{p}_{s+1}}=0\). We claim that \((\varphi_{L}^{*}M)_{\mathfrak{p}_{s+1}}\neq 0\).
First note that we have an exact sequence \[0\rightarrow({\bf B}^{+}_{L})^{d}\xrightarrow{A}({\bf B}^{+}_{L})^{d}\to M\to 0\,\] with \(f\) dividing \(\det(A)\in{\bf B}^{+}_{L}\backslash({\bf B}^{+}_{L})^{\times}_{\mathfrak{p}_{s}}\), which induces an exact sequence \[0\rightarrow({\bf B}^{+}_{L})^{d}\xrightarrow{\varphi_{L}(A)}({\bf B}^{+}_{L})^{d}\rightarrow\varphi_{L}^{*}M\to 0\.\] Since \(\varphi_{L}(f)=\prod_{n=2}^{s+1}\varphi_{L}^{n-1}(Q)^{\alpha_{n-1}}\) divides \(\det(\varphi_{L}(A))\) we conclude that \(\det(\varphi_{L}(A))\) belongs to \(\mathfrak{p}_{s+1}\), which implies the claim.

Now consider the following diagram with exact rows The upper isomorphism comes from the definition of the category \(\operatorname{Mod}_{{\bf A}^{+}_{L}}^{\varphi_{L},\Gamma_{L},an}\) in which \(N\) lies. The map \((\varphi_{L}^{*}N^{\prime}(V))[\frac{1}{Q}]\to N^{\prime}(V)[\frac{1}{Q}]\) is injective since \(\varphi_{L}^{*}N^{\prime}\to N^{\prime}\) is the restriction of the isomorphism \(\varphi_{L}^{*}D_{LT}(T)\xrightarrow{\cong}D_{LT}(T).\) By the snake lemma and as \(Q\notin\mathfrak{p}_{s+1}\) we obtain an injection \[0\neq(\varphi_{L}^{*}M)_{\mathfrak{p}_{s+1}}\hookrightarrow M_{\mathfrak{p}_{s+1}}=0\,\] which is a contradiction. Thus \(M=0\) as had to be shown.

**Remark 3.1.11**.:
1. \(N(o_{L}(\chi_{LT}^{-1}))=\omega_{LT}\mathbf{A}_{L}^{+}\otimes_{o_{L}}o_{L}\eta^{\otimes-1}\) _and_ \(N(o_{L})=\mathbf{A}_{L}^{+}\)_._
2. _Let_ \(o_{L}(\chi)=o_{L}t_{0}\) _with_ \(\chi:G_{L}\to o_{L}^{\times}\) _unramified. Then there exists an_ \(a\in W(\bar{k}_{L})_{L}^{\times}\) _with_ \(\sigma a=\chi^{-1}(\sigma)a\) _for all_ \(\sigma\in G_{L}\) _by Remark_ 3.2.4_8_; in particular,_

Footnote 8: Since \(\pi_{L}\) has trivial \(G_{L}\)-action the period \(a\) there can be normalized such that it becomes a unit in \(W(\bar{k}_{L})_{L}\).

\[N(o_{L}(\chi))=D_{LT}^{+}(o_{L}(\chi))=\mathbf{A}_{L}^{+}n_{0}\quad\text{for $n_{0}=a\otimes t_{0}$},\] _where_ \(\Gamma_{L}\) _fixes_ \(n_{0}\) _and_ \(\varphi_{N(o_{L}(\chi))}(n_{0})=cn_{0}\) _with_ \(c:=\frac{\varphi_{L}(a)}{a}\in o_{L}^{\times}\)_._

Proof.: Each case belongs to a positive representation \(T\): in all cases the right hand side of the equality satisfies the properties characterizing \(N(T)\) in Prop. 3.1.10.ii (cf. [GAL, Lem. 2.1.15]).

**Lemma 3.1.12**.: _For any \(T\in\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\) we have:_

1. \(N(T)\) _is the unique_ \(\mathbf{A}_{L}^{+}\)_-submodule of_ \(D_{LT}(T)\) _which satisfies (N1) and (N2);_
2. \(N(T(\chi_{LT}^{-r}))\cong\omega_{LT}^{r}N(T)\otimes_{o_{L}}o_{L}\eta^{\otimes-r}\)_._

Proof.: First we choose \(r\geqslant 0\) such that \(T(\chi_{LT}^{-r})\) is positive. Sending \(N\) to \(\omega_{LT}^{r}N\otimes_{o_{L}}o_{L}\eta^{\otimes-r}\subseteq D_{LT}(T)\otimes_{o_{L}}o_{L}\eta^{\otimes-r}\), viewed in \(D_{LT}(T)\otimes_{o_{L}}o_{L}\eta^{\otimes-r}\cong D_{LT}(T(\chi_{LT}^{-r}))\), sets up a bijection between the \(\mathbf{A}_{L}^{+}\)-submodules of \(D_{LT}(T)\) and \(D_{LT}(T(\chi_{LT}^{-r}))\), respectively. One checks that \(N\) satisfies (N1) and (N2) if and only if its image does. Hence i. and ii. (for such \(r\)) are a consequence of Prop. 3.1.10.i. That ii. holds in general follows from the obvious transitivity property of the above bijections.
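As a quick plausibility check of ii. (an illustration only): take \(T=o_{L}\) and \(r=1\); since \(N(o_{L})=\mathbf{A}_{L}^{+}\) by Remark 3.1.11.i, part ii. gives
\[N(o_{L}(\chi_{LT}^{-1}))\cong\omega_{LT}\mathbf{A}_{L}^{+}\otimes_{o_{L}}o_{L}\eta^{\otimes-1}\,\]
which recovers the first formula of Remark 3.1.11.i.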
**Proposition 3.1.13**.: _Let \(T\) be in \(\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\) of \(o_{L}\)-rank \(d\) and such that \(V=L\otimes_{o_{L}}T\) is positive with Hodge-Tate weights \(-r=-r_{d}\leqslant\cdots\leqslant-r_{1}\leqslant 0\). Taking (17) as an identification we then have_ \[(\tfrac{t_{LT}}{\omega_{LT}})^{r}\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(T)\subseteq\mathcal{O}\otimes_{L}D_{cris,L}(L\otimes_{o_{L}}T)\subseteq\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(T) \tag{18}\] _with elementary divisors_ \[[\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(T):\mathcal{O}\otimes_{L}D_{cris,L}(L\otimes_{o_{L}}T)]=[(\tfrac{t_{LT}}{\omega_{LT}})^{r_{1}}:\cdots:(\tfrac{t_{LT}}{\omega_{LT}})^{r_{d}}].\]

Proof.: We abbreviate \(D:=D_{cris,L}(V)\). By the definition of the functor \(\mathcal{M}\) in [KR] we have \[\mathcal{O}\otimes_{L}D\subseteq\mathcal{M}(D)\subseteq(\tfrac{t_{LT}}{\omega_{LT}})^{-r}\mathcal{O}\otimes_{L}D. \tag{19}\] On the other hand, the commutativity of the big diagram before Remark 3.1.2 says that \(\mathcal{M}(D)\cong\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(T)\). This implies the inclusions (18). Concerning the second part of the assertion we first of all note that, although \(\mathcal{O}\) is only a Bézout domain, it does satisfy the elementary divisor theorem ([ST1, proof of Prop. 4.4]). We may equivalently determine the elementary divisors of the \(\mathcal{O}\)-module \(\mathcal{M}(D)/(\mathcal{O}\otimes_{L}D)\). The countable set \(\mathbb{S}\) of zeros of the function \(\frac{t_{LT}}{\omega_{LT}}\in\mathcal{O}\) coincides with the set of nonzero torsion points of our Lubin-Tate formal group, each occurring with multiplicity one. The first part of the assertion implies that the \(\mathcal{O}\)-module \(\mathcal{M}(D)/(\mathcal{O}\otimes_{L}D)\) is supported on \(\mathbb{S}\). Let \(\mathcal{M}_{z}(D)\), resp. \(\mathcal{O}_{z}\), denote the stalk in \(z\in\mathbb{S}\) of the coherent sheaf on \(\mathbf{B}\) defined by \(\mathcal{M}(D)\), resp. \(\mathcal{O}\). The argument in the proof of [BSX, Prop. 1.1.10] then shows that we have \[\mathcal{M}(D)/(\mathcal{O}\otimes_{L}D)=\prod_{z\in\mathbb{S}}\mathcal{M}_{z}(D)/(\mathcal{O}_{z}\otimes_{L}D)\.\] The ring \(\mathcal{O}_{z}\) is a discrete valuation ring with maximal ideal \(\mathfrak{m}_{z}\) generated by \(\frac{t_{LT}}{\omega_{LT}}\). We consider on its field of fractions \(\operatorname{Fr}(\mathcal{O}_{z})\) the \(\mathfrak{m}_{z}\)-adic filtration and then on \(\operatorname{Fr}(\mathcal{O}_{z})\otimes_{L}D\) the tensor product filtration. By [Kis, Lem. 1.2.1(2)] (or [BSX, Lem. 3.4.4]) we have \[\mathcal{M}_{z}(D)\cong\operatorname{Fil}^{0}(\operatorname{Fr}(\mathcal{O}_{z})\otimes_{L}D)\qquad\text{for any }z\in\mathbb{S},\] and this isomorphism preserves \(\mathcal{O}_{z}\otimes_{L}D\). At this point we let \(0\leqslant s_{1}<\ldots<s_{m}\leqslant r\) denote the jumps of the filtration \(\operatorname{Fil}^{\bullet}D\), i.e., the \(r_{j}\) but without repetition. We write \[D=D_{1}\oplus\ldots\oplus D_{m}\quad\text{such that}\quad\operatorname{Fil}^{s_{i}}D=D_{i}\oplus\ldots\oplus D_{m}\.\] For the following computation let, for notational simplicity, \(R\) denote any \(L\)-algebra which is a discrete valuation ring with maximal ideal \(\mathfrak{m}\).
We compute \[\begin{split}\operatorname{Fil}^{0}(\operatorname{Fr}(R)\otimes_{L}D)&=\sum_{j\in\mathbb{Z}}\mathfrak{m}^{-j}\otimes_{L}\operatorname{Fil}^{j}D=\sum_{j=0}^{r}\mathfrak{m}^{-j}\otimes_{L}\operatorname{Fil}^{j}D\\ &=\sum_{i=1}^{m}\mathfrak{m}^{-s_{i}}\otimes_{L}\operatorname{Fil}^{s_{i}}D=\sum_{i=1}^{m}\sum_{j=i}^{m}\mathfrak{m}^{-s_{i}}\otimes_{L}D_{j}\\ &=\sum_{j=1}^{m}(\sum_{i=1}^{j}\mathfrak{m}^{-s_{i}})\otimes_{L}D_{j}=\sum_{j=1}^{m}\mathfrak{m}^{-s_{j}}\otimes_{L}D_{j}\.\end{split}\] Hence we obtain \[\operatorname{Fil}^{0}(\operatorname{Fr}(R)\otimes_{L}D)/(R\otimes_{L}D)=\oplus_{j=1}^{m}\mathfrak{m}^{-s_{j}}/R\otimes_{L}D_{j}\cong\oplus_{j=1}^{m}R/\mathfrak{m}^{s_{j}}\otimes_{L}D_{j}\.\] By combining all of the above we finally arrive at \[\begin{split}\mathcal{M}(D)/(\mathcal{O}\otimes_{L}D)&=\prod_{z\in\mathbb{S}}\mathcal{M}_{z}(D)/(\mathcal{O}_{z}\otimes_{L}D)\cong\prod_{z\in\mathbb{S}}\operatorname{Fil}^{0}(\operatorname{Fr}(\mathcal{O}_{z})\otimes_{L}D)/(\mathcal{O}_{z}\otimes_{L}D)\\ &\cong\prod_{z\in\mathbb{S}}(\oplus_{j=1}^{m}\mathcal{O}_{z}/\mathfrak{m}_{z}^{s_{j}}\otimes_{L}D_{j})=\oplus_{j=1}^{m}(\prod_{z\in\mathbb{S}}(\mathcal{O}_{z}/(\frac{t_{LT}}{\omega_{LT}})^{s_{j}}\mathcal{O}_{z}\otimes_{L}D_{j}))\\ &=\oplus_{j=1}^{m}(\prod_{z\in\mathbb{S}}\mathcal{O}_{z}/(\frac{t_{LT}}{\omega_{LT}})^{s_{j}}\mathcal{O}_{z})\otimes_{L}D_{j}=\oplus_{j=1}^{m}\mathcal{O}/(\frac{t_{LT}}{\omega_{LT}})^{s_{j}}\mathcal{O}\otimes_{L}D_{j}\.\end{split}\]

For a first application of this result we recall the comparison isomorphism (20) for any \(T\) in \(\operatorname{Rep}^{cris,an}_{o_{L},f}(G_{L})\), \(V:=L\otimes_{o_{L}}T\), and \(N(V):=N(T)[\frac{1}{\pi_{L}}]\). The left horizontal inclusion comes from the fact that \(\frac{t_{LT}}{\omega_{LT}}\) is a multiple of \(Q\) in \(\mathcal{O}\). In particular, we have the commutative diagram where \(\varphi_{cris}\) denotes the \(q\)-Frobenius on \(D_{cris,L}(V)\) and where \[N^{(\varphi)}(V):=\text{the }\mathbf{A}_{L}^{+}\text{-submodule of }N(V)[\tfrac{1}{Q}]\text{ generated by the image of }N(V)\text{ under }\varphi_{N(V)}.\] We note that, since \(Q\) is invertible in \(\mathbf{A}_{L}\), \(N^{(\varphi)}(V)\) can also be viewed as the \(\mathbf{A}_{L}^{+}\)-submodule of \(D_{LT}(V)=\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N(V)\) generated by the image of \(N(V)\) under \(\varphi_{D_{LT}(V)}\). From this one easily deduces (use the projection formula for the \(\psi\)-operator) that the map \(\psi_{D_{LT}(V)}\) on \(D_{LT}(V)\) restricts to an operator \[\psi_{N(V)}:N^{(\varphi)}(V)\longrightarrow N(V)\.\]

**Corollary 3.1.14**.: _Assume that the Hodge-Tate weights of \(V\) are all in \([0,r]\). Then we have_ \[\operatorname{comp}(N(V))\subseteq\mathcal{O}\otimes_{L}D_{cris,L}(V),\ \operatorname{comp}(N^{(\varphi)}(V))\subseteq\mathcal{O}\otimes_{L}D_{cris,L}(V),\text{ and} \tag{21}\] \[\operatorname{comp}(N^{(\varphi)}(V)^{\psi_{N(V)}=0})\subseteq\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V). \tag{22}\]

Proof.: Apply Prop. 3.1.13 to \(T(\chi_{LT}^{-r})\), then divide the resulting (left) inclusion in (18) by \(t_{LT}^{r}\) and tensor with \(o_{L}(\chi_{LT}^{r})\). This gives the first inclusion by Lemma 3.1.12 upon noting that \(t_{LT}^{r}D_{cris,L}(L\otimes_{o_{L}}T)\otimes_{L}L\eta^{\otimes-r}=D_{cris,L}(L\otimes_{o_{L}}T(\chi_{LT}^{-r}))\). The second inclusion follows easily from the first by using that the map \(\operatorname{comp}\) is compatible with the \(\varphi\)'s.
For the third inclusion we consider any element \(x=\sum_{i}f_{i}\varphi_{N(V)}(x_{i})\in N^{(\varphi)}(V)\), with \(f_{i}\in\mathbf{A}_{L}^{+}\) and \(x_{i}\in N(V)\), such that \(\psi_{N(V)}(x)=\sum_{i}\psi_{L}(f_{i})x_{i}=0\). We choose an \(L\)-basis \(e_{1},\dots,e_{m}\) of \(D_{cris,L}(V)\) and write \(\operatorname{comp}(x_{i})=\sum_{j}f_{ij}\otimes e_{j}\) with \(f_{ij}\in\mathcal{O}\). Then \[0=\operatorname{comp}(\psi_{N(V)}(x))=\sum_{i}\psi_{L}(f_{i})\operatorname{comp}(x_{i})=\sum_{i}\sum_{j}\psi_{L}(f_{i})f_{ij}\otimes e_{j}\] and it follows that \[\psi_{L}(\sum_{i}f_{i}\varphi_{L}(f_{ij}))=\sum_{i}\psi_{L}(f_{i})f_{ij}=0\,\] i.e., that \(\sum_{i}f_{i}\varphi_{L}(f_{ij})\in\mathcal{O}^{\psi_{L}=0}\). On the other hand we compute \[\begin{split}\operatorname{comp}(x)&=\sum_{i}f_{i}\varphi_{N(V)}(x_{i})=\sum_{i}f_{i}(\varphi_{L}\otimes\varphi_{cris})(\operatorname{comp}(x_{i}))\\ &=\sum_{i}\sum_{j}f_{i}(\varphi_{L}(f_{ij})\otimes\varphi_{cris}(e_{j}))\\ &=\sum_{j}\big{(}\sum_{i}f_{i}\varphi_{L}(f_{ij})\big{)}\otimes\varphi_{cris}(e_{j})\,\end{split}\] which therefore lies in \(\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V)\). This proves the third inclusion.

**Corollary 3.1.15**.: _In the situation of Prop. 3.1.13 we have_ \[D_{cris,L}(V)\cong\left(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(T)\right)^{\Gamma_{L}}.\]

Proof.: We set \(\mathcal{M}:=\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(T)\) and identify \(D(\mathcal{M})\) and \(D_{cris,L}(V)\) based on Lemma 3.1.5 and using (18). The proof of [KR, Prop. 2.2.6] combined with Remark 3.1.2.iii implies the commutativity of the following diagram, in which the right vertical map is the canonical inclusion while the left vertical map stems from the definition of the functor \(\mathcal{M}\) as in (19) (which also implies the commutativity of the left triangle). Taking \(\Gamma_{L}\)-invariants and using the fact that the upper line induces the isomorphism \(D(\mathcal{M})\cong\mathcal{M}[\frac{\omega_{LT}}{t_{LT}}]^{\Gamma_{L}}\) of Remark 3.1.2.iii, the result follows.

**Corollary 3.1.16**.: _In the situation of Prop. 3.1.13 we have \(Q^{r}N(V)\subseteq N^{(\varphi)}(V)\)._

Proof.: In the present situation \(\varphi_{N(V)}:N(V)\to N(V)\) is a semilinear endomorphism of \(N(V)\) by Remark 3.1.6.i. Then \(\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}=\varphi_{L}\otimes\varphi_{N(V)}:\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)\to\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)\) is an endomorphism as well. The corresponding linearized maps are \[\begin{split}\varphi_{N(V)}^{lin}:\mathbf{A}_{L}^{+}\otimes_{\mathbf{A}_{L}^{+},\varphi_{L}}N(V)&\xrightarrow{\simeq}N^{(\varphi)}(V)\subseteq N(V)\\ f\otimes x&\longmapsto f\varphi_{N(V)}(x)\end{split}\] and \[\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}^{lin}=\operatorname{id}_{\mathcal{O}}\otimes\varphi_{N(V)}^{lin}:\mathcal{O}\otimes_{\mathbf{A}_{L}^{+},\varphi_{L}}N(V)=\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}\left(\mathbf{A}_{L}^{+}\otimes_{\mathbf{A}_{L}^{+},\varphi_{L}}N(V)\right)\longrightarrow\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)\.\] Since \(\mathcal{O}\) is flat over \(\mathbf{A}_{L}^{+}[\frac{1}{\pi_{L}}]\) it follows that \[\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)/\operatorname{im}(\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}^{lin})=\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}(N(V)/N^{(\varphi)}(V))\.\] But \(\mathcal{O}\) is even faithfully flat over \(\mathbf{A}_{L}^{+}[\frac{1}{\pi_{L}}]\). Hence the natural map \[N(V)/N^{(\varphi)}(V)\longrightarrow\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)/\operatorname{im}(\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}^{lin})\] is injective.
This reduces us to proving that \[Q^{r}(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V))\subseteq\operatorname{im}(\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}^{lin})\.\] As for any object in the category \(\operatorname{Mod}_{\mathcal{O}}^{\varphi_{L},\Gamma_{L},an}\), we do have \[Q^{h}(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V))\subseteq\operatorname{im}(\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}^{lin})\] for some sufficiently big integer \(h\). On the other hand, (18) says that \[(\frac{t_{LT}}{\omega_{LT}})^{r}\mathrm{comp}\big{(}\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)\big{)}\subseteq\mathcal{O}\otimes_{L}D_{cris,L}(V)\subseteq\mathrm{comp}\big{(}\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)\big{)}\.\] Since \(\varphi_{cris}\) is bijective we can sharpen the right hand inclusion to \[\mathcal{O}\otimes_{L}D_{cris,L}(V)\subseteq\mathrm{comp}\big{(}\operatorname{im}(\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}^{lin})\big{)}\.\] It follows that \((\frac{t_{LT}}{\omega_{LT}})^{r}(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V))\subseteq\operatorname{im}(\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}^{lin})\). Since the greatest common divisor of \(Q^{h}\) and \((\frac{t_{LT}}{\omega_{LT}})^{r}\) is \(Q^{\min(h,r)}\) we finally obtain that \(Q^{r}(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V))\subseteq\operatorname{im}(\varphi_{\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(V)}^{lin})\).

**Corollary 3.1.17**.: _In the situation of Prop. 3.1.13 we have, with regard to an \(\mathbf{A}_{L}^{+}\)-basis of \(N:=N(T)\) and with \(s:=\sum_{i=1}^{d}r_{i}\), that_ \[\det(\varphi_{N}:N(T)\to N(T))=\det(\varphi_{N(V)}:N(V)\to N(V))=Q^{s}\] _up to an element in \(o_{L}^{\times}\cdot(\varphi_{L}-1)\big{(}(\mathbf{A}_{L}^{+})^{\times}\big{)}\)._

Proof.: Note first that \(N\) is \(\varphi_{N}\)-stable by Remark 3.1.6.i. Moreover, the determinant of \(\varphi_{N}\) acting on \(N(V)\) equals the determinant of \(\varphi_{L}\otimes\varphi_{N}\) acting on \(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(T)\), since we can take for both an \(\mathbf{A}_{L}^{+}\)-basis of \(N(T)\). Since \(\varphi_{L}(\frac{t_{LT}}{\omega_{LT}})=\frac{\pi_{L}}{Q}\frac{t_{LT}}{\omega_{LT}}\), by Prop. 3.1.13 the latter determinant equals \((\frac{\pi_{L}}{Q})^{-s}\) multiplied by the determinant of \(\varphi_{L}\otimes\)Frob acting on \(\mathcal{O}\otimes_{L}D_{cris,L}(V)\). The latter is equal to the determinant of Frob on \(D_{cris,L}(V)\), which is \(\pi_{L}^{s}\) up to a unit in \(o_{L}\) since the filtered Frobenius module \(D_{cris,L}(V)\) is weakly admissible. This shows the claim up to an element in \(o_{L}^{\times}\cdot(\varphi_{L}-1)(\mathcal{O}^{\times})\). But \(\mathcal{O}^{\times}=\pi_{L}^{\mathbb{Z}}\times(\mathbf{A}_{L}^{+})^{\times}\) by [1, (4.8)]. Hence \((\varphi_{L}-1)(\mathcal{O}^{\times})=(\varphi_{L}-1)\big{(}(\mathbf{A}_{L}^{+})^{\times}\big{)}\).

### The determinant of the crystalline comparison isomorphism

Let \(T\) be any object in \(\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\) of \(o_{L}\)-rank \(d\) and such that \(V=L\otimes_{o_{L}}T\) has Hodge-Tate weights \(-r=-r_{d}\leqslant\cdots\leqslant-r_{1}\); we set \(s:=\sum_{i=1}^{d}r_{i}\), \(N:=N(T)\) and \(\mathcal{M}:=\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N\). Consider the integral lattice \[\mathcal{D}:=\mathcal{D}(T)\subseteq D_{cris,L}(V)\] which is defined as the image of \(N/\omega_{LT}N\subseteq D(N)\) under the natural isomorphisms \(D(N)\cong D(\mathcal{M})\cong D_{cris,L}(V)\) arising from Lemma 3.1.5 and (4).
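Two rank-one illustrations of this lattice may be helpful (they follow from Remark 3.1.11 together with Lemma 3.2.7.iii below and are not needed for the proofs): for the trivial representation \(T=o_{L}\) one has \(N(o_{L})=\mathbf{A}_{L}^{+}\) and hence, under the natural identifications,
\[\mathcal{D}(o_{L})=o_{L}\subseteq L=D_{cris,L}(L)\,\]
while for an unramified twist \(T=o_{L}(\chi)\) as in Remark 3.1.11.ii the element \(a\otimes t_{0}\) is an \(o_{L}\)-basis of \(\mathcal{D}(T)\).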
Since \(N(-)\) is a \(\otimes\)-functor, so is \(\mathcal{D}(-)\). The aim of this subsection is to prove the following result.

**Proposition 3.2.1**.: _With regard to bases of \(T\) and \(\mathcal{D}\) the determinant of the crystalline comparison isomorphism_ \[B_{cris,L}\otimes_{L}V\cong B_{cris,L}\otimes_{L}D_{cris,L}(V)\] _belongs to \(t_{LT}^{s}W(\bar{k}_{L})_{L}^{\times}\)._

We write \(\bigwedge V\) for the highest exterior power of \(V\) over \(L\).

**Remark 3.2.2**.: _If \(V\) is \(L\)-analytic (Hodge-Tate, crystalline), then so is \(\bigwedge V\)._

Since \(D_{cris,L}\) is a tensor functor, we are mainly reduced to considering characters \(\rho:G_{L}\to L^{\times}\), for which we denote by \(V_{\rho}\) the corresponding representation space.

**Remark 3.2.3**.:
1. _If_ \(V_{\rho}\) _is Hodge-Tate, then_ \(\rho\) _coincides on an open subgroup of the inertia group_ \(I_{L}\) _of_ \(G_{L}\) _with_ \[\prod_{\sigma\in\Sigma_{L}}\sigma^{-1}\circ\chi_{\sigma L,LT}^{n_{\sigma}},\] _for some integers_ \(n_{\sigma}\)_, where_ \(\Sigma_{L}\) _denotes the set of embeddings of_ \(L\) _into_ \(\bar{L}\) _and_ \(\chi_{\sigma L,LT}\) _is the Lubin-Tate character for_ \(\sigma L\) _and_ \(\sigma(\pi_{L})\)_._
2. _If, in addition,_ \(V_{\rho}\) _is_ \(L\)_-analytic, then_ \(\rho\) _coincides on an open subgroup of the inertia group_ \(I_{L}\) _with_ \(\chi_{LT}^{n}\) _for some integer_ \(n\)_._

Proof.: This follows from [Se0, III.A4 Prop. 4 as well as III.A5 Thm. 2 and its corollary].

**Remark 3.2.4**.: _Let \(\rho\) be a crystalline (hence Hodge-Tate) and \(L\)-analytic character. We then have:_

1. _If_ \(\rho\) _factorizes through_ \(G(L^{\prime}/L)\) _for some discretely valued Galois extension_ \(L^{\prime}\) _of_ \(L\)_, then the determinant of the crystalline comparison isomorphism for_ \(V_{\rho}\) _belongs to_ \((W(\bar{k}_{L})_{L}[\frac{1}{p}])^{\times}\) _(with respect to arbitrary bases of_ \(V\) _and_ \(D_{cris,L}(V)\)_)._
2. _If_ \(\rho\) _has Hodge-Tate weight_ \(-s\)_, then the determinant of the crystalline comparison isomorphism for_ \(V_{\rho}\) _lies in_ \(t_{LT}^{s}(W(\bar{k}_{L})_{L}[\frac{1}{p}])^{\times}\)_._
3. \(\rho\) _is of the form_ \(\chi_{LT}^{n}\chi^{un}\) _with an integer_ \(n\) _and an unramified character_ \(\chi^{un}\)_._9

Footnote 9: Also the converse statement is true: Indeed, any unramified character is locally algebraic (by definition, see [Se0]), whence Hodge-Tate. By [Se0, A3 Prop 3, A5 Thm 2] the Hodge-Tate weights are all zero, whence \(\rho\) is \(L\)-analytic. It is crystalline as it is admissible, i.e., there is a period in \(\mathbb{C}_{p}^{\times}\), which in this case needs to lie in the fixed field under inertia, which is contained in \(B_{cris}\).

Proof.: For any algebraic extension \(K\) of \(\mathbb{Q}_{p}\) we shall write \(K_{0}\) for the maximal absolutely unramified subextension of \(K\). Taking \(G_{L^{\prime}}\)-invariants of the comparison isomorphism shows that the latter is already defined over \[B_{cris,L}^{G_{L^{\prime}}}=(L\otimes_{L_{0}}B_{cris})^{G_{L^{\prime}}}=L\otimes_{L_{0}}(B_{cris})^{G_{L^{\prime}}}=L\otimes_{L_{0}}\widehat{L_{0}^{\prime}}\subseteq W(\bar{k}_{L})_{L}[\frac{1}{p}],\] whence (i). Using Remark 3.2.3 (ii) and applying (i) to \(\rho\chi_{LT}^{-n}\) gives (ii).
By the same argument it suffices to prove (iii) in the case of Hodge-Tate weight \(0\). Then its period lies in the completion of the maximal unramified extension of \(L\) by (i), whence the claim that \(\rho\) is unramified follows, as the inertia subgroup of \(G_{L}\) must act trivially.

By Prop. 3.1.8 we have \[N(T)\subseteq D_{LT}^{+}(T)\subseteq\mathbf{A}^{+}\otimes_{o_{L}}T\] if \(T\) is positive. Using (N2) and the isomorphism \[\mathbf{A}\otimes_{\mathbf{A}_{L}}D_{LT}(T)\cong\mathbf{A}\otimes_{o_{L}}T\] we obtain a canonical injection \[\mathbf{A}^{+}\otimes_{\mathbf{A}_{L}^{+}}N(T)\hookrightarrow\mathbf{A}^{+}\otimes_{o_{L}}T. \tag{23}\]

**Proposition 3.2.5**.: _If \(T\) is positive, then the determinant of (23) with respect to bases of \(N(T)\) and \(T\) is contained in \(\omega_{LT}^{s}(\mathbf{A}_{L}^{+})^{\times}\cdot W(\bar{k}_{L})_{L}^{\times}\)._

Proof.: Let \(M\in M_{d}(\mathbf{A}^{+})\) be the matrix of a basis of \(N(T)\) with respect to a basis of \(T\) and \(P\in M_{d}(\mathbf{A}_{L}^{+})\) the matrix of \(\varphi_{L}\) with respect to the same basis of \(N(T)\). Then we have \(\varphi_{L}(M)=MP\). By Corollary 3.1.17 we have \(\det(P)=Q^{s}\varphi_{L}(f)f^{-1}u\) for some \(f\in(\mathbf{A}_{L}^{+})^{\times}\) and \(u\in o_{L}^{\times}\). But \(Q=\varphi_{L}(\omega_{LT})\omega_{LT}^{-1}\). We deduce that \[\varphi_{L}(\det(M))=\varphi_{L}(\omega_{LT}^{s}f)(\omega_{LT}^{s}f)^{-1}u\det(M)\,\] i.e., that \((\omega_{LT}^{s}fa)^{-1}\det(M)\in\mathbf{A}^{\varphi_{L}=1}=o_{L}\), with \(a\in W(\bar{k}_{L})_{L}^{\times}\) such that \(\varphi_{L}(a)/a=u\). It follows that \(\det(M)\in\omega_{LT}^{s}o_{L}(\mathbf{A}_{L}^{+})^{\times}\cdot W(\bar{k}_{L})_{L}^{\times}\). But we also have \(\det(M)\in\mathbf{A}^{\times}\). Hence we finally obtain \(\det(M)\in\omega_{LT}^{s}o_{L}(\mathbf{A}_{L}^{+})^{\times}\cdot W(\bar{k}_{L})_{L}^{\times}\cap\mathbf{A}^{\times}=\omega_{LT}^{s}(\mathbf{A}_{L}^{+})^{\times}\cdot W(\bar{k}_{L})_{L}^{\times}\).

**Remark 3.2.6**.: _For \(T=o_{L}(\chi)\) with unramified \(\chi\) as in Remark 3.1.11 the map (23) maps the basis \(n_{0}\) to \(a\otimes t_{0}\)._

**Lemma 3.2.7**.: _If \(T\) is positive, then we have:_

1. \(\mathcal{O}\otimes_{o_{L}}\mathcal{D}(T)=\mathcal{O}\otimes_{L}D_{cris,L}(V)\subseteq\operatorname{comp}(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N(T))\)_;_
2. _the determinant of the inclusion in i. with respect to bases of_ \(\mathcal{D}(T)\) _and_ \(N(T)\) _belongs to_ \((\frac{t_{LT}}{\omega_{LT}})^{s}(\mathbf{A}_{L}^{+})^{\times}\)_._
3. _for_ \(T=o_{L}(\chi)\) _with unramified_ \(\chi\) _as in Remark_ 3.1.11_:_ \(\operatorname{comp}(n_{0})=\varphi_{L}(a)\otimes t_{0}=ca\otimes t_{0}\in D_{cris,L}(V)\) _with_ \(c=\frac{\varphi_{L}(a)}{a}\in o_{L}^{\times}\)_; in particular, the element_ \(a\otimes t_{0}\) _is a basis of_ \(\mathcal{D}(T)\)_._

Proof.: By construction the comparison isomorphism (17) is of the form \[\operatorname{comp}=\operatorname{id}_{\mathcal{O}[\frac{\omega_{LT}}{t_{LT}}]}\otimes_{L}\operatorname{comp}_{0}\] with \[\operatorname{comp}_{0}:\left(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N[\tfrac{\omega_{LT}}{t_{LT}}]\right)^{\Gamma_{L}}\xrightarrow[\mathrm{pr}]{\simeq}N/\omega_{LT}N[\tfrac{1}{p}]=D(N)\xrightarrow{\simeq}D_{cris,L}(V)\,\] the right hand arrow being the natural isomorphism from Lemma 3.1.5.
For positive \(T\) we know in addition from the proof of Corollary 3.1.15 that \(\left(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N\right)^{\Gamma_{L}}=\left(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N[\tfrac{\omega_{LT}}{t_{LT}}]\right)^{\Gamma_{L}}\). We deduce that \[\operatorname{comp}(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N)\supseteq\mathcal{O}\otimes_{L}\operatorname{comp}_{0}(\left(\mathcal{O}\otimes_{\mathbf{A}_{L}^{+}}N\right)^{\Gamma_{L}})=\mathcal{O}\otimes_{L}D_{cris,L}(V)\.\] By Prop. 3.1.13 we know that the determinant in ii. is of the form \((\frac{t_{LT}}{\omega_{LT}})^{s}f(\omega_{LT})\) with \(f(\omega_{LT})\in\mathcal{O}^{\times}\). On the other hand, if we base change the inclusion in i. to \(L=\mathcal{O}/\omega_{LT}\mathcal{O}\) then we obtain the base change from \(o_{L}\) to \(L\) of the isomorphism \(\mathcal{D}\cong N/\omega_{LT}N\). By our choice of bases the determinant of the latter lies in \(o_{L}^{\times}\). Since evaluation in zero maps \((\frac{t_{LT}}{\omega_{LT}})^{s}f(\omega_{LT})\) to \(f(0)\) it follows that \(f(0)\) belongs to \(o_{L}^{\times}\) and hence ([11, (4.8)]) that \(f(\omega_{LT})\) belongs to \((\mathbf{A}_{L}^{+})^{\times}\).

Now we prove iii.: By the above description of \(\mathrm{comp}_{0}\) we have to show that the image \(\bar{n}_{0}\in D(N(T))\) of \(n_{0}\) is mapped to \(ca\otimes t_{0}\) under the natural isomorphism from Lemma 3.1.5. Since under the crystalline comparison isomorphisms these elements are sent to \(a\otimes(a^{-1}\otimes\bar{n}_{0})\in B_{cris,L}\otimes_{L}V_{L}(D(N))\) and \(ca\otimes t_{0}\in B_{cris,L}\otimes_{o_{L}}T\), respectively, it suffices to show that the map (12) sends \(a^{-1}\otimes n_{0}\in L\otimes_{o_{L}}V(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N)\) (which corresponds to \(t_{0}\) under the canonical isomorphism \(T\cong V(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N)\)) to \((ca)^{-1}\otimes\bar{n}_{0}\in V_{L}(D(N))\). Dualizing, this is equivalent to the claim that the map (13) sends the dual basis \(\delta_{a^{-1}\otimes n_{0}}\in(L\otimes_{o_{L}}V(M))^{*}\) of \(a^{-1}\otimes n_{0}\) to \(\delta_{(ca)^{-1}\otimes\bar{n}_{0}}\in V_{L}(D(N))^{*}\). Note that the isomorphism \[(L\otimes_{o_{L}}V(M))^{*}\cong L\otimes_{o_{L}}\mathrm{Hom}_{\mathbf{A}_{L},\varphi_{q}}(\mathbf{A}_{L}\otimes_{\mathbf{A}_{L}^{+}}N,\mathbf{A})\cong L\otimes_{o_{L}}\mathrm{Hom}_{\mathbf{A}_{L}^{+},\varphi_{q}}(N,\mathbf{A}^{+}[\tfrac{1}{\omega_{LT}}])\] sends \(\delta_{a^{-1}\otimes n_{0}}\) to \(a\delta_{n_{0}}\). Thus it suffices to show that the map (14) sends \(a\delta_{n_{0}}\) to \(ca\delta_{\bar{n}_{0}}\) in \(\mathrm{Hom}_{L,\varphi_{q},\mathrm{Fil}}((N/\omega_{LT}N)[\tfrac{1}{p}],B_{cris,L})\), since the latter corresponds under (15) to \(ca\otimes\delta_{\bar{n}_{0}}\in V_{L}(D(N)^{*})\) which in turn corresponds to \(\delta_{(ca)^{-1}\otimes\bar{n}_{0}}\) under (16). If \(f=a\delta_{n_{0}}\), which is the map sending \(n_{0}\) to \(a\), then - in the notation of the proof of Lemma 3.1.5 - \(f_{1}\) and \(f_{2}\) share this property, while \(f_{3}\) (and hence \(f_{4}\)) sends \(c^{-1}n_{0}\) to \(a\), because \(\varphi_{N}(c^{-1}n_{0})=c^{-1}\varphi_{L}(a)a^{-1}n_{0}=n_{0}\). Then \(f_{5}\) sends \(c^{-1}\bar{n}_{0}\) to \(a\), because \(\xi(c^{-1}\bar{n}_{0})=c^{-1}n_{0}\). Altogether this means that \(a\delta_{n_{0}}\) is mapped to \(\varphi_{L}(a)\delta_{\bar{n}_{0}}=ca\delta_{\bar{n}_{0}}\), as claimed.
Proof of Prop. 3.2.1.: The functor \(D_{cris,L}(-)\) on crystalline Galois representations is a \(\otimes\)-functor and commutes with exterior powers, and the crystalline comparison isomorphism is compatible with tensor products and exterior powers. The analogous facts hold for the functor \(N(-)\) and hence for the functor \(\mathcal{D}(-)\) (by base change). The case of the functor \(N(-)\) reduces, by using the properties (N1) and (N2) in Lemma 3.1.12.i, to the case of the functor \(D_{LT}(-)\). Here the properties can easily be seen by the comparison isomorphism (6). Upon replacing \(T\) by its highest exterior power we may and do assume that the \(o_{L}\)-module \(T\) has rank \(1\). In addition, by twisting \(T\) if necessary with a power of \(\chi_{LT}\), we may and do assume that \(T\) is positive with \(s=0\), i.e., unramified by Remark 3.2.4. In this case it is clear that - using the notation of Lemma 3.2.7.iii - the crystalline comparison isomorphism sends \(t_{0}\) to \(a\otimes t_{0}\). Since the latter is also a basis of \(\mathcal{D}(T)\) by the same lemma, the proposition follows.

### Non-negative Hodge-Tate weights

Now assume that for \(T\) in \(\mathrm{Rep}_{o_{L},f}^{cris,an}(G_{L})\) the **Hodge-Tate weights are all**\(\geq 0\) and set \(N:=N(T)\). By [16, Remark 3.2.i.-ii.] the map \(\psi_{L}\) preserves \(\mathbf{A}_{L}^{+}\). It follows that \(\psi_{D_{LT}(T)}\) maps \(\mathbf{A}_{L}^{+}\cdot\varphi_{N}(N)\) - and hence \(N\) by Remark 3.1.6.ii - into \(N\). The following lemmata generalize those of [1, Appendix A].

**Lemma 3.3.1**.: _For \(m\geq 1\), there exists \(Q_{m}\in o_{L}[[Z]]\) such that_ \[\psi_{L}(\frac{1}{\omega_{LT}^{m}})=\frac{\pi_{L}^{m-1}+\omega_{LT}Q_{m}(\omega_{LT})}{\omega_{LT}^{m}}.\]

Proof.: According to the paragraph after Remark 2.1 in [15] combined with Remark 3.2.ii in (loc. cit.) we have that \[h(\omega_{LT}):=\omega_{LT}^{m}\psi_{L}(\frac{1}{\omega_{LT}^{m}})=\psi_{L}(\frac{[\pi_{L}]^{m}}{\omega_{LT}^{m}})\in\mathbf{A}_{L}^{+}.\] Obviously there exists \(Q_{m}\in o_{L}[[Z]]\) such that \[h(\omega_{LT})-h(0)=\omega_{LT}Q_{m}(\omega_{LT}).\] Thus the claim follows from \[\begin{split}h(0)&=\varphi_{L}(h(\omega_{LT}))_{|\omega_{LT}=0}=\varphi_{L}\circ\psi_{L}(\frac{[\pi_{L}]^{m}}{\omega_{LT}^{m}})_{|\omega_{LT}=0}=\pi_{L}^{-1}\sum_{a\in LT_{1}}\left(\frac{[\pi_{L}]^{m}(a+_{LT}\,\omega_{LT})}{(a+_{LT}\,\omega_{LT})^{m}}\right)_{|\omega_{LT}=0}\\ &=\pi_{L}^{-1}\sum_{a\in LT_{1}}\left(\frac{[\pi_{L}](\omega_{LT})}{a+_{LT}\,\omega_{LT}}\right)^{m}_{|\omega_{LT}=0}=\pi_{L}^{m-1},\end{split}\] because \(\left(\frac{[\pi_{L}](\omega_{LT})}{a+_{LT}\omega_{LT}}\right)_{|\omega_{LT}=0}=\pi_{L}\) for \(a=0\) and \(=0\) otherwise.

**Lemma 3.3.2**.: _We have_ \[\psi_{D_{LT}(T)}(\pi_{L}D_{LT}(T)+\omega_{LT}^{-1}N(T))\subseteq\pi_{L}D_{LT}(T)+\omega_{LT}^{-1}N(T)\] _and, for \(k\geqslant 1\),_ \[\psi_{D_{LT}(T)}(\pi_{L}D_{LT}(T)+\omega_{LT}^{-(k+1)}N(T))\subseteq\pi_{L}D_{LT}(T)+\omega_{LT}^{-k}N(T).\]

Proof.: By Remark 3.1.6.ii we can write any \(x\in N(T)\) in the form \(x=\sum a_{i}\varphi_{N}(x_{i})\) with \(a_{i}\in\mathbf{A}_{L}^{+}\) and \(x_{i}\in N(T)\). Therefore \(\psi_{D_{LT}(T)}(\omega_{LT}^{-(k+1)}x)=\sum\psi_{L}(\omega_{LT}^{-(k+1)}a_{i})x_{i}\) by the projection formula.
Since \(\psi_{L}\) preserves \(\mathbf{A}_{L}^{+}\) and is \(o_{L}\)-linear we conclude by Lemma 3.3.1 that \(\psi_{L}(\omega_{LT}^{-(k+1)}a_{i})\) belongs to \(\pi_{L}\mathbf{A}_{L}+\omega_{LT}^{-k}\mathbf{A}_{L}^{+}\), whenever \(k\geqslant 1\), from which the second claim follows as \(\psi_{D_{LT}(T)}(\pi_{L}D_{LT}(T))\subseteq\pi_{L}D_{LT}(T)\) by \(o_{L}\)-linearity of \(\psi_{D_{LT}(T)}\). For \(k=0\) finally, \(\psi_{L}(\omega_{LT}^{-1}a_{i})\) belongs to \(\omega_{LT}^{-1}\mathbf{A}_{L}^{+}\), from which the first claim follows.

**Lemma 3.3.3**.: _If \(k\geqslant 1\) and \(x\in D_{LT}(T)\) satisfies \(\psi_{D_{LT}(T)}(x)-x\in\pi_{L}D_{LT}(T)+\omega_{LT}^{-k}N(T)\), then \(x\) belongs to \(\pi_{L}D_{LT}(T)+\omega_{LT}^{-k}N(T)\)._

Proof.: Since \(D_{LT}(T)/\pi_{L}D_{LT}(T)\) is a finitely generated (free) \(k_{L}((\omega_{LT}))\)-module there exists an integer \(m\geqslant 0\) such that \(x\in\pi_{L}D_{LT}(T)+\omega_{LT}^{-m}N(T)\); let \(l\) denote the smallest such \(m\). Assume that \(l>k\). Then Lemma 3.3.2 shows that \[\psi_{D_{LT}(T)}(x)\in\pi_{L}D_{LT}(T)+\omega_{LT}^{-(l-1)}N(T).\] Hence \(\psi_{D_{LT}(T)}(x)-x\) would belong to \(\pi_{L}D_{LT}(T)+\omega_{LT}^{-l}N(T)\) but not to \(\pi_{L}D_{LT}(T)+\omega_{LT}^{-(l-1)}N(T)\), a contradiction to our assumption. It follows that \(l\leqslant k\), and we are done.

**Lemma 3.3.4**.: _It holds that \(D_{LT}(T)^{\psi_{D_{LT}(T)}=1}\subseteq\omega_{LT}^{-1}N(T)\), i.e.,_ \[D_{LT}(T)^{\psi_{D_{LT}(T)}=1}=\left(\omega_{LT}^{-1}N(T)\right)^{\psi_{D_{LT}(T)}=1}.\]

Proof.: By induction on \(k\geqslant 1\) we will show that \(D_{LT}(T)^{\psi_{D_{LT}(T)}=1}\subseteq\pi_{L}^{k}D_{LT}(T)+\omega_{LT}^{-1}N(T)\); i.e., writing \(x=\pi_{L}^{k}y_{k}+n_{k}\in D_{LT}(T)^{\psi_{D_{LT}(T)}=1}\), the sequence \(n_{k}\) will \(\pi_{L}\)-adically converge in \(\omega_{LT}^{-1}N(T)\) with limit \(x\). In order to show the claim assume \(x\in D_{LT}(T)^{\psi_{D_{LT}(T)}=1}\). As in the previous proof there exists some minimal integer \(m\geqslant 0\) such that \(x\in\pi_{L}D_{LT}(T)+\omega_{LT}^{-m}N(T)\). Then \(m\leqslant 1\) and we are done, since otherwise Lemma 3.3.3 implies that \(m\) can be decreased by \(1\). This proves the claim for \(k=1\). By our induction hypothesis we can write \(x\in D_{LT}(T)^{\psi_{D_{LT}(T)}=1}\) as \(x=\pi_{L}^{k}y+n\) with \(y\in D_{LT}(T)\) and \(n\in\omega_{LT}^{-1}N(T)\). The equation \(\psi_{D_{LT}(T)}(x)=x\) implies that \(\psi_{D_{LT}(T)}(n)-n=\pi_{L}^{k}(\psi_{D_{LT}(T)}(y)-y)\). In the proof of Lemma 3.3.2 we have seen that \(\psi_{D_{LT}(T)}(n)-n\in\omega_{LT}^{-1}N(T)\). Note that \(\pi_{L}^{k}D_{LT}(T)\cap\omega_{LT}^{-1}N(T)=\pi_{L}^{k}\omega_{LT}^{-1}N(T)\) because \(\mathbf{A}_{L}/\omega_{LT}^{-1}\mathbf{A}_{L}^{+}\) has no \(\pi_{L}\)-torsion. Therefore \(\psi_{D_{LT}(T)}(y)-y\in\omega_{LT}^{-1}N(T)\), whence \(y\), by Lemma 3.3.3, belongs to \(\pi_{L}D_{LT}(T)+\omega_{LT}^{-1}N(T)\), so that we can write \(x=\pi_{L}^{k}(\pi_{L}y^{\prime}+n^{\prime})+n=\pi_{L}^{k+1}y^{\prime}+(\pi_{L}^{k}n^{\prime}+n)\) as desired.
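The following classical computation may serve as an illustration of Lemma 3.3.1 and of the sharpness of Lemma 3.3.4 (it concerns the cyclotomic case only and is not used below): for \(L=\mathbb{Q}_{p}\), \(\pi_{L}=p\), \([p](Z)=(1+Z)^{p}-1\) and \(T=o_{L}\), so that \(N(T)=\mathbf{A}_{L}^{+}\) by Remark 3.1.11.i, the identity \(\sum_{\zeta^{p}=1}\frac{\zeta}{u-\zeta}=\frac{p}{u^{p}-1}\) with \(u=1+\omega_{LT}\) yields
\[\psi_{L}(\tfrac{1}{\omega_{LT}})=\tfrac{1}{\omega_{LT}}\,\]
i.e., \(Q_{1}=0\) in Lemma 3.3.1. In particular \(\omega_{LT}^{-1}\in D_{LT}(o_{L})^{\psi_{D_{LT}(o_{L})}=1}\setminus N(o_{L})\), so the inclusion of Lemma 3.3.4 is optimal in general and the hypothesis of Lemma 3.3.6 below cannot be dropped.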
Set \(V:=T\otimes_{o_{L}}L\).

**Lemma 3.3.5**.: _If \(D_{cris,L}(V)^{\varphi_{q}=1}\neq 0\), then \(V\) has the trivial representation \(L\) as a quotient, i.e., the co-invariants \(V_{G_{L}}\) are non-trivial._

Proof.: Let \(W=V^{*}\) be the \(L\)-dual of \(V\). Then, by [15, (51)] we have \[(V_{G_{L}})^{*}\cong H^{0}(L,W)\cong D_{cris,L}(W)^{\varphi_{q}=1}\cap(B_{dR}^{+}\otimes_{L}W)^{G_{L}}=D_{cris,L}(W)^{\varphi_{q}=1}\neq 0,\] because \((B_{dR}^{+}\otimes_{L}W)^{G_{L}}=(B_{dR}\otimes_{L}W)^{G_{L}}\cong D_{cris,L}(W)\) since the Hodge-Tate weights of \(W\) are \(\leqslant 0\).

**Lemma 3.3.6**.: _If \(V\) does not have any quotient isomorphic to the trivial representation \(L\), then \(D_{LT}(T)^{\psi_{D_{LT}(T)}=1}\subseteq N(T)\), i.e.,_ \[D_{LT}(T)^{\psi_{D_{LT}(T)}=1}=N(T)^{\psi_{D_{LT}(T)}=1}\.\]

Proof.: Because of Lemma 3.3.4 it suffices to show that \((\omega_{LT}^{-1}N(T))^{\psi_{D_{LT}(T)}=1}\subseteq N(T)\). Let \(e_{1},\ldots,e_{d}\) be a basis of \(N:=N(T)\) over \(\mathbf{A}_{L}^{+}\). Then, by Remark 3.1.6.ii there exist \(\beta_{ij}=\sum_{\ell\geqslant 0}\beta_{ij,\ell}\omega_{LT}^{\ell}\in\mathbf{A}_{L}^{+}\) such that \(e_{i}=\sum_{j=1}^{d}\beta_{ij}\varphi_{N}(e_{j})\). Now assume that \(\omega_{LT}^{-1}n=\sum_{i=1}^{d}\alpha_{i}e_{i}=\sum_{i,j}\alpha_{i}\beta_{ij}\varphi_{N}(e_{j})\) belongs to \((\omega_{LT}^{-1}N)^{\psi_{D_{LT}(T)}=1}\) with \(\alpha_{i}=\sum_{\ell\geqslant-1}\alpha_{i,\ell}\omega_{LT}^{\ell}\in\omega_{LT}^{-1}\mathbf{A}_{L}^{+}\). By the projection formula this implies, for \(1\leqslant j\leqslant d\), \[\alpha_{j}=\psi_{L}(\sum_{i=1}^{d}\alpha_{i}\beta_{ij})\equiv\omega_{LT}^{-1}\sum_{i=1}^{d}\alpha_{i,-1}\beta_{ij,0}\mod\mathbf{A}_{L}^{+}\] because \(\psi_{L}(\omega_{LT}^{-1})\equiv\omega_{LT}^{-1}\mod\mathbf{A}_{L}^{+}\) by Lemma 3.3.1, whence \[\varphi_{L}(\omega_{LT})\varphi_{L}(\alpha_{j})\equiv\sum_{i=1}^{d}\alpha_{i,-1}\beta_{ij,0}\mod\omega_{LT}\mathbf{A}_{L}^{+}.\] It follows from the definition of \(\beta_{ij}\) that \[\varphi_{N}(n)=\sum_{j}\varphi_{L}(\omega_{LT})\varphi_{L}(\alpha_{j})\varphi_{N}(e_{j})\equiv\sum_{j,i}\alpha_{i,-1}\beta_{ij,0}\varphi_{N}(e_{j})\equiv\sum_{i}\alpha_{i,-1}e_{i}\equiv n\mod\omega_{LT}N,\] i.e., that \(D_{cris,L}(V)\cong N/\omega_{LT}N[\frac{1}{p}]\) (by (4) and Lemma 3.1.5) contains an eigenvector for \(\varphi_{q}\) with eigenvalue \(1\), if \(\omega_{LT}^{-1}n\) does not belong to \(N\). Now the result follows from Lemma 3.3.5.

## \((\varphi_{L},\Gamma_{L})\)-modules over the Robba ring

### Robba rings of character varieties

Throughout, our coefficient field \(K\) is a complete intermediate extension \(L\subseteq K\subseteq\mathbb{C}_{p}\). For any reduced affinoid variety \(\mathfrak{Y}\) over \(\mathbb{Q}_{p}\) or \(L\) we let \(|\cdot|_{\mathfrak{Y}}\) denote the supremum norm on the affinoid algebra \(\mathcal{O}_{K}(\mathfrak{Y})\) of \(K\)-valued holomorphic functions on \(\mathfrak{Y}\). It is submultiplicative and defines the intrinsic Banach topology of this algebra.

#### 4.1.1 The additive character variety and its Robba ring

Let \(\mathbf{B}_{1}\) denote the rigid \(\mathbb{Q}_{p}\)-analytic open disk of radius one around the point \(1\in\mathbb{Q}_{p}\).
The rigid analytic group variety

\[\mathfrak{X}_{0}:=\mathbf{B}_{1}\otimes_{\mathbb{Z}_{p}}\operatorname{Hom}_{\mathbb{Z}_{p}}(o_{L},\mathbb{Z}_{p})\]

over \(\mathbb{Q}_{p}\) (which noncanonically is a \(d\)-dimensional open unit polydisk) parametrizes the locally \(\mathbb{Q}_{p}\)-analytic characters of the additive group \(o_{L}\): the point \(z\otimes\beta\) is sent to the character \(\chi_{z\otimes\beta}(a):=z^{\beta(a)}\). It is shown in [10, §2] that the rigid analytic group variety \(\mathfrak{X}\) over \(L\), which parametrizes the locally \(L\)-analytic characters of \(o_{L}\), is the common zero set in \(\mathfrak{X}_{0/L}\) of the functions

\[\sum_{j=1}^{d}z_{j}\otimes\beta_{j}\longmapsto\sum_{j=1}^{d}(\beta_{j}(t_{i})-t_{i}\cdot\beta_{j}(1))\cdot\log(z_{j})\]

for \(1\leq i\leq d\); here \(t_{1},\ldots,t_{d}\) is a \(\mathbb{Z}_{p}\)-basis of \(o_{L}\) and \(\beta_{1},\ldots,\beta_{d}\) is the corresponding dual basis. It is one dimensional, smooth, and connected. As a closed analytic subvariety of the Stein space \(\mathfrak{X}_{0}\) the rigid variety \(\mathfrak{X}\) is Stein as well. For any \(a\in o_{L}\) the map \(b\longmapsto ab\) on \(o_{L}\) is locally \(L\)-analytic. This induces an action of the multiplicative monoid \(o_{L}\backslash\{0\}\) first on the \(\mathbb{Z}_{p}\)-module \(\operatorname{Hom}_{\mathbb{Z}_{p}}(o_{L},\mathbb{Z}_{p})\) and then on the varieties \(\mathfrak{X}_{0}\) and \(\mathfrak{X}\). The latter actions further induce actions on the rings of \(K\)-valued holomorphic functions \(\mathcal{O}_{K}(\mathfrak{X}_{0})\twoheadrightarrow\mathcal{O}_{K}(\mathfrak{X})\), which we will denote by \((a,f)\mapsto a_{*}(f)\). We also have induced translation actions of \(o_{L}\backslash\{0\}\) on the vector spaces \(C^{an}_{\mathbb{Q}_{p}}(o_{L},K)\), resp. \(C^{an}(o_{L},K)\), of \(K\)-valued locally \(\mathbb{Q}_{p}\)-analytic, resp. locally \(L\)-analytic, functions on \(o_{L}\), and then by duality on the spaces \(D_{\mathbb{Q}_{p}}(o_{L},K)\twoheadrightarrow D(o_{L},K)\) of locally \(\mathbb{Q}_{p}\)-analytic and locally \(L\)-analytic distributions on \(o_{L}\), respectively; they will be denoted by \((a,\lambda)\mapsto a_{*}(\lambda)\). By [10, Thm. 2.3] we have the Fourier isomorphism

\[D(o_{L},K)\stackrel{{\simeq}}{{\longrightarrow}}\mathcal{O}_{K}(\mathfrak{X}),\qquad\lambda\longmapsto F_{\lambda}(\chi)=\lambda(\chi). \tag{24}\]

One easily checks that this isomorphism is \(o_{L}\backslash\{0\}\)-equivariant. In the following we will denote the endomorphism \((\pi_{L})_{*}\) in all situations also by \(\varphi_{L}\). The Fourier isomorphism maps the Dirac distribution \(\delta_{a}\), for any \(a\in o_{L}\), to the evaluation function \(\operatorname{ev}_{a}(\chi):=\chi(a)\).

#### The \(\psi\)-operator and the Mellin transform

**Lemma 4.1.1**.: _The endomorphism \(\varphi_{L}\) makes \(\mathcal{O}_{K}(\mathfrak{X})\) into a free module over itself of rank equal to the cardinality of \(o_{L}/\pi_{L}o_{L}\); a basis is given by the functions \(\operatorname{ev}_{a}\) for \(a\) running over a fixed system of representatives for the cosets in \(o_{L}/\pi_{L}o_{L}\)._

Proof.: This is most easily seen by using the Fourier isomorphism, which reduces the claim to the corresponding statement about the distribution algebra \(D(o_{L},K)\). But here the ring homomorphism \(\varphi_{L}\) visibly induces an isomorphism between \(D(o_{L},K)\) and the subalgebra \(D(\pi_{L}o_{L},K)\) of \(D(o_{L},K)\).
Let \(R\subseteq o_{L}\) denote a set of representatives for the cosets in \(o_{L}/\pi_{L}o_{L}\). Then the Dirac distributions \(\{\delta_{a}\}_{a\in R}\) form a basis of \(D(o_{L},K)\) as a \(D(\pi_{L}o_{L},K)\)-module.

**Lemma 4.1.2**.: _The \(o_{L}^{\times}\)-action on \(D(o_{L},K)\cong\mathcal{O}_{K}(\mathfrak{X})\) extends naturally to a (jointly) continuous \(D(o_{L}^{\times},K)\)-module structure._

Proof.: In a first step we consider the case \(K=L\), so that \(K\) is spherically complete. By [1, Cor. 3.4] it suffices to show that \(C^{an}(G,K)\), for \(G:=o_{L}\), is locally analytic as an \(o_{L}^{\times}\)-representation. This means we have to establish that, for any \(f\in C^{an}(G,K)\), the orbit map \(a\longmapsto a_{*}(f)\) on \(o_{L}^{\times}\) is locally analytic. But this map is the image of the locally analytic function \((a,g)\longmapsto f(ag)\) under the isomorphism \(C^{an}(o_{L}^{\times}\times G,K)=C^{an}(o_{L}^{\times},C^{an}(G,K))\) in [1, Lemma.1]. Now let \(K\) be general. All tensor products in the following are understood to be formed with the projective tensor product topology. By the universal property of the latter the jointly continuous bilinear map \(D(o_{L}^{\times},L)\times\mathcal{O}_{L}(\mathfrak{X})\to\mathcal{O}_{L}(\mathfrak{X})\) extends uniquely to a continuous linear map \(D(o_{L}^{\times},L)\widehat{\otimes}_{L}\mathcal{O}_{L}(\mathfrak{X})\to\mathcal{O}_{L}(\mathfrak{X})\). This further extends to the right hand map in the sequence of continuous \(K\)-linear maps

\[\big{(}K\widehat{\otimes}_{L}D(o_{L}^{\times},L)\big{)}\widehat{\otimes}_{K}\big{(}K\widehat{\otimes}_{L}\mathcal{O}_{L}(\mathfrak{X})\big{)}\to K\widehat{\otimes}_{L}\big{(}D(o_{L}^{\times},L)\widehat{\otimes}_{L}\mathcal{O}_{L}(\mathfrak{X})\big{)}\to K\widehat{\otimes}_{L}\mathcal{O}_{L}(\mathfrak{X})\.\]

The left hand map is the obvious canonical one. We refer to [1, §10.6] for the basics on scalar extensions of locally convex vector spaces. The same reasoning as in the proof of [1, Prop. 2.5.ii] shows that \(K\widehat{\otimes}_{L}\mathcal{O}_{L}(\mathfrak{X})=\mathcal{O}_{K}(\mathfrak{X})\). It remains to check that \(K\widehat{\otimes}_{L}D(o_{L}^{\times},L)=D(o_{L}^{\times},K)\) holds true as well. For any open subgroup \(U\subseteq o_{L}^{\times}\) we have \(D(o_{L}^{\times},-)=\bigoplus_{a\in o_{L}^{\times}/U}\delta_{a}D(U,-)\). Hence it suffices to check that \(K\widehat{\otimes}_{L}D(U,L)=D(U,K)\) for one appropriate \(U\). But \(o_{L}^{\times}\) contains such a subgroup \(U\) which is isomorphic to the additive group \(o_{L}\), so that \(D(U,-)\cong D(o_{L},-)\cong\mathcal{O}_{-}(\mathfrak{X})\). In this case we have established our claim already.

The operator \(\varphi_{L}\) has a distinguished \(K\)-linear continuous left inverse \(\psi_{L}^{D}\), which is defined to be the dual of the map

\[C^{an}(o_{L},K)\longrightarrow C^{an}(o_{L},K),\qquad f\longmapsto(\pi_{L})_{!}(f)(a):=\begin{cases}f(\pi_{L}^{-1}a)&\text{if }a\in\pi_{L}o_{L},\\ 0&\text{otherwise},\end{cases}\]

and then, via the Fourier transform, induces an operator \(\psi_{L}^{\mathfrak{X}}\) on \(\mathcal{O}_{K}(\mathfrak{X})\). One checks that for Dirac distributions we have

\[\psi_{L}^{D}(\delta_{a})=\begin{cases}\delta_{\pi_{L}^{-1}a}&\text{if }a\in\pi_{L}o_{L},\\ 0&\text{otherwise}.\end{cases} \tag{25}\]

Together with Lemma 4.1.1 this implies the following.
**Lemma 4.1.3**.: _If \(R_{0}\subseteq o_{L}\) is a set of representatives for the nonzero cosets in \(o_{L}/\pi_{L}o_{L}\) then_

\[\ker(\psi_{L}^{\mathfrak{X}})=\oplus_{a\in R_{0}}\operatorname{ev}_{a}\cdot\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))\.\]

We also recall the resulting projection formula

\[\psi_{L}^{\mathfrak{X}}(\varphi_{L}(F_{1})F_{2})=F_{1}\psi_{L}^{\mathfrak{X}}(F_{2})\qquad\text{for any }F_{1},F_{2}\in\mathcal{O}_{K}(\mathfrak{X}).\]

Sometimes it will be useful to view \(\psi_{L}^{\mathfrak{X}}\) as a normalized trace operator. Since \(\mathcal{O}_{K}(\mathfrak{X})\) is a free module over \(\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))\) of rank \(q\) we have the corresponding trace map

\[trace_{\mathcal{O}_{K}(\mathfrak{X})/\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))}:\mathcal{O}_{K}(\mathfrak{X})\longrightarrow\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))\.\]

**Remark 4.1.4**.: \(\psi_{L}^{\mathfrak{X}}=\frac{1}{q}\varphi_{L}^{-1}\circ trace_{\mathcal{O}_{K}(\mathfrak{X})/\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))}\)_._

Proof.: Since the functions \(\mathrm{ev}_{a}\) generate a dense subspace in \(\mathcal{O}_{K}(\mathfrak{X})\) ([ST1, Lem. 3.1], the proof of which remains valid for general \(K\) by [PGS, Cor. 4.2.6 and Thm. 11.3.5]) it suffices, by the continuity of all operators involved, to check the asserted equality on the functions \(\mathrm{ev}_{a}\). As before we choose a set of representatives \(R\subseteq o_{L}\) for the cosets \(o_{L}/\pi_{L}o_{L}\), so that the functions \(\mathrm{ev}_{c}\), for \(c\in R\), form a basis of \(\mathcal{O}_{K}(\mathfrak{X})\) over \(\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))\).

_Case 1_: Let \(a\in o_{L}^{\times}\). Then \(\psi_{L}^{\mathfrak{X}}(\mathrm{ev}_{a})=0\) by (25). On the other hand \(\mathrm{ev}_{a}\cdot\mathrm{ev}_{c}=\mathrm{ev}_{a+c}\in\mathrm{ev}_{c^{\prime}}\cdot\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))\) for some \(c^{\prime}\in R\) with \(c^{\prime}\neq c\). Hence the matrix of multiplication by \(\mathrm{ev}_{a}\) w.r.t. our choice of basis has only zero entries on the diagonal. This means that \(trace_{\mathcal{O}_{K}(\mathfrak{X})/\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))}(\mathrm{ev}_{a})=0\).

_Case 2_: Let \(a\in\pi_{L}o_{L}\). Then \(\psi_{L}^{\mathfrak{X}}(\mathrm{ev}_{a})=\mathrm{ev}_{\pi_{L}^{-1}a}\). On the other hand the matrix of multiplication by \(\mathrm{ev}_{a}\) now is the diagonal matrix with constant entry \(\mathrm{ev}_{a}=\varphi_{L}(\mathrm{ev}_{\pi_{L}^{-1}a})\). We see that \(\frac{1}{q}\varphi_{L}^{-1}(trace_{\mathcal{O}_{K}(\mathfrak{X})/\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))}(\mathrm{ev}_{a}))=\frac{1}{q}\varphi_{L}^{-1}(q\varphi_{L}(\mathrm{ev}_{\pi_{L}^{-1}a}))=\mathrm{ev}_{\pi_{L}^{-1}a}\).

In order to establish a formula for the composition \(\varphi_{L}\circ\psi_{L}^{\mathfrak{X}}\) we let \(\mathfrak{X}[\pi_{L}]:=\ker(\mathfrak{X}\xrightarrow{\pi_{L}^{*}}\mathfrak{X})\). Then \(\mathfrak{X}[\pi_{L}](\mathbb{C}_{p})\) is the character group of the finite group \(o_{L}/\pi_{L}o_{L}\). The points in \(\mathfrak{X}[\pi_{L}](\mathbb{C}_{p})\) are defined over some finite extension \(K_{1}/K\). For any \(\zeta\in\mathfrak{X}[\pi_{L}](\mathbb{C}_{p})\) we have the continuous translation operator

\[\mathcal{O}_{K_{1}}(\mathfrak{X})\longrightarrow\mathcal{O}_{K_{1}}(\mathfrak{X}),\qquad F\longmapsto({}_{\zeta}F)(\chi):=F(\chi\zeta)\.\]

**Proposition 4.1.5**.:
i. _For any_ \(F\in\mathcal{O}_{K_{1}}(\mathfrak{X})\) _we have_

\[[o_{L}:\pi_{L}o_{L}]\cdot\varphi_{L}\circ\psi_{L}^{\mathfrak{X}}(F)=\sum_{\zeta\in\mathfrak{X}[\pi_{L}](\mathbb{C}_{p})}{}_{\zeta}F\.\]

ii. \(\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))=\{F\in\mathcal{O}_{K}(\mathfrak{X}):{}_{\zeta}F=F\text{ for any }\zeta\in\mathfrak{X}[\pi_{L}](\mathbb{C}_{p})\}\)_._

Proof.: i. Again it suffices to consider any \(F=\mathrm{ev}_{a}\). We compute

\[(\sum_{\zeta}{}_{\zeta}\mathrm{ev}_{a})(\chi)=\sum_{\zeta}\mathrm{ev}_{a}(\chi\zeta)=\chi(a)\sum_{\zeta}\zeta(a)=\begin{cases}[o_{L}:\pi_{L}o_{L}]\cdot\chi(a)&\text{if }a\in\pi_{L}o_{L},\\ 0&\text{otherwise}\end{cases}=\begin{cases}[o_{L}:\pi_{L}o_{L}]\cdot\mathrm{ev}_{a}(\chi)&\text{if }a\in\pi_{L}o_{L},\\ 0&\text{otherwise}.\end{cases}\]

On the other hand

\[\varphi_{L}(\psi_{L}^{\mathfrak{X}}(\mathrm{ev}_{a}))=\varphi_{L}\Big{(}\begin{cases}\mathrm{ev}_{\pi_{L}^{-1}a}&\text{if }a\in\pi_{L}o_{L},\\ 0&\text{otherwise}\end{cases}\Big{)}=\begin{cases}\mathrm{ev}_{a}&\text{if }a\in\pi_{L}o_{L},\\ 0&\text{otherwise}.\end{cases}\]

ii. If \({}_{\zeta}F=F\) for any \(\zeta\in\mathfrak{X}[\pi_{L}](\mathbb{C}_{p})\) then \(\varphi_{L}(\psi_{L}^{\mathfrak{X}}(F))=F\) by i. On the other hand

\[({}_{\zeta}\varphi_{L}(F))(\chi)=\varphi_{L}(F)(\chi\zeta)=F(\pi_{L}^{*}(\chi)\pi_{L}^{*}(\zeta))=F(\pi_{L}^{*}(\chi))=\varphi_{L}(F)(\chi)\.\]

We have observed in the above proof that the functions \(\mathrm{ev}_{a}\), for \(a\in o_{L}\), generate a dense subspace of \(\mathcal{O}_{K}(\mathfrak{X})\). Considering the topological decomposition

\[\mathcal{O}_{K}(\mathfrak{X})=\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))\oplus\mathcal{O}_{K}(\mathfrak{X})^{\psi_{L}^{\mathfrak{X}}=0},\qquad F=\varphi_{L}(\psi_{L}^{\mathfrak{X}}(F))+(F-\varphi_{L}(\psi_{L}^{\mathfrak{X}}(F))) \tag{26}\]

we see, using (25), that the \(\mathrm{ev}_{a}\) for \(a\in\pi_{L}o_{L}\), resp. the \(\mathrm{ev}_{u}\) for \(u\in o_{L}^{\times}\), generate a dense subspace of \(\varphi_{L}(\mathcal{O}_{K}(\mathfrak{X}))\), resp. of \(\mathcal{O}_{K}(\mathfrak{X})^{\psi_{L}^{\mathfrak{X}}=0}\). In view of Lemma 4.1.2 the obvious formula \(u_{*}(\mathrm{ev}_{a})=\mathrm{ev}_{ua}\), together with the fact that the Dirac distributions \(\delta_{u}\), for \(u\in o_{L}^{\times}\), generate a dense subspace of \(D(o_{L}^{\times},K)\), then implies that the decomposition (26) is \(D(o_{L}^{\times},K)\)-invariant.

**Lemma 4.1.6**.: _(Mellin transform) The natural inclusion \(D(o_{L}^{\times},K)\hookrightarrow D(o_{L},K)\) combined with the Fourier isomorphism induces the map_

\[\mathfrak{M}:D(o_{L}^{\times},K)\stackrel{{\cong}}{{\longrightarrow}}D(o_{L},K)^{\psi_{L}^{D}=0}\cong\mathcal{O}_{K}(\mathfrak{X})^{\psi_{L}^{\mathfrak{X}}=0},\qquad\lambda\longmapsto\lambda(\delta_{1})\cong\lambda(\mathrm{ev}_{1}),\]

_which is a topological isomorphism of \(D(o_{L}^{\times},K)\)-modules._

Proof.: The disjoint decomposition into open sets \(o_{L}=\pi_{L}o_{L}\cup o_{L}^{\times}\) induces the linear topological decomposition \(D(o_{L},K)=\varphi_{L}(D(o_{L},K))\oplus D(o_{L}^{\times},K)\). The assertion follows by comparing this with the decomposition (26).11

Footnote 11: The map \(D(o_{L}^{\times},K)\to D(o_{L},K)\) sending \(\lambda\) to \(\lambda(\delta_{1})\) is the inclusion map since \(\delta_{u}(\delta_{1})=\delta_{u}\).

**The Robba ring** We recall a few facts from [BSX] about the analytic structure of the character variety \(\mathfrak{X}\).
As a general convention, all **radii** \(r\) which will occur throughout the paper are **assumed to lie in \((0,1)\cap p^{\mathbb{Q}}\)**. Let \(\mathbf{B}_{1}(r)\), resp. \(\mathbf{B}(r)\), denote the \(\mathbb{Q}_{p}\)-affinoid disk of radius \(r\) around \(1\), resp. around \(0\), and let \(\mathbf{B}_{1}^{-}(r)\) be the open disk of radius \(r\) around \(1\). We put

\[\mathfrak{X}_{0}(r):=\mathbf{B}_{1}(r)\otimes_{\mathbb{Z}_{p}}\mathrm{Hom}_{\mathbb{Z}_{p}}(o_{L},\mathbb{Z}_{p})\quad\text{and}\quad\mathfrak{X}(r):=\mathfrak{X}\cap\mathfrak{X}_{0}(r)_{/L}\.\]

These are affinoid subgroups of \(\mathfrak{X}_{0}\) and \(\mathfrak{X}\), respectively, which are respected by the action of the monoid \(o_{L}\backslash\{0\}\). Since \(\mathfrak{X}(r)\hookrightarrow\mathfrak{X}_{0}(r)_{/L}\) is a closed immersion of affinoid varieties the restriction map between the affinoid algebras \(\mathcal{O}_{K}(\mathfrak{X}_{0}(r))\twoheadrightarrow\mathcal{O}_{K}(\mathfrak{X}(r))\) is a strict surjection of Banach algebras. The families \(\{\mathfrak{X}_{0}(r)\}_{r}\), resp. \(\{\mathfrak{X}(r)\}_{r}\), form an increasing admissible covering of \(\mathfrak{X}_{0}\), resp. \(\mathfrak{X}\), which exhibits the latter as a quasi-Stein space. Hence \(\mathcal{O}_{K}(\mathfrak{X}_{0}(r))\), resp. \(\mathcal{O}_{K}(\mathfrak{X}(r))\), is the completion of \(\mathcal{O}_{K}(\mathfrak{X}_{0})\), resp. \(\mathcal{O}_{K}(\mathfrak{X})\), in the supremum norm \(|\ |_{\mathfrak{X}_{0}(r)}\), resp. \(|\ |_{\mathfrak{X}(r)}\).

The structure of the affinoid variety \(\mathfrak{X}(r_{0})\) is rather simple for any radius \(r_{0}<p^{-\frac{d}{p-1}}\). Then ([BSX, Lem. 1.16]) the map

\[\mathbf{B}(r_{0})_{/L}\stackrel{{\cong}}{{\longrightarrow}}\mathfrak{X}(r_{0}),\qquad y\longmapsto\chi_{y}(a):=\exp(ay) \tag{27}\]

is an isomorphism of \(L\)-affinoid groups. Taking, somewhat unconventionally, \(\exp-1\) as coordinate function on \(\mathbf{B}(r_{0})\) we may view \(\mathcal{O}_{K}(\mathbf{B}(r_{0}))\) as the Banach algebra of all power series \(f=\sum_{i\geqslant 0}c_{i}(\exp-1)^{i}\) such that \(c_{i}\in K\) and \(\lim_{i\to\infty}|c_{i}|r_{0}^{i}=0\); the norm is \(|f|_{\mathbf{B}(r_{0})}:=\max_{i}|c_{i}|r_{0}^{i}\). Since \(\exp-1\) corresponds under the above isomorphism to the function \(\mathrm{ev}_{1}-1\) on \(\mathfrak{X}(r_{0})\) we deduce that

\[\mathcal{O}_{K}(\mathfrak{X}(r_{0}))=\{f=\sum_{i\geqslant 0}c_{i}(\mathrm{ev}_{1}-1)^{i}:c_{i}\in K\text{ and }\lim_{i\to\infty}|c_{i}|r_{0}^{i}=0\} \tag{28}\]

is a Banach algebra with the supremum norm \(|f|_{\mathfrak{X}(r_{0})}=\max_{i}|c_{i}|r_{0}^{i}\).

Next we need to explain the admissible open subdomains \(\mathfrak{X}_{I}\) of \(\mathfrak{X}\), where the \(I\subseteq(0,1)\) are certain intervals (cf. [BSX, §2.1]). First of all we have the admissible open subdomains

\[\mathfrak{X}_{(r,1)}:=\mathfrak{X}\backslash\mathfrak{X}(r)\.\]

To introduce the relevant affinoid subdomains we also use the open disk \(\mathbf{B}_{1}^{-}(r)\) of radius \(r\) around \(1\) introduced above. This allows us to first define the admissible open subdomains \(\mathfrak{X}_{0}^{-}(r):=(\mathbf{B}_{1}^{-}(r)\otimes_{\mathbb{Z}_{p}}\mathrm{Hom}_{\mathbb{Z}_{p}}(o_{L},\mathbb{Z}_{p}))_{/L}\) and \(\mathfrak{X}^{-}(r):=\mathfrak{X}\cap\mathfrak{X}_{0}^{-}(r)\) of \(\mathfrak{X}_{0}\) and \(\mathfrak{X}\), respectively.
For \(r\leqslant s\) we then have the admissible open subdomains

\[\mathfrak{X}_{0}[r,s]:=\mathfrak{X}_{0}(s)\backslash\mathfrak{X}_{0}^{-}(r)\subseteq\mathfrak{X}_{0}\quad\text{and}\quad\mathfrak{X}_{[r,s]}:=\mathfrak{X}(s)\backslash\mathfrak{X}^{-}(r)=\mathfrak{X}\cap\mathfrak{X}_{0}[r,s]\subseteq\mathfrak{X}\.\]

We recall that the \(\mathfrak{X}_{[r,s]}\) are actually affinoid varieties. There are the obvious inclusions \(\mathfrak{X}_{[r,s]}\subseteq\mathfrak{X}(s)\) and \(\mathfrak{X}_{[r,s]}\subseteq\mathfrak{X}_{(r^{\prime},1)}\) provided \(r^{\prime}<r\). Moreover, \(\mathfrak{X}_{(r^{\prime},1)}\) is the increasing admissible union of the \(\mathfrak{X}_{[r,s]}\) for \(r^{\prime}<r\leqslant s<1\). Hence

\[\mathcal{O}_{K}(\mathfrak{X}_{(r^{\prime},1)})=\varprojlim_{r^{\prime}<r\leqslant s<1}\mathcal{O}_{K}(\mathfrak{X}_{[r,s]})\,\]

which exhibits the Frechet algebra structure of the left side. We point out that these subdomains \(\mathfrak{X}_{I}\) all are invariant under \(o_{L}^{\times}\). Their behaviour with respect to \(\pi_{L}^{*}\) is more complicated. We recall from [BSX, Lem. 2.11] that, for any radius \(p^{-\frac{dp}{p-1}}\leqslant r<1\), we have

\[(\pi_{L}^{*})^{-1}(\mathfrak{X}_{(r,1)})\subseteq\mathfrak{X}_{(r^{1/p},1)}. \tag{29}\]

It is technically necessary in the following to sometimes work only with a smaller set of radii. We put

\[S_{0}:=[p^{-\frac{d}{e}-\frac{d}{p-1}},p^{-\frac{d}{p-1}})\cap p^{\mathbb{Q}}\subseteq[p^{-\frac{dp}{p-1}},p^{-\frac{d}{p-1}})\,\]

\(S_{n}:=S_{0}^{\frac{1}{p^{n}}}\) for \(n\geqslant 1\), and \(S_{\infty}:=\bigcup_{n\geqslant 1}S_{n}\). Note that the sets \(S_{n}\) are pairwise disjoint. The point is ([BSX, Prop. 1.20]) that for \(s\in S_{\infty}\) we know that \(\mathfrak{X}(s)\) becomes isomorphic to a closed disk over \(\mathbb{C}_{p}\). Let \(s_{n}\), for \(n\geqslant 0\), denote the left boundary point of the set \(S_{n}\). Then we have the following result ([BSX, Prop. 2.1]).

**Proposition 4.1.7**.: _For any \(n\geqslant 0\) the rigid variety \(\mathfrak{X}_{(s_{n},1)}\) is quasi-Stein with respect to the admissible covering \(\{\mathfrak{X}_{[r,s]}\}\) where \(s_{n}<r\leqslant s<1\), \(r\in S_{n}\), and \(s\in\bigcup_{m\geqslant n}S_{m}\). In particular, the affinoid algebra \(\mathcal{O}_{K}(\mathfrak{X}_{[r,s]})\) is the completion of \(\mathcal{O}_{K}(\mathfrak{X}_{(s_{n},1)})\) with respect to the supremum norm \(|\ |_{\mathfrak{X}_{[r,s]}}\)._

Obviously, with \(\mathfrak{X}\) each \(\mathfrak{X}_{(s_{n},1)}\) is one dimensional and smooth. But, in order to be able to apply Serre duality later on to the spaces \(\mathfrak{X}_{(s_{n},1)}\), we need to show that they are actually Stein spaces. This means that we have to check that the admissible covering in Prop. 4.1.7 has the property that \(\mathfrak{X}_{[r^{\prime},s^{\prime}]}\) is relatively compact in \(\mathfrak{X}_{[r,s]}\) over \(L\) ([BGR, §9.6.2]) for any \(r<r^{\prime}\leqslant s^{\prime}<s\). We simply write \(U\Subset X\) for an affinoid subdomain \(U\) being relatively compact over \(L\) in an \(L\)-affinoid variety \(X\).

**Lemma 4.1.8**.: _Let \(U\subseteq X\subseteq X^{\prime}\) be affinoid subdomains of the affinoid variety \(X^{\prime}\); we then have:_

1. _If_ \(U\Subset X\) _then_ \(U\Subset X^{\prime}\)_;_
2. _suppose that_ \(U=U_{1}\cup\ldots\cup U_{m}\) _is an affinoid covering; if_ \(U_{i}\Subset X\) _for any_ \(1\leqslant i\leqslant m\) _then_ \(U\Subset X\)_._

Proof.: Let \(A\to B\) be the homomorphism of affinoid algebras which induces the inclusion \(U=\operatorname{Sp}(B)\subseteq X=\operatorname{Sp}(A)\). It is not difficult to see that the property \(U\Subset X\) is equivalent to the homomorphism \(A\to B\) being inner w.r.t. \(L\) in the sense of [Ber, Def. 2.5.1]. Therefore i., resp. ii., is a special case of Cor. 2.5.5, resp. Lemma 2.5.10, in [Ber].

**Proposition 4.1.9**.: \(\mathfrak{X}_{(s_{n},1)}\)_, for any \(n\geqslant 0\), is a Stein space._

Proof.: By Prop. 4.1.7 we already know that the \(\mathfrak{X}_{(s_{n},1)}\) are quasi-Stein. Hence it remains to show that \(\mathfrak{X}_{[r^{\prime},s^{\prime}]}\Subset\mathfrak{X}_{[r,s]}\) for any \(r<r^{\prime}\leqslant s^{\prime}<s\). Looking first at \(\mathfrak{X}_{0}\), let \(\mathbf{B}_{1}[r,s]\subseteq\mathbf{B}_{1}\) denote the affinoid annulus of inner radius \(r\) and outer radius \(s\). Fixing coordinate functions \(z_{1},\ldots,z_{d}\) on \(\mathfrak{X}_{0}\) we have the admissible open covering

\[\mathfrak{X}_{0}[r,s]=\bigcup_{i=1}^{d}\mathfrak{X}_{0}^{(i)}[r,s]\quad\text{with}\quad\mathfrak{X}_{0}^{(i)}[r,s]:=\{x\in\mathfrak{X}_{0}(s):|z_{i}(x)|\geqslant r\}.\]

The affinoid varieties of this covering have the direct product structure

\[\mathfrak{X}_{0}^{(i)}[r,s]=\mathbf{B}_{1}(s)\times\ldots\times\mathbf{B}_{1}(s)\times\mathbf{B}_{1}[r,s]\times\mathbf{B}_{1}(s)\times\ldots\times\mathbf{B}_{1}(s)\]

with the annulus being the \(i\)th factor. It immediately follows that \(\mathfrak{X}_{0}^{(i)}[r^{\prime},s^{\prime}]\Subset\mathfrak{X}_{0}^{(i)}[r,s]\) ([BGR, Lem. 9.6.2.1]). Since relative compactness is preserved by passing to closed subvarieties we deduce that \(\mathfrak{X}\cap\mathfrak{X}_{0}^{(i)}[r^{\prime},s^{\prime}]\Subset\mathfrak{X}\cap\mathfrak{X}_{0}^{(i)}[r,s]\) for any \(1\leqslant i\leqslant d\). Applying now Lemma 4.1.8 we conclude first that \(\mathfrak{X}\cap\mathfrak{X}_{0}^{(i)}[r^{\prime},s^{\prime}]\Subset\mathfrak{X}_{[r,s]}\) and then that \(\mathfrak{X}_{[r^{\prime},s^{\prime}]}\Subset\mathfrak{X}_{[r,s]}\).

We finally recall that the Robba ring of \(\mathfrak{X}\) over \(K\) is defined as the locally convex inductive limit \(\varinjlim_{\mathfrak{Y}}\mathcal{O}_{K}(\mathfrak{X}\backslash\mathfrak{Y})\) where \(\mathfrak{Y}\) runs over all affinoid subdomains of \(\mathfrak{X}\). Since any such \(\mathfrak{Y}\) is contained in some \(\mathfrak{X}(r)\) we have

\[\mathcal{R}_{K}(\mathfrak{X})=\varinjlim_{n\geqslant 0}\mathcal{O}_{K}(\mathfrak{X}_{(s_{n},1)})\,\]

and we view \(\mathcal{R}_{K}(\mathfrak{X})\) as the locally convex inductive limit of the Frechet algebras \(\mathcal{O}_{K}(\mathfrak{X}_{(s_{n},1)})\). By [BSX, Prop. 1.20] the system \(\mathfrak{X}_{(s_{n},1)/\mathbb{C}_{p}}\) is isomorphic to a decreasing system of one dimensional annuli. This implies:

* \(\mathcal{R}_{K}(\mathfrak{X})\) is the increasing union of the rings \(\mathcal{O}_{K}(\mathfrak{X}_{(s_{n},1)})\) and contains \(\mathcal{O}_{K}(\mathfrak{X})\);
* each \(\mathcal{O}_{K}(\mathfrak{X}_{(s_{n},1)})\) as well as \(\mathcal{R}_{K}(\mathfrak{X})\) are integral domains.

The action of the monoid \(o_{L}\backslash\{0\}\) on \(\mathcal{O}_{K}(\mathfrak{X})\) extends naturally to a continuous action on \(\mathcal{R}_{K}(\mathfrak{X})\) ([BSX, Lem. 2.12]).
In fact, this action extends further uniquely to a separately continuous \(D(o_{L}^{\times},K)\)-action on \(\mathcal{R}_{K}(\mathfrak{X})\). This is a special case of the later Prop. 4.3.10, which implies that we will have such an action on any \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-module over \(\mathcal{R}_{K}(\mathfrak{X})\). Via the isomorphism \(\chi_{LT}:\Gamma_{L}\xrightarrow{\cong}o_{L}^{\times}\) we will later on view this as a \(D(\Gamma_{L},K)\)-action.

In order to extend the \(\psi\)-operator to the Robba ring we need the following fact.

**Lemma 4.1.10**.: _The morphism \(\pi_{L}^{*}:\mathfrak{X}\to\mathfrak{X}\) is finite, faithfully flat, and etale._

Proof.: The character variety \(\mathfrak{X}^{\prime}\) of the subgroup \(\pi_{L}o_{L}\subseteq o_{L}\) is isomorphic to \(\mathfrak{X}\) via

\[\mathfrak{X}\xrightarrow{\cong}\mathfrak{X}^{\prime},\qquad\chi\longmapsto\chi^{\prime}(\pi_{L}a):=\chi(a)\.\]

We have a commutative diagram in which \(\pi_{L}^{*}\) factors as the oblique arrow \(\mathfrak{X}\to\mathfrak{X}^{\prime}\), given by restricting a character of \(o_{L}\) to the subgroup \(\pi_{L}o_{L}\), followed by the inverse of the above isomorphism. The oblique arrow is finite and faithfully flat by the proof of [Eme, Prop. 6.4.5]. For its etaleness it remains to check that all its fibers are unramified. This can be done after base change to \(\mathbb{C}_{p}\). Then, since this arrow is a homomorphism of rigid groups, all fibers are isomorphic. But the fiber in the trivial character of \(\pi_{L}o_{L}\) is isomorphic to \(\operatorname{Sp}(\mathbb{C}_{p}[o_{L}/\pi_{L}o_{L}])\cong\operatorname{Sp}(\mathbb{C}_{p}^{q})\). It follows that \(\pi_{L}^{*}\) has these properties as well.

Since the subsequent reasoning will be needed again in the next section in an analogous situation we proceed in an axiomatic way. **Suppose** that:

* \(\rho:\mathfrak{Y}\to\mathfrak{Z}\) is a finite and faithfully flat morphism of quasi-Stein spaces over \(K\). In particular, the induced map \(\rho^{*}:\mathcal{O}_{K}(\mathfrak{Z})\to\mathcal{O}_{K}(\mathfrak{Y})\) is injective. Moreover, the finiteness of \(\rho\) implies that the preimage under \(\rho\) of any affinoid subdomain in \(\mathfrak{Z}\) is an affinoid subdomain in \(\mathfrak{Y}\) ([BGR, Prop. 9.4.4.1(i)]) and hence that \(\rho^{*}\) is continuous.
* \(\mathcal{O}_{K}(\mathfrak{Y})\) is finitely generated free as a \(\rho^{*}(\mathcal{O}_{K}(\mathfrak{Z}))\)-module. Fix a corresponding basis \(f_{1},\dots,f_{h}\in\mathcal{O}_{K}(\mathfrak{Y})\).

**Proposition 4.1.11**.: _For any admissible open subset \(\mathfrak{U}\subseteq\mathfrak{Z}\) we have_

\[\mathcal{O}_{K}(\rho^{-1}(\mathfrak{U}))=\mathcal{O}_{K}(\mathfrak{Y})\otimes_{\mathcal{O}_{K}(\mathfrak{Z})}\mathcal{O}_{K}(\mathfrak{U})\,\]

_and this module is free with basis \(f_{1},\dots,f_{h}\) over \(\mathcal{O}_{K}(\mathfrak{U})\)._

Proof.: Since \(\rho\) is finite, \(\rho_{*}\mathcal{O}_{\mathfrak{Y}}\) is a coherent \(\mathcal{O}_{\mathfrak{Z}}\)-module by [BGR, Prop. 9.4.4.1(ii)]. Gruson's theorem (cf. [BSX, Prop. 1.13]) then tells us that \(\rho_{*}\mathcal{O}_{\mathfrak{Y}}\) is, in fact, a free \(\mathcal{O}_{\mathfrak{Z}}\)-module with basis \(f_{1},\dots,f_{h}\).

We observe that the definition of the Robba ring \(\mathcal{R}_{K}(\mathfrak{X})\) above was completely formal and works precisely the same way for any quasi-Stein space. Hence we have available the Robba rings \(\mathcal{R}_{K}(\mathfrak{Y})\) and \(\mathcal{R}_{K}(\mathfrak{Z})\). Since the morphism \(\rho:\mathfrak{Y}\to\mathfrak{Z}\) is finite the preimage under \(\rho\) of any affinoid subdomain in \(\mathfrak{Z}\) is an affinoid subdomain in \(\mathfrak{Y}\) ([BGR, Prop. 9.4.4.1(i)]).
The injective map \(\rho^{*}:\mathcal{O}_{K}(\mathfrak{Z})\hookrightarrow\mathcal{O}_{K}(\mathfrak{Y})\) therefore extends to a natural homomorphism of rings

\[\rho^{*}:\mathcal{R}_{K}(\mathfrak{Z})\longrightarrow\mathcal{R}_{K}(\mathfrak{Y}). \tag{30}\]

**Remark 4.1.12**.: _The homomorphism (30) is injective._

Proof.: We fix an admissible covering \(\mathfrak{Z}=\bigcup_{j\geqslant 1}\mathfrak{U}_{j}\) by an increasing sequence of affinoid subdomains \(\mathfrak{U}_{j}\subseteq\mathfrak{Z}\). As \(\rho\) is a finite map, \(\mathfrak{Y}=\bigcup_{j\geqslant 1}\rho^{-1}(\mathfrak{U}_{j})\) again is an admissible covering by affinoid subdomains. It follows that \(\mathcal{R}_{K}(\mathfrak{Y})=\varinjlim_{j}\mathcal{O}_{K}(\mathfrak{Y}\backslash\rho^{-1}(\mathfrak{U}_{j}))\), and therefore it suffices to show the injectivity of the maps \(\rho^{*}:\mathcal{O}_{K}(\mathfrak{Z}\backslash\mathfrak{U}_{j})\to\mathcal{O}_{K}(\mathfrak{Y}\backslash\rho^{-1}(\mathfrak{U}_{j}))\). But this is clear since the map \(\rho:\mathfrak{Y}\backslash\rho^{-1}(\mathfrak{U}_{j})\to\mathfrak{Z}\backslash\mathfrak{U}_{j}\) is faithfully flat.

**Corollary 4.1.13**.: \(\mathcal{R}_{K}(\mathfrak{Y})=\mathcal{O}_{K}(\mathfrak{Y})\otimes_{\mathcal{O}_{K}(\mathfrak{Z})}\mathcal{R}_{K}(\mathfrak{Z})\) _is free over \(\rho^{*}(\mathcal{R}_{K}(\mathfrak{Z}))\) with the basis \(f_{1},\ldots,f_{h}\). In fact, the map_

\[\mathcal{R}_{K}(\mathfrak{Z})^{h}\stackrel{{\cong}}{{\longrightarrow}}\mathcal{R}_{K}(\mathfrak{Y}),\qquad(z_{1},\ldots,z_{h})\longmapsto\sum_{i=1}^{h}\rho^{*}(z_{i})f_{i},\]

_is a homeomorphism._

Proof.: By passing to locally convex limits this follows from Prop. 4.1.11, which says that the map

\[\mathcal{O}_{K}(\mathfrak{U})^{h}\stackrel{{\cong}}{{\longrightarrow}}\mathcal{O}_{K}(\rho^{-1}(\mathfrak{U})),\qquad(z_{1},\ldots,z_{h})\longmapsto\sum_{i=1}^{h}\rho^{*}(z_{i})f_{i},\]

is a continuous bijection between Frechet spaces and hence a homeomorphism by the open mapping theorem.

By the Lemmas 4.1.1 and 4.1.10 the above applies to the morphism \(\pi_{L}^{*}:\mathfrak{X}\to\mathfrak{X}\) and we obtain the following result.

**Proposition 4.1.14**.: _Let \(R\subseteq o_{L}\) be a set of representatives for the cosets in \(o_{L}/\pi_{L}o_{L}\). Then the Robba ring \(\mathcal{R}_{K}(\mathfrak{X})\) is a free module over \(\varphi_{L}(\mathcal{R}_{K}(\mathfrak{X}))\) with basis \(\{\mathrm{ev}_{a}\}_{a\in R}\)._

In particular we have the trace map

\[trace_{\mathcal{R}_{K}(\mathfrak{X})/\varphi_{L}(\mathcal{R}_{K}(\mathfrak{X}))}:\mathcal{R}_{K}(\mathfrak{X})\longrightarrow\varphi_{L}(\mathcal{R}_{K}(\mathfrak{X}))\]

and therefore we may introduce the operator

\[\psi_{L}^{\mathfrak{X}}:=\frac{1}{q}\varphi_{L}^{-1}\circ trace_{\mathcal{R}_{K}(\mathfrak{X})/\varphi_{L}(\mathcal{R}_{K}(\mathfrak{X}))}:\mathcal{R}_{K}(\mathfrak{X})\longrightarrow\mathcal{R}_{K}(\mathfrak{X})\.\]

Because of Remark 4.1.4 it extends the operator \(\psi_{L}^{\mathfrak{X}}\) on \(\mathcal{O}_{K}(\mathfrak{X})\), which justifies denoting it by the same symbol. By construction it is a left inverse of \(\varphi_{L}\) and satisfies the projection formula. Furthermore, as a consequence of Cor. 4.1.13, \(\psi_{L}^{\mathfrak{X}}\) is continuous.

#### 4.1.2 The multiplicative character variety and its Robba ring

In this section we consider the multiplicative group \(o_{L}^{\times}\) as a locally \(L\)-analytic group.
We introduce the open subgroups \(U_{n}:=1+\pi_{L}^{n}o_{L}\) for \(n\geqslant 1\). Correspondingly we have the inclusion of distribution algebras \(D(U_{n+1},K)\subseteq D(U_{n},K)\subseteq D(o_{L}^{\times},K)\). There is an integer \(n_{0}\geqslant 1\) such that, for any \(n\geqslant n_{0}\), the logarithm series induces an isomorphism of locally \(L\)-analytic groups \(\log:U_{n}\xrightarrow{\cong}\pi_{L}^{n}o_{L}\). We then introduce the isomorphisms \(\ell_{n}:=\pi_{L}^{-n}\log:U_{n}\xrightarrow{\cong}o_{L}\) together with the algebra isomorphisms

\[\ell_{n*}:D(U_{n},K)\xrightarrow{\cong}D(o_{L},K)\cong\mathcal{O}_{K}(\mathfrak{X}) \tag{31}\]

which they induce. As for \(o_{L}\) in the previous section we have rigid analytic varieties (over \(L\)) of locally \(L\)-analytic characters \(\mathfrak{X}^{\times}\) for \(o_{L}^{\times}\) and \(\mathfrak{X}_{n}^{\times}\) for \(U_{n}\) as well (cf. [12, Thm. 2.3, Lemma 2.4, Cor. 3.7] and [13, Props. 6.4.5 and 6.4.6]):

* \(\ell_{n}^{*}:\mathfrak{X}\xrightarrow{\cong}\mathfrak{X}_{n}^{\times}\) is, for \(n\geqslant n_{0}\), an isomorphism of group varieties.
* The restriction map \(\rho_{n}:\mathfrak{X}^{\times}\longrightarrow\mathfrak{X}_{n}^{\times}\) is a finite faithfully flat covering ([13, proof of Prop. 6.4.5]).
* \(\mathfrak{X}^{\times}\) and \(\mathfrak{X}_{n}^{\times}\) are one dimensional Stein spaces. (As group varieties they are separated and equidimensional.)
* For \(n\geqslant n_{0}\) the variety \(\mathfrak{X}_{n}^{\times}\) is smooth and \(\mathcal{O}_{L}(\mathfrak{X}_{n}^{\times})\) is an integral domain.
* The Fourier transforms \[D(o_{L}^{\times},K)\xrightarrow{\cong}\mathcal{O}_{K}(\mathfrak{X}^{\times})\qquad\text{and}\qquad D(U_{n},K)\xrightarrow{\cong}\mathcal{O}_{K}(\mathfrak{X}_{n}^{\times})\] sending a distribution \(\mu\) to the function \(F_{\mu}(\chi):=\mu(\chi)\) are isomorphisms of Frechet algebras.

As a consequence of the properties of the morphism \(\rho:=\rho_{n}:\mathfrak{X}^{\times}\rightarrow\mathfrak{X}_{n}^{\times}\) the homomorphism \(\rho^{*}:\mathcal{O}_{K}(\mathfrak{X}_{n}^{\times})\rightarrow\mathcal{O}_{K}(\mathfrak{X}^{\times})\) is injective and extends to an injective homomorphism \(\rho^{*}:\mathcal{R}_{K}(\mathfrak{X}_{n}^{\times})\rightarrow\mathcal{R}_{K}(\mathfrak{X}^{\times})\) (cf. Remark 4.1.12).

**Lemma 4.1.15**.:

i. \(\mathcal{O}_{K}(\mathfrak{X}^{\times})=\mathbb{Z}[o_{L}^{\times}]\otimes_{\mathbb{Z}[U_{n}]}\mathcal{O}_{K}(\mathfrak{X}_{n}^{\times})\)_._

ii. \(\mathcal{R}_{K}(\mathfrak{X}^{\times})=\mathcal{O}_{K}(\mathfrak{X}^{\times})\otimes_{\mathcal{O}_{K}(\mathfrak{X}_{n}^{\times})}\mathcal{R}_{K}(\mathfrak{X}_{n}^{\times})=\mathbb{Z}[o_{L}^{\times}]\otimes_{\mathbb{Z}[U_{n}]}\mathcal{R}_{K}(\mathfrak{X}_{n}^{\times})\)_._

Proof.: i. Let \(u_{1},\ldots,u_{h}\in o_{L}^{\times}\) be a set of representatives for the cosets of \(U_{n}\) in \(o_{L}^{\times}\). We then have the decomposition into open subsets \(o_{L}^{\times}=u_{1}U_{n}\cup\ldots\cup u_{h}U_{n}\). It follows that

\[D(o_{L}^{\times},K)=\delta_{u_{1}}D(U_{n},K)\oplus\ldots\oplus\delta_{u_{h}}D(U_{n},K)=\mathbb{Z}[o_{L}^{\times}]\otimes_{\mathbb{Z}[U_{n}]}D(U_{n},K)\]

is, in particular, a free \(D(U_{n},K)\)-module of rank \(h=[o_{L}^{\times}:U_{n}]\). Using the Fourier isomorphism we obtain that \(\mathcal{O}_{K}(\mathfrak{X}^{\times})\) is a free \(\mathcal{O}_{K}(\mathfrak{X}_{n}^{\times})\)-module over the basis \(\mathrm{ev}_{u_{1}},\ldots,\mathrm{ev}_{u_{h}}\).

ii. Because of i. the assumptions before Prop. 4.1.11 are satisfied and the present assertion is a special case of Cor. 4.1.13.
**Lemma 4.1.16**.: _The morphism \(\rho\) is etale._

Proof.: This is the same argument as in the proof of Lemma 4.1.10.

**Corollary 4.1.17**.: \(\mathfrak{X}^{\times}\) _is smooth._

Proof.: This follows from the lemma since \(\mathfrak{X}_{n}^{\times}\) is smooth for \(n\geq n_{0}\).

**Remark 4.1.18**.: _If \(n\geq m\) then all the above assertions hold analogously for the finite morphism \(\rho_{m,n}:\mathfrak{X}_{m}^{\times}\longrightarrow\mathfrak{X}_{n}^{\times}\). In particular, all \(\mathfrak{X}_{n}^{\times}\) are smooth._

Suppose that \(n\geq n_{0}\). Then, due to the isomorphism \(\ell_{n}^{*}:\mathfrak{X}\xrightarrow{\cong}\mathfrak{X}_{n}^{\times}\), everything which was defined for and recalled about \(\mathfrak{X}\) in section 4.1.1 holds correspondingly for \(\mathfrak{X}_{n}^{\times}\). In particular we have the admissible open subdomains \(\mathfrak{X}_{n}^{\times}(r)\), \(\mathfrak{X}_{n,(r,1)}^{\times}\), and \(\mathfrak{X}_{n,[r,s]}^{\times}\). For \(n\geq m\geq n_{0}\) we have the commutative diagram (32), in which the horizontal arrows are the isomorphisms \(\ell_{m}^{*}\) and \(\ell_{n}^{*}\), the right vertical arrow is the restriction map \(\rho_{m,n}\), and the left vertical arrow is \((\pi_{L}^{n-m})^{*}\) on \(\mathfrak{X}\).

**Lemma 4.1.19**.: _Let \(n\geq n_{0}\) and \(m\geq 0\); for any \(p^{-\frac{dp}{p-1}}\leq r<1\) the map \(\rho_{n,n+me}^{*}:\mathcal{O}_{K}(\mathfrak{X}_{n+me}^{\times})\to\mathcal{O}_{K}(\mathfrak{X}_{n}^{\times})\) extends to an isometric homomorphism of Banach algebras_

\[(\mathcal{O}_{K}(\mathfrak{X}_{n+me}^{\times}(r)),|\ |_{\mathfrak{X}_{n+me}^{\times}(r)})\longrightarrow(\mathcal{O}_{K}(\mathfrak{X}_{n}^{\times}(r^{\frac{1}{p^{m}}})),|\ |_{\mathfrak{X}_{n}^{\times}(r^{\frac{1}{p^{m}}})})\.\]

Proof.: By the above commutative diagram (32) our assertion amounts to the statement that the map \((\pi_{L}^{me})^{*}:\mathfrak{X}\to\mathfrak{X}\) restricts to a surjection \(\mathfrak{X}(r^{\frac{1}{p^{m}}})\to\mathfrak{X}(r)\). In [ST2, Lem. 3.3] this is shown to be the case for the map \((p^{m})^{*}\). But \(p^{m}\) and \(\pi_{L}^{me}\) differ by a unit \(u\in o_{L}^{\times}\), and \(u^{*}\) preserves \(\mathfrak{X}(r)\).

#### 4.1.3 Twisting

Consider any locally \(L\)-analytic group \(G\) and fix a locally \(L\)-analytic character \(\chi:G\to L^{\times}\). Then multiplication by \(\chi\) is a \(K\)-linear topological isomorphism \(C^{an}(G,K)\xrightarrow{\chi}C^{an}(G,K)\). We denote the dual isomorphism by

\[Tw_{\chi}^{D}:D(G,K)\xrightarrow{\cong}D(G,K)\,\]

i.e., \(Tw_{\chi}^{D}(\mu)=\mu(\chi-)\), and call it the twist by \(\chi\). For Dirac distributions we obtain \(Tw_{\chi}^{D}(\delta_{g})=\chi(g)\delta_{g}\). Suppose now that \(G\) is one of the groups \(o_{L}\) or \(U_{n}\subseteq o_{L}^{\times}\) of the previous subsections, and let \(\mathfrak{X}_{G}\) denote its character variety. Then \(\chi\) is an \(L\)-valued point \(z_{\chi}\in\mathfrak{X}_{G}(L)\). Using the product structure of the variety \(\mathfrak{X}_{G}\) we similarly have the twist operator

\[Tw_{z}^{\mathfrak{X}_{G}}:\mathcal{O}_{K}(\mathfrak{X}_{G})\xrightarrow{\cong}\mathcal{O}_{K}(\mathfrak{X}_{G})\,\ f\longmapsto f(z-)\.\]

Like any rigid automorphism, multiplication by a rational point respects the system of affinoid subdomains and hence the system of their complements. Hence \(Tw_{z}^{\mathfrak{X}_{G}}\) extends straightforwardly to an automorphism \(Tw_{z}^{\mathfrak{X}_{G}}:\mathcal{R}_{K}(\mathfrak{X}_{G})\xrightarrow{\cong}\mathcal{R}_{K}(\mathfrak{X}_{G})\). The following properties are straightforward to check:
1. Under the Fourier isomorphism \(Tw_{\chi}^{D}\) and \(Tw_{z_{\chi}}^{\mathfrak{X}_{G}}\) correspond to each other.
2. \(Tw_{z_{1}}^{\mathfrak{X}_{G}}\circ Tw_{z_{2}}^{\mathfrak{X}_{G}}=Tw_{z_{1}\cdot z_{2}}^{\mathfrak{X}_{G}}.\)
3. If \(\alpha:G_{1}\xrightarrow{\cong}G_{2}\) is an isomorphism between two of our groups then, for any \(z\in\mathfrak{X}_{G_{2}}(L),\) the twist operators \(Tw_{\alpha^{*}(z)}^{\mathfrak{X}_{G_{1}}}\) and \(Tw_{z}^{\mathfrak{X}_{G_{2}}}\) correspond to each other under the isomorphism \(\alpha_{*}:\mathcal{R}_{K}(\mathfrak{X}_{G_{1}})\xrightarrow{\cong}\mathcal{R}_{K}(\mathfrak{X}_{G_{2}}).\)

#### 4.1.4 The LT-isomorphism, part 1

We write \(\mathbf{B}\) for the open unit ball over \(L.\) The Lubin-Tate formal \(o_{L}\)-module gives \(\mathbf{B}\) an \(o_{L}\)-action via \((a,z)\mapsto[a](z).\) If \(\mathcal{O}_{K}(\mathbf{B})\) is the ring of power series in \(Z\) with coefficients in \(K\) which converge on \(\mathbf{B}(\mathbb{C}_{p}),\) then the above \(o_{L}\)-action on \(\mathbf{B}\) induces an action of the monoid \(o_{L}\backslash\{0\}\) on \(\mathcal{O}_{K}(\mathbf{B})\) by \((a,F)\mapsto F\circ[a].\) Similarly as before we let \(\varphi_{L}\) denote the action of \(\pi_{L}.\) Next we consider the continuous operator

\[tr:\mathcal{O}_{K}(\mathbf{B})\longrightarrow\mathcal{O}_{K}(\mathbf{B}),\qquad f(z)\longmapsto\sum_{y\in\ker([\pi_{L}])}f(y+_{LT}z)\.\]

Coleman has shown (cf. [12, §2]) that \(tr(Z^{i})\in\operatorname{im}(\varphi_{L})\) for any \(i\geq 0.\) Since \(\varphi_{L}\) is a homeomorphism onto its image, it follows that \(\operatorname{im}(tr)\subseteq\operatorname{im}(\varphi_{L})\); since \(\varphi_{L}\) is moreover injective, we may introduce the \(K\)-linear operator

\[\psi_{L}:\mathcal{O}_{K}(\mathbf{B})\longrightarrow\mathcal{O}_{K}(\mathbf{B})\qquad\text{such that }\pi_{L}^{-1}tr=\varphi_{L}\circ\psi_{L}.\]

One easily checks that \(\psi_{L}\) is equivariant for the \(o_{L}^{\times}\)-action and satisfies the projection formula \(\psi_{L}(f_{1}\varphi_{L}(f_{2}))=\psi_{L}(f_{1})f_{2}\) as well as \(\psi_{L}\circ\varphi_{L}=\frac{q}{\pi_{L}}.\) Furthermore, we fix a generator \(\eta^{\prime}\) of \(T_{\pi}^{\prime}\) as \(o_{L}\)-module and denote by \(\Omega=\Omega_{\eta^{\prime}}\) the corresponding period. In the following we **assume that \(\Omega\) belongs to \(K\)**. From [12, Thm. 3.6] we recall the LT-isomorphism

\[\kappa^{*}:\mathcal{O}_{K}(\mathfrak{X})\xrightarrow{\cong}\mathcal{O}_{K}(\mathbf{B}),\qquad F\mapsto[z\mapsto F(\kappa_{z})], \tag{33}\]

where \(\kappa_{z}(a)=1+F_{\eta^{\prime}}([a](z))\) with \(1+F_{\eta^{\prime}}(Z):=\exp\left(\Omega\log_{LT}(Z)\right).\) It is an isomorphism of topological rings which is equivariant with respect to the action by the monoid \(o_{L}\backslash\{0\}\) (as a consequence of [12, Prop. 3.1]). Moreover, Lemma 4.1.2 implies that the \(o_{L}^{\times}\)-action on \(\mathcal{O}_{K}(\mathbf{B})\) extends to a jointly continuous \(D(o_{L}^{\times},K)\)-module structure (by descent even for general \(K\)) and that the LT-isomorphism is an isomorphism of \(D(o_{L}^{\times},K)\)-modules.
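For instance (a sketch of how this module structure operates, using only facts recorded above): a Dirac distribution \(\delta_{u}\) with \(u\in o_{L}^{\times}\) acts on \(F\in\mathcal{O}_{K}(\mathbf{B})\) through the Lubin-Tate action,

\[\delta_{u}\cdot F=F\circ[u],\]

and since the Dirac distributions \(\delta_{u}\) span a dense subspace of \(D(o_{L}^{\times},K)\), the whole module structure is determined by this formula together with continuity.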
By the construction of the LT-isomorphism we have

\[\kappa^{*}(\operatorname{ev}_{a})=\exp(a\Omega\log_{LT}(Z))\in o_{\mathbb{C}_{p}}[[Z]]\qquad\text{for any }a\in o_{L}.\]

Hence Lemma 4.1.3 implies that

\[\kappa^{*}(\ker(\psi_{L}^{\mathfrak{X}}))=\sum_{a\in R_{0}}\exp(a\Omega\log_{LT}(Z))\varphi_{L}(\mathcal{O}_{K}(\mathbf{B}))\]

where \(R_{0}\subseteq o_{L}\) denotes a set of representatives for the nonzero cosets in \(o_{L}/\pi_{L}o_{L}.\) Using that \(\log_{LT}(Z_{1}+_{LT}Z_{2})=\log_{LT}(Z_{1})+\log_{LT}(Z_{2})\) we compute

\[tr(\exp(a\Omega\log_{LT}(Z)))=\sum_{y\in\ker([\pi_{L}])}\exp(a\Omega\log_{LT}(y+_{LT}Z))=\big{(}\sum_{y\in\ker([\pi_{L}])}\exp(a\Omega\log_{LT}(y))\big{)}\exp(a\Omega\log_{LT}(Z))=\big{(}\sum_{y\in\ker([\pi_{L}])}\kappa_{y}(a)\big{)}\exp(a\Omega\log_{LT}(Z))\.\]

But the \(\kappa_{y}\) for \(y\in\ker([\pi_{L}])\) are precisely the characters of the finite abelian group \(o_{L}/\pi_{L}o_{L}.\) Hence \(\sum_{y\in\ker([\pi_{L}])}\kappa_{y}(a)=0\) for \(a\in R_{0}.\) It follows that \(\kappa^{*}(\ker(\psi_{L}^{\mathfrak{X}}))=\ker(\psi_{L}).\) We conclude that under the LT-isomorphism \(\psi_{L}\) corresponds to \(\frac{q}{\pi_{L}}\psi_{L}^{\mathfrak{X}}\), using the fact that we also have a decomposition

\[\mathcal{O}_{K}(\mathbf{B})=\sum_{a\in o_{L}/\pi_{L}o_{L}}\exp(a\Omega\log_{LT}(Z))\varphi_{L}(\mathcal{O}_{K}(\mathbf{B})). \tag{34}\]

In the following we denote by

\[\mathfrak{M}_{LT}:D(\Gamma_{L},K)\xrightarrow{\cong}\mathcal{O}_{K}(\mathbf{B})^{\psi_{L}=0}\]

the composite

\[D(\Gamma_{L},K)\cong D(o_{L}^{\times},K)\cong\mathcal{O}_{K}(\mathfrak{X})^{\psi_{L}^{\mathfrak{X}}=0}\cong\mathcal{O}_{K}(\mathbf{B})^{\psi_{L}=0}\]

where the first isomorphism is induced by the character \(\chi_{LT}:\Gamma_{L}\xrightarrow{\cong}o_{L}^{\times}\), the second one is the Mellin transform \(\mathfrak{M}\) from Lemma 4.1.6, while the third one is the LT-isomorphism. By inserting the definitions we obtain the explicit formula

\[\mathfrak{M}_{LT}(\lambda)(z)=\lambda(\kappa_{z}\circ\chi_{LT})\.\]

The construction of the above map \(\mathfrak{M}_{LT}\) is related to the pairing

\[\{\,\ \}:\mathcal{O}_{K}(\mathbf{B})\times C^{an}(o_{L},K)\to K,\qquad(F,f)\mapsto\mu(f),\quad\text{where }\mu\in D(o_{L},K)\text{ is such that }\mu(\kappa_{z})=F(z),\]

from [ST2, Lem. 4.6] by a commutative diagram which we do not reproduce here.

**Remark 4.1.20**.: _(\(\Omega\in K\)) For any \(F\in\mathcal{O}_{K}(\mathbf{B})^{\psi_{L}=0}\) and any \(f\in C^{an}(o_{L},K)\) such that \(f|o_{L}^{\times}=0\) we have \(\{F,f\}=0\)._

Proof.: We have seen above that under the LT-isomorphism \(\psi_{L}\) corresponds, up to a nonzero constant, to \(\psi_{L}^{\mathfrak{X}}\) and hence further under the Fourier isomorphism to \(\psi_{L}^{D}\). It therefore suffices to show that for any \(\mu\in D(o_{L},K)^{\psi_{L}^{D}=0}\) we have \(\mu(f)=0\). For this we define \(\tilde{f}:=f(\pi_{L}-)\in C^{an}(o_{L},K)\) and note that \((\pi_{L})_{!}(\tilde{f})=f\). By the definition of \(\psi_{L}^{D}\) we therefore obtain, under our assumption on \(\mu\), that \(\mu(f)=\mu(f)-\psi_{L}^{D}(\mu)(\tilde{f})=\mu(f-(\pi_{L})_{!}(\tilde{f}))=\mu(0)=0\).

**Lemma 4.1.21**.: _(\(\Omega\in K\)) For any \(F\in\mathcal{O}_{K}(\mathbf{B})^{\psi_{L}=1}\) and \(n\geq 0\) we have_

\[\mathfrak{M}_{LT}^{-1}((1-\frac{\pi_{L}}{q}\varphi_{L})F)(\chi_{LT}^{n})=\Omega^{-n}(1-\frac{\pi_{L}^{n+1}}{q})(\partial_{\mathrm{inv}}^{n}F)_{|Z=0}.\]

Proof.: Note that \((1-\frac{\pi_{L}}{q}\varphi_{L})F\) belongs to \(\mathcal{O}_{K}(\mathbf{B})^{\psi_{L}=0}\).
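Indeed (a one-line check using the identities \(\psi_{L}(F)=F\) and \(\psi_{L}\circ\varphi_{L}=\frac{q}{\pi_{L}}\) recorded in section 4.1.4):

\[\psi_{L}\big{(}(1-\tfrac{\pi_{L}}{q}\varphi_{L})F\big{)}=\psi_{L}(F)-\tfrac{\pi_{L}}{q}\,\psi_{L}(\varphi_{L}(F))=F-\tfrac{\pi_{L}}{q}\cdot\tfrac{q}{\pi_{L}}F=0.\]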
Let \(\mathrm{inc}_{!}\in C^{an}(o_{L},K)\) denote the extension by zero of the inclusion \(o_{L}^{\times}\subseteq o_{L}\), and let \(\mathrm{id}:o_{L}\to K\) be the identity function. Using the above commutative diagram the assertion reduces to the equality

\[\{(1-\frac{\pi_{L}}{q}\varphi_{L})F,\mathrm{inc}_{!}^{n}\}=\Omega^{-n}(1-\frac{\pi_{L}^{n+1}}{q})(\partial_{\mathrm{inv}}^{n}F)_{|Z=0}.\]

By Remark 4.1.20 we may replace on the left hand side the function \(\mathrm{inc}_{!}^{n}\) by the function \(\mathrm{id}^{n}\). Next we observe that \(x\,\mathrm{id}^{n}(x)=\mathrm{id}^{n+1}(x)\). Hence, by [ST2, Lem. 4.6(8)], i.e., \(\{F,xf(x)\}=\{\Omega^{-1}\partial_{\mathrm{inv}}F,f\}\), and induction, the left hand side is equal to

\[\{(1-\frac{\pi_{L}}{q}\varphi_{L})F,\mathrm{id}^{n}\}=\{\Omega^{-n}\partial_{\mathrm{inv}}^{n}((1-\frac{\pi_{L}}{q}\varphi_{L})F),\mathrm{id}^{0}\}=\{\Omega^{-n}(1-\frac{\pi_{L}^{n+1}}{q}\varphi_{L})(\partial_{\mathrm{inv}}^{n}F),\mathrm{id}^{0}\},\]

where the second equality uses \(\partial_{\mathrm{inv}}\varphi_{L}=\pi_{L}\varphi_{L}\partial_{\mathrm{inv}}\). Finally, \(\{G,\mathrm{id}^{0}\}=G(0)\) for any \(G\in\mathcal{O}_{K}(\mathbf{B})\) (the constant function \(\mathrm{id}^{0}=1\) is the character \(\kappa_{0}\)) and \(\varphi_{L}(G)(0)=G(0)\), so the last expression equals \(\Omega^{-n}(1-\frac{\pi_{L}^{n+1}}{q})(\partial_{\mathrm{inv}}^{n}F)_{|Z=0}\), as claimed.

For the rest of this section we **assume** not only that \(K\) contains \(\Omega\) but also that the action of \(G_{L}\) on \(\mathbb{C}_{p}\) leaves \(K\) invariant. The LT-isomorphism is a topological ring isomorphism

\[K\widehat{\otimes}_{L}\mathcal{O}_{L}(\mathfrak{X})=\mathcal{O}_{K}(\mathfrak{X})\cong\mathcal{O}_{K}(\mathbf{B})=K\widehat{\otimes}_{L}\mathcal{O}_{L}(\mathbf{B})\]

(see [BSX, Prop. 2.1.5 ii.] for the outer identities). On both sides we have the obvious coefficientwise \(G_{L}\)-action induced by the Galois action on the tensor factor \(K\). We use the following notation:

* \(\sigma\in G_{L}\) acting _coefficientwise_ on \(\mathcal{O}_{K}(\mathbf{B})\) is denoted by \(F\mapsto{}^{\sigma}F\); the corresponding fixed ring is \(\mathcal{O}_{K}(\mathbf{B})^{G_{L}}=\mathcal{O}_{L}(\mathbf{B})\).
* The coefficientwise action on \(\mathcal{O}_{K}(\mathfrak{X})\) transfers to the _twisted action_ on \(\mathcal{O}_{K}(\mathbf{B})\) (cf. [ST2, before Cor. 3.8]) given as \(F\mapsto{}^{\sigma*}F:={}^{\sigma}F\circ[\tau(\sigma^{-1})]\); the corresponding fixed ring is \(\mathcal{O}_{K}(\mathbf{B})^{G_{L},*}=\mathcal{O}_{L}(\mathfrak{X})=D(o_{L},L)\).
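As a consistency check for the twisted action (a sketch, anticipating Lemma 4.1.24 below, which gives \(\sigma(\Omega)=\Omega\tau(\sigma)\)): for \(F=\exp(a\Omega\log_{LT}(Z))=\kappa^{*}(\operatorname{ev}_{a})\) with \(a\in o_{L}\) one computes

\[{}^{\sigma*}F={}^{\sigma}F\circ[\tau(\sigma^{-1})]=\exp\big{(}a\Omega\tau(\sigma)\log_{LT}([\tau(\sigma)^{-1}](Z))\big{)}=\exp\big{(}a\Omega\log_{LT}(Z)\big{)}=F,\]

in accordance with the fact that \(\operatorname{ev}_{a}=F_{\delta_{a}}\) lies in \(\mathcal{O}_{L}(\mathfrak{X})=D(o_{L},L)\).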
**Remark 4.1.23**.: _Note that the \(o_{L}\backslash\{0\}\)-action and hence the \(D(o_{L}^{\times},L)\)-module structure commute with both \(G_{L}\)-actions. Moreover, \(\psi_{L}\) commutes with the \(G_{L}\)-actions as well._

Recall that, using the notation from [ST2, Lem. 4.6, 1./2.], the function \(1+F_{a\eta^{\prime}}(Z)=\exp\big{(}a\Omega_{\eta^{\prime}}\log_{LT}(Z)\big{)}\) corresponds to the Dirac distribution \(\delta_{a}\) of \(a\in o_{L}\) under the Fourier isomorphism.

**Lemma 4.1.24**.: _Let \(\sigma\) be in \(G_{L}\), \(t^{\prime}\in T^{\prime}_{\pi}\) and \(a\in o_{L}\). Then_

i. \(\sigma(\Omega_{t^{\prime}})=\Omega_{\tau(\sigma)t^{\prime}}=\Omega_{t^{\prime}}\tau(\sigma)\) _and_

ii. \({}^{\sigma}F_{a\eta^{\prime}}=F_{a\eta^{\prime}}\circ[\tau(\sigma)]=F_{a\tau(\sigma)\eta^{\prime}}\)_._

Proof.: (i) The Galois equivariance of the pairing \((\,\ ):T^{\prime}_{\pi}\otimes_{o_{L}}\mathbb{C}_{p}\to\mathbb{C}_{p}\) from (loc. cit. before Prop. 3.1), with \((t^{\prime},x)=\Omega_{t^{\prime}}x\), implies that

\[\sigma(\Omega_{t^{\prime}})=\Omega_{\sigma(t^{\prime})}=\Omega_{\tau(\sigma)t^{\prime}},\]

while the \(o_{L}\)-invariance of that pairing implies that the latter expression equals \(\Omega_{t^{\prime}}\tau(\sigma)\).

(ii) This is immediate from (i) and the definition of \(F_{a\eta^{\prime}}\), taking equation (3) into account.

**Proposition 4.1.25**.:

i. _The isomorphism (LT together with Fourier)_ \(\mathfrak{E}:D(o_{L},K)\cong\mathcal{O}_{K}(\mathfrak{X})\cong\mathcal{O}_{K}(\mathbf{B})\) _restricts to an isomorphism_

\[D(o_{L},K)^{G_{L},\tilde{*}}=\mathcal{O}_{K}(\mathfrak{X})^{G_{L}}\cong\mathcal{O}_{K}(\mathbf{B})^{G_{L}}=\mathcal{O}_{L}(\mathbf{B})\]

_of_ \(D(o_{L}^{\times},L)\)_-modules._

ii. _The Mellin transform restricts to an isomorphism of_ \(D(o_{L}^{\times},L)\)_-modules_

\[D(o_{L}^{\times},K)^{G_{L},*}=\mathcal{O}_{K}(\mathfrak{X})^{G_{L},\psi_{L}^{\mathfrak{X}}=0}\cong\mathcal{O}_{L}(\mathbf{B})^{\psi_{L}=0}.\]

_Here the \(G_{L}\)-action on the distribution rings on the left hand sides is induced from the coefficientwise action on \(\mathcal{O}_{K}(\mathbf{B})\) and \(\mathcal{O}_{K}(\mathbf{B})^{\psi_{L}=0}\) via the LT-isomorphism and Mellin transform, respectively._

Proof.: (i) and (ii) follow from passing to the fixed vectors with respect to the coefficientwise \(G_{L}\)-action and Remark 4.1.23.

In order to express the \(D(o_{L}^{\times},L)\)-module \(D(o_{L}^{\times},K)^{G_{L},*}\) in the above proposition more explicitly we describe the previous two actions on \(\mathcal{O}_{K}(\mathbf{B})\) now on \(D(o_{L},K)\):

* The coefficientwise \(G_{L}\)-action on \(D(o_{L},K)=K\widehat{\otimes}_{L}D(o_{L},L)\), which corresponds to the twisted action on \(\mathcal{O}_{K}(\mathbf{B})\), will be written as \(\lambda\mapsto{}^{\sigma}\lambda\).
* The \(G_{L}\)-action given by \(\lambda\mapsto\tau(\sigma)_{*}({}^{\sigma}\lambda)\) corresponds to the coefficientwise action on \(\mathcal{O}_{K}(\mathbf{B})\). Note that for \(\lambda\in D(o_{L}^{\times},K)\) we have \(\tau(\sigma)_{*}(\lambda)=\delta_{\tau(\sigma)}\lambda\), where the right hand side refers to the product of \(\lambda\) and the Dirac distribution \(\delta_{\tau(\sigma)}\) in the ring \(D(o_{L}^{\times},K)\).
Then we conclude that

\[D(o_{L}^{\times},K)^{G_{L},*}=\{\lambda\in D(o_{L}^{\times},K)\mid{}^{\sigma}\lambda=\delta_{\tau(\sigma)^{-1}}\lambda\text{ for all }\sigma\in G_{L}\}.\]

### 4.2 Consequences of Serre duality

Recall that in any quasi-separated rigid analytic variety the complement of any affinoid subdomain is admissible open ([Sch, §3 Prop. 3(ii)]). This applies in particular to quasi-Stein spaces since they are separated by definition. For a rigid analytic variety \(\mathfrak{Y}\) we will denote by \(\operatorname{Aff}(\mathfrak{Y})\) the set of all affinoid subdomains of \(\mathfrak{Y}\). We have seen that \(\mathfrak{X}\), \(\mathfrak{X}^{\times}\), and \(\mathfrak{X}^{\times}_{n}\) for \(n\geq 1\) all are \(1\)-(equi)dimensional smooth Stein spaces.

#### 4.2.1 Cohomology with compact support

We slightly rephrase the definition of cohomology with compact support given in [vdP, §1] in the case of a Stein space \(\mathfrak{Y}\) over \(L\). For any abelian sheaf \(\mathcal{F}\) on \(\mathfrak{Y}\) and any \(\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y})\) we put

\[H^{0}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F}):=\ker(\mathcal{F}(\mathfrak{Y})\longrightarrow\mathcal{F}(\mathfrak{Y}\backslash\mathfrak{U}))\.\]

This is a left exact functor in \(\mathcal{F}\), and we denote by \(H^{*}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F})\) its right derived functors. Since quasi-Stein spaces have no higher coherent cohomology the relative cohomology sequence ([vdP, Lem. 1.3]) gives rise, for a coherent sheaf \(\mathcal{F}\), to the exact sequence

\[0\to H^{0}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F})\rightarrow\mathcal{F}(\mathfrak{Y})\rightarrow\mathcal{F}(\mathfrak{Y}\backslash\mathfrak{U})\to H^{1}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F})\to 0. \tag{36}\]

We then define the cohomology with compact support as

\[H^{*}_{c}(\mathfrak{Y},\mathcal{F}):=\varinjlim_{\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y})}H^{*}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F})\.\]

Again, if \(\mathcal{F}\) is a coherent sheaf we obtain the exact sequence

\[0\to H^{0}_{c}(\mathfrak{Y},\mathcal{F})\rightarrow\mathcal{F}(\mathfrak{Y})\rightarrow\varinjlim_{\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y})}\mathcal{F}(\mathfrak{Y}\backslash\mathfrak{U})\to H^{1}_{c}(\mathfrak{Y},\mathcal{F})\to 0\.\]

Suppose in the following that \(j:\mathfrak{Y}_{0}\rightarrow\mathfrak{Y}\) is an open immersion of Stein spaces (over \(L\)) which, for simplicity, we view as an inclusion.

**Lemma 4.2.1**.: _For any \(\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y}_{0})\) the covering \(\mathfrak{Y}=(\mathfrak{Y}\backslash\mathfrak{U})\cup\mathfrak{Y}_{0}\) is admissible._

Proof.: This follows from [vdP, Lem. 1.1].

**Lemma 4.2.2**.: _For any \(\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y}_{0})\) and any sheaf \(\mathcal{F}\) on \(\mathfrak{Y}\) the natural map_

\[H^{*}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F})\xrightarrow[\cong]{\mathrm{res}}H^{*}_{\mathfrak{U}}(\mathfrak{Y}_{0},\mathcal{F})\]

_is bijective._

Proof.: Recall that for an injective sheaf on \(\mathfrak{Y}\) its restriction to \(\mathfrak{Y}_{0}\) is injective as well. Hence, by using an injective resolution, it suffices to prove the assertion for \(*=0\).

_Injectivity_: Let \(f\in H^{0}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F})\) such that \(f|\mathfrak{Y}_{0}=0\). Since \(f|\mathfrak{Y}\backslash\mathfrak{U}=0\) as well it follows from Lemma 4.2.1 that \(f=0\).
_Surjectivity_: Let \(g\in H^{0}_{\mathfrak{U}}(\mathfrak{Y}_{0},\mathcal{F})\) so that \(g|\mathfrak{Y}_{0}\backslash\mathfrak{U}=0\). Using Lemma 4.2.1 again we may define a preimage \(f\) of \(g\) by \(f|\mathfrak{Y}\backslash\mathfrak{U}:=0\) and \(f|\mathfrak{Y}_{0}:=g\). By passing to inductive limits we obtain the composed map \[j_{!}:H^{*}_{c}(\mathfrak{Y}_{0},\mathcal{F})=\lim_{\mathfrak{U}\in \operatorname{Aff}(\mathfrak{Y}_{0})}H^{*}_{\mathfrak{U}}(\mathfrak{Y}_{0}, \mathcal{F})\xrightarrow[\simeq]{\operatorname{res}^{-1}}\lim_{\mathfrak{U} \in\operatorname{Aff}(\mathfrak{Y}_{0})} H^{*}_{\mathfrak{U}}(\mathfrak{Y}),\mathcal{F})\] \[\to\lim_{\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y})}H^{*}_{ \mathfrak{U}}(\mathfrak{Y}),\mathcal{F})=H^{*}_{c}(\mathfrak{Y}),\mathcal{F})\.\] For later purposes we have to analyze the following situation. Let \[\mathfrak{U}_{1}\subseteq\mathfrak{U}_{2}\subseteq\ldots\subseteq\mathfrak{U} _{n}\subseteq\ldots\subseteq\mathfrak{Y}=\bigcup_{n}\mathfrak{U}_{n}\] be a Stein covering. We **assume** that each admissible open subset \(\mathfrak{Y}_{n}:=\mathfrak{Y}\backslash\mathfrak{U}_{n}\) also is a Stein space. Since \(\ldots\subseteq\mathfrak{Y}_{n}\subseteq\ldots\subseteq\mathfrak{Y}_{1} \subseteq\mathfrak{Y}\) we then have the projective system \[\ldots\to H^{*}_{c}(\mathfrak{Y}_{n},\mathcal{F})\to\ldots\to H^{*}_{c}( \mathfrak{Y}_{1},\mathcal{F})\to H^{*}_{c}(\mathfrak{Y},\mathcal{F})\.\] By Lemma 4.2.2 we may rewrite it as the projective system \[\ldots\to\lim_{\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y}_{n})}H^{*}_{ \mathfrak{U}}(\mathfrak{Y})\to\ldots\to\lim_{\mathfrak{U}\in\operatorname{Aff} (\mathfrak{Y}_{1})}H^{*}_{\mathfrak{U}}(\mathfrak{Y}),\mathcal{F})\to\lim_{ \mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y})}H^{*}_{\mathfrak{U}}( \mathfrak{Y},\mathcal{F}). \tag{37}\] **Lemma 4.2.3**.: _In the above situation we assume in addition that \(\mathcal{F}\) is a coherent sheaf and that the restriction maps \(\mathcal{F}(\mathfrak{Y})\to\mathcal{F}(\mathfrak{Y})\backslash\mathfrak{U})\) for any \(\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y})\) are injective. We then have_ \[\varprojlim_{n}H^{1}_{c}(\mathfrak{Y}_{n},\mathcal{F})=\big{(}\varprojlim_{n} \varprojlim_{\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y}_{n})}\mathcal{F} (\mathfrak{Y}\backslash\mathfrak{U})\big{)}/\mathcal{F}(\mathfrak{Y}). \tag{38}\] Proof.: This is immediate from (37) and the relative cohomology sequence. For coherent sheaves \(\mathcal{F}\) all the above cohomology vector spaces carry natural locally convex topologies, which we briefly recall. The global sections \(\mathcal{F}(\mathfrak{Y})\) and \(\mathcal{F}(\mathfrak{Y})\backslash\mathfrak{U})\), for \(\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y})\), are Frechet spaces. Using the relative cohomology sequence (36) we equip \(H^{1}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F})\) with the quotient topology from \(\mathcal{F}(\mathfrak{Y})\backslash\mathfrak{U})\) (which might be non-Hausdorff) and then \(H^{1}_{c}(\mathfrak{Y},\mathcal{F})\) with the locally convex inductive limit topology (w.r.t. varying \(\mathfrak{U}\)). **Remark 4.2.4**.: _Let_ _be a commutative diagram of Frechet spaces such that the induced map \(\operatorname{coker}(\alpha)\to\operatorname{coker}(\beta)\) is bijective; then the latter map is a topological isomorphism for the quotient topologies._ Proof.: A more general statement can be found in [BS, Chap. VII Lem. 1.32]. 
Using this Remark we see that the bijection \(H^{1}_{\mathfrak{U}}(\mathfrak{Y},\mathcal{F})\xrightarrow[\simeq]{ \operatorname{res}}H^{1}_{\mathfrak{U}}(\mathfrak{Y}_{0},\mathcal{F})\) from Lemma 4.2.2 is a topological isomorphism. It follows that, in degree one at least, the above map \(j_{!}\) is as well as the transition maps in the projective system (37) are continuous. **Lemma 4.2.5**.: _The isomorphism (38) is topological._ Proof.: The assertion has to be understood, of course, in such a way that forming projective limits, inductive limits, and quotient spaces on both sides of (38) is meant in the topological sense. First of all one checks that forming the projective limit on the right hand side commutes with passing to the quotient space by \(\mathcal{F}(\mathfrak{Y})\) (compare (ii) in the proof of [Pro, Thm. 4.3] for a more general statement). Secondly, as a special case of [B-TVS, II.28 Cor. 2], forming inductive limits commutes with passing to quotient spaces. This reduces us to \(H^{1}_{\mathfrak{U}}(\mathfrak{Y})\backslash\mathfrak{U},\mathcal{F})= \mathcal{F}(\mathfrak{Y})\backslash\mathfrak{U})/\mathcal{F}(\mathfrak{Y})\) being a topological isomorphism, but which holds by definition. In the following we compute (38) further in two concrete cases. **The open unit disk** Let \(\mathbf{B}=\mathbf{B}_{[0,1)}\) denote the open unit disk over \(L\). We recall our convention that all radii are assumed to lie in \((0,1)\cap p^{\mathbb{Q}}\). For any radii \(r\leq s\) we introduce the affinoid disk \(\mathbf{B}_{[0,r]}\) as well as the open disk \(\mathbf{B}_{[0,r)}\) of radius \(r\) around \(0\) and the affinoid annulus \(\mathbf{B}_{[r,s]}:=\{r\leq|x|\leq s\}\). We put \(\mathbf{B}_{(r,1)}:=\mathbf{B}\backslash\mathbf{B}_{[0,r]}\), which are Stein spaces. By the identity theorem for Laurent series the assumptions of Lemma 4.2.3 are satisfied for the structure sheaf \(\mathcal{O}=\mathcal{O}_{\mathbf{B}}\) of \(\mathbf{B}\). We first fix a radius \(r\) and compute the cohomology \(H^{1}_{c}(\mathbf{B}_{(r,1)},\mathcal{O})\). By Lemma 4.2.2 and the relative cohomology sequence we have \[H^{1}_{c}(\mathbf{B}_{(r,1)},\mathcal{O})=\varinjlim_{\mathfrak{U}\in \operatorname{Aff}(\mathbf{B}_{(r,1)})}H^{*}_{\mathfrak{U}}(\mathbf{B}, \mathcal{O})=\big{(}\varinjlim_{\mathfrak{U}\in\operatorname{Aff}(\mathbf{B}_ {(r,1)})}\mathcal{O}(\mathbf{B}\backslash\mathfrak{U})\big{)}/\mathcal{O}( \mathbf{B})\.\] Of course, it suffices to take the inductive limit over a cofinal sequence of larger and larger affinoid annuli in \(\mathbf{B}_{(r,1)}\). For this we choose two sequences of radii \(r<\ldots<r_{m}<\ldots<r_{1}\bigcup<s_{1}<\ldots<s_{m}<\ldots<1\) with \((r_{m})_{m}\) and \((s_{m})_{m}\) converging to \(r\) and \(1\), respectively. Each space \(\mathbf{B}\backslash\mathbf{B}_{[r_{m},s_{m}]}=\mathbf{B}_{(s_{m},1)} \backslash\operatorname{\dot{\cup}}\mathbf{B}_{[0,r_{m})}\) has two connected components. We see that \[H^{1}_{c}(\mathbf{B}_{(r,1)},\mathcal{O})=\big{(}\varinjlim_{m\to\infty} \mathcal{O}(\mathbf{B}_{(s_{m},1)})\oplus\varinjlim_{m\to\infty}\mathcal{O}( \mathbf{B}_{[0,r_{m})})\big{)}/\mathcal{O}(\mathbf{B})\.\] As explained in Lemma 4.2.5 this is a topological equality. 
We observe that \(\mathcal{R}_{L}=\mathcal{R}_{L}(\mathbf{B})=\varinjlim_{m\to\infty}\mathcal{O }(\mathbf{B}_{(s_{m},1)})\) is the usual Robba ring (over \(L\)) whereas \(\mathcal{O}^{\dagger}(\mathbf{B}_{[0,r]})=\varinjlim_{m\to\infty}\mathcal{O}( \mathbf{B}_{[0,r_{m})})\) is the ring of overconvergent analytic functions on \(\mathbf{B}_{[0,r]}\). Hence \[H^{1}_{c}(\mathbf{B}_{(r,1)},\mathcal{O})=\big{(}\mathcal{R}_{L}\oplus \mathcal{O}^{\dagger}(\mathbf{B}_{[0,r]})\big{)}/\mathcal{O}(\mathbf{B})\.\] Passing now to the projective limit w.r.t. \(r\to 1\) of the continuous restriction maps \(\mathcal{O}(\mathbf{B})\to\mathcal{O}^{\dagger}(\mathbf{B}_{[0,r]})\to\mathcal{O }(\mathbf{B}_{[0,r]})\) we observe that \(\varprojlim_{r\to 1}\mathcal{O}^{\dagger}(\mathbf{B}_{[0,r]})=\mathcal{O}( \mathbf{B})\) holds true topologically. We finally deduce that \[\varinjlim_{r\to 1}H^{1}_{c}(\mathbf{B}_{(r,1)},\mathcal{O})=\varinjlim_{r\to 1} \big{(}\varinjlim_{\mathfrak{U}\in\operatorname{Aff}(\mathbf{B}_{(r,1)})} \mathcal{O}(\mathbf{B}\backslash\mathfrak{U})\big{)}/\mathcal{O}(\mathbf{B})= \big{(}\mathcal{R}_{L}\oplus\mathcal{O}(\mathbf{B})\big{)}/\mathcal{O}( \mathbf{B})\cong\mathcal{R}_{L} \tag{39}\] as locally convex vector spaces. **The character variety \(\mathfrak{X}\)** Since \(\mathfrak{X}_{/\mathbb{C}_{p}}\cong\mathbf{B}_{/\mathbb{C}_{p}}\) by [ST2] the injectivity of the restriction maps \(\mathcal{O}(\mathfrak{X})\to\mathcal{O}(\mathfrak{X}\backslash\mathfrak{U})\) for any \(\mathfrak{U}\in\operatorname{Aff}(\mathfrak{X})\) follows from the corresponding fact for \(\mathbf{B}\), which we saw already. According to Prop. 4.1.9 the admissible open subdomains \(\mathfrak{X}_{(s_{n},1)}\) of \(\mathfrak{X}\) are Stein spaces. In order to compute their cohomology with compact support in the structure sheaf \(\mathcal{O}=\mathcal{O}_{\mathfrak{X}}\) we fix an \(n\geqslant 0\). We choose a sequence of radii \(r_{n+1}>r_{n+2}>\ldots>s_{n}\) in \(S_{n}\) converging to \(s_{n}\). Furthermore we observe that the increasing sequence \((s_{m})_{m>n}\) in \(S_{\infty}\) converges to \(1\). By Prop. 4.1.7 we then have the Stein covering \[\mathfrak{X}_{(s_{n},1)}=\bigcup_{m>n}\mathfrak{X}_{[r_{m},s_{m}]}\;.\] Hence, by Lemma 4.2.2, the relative cohomology sequence, and the explanation in Lemma 4.2.5, we have the topological equality \[H^{1}_{c}(\mathfrak{X}_{(s_{n},1)},\mathcal{O})=\big{(}\varinjlim_{m\to\infty }\mathcal{O}(\mathfrak{X}\backslash\mathfrak{X}_{[r_{m},s_{m}]})\big{)}/ \mathcal{O}(\mathfrak{X})\.\] The obvious set theoretic decomposition \[\mathfrak{X}\backslash\mathfrak{X}_{[r_{m},s_{m}]}=\mathfrak{X}\backslash( \mathfrak{X}(s_{m})\backslash\mathfrak{X}^{-}(r_{m}))=(\mathfrak{X} \backslash\mathfrak{X}(s_{m}))\mathbin{\dot{\cup}}\mathfrak{X}^{-}(r_{m})= \mathfrak{X}_{(s_{m},1)}\mathbin{\dot{\cup}}\mathfrak{X}^{-}(r_{m})\] is in fact the decomposition of the space \(\mathfrak{X}\backslash\mathfrak{X}_{[r_{m},s_{m}]}\) into its connected components. This can be checked after base change to \(\mathbb{C}_{p}\) where, by [BSX, Prop. 1.20 and proof of Prop. 2.1], the setting becomes isomorphic to the setting for the open unit disk which we discussed in the previous section. 
Entirely similarly as in the previous subsection it follows now that \[H^{1}_{c}(\mathfrak{X}_{(s_{n},1)},\mathcal{O})=\big{(}\mathcal{R}_{L}( \mathfrak{X})\oplus\mathcal{O}^{\dagger}(\mathfrak{X}(s_{n}))\big{)}/ \mathcal{O}(\mathfrak{X})\] where \(\mathcal{O}^{\dagger}(\mathfrak{X}(s_{n})):=\varinjlim_{m\to\infty}\mathfrak{ X}^{-}(r_{m})\), and then (40) as locally convex vector spaces. #### 4.2.2 Serre duality for Stein spaces In the following the continuous dual of a locally convex vector space is always equipped with the strong topology. The Serre duality for smooth Stein spaces is established in [Chi] and [2]. Let \(\mathfrak{Y}\) be a \(1\)-(equi)dimensional smooth Stein space over \(L\). **Theorem 4.2.6**.: _For any coherent sheaf \(\mathcal{F}\) on \(\mathfrak{Y}\) we have:_ 1. \(H^{1}_{c}(\mathfrak{Y},\mathcal{F})\) _is a complete reflexive Hausdorff space._ 2. \(\operatorname{Hom}_{\mathfrak{Y}}(\mathcal{F},\Omega^{1}_{\mathfrak{Y}})=H^{ 0}(\mathfrak{Y}),\underline{\operatorname{Hom}}_{\mathfrak{Y}}(\mathcal{F}, \Omega^{1}_{\mathfrak{Y}}))\)_, being the global sections of another coherent sheaf, is a reflexive Frechet space strictly of countable type ([PGS, Def. 4.2.3])._ 3. _There is a canonical trace map_ \[t_{\mathfrak{Y}}:H^{1}_{c}(\mathfrak{Y},\Omega^{1}_{\mathfrak{Y}})\longrightarrow L\] _such that the Yoneda pairing_ \[H^{1}_{c}(\mathfrak{Y},\mathcal{F})\times\operatorname{Hom}_{\mathfrak{Y}}( \mathcal{F},\Omega^{1}_{\mathfrak{Y}})\longrightarrow H^{1}_{c}(\mathfrak{Y},\Omega^{1}_{\mathfrak{Y}})\] _composed with the trace map induces isomorphisms of topological vector spaces_ \(\operatorname{Hom}_{L}^{cont}(H^{1}_{c}(\mathfrak{Y}),\mathcal{F}),L)\xrightarrow{ \cong}\operatorname{Hom}_{\mathfrak{Y}}(\mathcal{F},\Omega^{1}_{\mathfrak{Y}})\) _and_ \(\operatorname{Hom}_{L}^{cont}(\operatorname{Hom}_{\mathfrak{Y}}(\mathcal{F}, \Omega^{1}_{\mathfrak{Y}}),L)\xrightarrow{\cong}H^{1}_{c}(\mathfrak{Y}, \mathcal{F})\) _which are natural in_ \(\mathcal{F}\)_._ Proof.: See [1, Thm. 7.1]12 and [1, Thm. 4.21] (as well as [13, Prop. 3.6]). Footnote 12: This references depends on the results in the article [1, which unfortunately contains the following gaps. Firstly, in the proof of Lemma 4.2.2. he quotes a result of Bosch concerning the connectedness of formal fibers without verifying the required assumptions. This is repaired by [1, Thm. 22/Cor. 23]. Secondly, Beyer claims implicitly and without proof, that _special affinoid wide-open spaces_ are _affinoid wide-open spaces_ in the sense of Definition 4.1.1 and Remark 4.1.2 in (loc. cit.). This crucial ingredient has now been shown explicitly in [1, §2.5]. If we specialize the above assertion to the case of the structure sheaf \(\mathcal{F}=\mathcal{O}_{\mathfrak{Y}}\) then we have \(\operatorname{Hom}_{\mathfrak{Y}}(\mathcal{O}_{\mathfrak{Y}},\Omega^{1}_{ \mathfrak{Y}})=\Omega^{1}_{\mathfrak{Y}}(\mathfrak{Y})\) for trivial reasons. On the other hand the relative cohomology sequence implies that \[H^{1}_{c}(\mathfrak{Y},\mathcal{O}_{\mathfrak{Y}})=\mathcal{R}_{L}(\mathfrak{ Y})/\mathcal{O}_{\mathfrak{Y}}(\mathfrak{Y}). \tag{41}\] Hence we have the following consequence of Serre duality. 
**Corollary 4.2.7**.: _Serre duality gives rise to an isomorphism of topological vector spaces_ \[\operatorname{Hom}_{L}^{cont}(\mathcal{R}_{L}(\mathfrak{Y})/\mathcal{O}_{ \mathfrak{Y}}(\mathfrak{Y}),L)\xrightarrow{\cong}\Omega^{1}_{\mathfrak{Y}}( \mathfrak{Y})\.\] **Lemma 4.2.8**.: _Let \(\alpha:\mathfrak{Y}\longrightarrow\mathfrak{Y}^{\prime}\) be a finite etale morphism of 1-dimensional smooth Stein spaces over \(L\). We then have, for any coherent sheaf \(\mathcal{F}\) on \(\mathfrak{Y}^{\prime}\), the commutative diagram of Serre duality pairings_ Proof.: The vertical arrows in the lower part of the diagram are induced by the adjunction homomorphism \(\mathcal{F}\to\alpha_{*}\alpha^{*}\mathcal{F}\). It is commutative by the naturality of Serre duality in the coherent sheaf. For the upper part we consider more generally a coherent sheaf \(\mathcal{G}\) on \(\mathfrak{Y}\) and check the commutativity of the diagram \[\begin{CD}H^{1}_{c}(\mathfrak{Y}),\mathcal{G})\qquad\times\qquad\quad \operatorname{Hom}_{\mathfrak{Y}}(\mathcal{G},\Omega^{1}_{\mathfrak{Y}})\xrightarrow{ \cong}H^{1}_{c}(\mathfrak{Y}),\Omega^{1}_{\mathfrak{Y}})\xrightarrow{t_{ \mathfrak{Y}}}L\\ \cong\\ H^{1}_{c}(\mathfrak{Y}^{\prime},\alpha_{*}\mathcal{G})\qquad\times\qquad \quad\operatorname{Hom}_{\mathfrak{Y}^{\prime}}(\alpha_{*}\mathcal{G},\alpha_ {*}\Omega^{1}_{\mathfrak{Y}^{\prime}})\xrightarrow{\cong}H^{1}_{c}(\mathfrak{Y }^{\prime},\alpha_{*}\Omega^{1}_{\mathfrak{Y}})\\ H^{1}_{c}(\mathfrak{Y}^{\prime},\alpha_{*}\mathcal{G})\qquad\times\qquad \quad\operatorname{Hom}_{\mathfrak{Y}^{\prime}}(\alpha_{*}\mathcal{G},\Omega^ {1}_{\mathfrak{Y}^{\prime}})\xrightarrow{\cong}H^{1}_{c}(\mathfrak{Y}^{\prime },\Omega^{1}_{\mathfrak{Y}^{\prime}})\xrightarrow{t_{\mathfrak{Y}^{\prime}}}L. \end{CD}\] Here the second and third lower vertical arrows are induced by the relative trace map \(t_{\alpha}:\alpha_{*}\Omega^{1}_{\mathfrak{W}}\to\Omega^{1}_{\mathfrak{W}^{ \prime}}\) (see below). The commutativity of the Yoneda pairings (before applying the horizontal trace maps) is a trivial consequence of functoriality properties. That the first and third upper vertical arrows are isomorphisms follows from the fact that for a finite morphism the functor \(\alpha_{*}\) is exact on quasi-coherent sheaves. This reduces us to showing that the diagram (42) is commutative. For the convenience of the reader we briefly explain the definition of the relative trace map \(t_{\alpha}\). But first we need to recall that any coherent \(\alpha_{*}\mathcal{O}_{\mathfrak{W}}\)-module \(\mathcal{M}\) can naturally be viewed ([EGA, Prop. I.9.2.5]) as a coherent \(\mathcal{O}_{\mathfrak{W}}\)-module \(\widetilde{\mathcal{M}}\) such that \(\alpha_{*}\widetilde{\mathcal{M}}=\mathcal{M}\) (for any open affinoid subdomain \(\mathfrak{W}\in\mathfrak{Y}^{\prime}\) one has \(\widetilde{\mathcal{M}}(\alpha^{-1}(\mathfrak{W}))=\mathcal{M}(\mathfrak{W} )\)). Since \(\alpha\) is etale we have ([Mat, Thm. 
25.1]) that \[(\alpha_{*}\mathcal{O}_{\mathfrak{W}}\otimes_{\mathcal{O}_{\mathfrak{W}^{ \prime}}}\Omega^{1}_{\mathfrak{W}^{\prime}})^{\sim}\xrightarrow{\cong}\Omega^ {1}_{\mathfrak{W}}\.\] Since \(\alpha\) is finite flat the natural map \[\underline{\mathrm{Hom}}_{\mathfrak{W}^{\prime}}(\alpha_{*}\mathcal{O}_{ \mathfrak{W}},\mathcal{O}_{\mathfrak{W}^{\prime}})\otimes_{\mathcal{O}_{ \mathfrak{W}^{\prime}}}\Omega^{1}_{\mathfrak{W}^{\prime}}\xrightarrow{\cong} \underline{\mathrm{Hom}}_{\mathfrak{W}^{\prime}}(\alpha_{*}\mathcal{O}_{ \mathfrak{W}},\Omega^{1}_{\mathfrak{W}^{\prime}})\] is an isomorphism. Finally, since \(\alpha\) is finite etale the usual trace pairing is nondegenerate and induces an isomorphism 13 Footnote 13: Compare [https://stacks.math.columbia.edu/tag/0BVH](https://stacks.math.columbia.edu/tag/0BVH) \[\underline{\mathrm{Hom}}_{\mathfrak{W}^{\prime}}(\alpha_{*}\mathcal{O}_{ \mathfrak{W}},\mathcal{O}_{\mathfrak{W}^{\prime}})\xrightarrow{\cong}\alpha_ {*}\mathcal{O}_{\mathfrak{W}}\.\] The relative trace map is now defined to be the composite map \[\alpha_{*}\Omega^{1}_{\mathfrak{W}}\cong\alpha_{*}(\alpha_{*} \mathcal{O}_{\mathfrak{W}}\otimes_{\mathcal{O}_{\mathfrak{W}^{\prime}}}\Omega ^{1}_{\mathfrak{W}^{\prime}})^{\sim}\cong\alpha_{*}(\underline{\mathrm{Hom}} _{\mathfrak{W}^{\prime}}(\alpha_{*}\mathcal{O}_{\mathfrak{W}},\mathcal{O}_{ \mathfrak{W}^{\prime}})\otimes_{\mathcal{O}_{\mathfrak{W}^{\prime}}}\Omega^ {1}_{\mathfrak{W}^{\prime}})^{\sim}\cong\\ \alpha_{*}\underline{\mathrm{Hom}}_{\mathfrak{W}^{\prime}}( \alpha_{*}\mathcal{O}_{\mathfrak{W}},\Omega^{1}_{\mathfrak{W}^{\prime}})^{ \sim}=\underline{\mathrm{Hom}}_{\mathfrak{W}^{\prime}}(\alpha_{*}\mathcal{O}_ {\mathfrak{W}},\Omega^{1}_{\mathfrak{W}^{\prime}})\xrightarrow{f\mapsto f(1)} \Omega^{1}_{\mathfrak{W}^{\prime}}\.\] The commutativity of (42) is shown in detail in [Mal] and should also be consequence of [AL, Prop. 6.5.1 (2)] upon showing that their general construction boils down to the above description of the relative trace map. We make the last lemma more explicit for the structure sheaf. Let \(\rho:\mathfrak{Y}\to\mathfrak{Z}\) be a finite, faithfully flat, and etale morphism of \(1\)-dimensional smooth Stein spaces over \(L\) such that \(\mathcal{O}_{\mathfrak{W}}(\mathfrak{Y})\) is finitely generated free as an \(\mathcal{O}_{\mathfrak{Z}}(\mathfrak{Z})\)-module. Fix a basis \(f_{1},\ldots,f_{h}\in\mathcal{O}_{\mathfrak{W}}(\mathfrak{Y})\). Going through the proof of Lemma 4.2.8 one checks that on global sections the relative trace map is given by \[t_{\rho}:\Omega^{1}_{\mathfrak{W}}(\mathfrak{Y})=\mathcal{O}_{ \mathfrak{W}}(\mathfrak{Y})\otimes_{\mathcal{O}_{\mathfrak{Z}}(\mathfrak{Z})} \Omega^{1}_{\mathfrak{Z}}(\mathfrak{Z}) \longrightarrow\Omega^{1}_{\mathfrak{Z}}(\mathfrak{Z})\\ \omega=\sum_{i=1}^{h}f_{i}\otimes\omega_{i}\longmapsto\sum_{i=1}^{ h}trace_{\mathcal{O}_{\mathfrak{W}}(\mathfrak{Y})/\mathcal{O}_{\mathfrak{Z}}( \mathfrak{Z})}(f_{i})\omega_{i}. \tag{43}\] Hence we have the commutative diagram of duality pairings \[\begin{CD}\operatorname{Hom}_{L}^{cont}(\mathcal{R}_{L}(\mathfrak{Y})/ \mathcal{O}_{\mathfrak{Y}}(\mathfrak{Y}),L)\xrightarrow{\cong}\Omega^{1}_{ \mathfrak{Y}}(\mathfrak{Y})\\ \operatorname{Hom}_{L}^{cont}(\mathcal{R}_{L}(\mathfrak{Z})/\mathcal{O}_{ \mathfrak{Z}}(\mathfrak{Z}),L)\xrightarrow{\cong}\Omega^{1}_{\mathfrak{Z}}( \mathfrak{Z}).\end{CD} \tag{44}\] It remains to explicitly compute the relative trace map in the cases of interest to us. 
But first we observe that, by the explanation at the end of section 2.3 in [BSX], the sheaf of differentials \(\Omega^{1}_{\mathfrak{Y}}\) on a smooth \(1\)-dimensional Stein group variety \(\mathfrak{Y}\) is a free \(\mathcal{O}_{\mathfrak{Y}}\)-module. Furthermore, if \(\mathfrak{Y}\) is one of our character varieties, say of the group \(G\), then by the construction before Def. 1.27 in [BSX] we have the embedding \[L=\operatorname{Lie}(G) \longrightarrow\mathcal{O}_{\mathfrak{Y}}(\mathfrak{Y})\] \[\mathfrak{x} \longmapsto[\chi\mapsto d\chi(\mathfrak{x})]\] and the function \(\log_{\mathfrak{Y}}\) defined as the image of \(1\in L=\operatorname{Lie}(G)\). **Remark 4.2.9**.: _The function \(\log_{\mathfrak{X}}\) corresponds under the LT-isomorphism \(\kappa\) to the function \(\Omega\log_{LT}\) by the commutative diagram after [ST2, Lemma 3.4]. In particular, \(d\log_{\mathfrak{X}}\) corresponds to \(\Omega d\log_{LT}\) and \(\tilde{c}^{\mathfrak{X}}_{inv}\) to \(\frac{1}{\Omega}\tilde{c}_{inv}\), where \(df=\tilde{c}^{\mathfrak{X}}_{inv}d\log_{\mathfrak{X}}\) defines the invariant derivation on \(\mathcal{O}_{K}(\mathfrak{X})\) similarly as for \(\tilde{c}_{inv}\) in (2)._ **Proposition 4.2.10**.: 1. _For_ \(\pi^{*}_{L}:\mathfrak{X}\to\mathfrak{X}\) _we have_ \(\Omega^{1}_{\mathfrak{X}}(\mathfrak{X})=\mathcal{O}_{\mathfrak{X}}(\mathfrak{ X})d\log_{\mathfrak{X}}\) _and_ \[t_{\pi^{*}_{L}}(fd\log_{\mathfrak{X}})=\frac{q}{\pi_{L}}\psi^{\mathfrak{X}}_{L}( f)d\log_{\mathfrak{X}}\.\] 2. _For_ \(n\geqslant n_{0}\) _and_ \(\ell^{*}_{n}:\mathfrak{X}\xrightarrow{\cong}\mathfrak{X}^{\times}_{n}\) _we have_ \(\Omega^{1}_{\mathfrak{X}^{\times}_{n}}(\mathfrak{X}^{\times}_{n})=\mathcal{O} _{\mathfrak{X}^{\times}_{n}}(\mathfrak{X}^{\times}_{n})d\log_{\mathfrak{X}^{ \times}_{n}}\) _and_ \[t_{\ell^{*}_{n}}(fd\log_{\mathfrak{X}})=\pi^{n}_{L}(\ell^{*}_{n})_{*}(f)d\log_ {\mathfrak{X}^{\times}_{n}}\.\] 3. _For_ \(n\geqslant m\geqslant 1\) _and_ \(\rho_{m,n}:\mathfrak{X}^{\times}_{m}\to\mathfrak{X}^{\times}_{n}\) _we have_ \(\Omega^{1}_{\mathfrak{X}^{\times}_{m}}(\mathfrak{X}^{\times}_{m})=\mathcal{O} _{\mathfrak{X}^{\times}_{m}}(\mathfrak{X}^{\times}_{m})d\log_{\mathfrak{X}^{ \times}_{m}}\) _and_ \[t_{\rho_{m,n}}(fd\log_{\mathfrak{X}^{\times}_{m}})=q^{n-m}f_{1}d\log_{ \mathfrak{X}^{\times}_{n}}\quad\text{if }f=\sum_{i=1}^{h}\operatorname{ev}_{u_{i}}\rho^{*}_{m,n}(f_{i})\.\] _where_ \(u_{1}=1,u_{2},\dots,u_{h}\in U_{m}\) _are representatives for the cosets of_ \(U_{n}\) _in_ \(U_{m}\) _(with_ \(h:=q^{n-m}\)_)._ 4. _For_ \(n\geqslant 1\) _and_ \(\rho_{n}:\mathfrak{X}^{\times}\to\mathfrak{X}^{\times}_{n}\) _we have_ \(\Omega^{1}_{\mathfrak{X}^{\times}}(\mathfrak{X}^{\times})=\mathcal{O}_{ \mathfrak{X}^{\times}}(\mathfrak{X}^{\times})d\log_{\mathfrak{X}^{\times}}\) _and_ \[t_{\rho_{n}}(fd\log_{\mathfrak{X}^{\times}})=(q-1)q^{n-1}f_{1}d\log_{\mathfrak{X} ^{\times}_{n}}\quad\text{if }f=\sum_{i=1}^{h}\operatorname{ev}_{u_{i}}\rho^{*}_{n}(f_{i})\.\] _where_ \(u_{1}=1,u_{2},\dots,u_{h}\in U_{n}\) _are representatives for the cosets of_ \(U_{n}\) _in_ \(o^{\times}_{L}\) _(with_ \(h:=(q-1)q^{n-1}\)_)._ 5. 
_For the multiplication_ \(\mu_{\chi}:\mathfrak{X}^{\times}\xrightarrow{\cong}\mathfrak{X}^{\times}\) _by a fixed point_ \(\chi\in\mathfrak{X}^{\times}(L)\) _we have_ \[t_{\mu_{\chi}}(fd\log_{\mathfrak{X}^{\times}})=\mu_{\chi^{\ast}}(f)d\log_{ \mathfrak{X}^{\times}}=\mu_{\chi^{-1}}^{\ast}(f)d\log_{\mathfrak{X}^{\times}}\.\] Proof.: All subsequent computations start, of course, from the formula (43) for the relative trace map. 1. The assumptions are satisfied by Lemmas 4.1.1 and 4.1.10. As explained at the end of section 2.3 in [BSX], the sheaf of differentials \(\Omega^{1}_{\mathfrak{X}}\) on \(\mathfrak{X}\) is a free \(\mathcal{O}_{\mathfrak{X}}\)-module of rank one with basis the global differential \(d\log_{\mathfrak{X}}\). By [BSX, Lem. 1.28.ii] we have \(\varphi_{L}(\log_{\mathfrak{X}})=\pi_{L}\log_{\mathfrak{X}}\). The formula for \(t_{\pi_{L}^{\ast}}\) now follows from Remark 4.1.4. 2. The assumptions are trivially satisfied. The map \(d\ell_{n}:L=\operatorname{Lie}(U_{n})\to L=\operatorname{Lie}(o_{L})\) is multiplication by \(\pi_{L}^{-n}\). It follows that the isomorphism \((\ell_{n}^{\ast})^{\ast}:\mathcal{O}_{\mathfrak{X}_{n}^{\times}}(\mathfrak{X }_{n}^{\times})\to\mathcal{O}_{\mathfrak{X}}(\mathfrak{X})\) sends \(\log_{\mathfrak{X}_{n}^{\times}}\) to \(\pi_{L}^{-n}\log_{\mathfrak{X}}\). This implies the assertions. 3. The assumptions are satisfied by Remark 4.1.18. The inclusion \(U_{n}\hookrightarrow U_{m}\) of an open subgroup induces the identity map on the Lie algebras. It follows that \(\rho_{m,n}^{\ast}(\log_{\mathfrak{X}_{n}^{\times}})=\log_{\mathfrak{X}_{m}^{ \times}}\). Since \(\rho_{m,n}\) is etale we first may apply this with some \(n\geq n_{0}\) and, using 2., deduce that for \(m\geq 1\). The formula for \(t_{\rho_{m,n}}\) follows by the same argument as in the proof of Remark 4.1.4. 4. The argument is the same as the one for 3. 5. The assumptions are trivially satisfied. Using that \(d\chi(1)=\frac{d}{dt}\chi(\exp(t))_{|t=0}\) we check that \(\log_{\mathfrak{X}^{\times}}(\chi_{1}\chi_{2})=\log_{\mathfrak{X}^{\times}}( \chi_{1})+\log_{\mathfrak{X}^{\times}}(\chi_{2})\) holds true. It follows that \(\mu_{\chi}^{\ast}(d\log_{\mathfrak{X}^{\times}})=d\log_{\mathfrak{X}^{\times}}\) and hence the formula for \(t_{\mu_{\chi}}\). We briefly remark on the case where our Stein space is the open unit disk \(\mathbf{B}\) around zero. Then \(\mathcal{R}_{L}(\mathbf{B})\) is the usual Robba ring of all Laurent series \(f(Z)=\sum_{i\in\mathbb{Z}}c_{i}Z^{i}\) with coefficients \(c_{i}\in L\) which converge in some annulus near 1. Analogously to (41) we have \[H^{1}_{c}(\mathbf{B},\Omega^{1}_{\mathbf{B}})=\mathcal{R}_{L}(\mathbf{B})dZ/ \mathcal{O}_{\mathbf{B}}(\mathbf{B})dZ\] and the trace map sends \(\sum_{i\in\mathbb{Z}}c_{i}Z^{i}dZ\) to its residue which is the coefficient \(c_{-1}\) ([1, SS3.1]). #### 4.2.3 Duality for boundary sections First we recall another functoriality property of Serre duality. **Proposition 4.2.11**.: _Let \(j:\mathfrak{Y}_{0}\to\mathfrak{Y}\) be an open immersion of 1-(equi)dimensional smooth Stein spaces over \(L\), and let \(\mathcal{F}\) be a coherent sheaf on \(\mathfrak{Y}\). Then the diagram_ _is commutative._ Proof.: The commutativity of the Yoneda pairing (before applying trace maps) is immediate from the functoriality of the cohomology with compact support in the coefficient sheaf. The assertion that \(t_{\mathfrak{Y}}\circ j_{i}=t_{\mathfrak{Y}_{0}}\) holds true is shown in [vdP, Thm. 3.7]. 
In order to combine the above functoriality property with Lemma 4.2.3 in the case of the structure sheaf \(\mathcal{F}=\mathcal{O}_{\mathfrak{Y}}\) we first recall the setting of that lemma. 1. \(\mathfrak{Y}=\bigcup_{n}\mathfrak{U}_{n}\) is a Stein covering of the Stein space \(\mathfrak{Y}\) such that the \(\mathfrak{Y}_{n}=\mathfrak{Y}\backslash\mathfrak{U}_{n}\) are Stein spaces as well. In particular, \(\mathcal{R}_{L}(\mathfrak{Y})=\varinjlim_{n}\mathcal{O}_{\mathfrak{Y}}( \mathfrak{Y}_{n})\) with the locally convex inductive limit topology. 2. The restriction maps \(\mathcal{O}_{\mathfrak{Y}}(\mathfrak{Y})\to\mathcal{O}_{\mathfrak{Y}}( \mathfrak{Y}\backslash\mathfrak{U})\) are injective for any \(\mathfrak{U}\in\operatorname{Aff}(\mathfrak{Y})\). 3. The inductive system of Frechet spaces \(\Omega^{1}_{\mathfrak{Y}}(\mathfrak{Y}_{1})\to\ldots\to\Omega^{1}_{ \mathfrak{Y}}(\mathfrak{Y}_{n})\to\Omega^{1}_{\mathfrak{Y}}(\mathfrak{Y}_{n+1 })\to\ldots\) is regular ([PGS, Def. 11.1.3(ii)]). By [PGS, Thm. 11.2.4(ii)] the locally convex inductive limit \[\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{Y})}:=\varinjlim_{n}\Omega^{1}_{ \mathfrak{Y}}(\mathfrak{Y}_{n})=\varinjlim_{\mathfrak{U}\in\operatorname{Aff }(\mathfrak{Y})}\Omega^{1}_{\mathfrak{Y}}(\mathfrak{Y}\backslash\mathfrak{U})\] is a locally convex Hausdorff space. **Proposition 4.2.12**.: _In the above setting 1.-3. we have a natural topological isomorphism_ \[\operatorname{Hom}^{cont}_{L}(\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{Y})},L) \cong\big{(}\varprojlim_{n}\varinjlim_{\mathfrak{U}\in\operatorname{Aff}( \mathfrak{Y}_{n})}\mathcal{O}_{\mathfrak{Y}}(\mathfrak{Y}\backslash\mathfrak{U })\big{)}/\mathcal{O}_{\mathfrak{Y}}(\mathfrak{Y})\] _(where the left hand side is equipped with the strong topology of bounded convergence)._ Proof.: The asserted isomorphism is the composite of the isomorphisms \[\operatorname{Hom}^{cont}_{L}(\Omega^{1}_{\mathcal{R}_{L}( \mathfrak{Y})},L) =\operatorname{Hom}^{cont}_{L}(\varinjlim_{n}\Omega^{1}_{ \mathfrak{Y}}(\mathfrak{Y}_{n}),L)\xrightarrow{\cong}\varprojlim_{n} \operatorname{Hom}^{cont}_{L}(\Omega^{1}_{\mathfrak{Y}}(\mathfrak{Y}_{n}),L)\] \[=\varprojlim_{n}H^{1}_{c}(\mathfrak{Y}_{n},\mathcal{O}_{ \mathfrak{Y}})\] \[=\big{(}\varprojlim_{n}\varinjlim_{\mathfrak{U}\in\operatorname{Aff }(\mathfrak{Y}_{n})}\mathcal{O}_{\mathfrak{Y}}(\mathfrak{Y}\backslash \mathfrak{U})\big{)}/\mathcal{O}_{\mathfrak{Y}}(\mathfrak{Y})\.\] The isomorphism in the first line comes from [PGS, Thm. 11.1.13]. The equality in the second, resp. third, line is a consequence of Thm. 4.2.6 and Prop. 4.2.11, resp. Lemmata 4.2.3 and 4.2.5. We now evaluate this latter result in the same concrete cases as in section 4.2.1. **The open unit disk** First of all, the sheaf of differentials \(\Omega^{1}_{\mathbf{B}}\) on the open unit disk \(\mathbf{B}\) is a free \(\mathcal{O}_{\mathbf{B}}\)-module of rank one. Hence, by choosing, for example the global differential \(dZ\) for a coordinate function \(Z\), as a basis we obtain a topological isomorphism \(\mathcal{R}_{L}(\mathbf{B})\cong\Omega^{1}_{\mathcal{R}_{L}(\mathbf{B})}\) as \(\mathcal{R}_{L}(\mathbf{B})\)-modules. The regularity assumption in 3. above therefore is reduced to the corresponding property for \(\mathcal{R}_{L}(\mathbf{B})\), which is established in the proof of [BSX, Prop. 2.6.i]. Hence Prop. 4.2.12 is available. 
By combining its assertion with (39) we obtain a natural topological isomorphism \[\operatorname{Hom}^{cont}_{L}(\mathcal{R}_{L}(\mathbf{B}),L)\cong \operatorname{Hom}^{cont}_{L}(\Omega^{1}_{\mathcal{R}_{L}(\mathbf{B})},L) \cong\mathcal{R}_{L}(\mathbf{B}). \tag{45}\] This shows that \(\mathcal{R}_{L}(\mathbf{B})\) is topologically selfdual. By going through the definitions and using the explicit description of the trace map in this case as the usual residue map (end of section 4.2.2) one checks that this selfduality comes from the pairing \[\mathcal{R}_{L}(\mathbf{B})\times\mathcal{R}_{L}(\mathbf{B}) \longrightarrow L\] \[(f_{1}(Z),f_{2}(Z)) \longmapsto\text{residue of }f_{1}(Z)f_{2}(Z)dZ.\] This latter form of the result was known ([CR], [MS]) before Serre duality in rigid analysis was established. In this paper it is more natural to use the selfduality given by the pairing \[<\,\ >_{\mathbf{B}}:\mathcal{R}_{L}(\mathbf{B})\times\mathcal{R} _{L}(\mathbf{B}) \longrightarrow L\] \[(f_{1},f_{2}) \longmapsto\text{residue of }f_{1}f_{2}d\log_{LT}.\] We will denote by \(\operatorname{res}_{\mathbf{B}}:\Omega^{1}_{\mathcal{R}_{L}(\mathbf{B})} \to L\) the linear form which corresponds to \(1\in\mathcal{R}_{L}(\mathbf{B})\) under the second isomorphism in (45). #### The character variety \(\mathfrak{X}\) We recall that the sheaf of differentials \(\Omega^{1}_{\mathfrak{X}}\) on \(\mathfrak{X}\) is a free \(\mathcal{O}_{\mathfrak{X}}\)-module of rank one with basis the global differential \(d\log_{\mathfrak{X}}\). Hence again we have a topological isomorphism \(\mathcal{R}_{L}(\mathfrak{X})\cong\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{X})}\) as \(\mathcal{R}_{L}(\mathfrak{X})\)-modules. The regularity assumption in 3. above therefore holds by [BSX, Prop. 2.6.i]. Hence Prop. 4.2.12 is available. By combining its assertion with (40) we obtain a natural topological isomorphism \[\operatorname{Hom}^{cont}_{L}(\mathcal{R}_{L}(\mathfrak{X}),L)\cong \operatorname{Hom}^{cont}_{L}(\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{X})},L) \cong\mathcal{R}_{L}(\mathfrak{X}). \tag{46}\] This shows that \(\mathcal{R}_{L}(\mathfrak{X})\) is topologically selfdual. Let \(\operatorname{res}_{\mathfrak{X}}:\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{X})}\to L\) be the linear form which corresponds to \(1\in\mathcal{R}_{L}(\mathfrak{X})\) under the above isomorphism. Then, as a consequence of the naturality of the Yoneda pairing, this selfduality comes from the pairing \[<\,\ >_{\mathfrak{X}}:\mathcal{R}_{L}(\mathfrak{X})\times \mathcal{R}_{L}(\mathfrak{X}) \longrightarrow L\] \[(f_{1},f_{2}) \longmapsto\operatorname{res}_{\mathfrak{X}}(f_{1}f_{2}d\log_{ \mathfrak{X}}). \tag{47}\] Next we consider \(\mathfrak{X}_{n}^{\times}\) for some \(n\geq n_{0}\), where \(n_{0}\geq 1\) is the integer from section 4.1.2. We then have the isomorphism of Stein group varieties \(\ell_{n}^{*}:\mathfrak{X}\xrightarrow{\cong}\mathfrak{X}_{n}^{\times}\). Hence all we have established for \(\mathfrak{X}\) holds true correspondingly for \(\mathfrak{X}_{n}^{\times}\). In particular, we have a natural topological isomorphism \[\operatorname{Hom}^{cont}_{L}(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times}),L) \cong\operatorname{Hom}^{cont}_{L}(\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{X}_{n }^{\times})},L)\cong\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times}). 
\tag{48}\] Let \(\operatorname{res}_{\mathfrak{X}_{n}^{\times}}:\Omega^{1}_{\mathcal{R}_{L}( \mathfrak{X}_{n}^{\times})}\to L\) be the linear form which corresponds to \(1\in\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\) under this isomorphism. We obtain that \(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\) is topologically selfdual w.r.t. the pairing \[<\,\ >_{\mathfrak{X}_{n}^{\times}}:\mathcal{R}_{L}(\mathfrak{X}_{n }^{\times})\times\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times}) \longrightarrow L\] \[(f_{1},f_{2}) \longmapsto\operatorname{res}_{\mathfrak{X}_{n}^{\times}}(f_{1}f_{ 2}d\log_{\mathfrak{X}_{n}^{\times}})\.\] It follows from [vdP, Thm. 3.7] that the diagram is commutative. But in the proof of Prop. 4.2.10.2 we have seen that \((\ell_{n}^{*})_{*}(\log_{\mathfrak{X}})=\pi_{L}^{n}\log_{\mathfrak{X}_{n}^{ \times}}\). Therefore the diagram of pairings \[\begin{array}{cccc}\mathcal{R}_{L}(\mathfrak{X})&&\times&&\mathcal{R}_{L}( \mathfrak{X})\xrightarrow{<\,\ >_{\mathfrak{X}}}L\\ \parbox{142.26378pt}{$(\ell_{n}^{*})^{*}$}\nmid\cong&&\pi_{L}^{n}(\ell_{n}^{ *})_{*}\nmid\cong\\ \mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})&&\times&&\mathcal{R}_{L}(\mathfrak{ X}_{n}^{\times})\xrightarrow{<\,\ >_{\mathfrak{X}^{\times}}}L\end{array} \tag{49}\] is commutative. Alternatively we could have used the following observation. **Remark 4.2.13**.: _Let \(\rho:\mathfrak{Y}\to\mathfrak{Z}\) be one of the morphisms in Prop. 4.2.10. Then Prop. 4.1.11 applies, and it follows that, for any admissible open subset \(\mathfrak{U}\subseteq\mathfrak{Z}\) which is Stein, the relative trace map \(t_{\rho|\rho^{-1}(\mathfrak{U})}\) is given by the same formula as for \(t_{\rho}\)._ In the case of the morphisms \(\pi_{L}^{*}\) and \(\rho_{m,n}\) for \(n\geqslant m\geqslant n_{0}\) this immediately leads to the following equalities of pairings. **Lemma 4.2.14**.: 1. \(<\varphi_{L}(f_{1}),f_{2}>_{\mathfrak{X}}=\ \frac{q}{\pi_{L}}<f_{1},\psi_{L}^{ \mathfrak{X}}(f_{2})>_{\mathfrak{X}}\) _for any_ \(f_{1},f_{2}\in\mathcal{R}_{L}(\mathfrak{X})\)_._ 2. _Let_ \(n\geqslant m\geqslant n_{0}\) _and let_ \(u_{1}=1,u_{2},\ldots,u_{h}\in U_{m}\) _be representatives for the cosets of_ \(U_{n}\) _in_ \(U_{m}\) _(with_ \(h:=q^{n-m}\)_); for any_ \(f^{\prime}\in\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\) _and any_ \(f=\sum_{i=1}^{h}\operatorname{ev}_{u_{i}}\rho_{m,n}^{*}(f_{i})\in\mathcal{R}_ {L}(\mathfrak{X})_{m}^{\times}\) _(cf. Remark_ 4.1.18_) we have_ \[<\rho_{m,n}^{*}(f^{\prime}),f>_{\mathfrak{X}_{m}^{\times}}=\ q^{n-m}<f^{\prime},f_{1}>_{ \mathfrak{X}_{n}^{\times}}\.\] **The multiplicative character variety**\(\mathfrak{X}^{\times}\) We fix an \(n\geqslant n_{0}\) for the moment as well as representatives \(u_{1}=1,u_{2},\ldots,u_{h}\in o_{L}^{\times}\) for the cosets of \(U_{n}\) in \(o_{L}^{\times}\) (with \(h:=(q-1)q^{n-1}\)). Recalling from Lemma 4.1.15.ii that \[\mathcal{R}_{L}(\mathfrak{X}^{\times})=\mathbb{Z}[o_{L}^{\times}]\otimes_{ \mathbb{Z}[U_{n}]}\rho_{n}^{*}(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times}))\] we may write any \(f\in\mathcal{R}_{L}(\mathfrak{X}^{\times})\) as \(f=\sum_{i=1}^{h}\operatorname{ev}_{u_{i}}\rho_{n}^{*}(f_{i})\) with uniquely determined \(f_{i}\in\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\). 
We now define \[\operatorname{res}_{\mathfrak{X}^{\times}}:\Omega_{\mathcal{R}_{L}(\mathfrak{ X}^{\times})}^{1}\longrightarrow L\quad\text{by }\operatorname{res}_{\mathfrak{X}^{\times}}(fd\log_{\mathfrak{X}^{\times}}):= (q-1)q^{n-1}\operatorname{res}_{\mathfrak{X}_{n}^{\times}}(f_{1}d\log_{ \mathfrak{X}_{n}^{\times}}) \tag{50}\] and then the pairing \[<\,\ >_{\mathfrak{X}^{\times}}:\mathcal{R}_{L}(\mathfrak{X}^{ \times})\times\mathcal{R}_{L}(\mathfrak{X}^{\times}) \longrightarrow L\] \[(f_{1},f_{2}) \longmapsto\operatorname{res}_{\mathfrak{X}^{\times}}(f_{1}f_{2}). \tag{51}\] These definitions are obviously independent of the choice of the representatives \(u_{i}\). Moreover, due to Lemma 4.2.14.ii they are independent of the choice of \(n\) as well and we have \[<\rho_{n}^{*}(f^{\prime}),f>_{\mathfrak{X}^{\times}}\,=\,(q-1)q^{n-1}<f^{\prime},f_{1}>_{\mathfrak{X}_{n}^{\times}} \tag{52}\] for any \(f^{\prime}\in\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\) and any \(f=\sum_{i=1}^{(q-1)q^{n-1}}\mathrm{ev}_{u_{i}}\,\rho_{n}^{*}(f_{i})\in \mathcal{R}_{L}(\mathfrak{X})^{\times}\) where \(u_{i}\in o_{L}^{\times}\) runs through representatives for the cosets of \(U_{n}\) in \(o_{L}^{\times}\). The topological selfduality of \(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\) easily implies that this pairing makes \(\mathcal{R}_{L}(\mathfrak{X}^{\times})\) topological selfdual. **Lemma 4.2.15**.: _The twist morphism \(\mu_{\chi}:\mathfrak{X}^{\times}\to\mathfrak{X}^{\times}\), for any \(\chi\in\mathfrak{X}^{\times}(L)\), satisfies_ \[<\mu_{\chi}^{*}(f_{1}),\mu_{\chi}^{*}(f_{2})>_{\mathfrak{X}^{\times}}\,=\,< f_{1},f_{2}>_{\mathfrak{X}^{\times}}\qquad\text{ for any }f_{1},f_{2}\in\mathcal{R}_{L}(\mathfrak{X}^{\times}).\] Proof.: The assertion immediately reduces to checking the equality \(\mathrm{res}_{\mathfrak{X}^{\times}}\circ\mu_{\chi}^{*}=\mathrm{res}_{ \mathfrak{X}^{\times}}\). Obviously there are twist morphisms on \(\mathfrak{X}_{n}^{\times}\) as well. One easily checks that \[\mu_{\chi}^{*}\circ\rho_{n}^{*}=\rho_{n}^{*}\circ\mu_{\chi|U_{n}}^{*}\] and that \[\mu_{\chi}^{*}(\mathrm{ev}_{u})=\chi(u)\,\mathrm{ev}_{u}\qquad\text{ for any }u\in o_{L}^{\times}.\] Using Lemma 4.1.15.ii we write an \(f\in\mathcal{R}_{L}(\mathfrak{X}^{\times})\) as \(f=\sum_{i=1}^{h}\mathrm{ev}_{u_{i}}\,\rho_{n}^{*}(f_{i})\) and compute \[\mu_{\chi}^{*}(f)=\sum_{i=1}^{h}\mu_{\chi}^{*}(\mathrm{ev}_{u_{i}})\mu_{\chi}^ {*}(\rho_{n}^{*}(f))=\sum_{i=1}^{h}\mathrm{ev}_{u_{i}}\,\rho_{n}^{*}(\chi(u_{i })\mu_{\chi|U_{n}}^{*}(f_{i}))\.\] This shows that \(\mu_{\chi}^{*}(f)_{1}=\mu_{\chi|U_{n}}^{*}(f_{1})\). This further reduces us to showing that \(\mathrm{res}_{\mathfrak{X}_{n}^{\times}}\circ\mu_{\chi|U_{n}}^{*}=\mathrm{res}_ {\mathfrak{X}_{n}^{\times}}\). But this follows from [vdP, Thm. 3.7] or, alternatively, from a version of Prop. 4.2.10.5 for \(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\). Of course, everything in this entire section 4.2 remains valid over any complete extension field \(K\) of \(L\) contained in \(\mathbb{C}_{p}\). Moreover, our constructions above are compatible under (complete) base change: Let \(\mathfrak{Y}_{K}\) denote the base change of \(\mathfrak{Y}\) over \(L\) to \(K\) (and similarly for affinoids). 
Then we obtain a commutative diagram \[\begin{CD}\mathcal{R}_{L}(\mathfrak{Y})@>{}>{}>\times\\ \mathcal{R}_{K}(\mathfrak{Y}_{K})@>{}>{}>\times\end{CD} \tag{53}\] \[\Omega^{1}_{\mathcal{R}_{L}(\mathfrak{Y})}@>{}>{}>K.\] Indeed, it is shown in [Mal] that Serre-duality is compatible with base change in the sense that there is the following commutative diagram for any \(n\), in which the horizontal lines are the Serre-dualities over \(L\) and \(K\), respectively: \[\begin{CD}H^{1}_{c}(\mathfrak{Y}_{n},\mathcal{O}_{\mathfrak{Y}_{K}})@>{}>{} \times\\ H^{1}_{c}(\mathfrak{Y}_{n,K},\mathcal{O}_{\mathfrak{Y}_{K}})@>{}>{}>\times \end{CD}\] Hence, taking limits as in the proof of Proposition 4.2.12 the claim follows upon observing that also the relative cohomology sequence (36) is compatible with base change. By inserting \(1\in\mathcal{R}_{L}(\mathfrak{Y})\subseteq\mathcal{R}_{K}(\mathfrak{Y}_{K})\) into the pairings of (53) we see that in any example discussed above the residue maps resp are compatible under base change as well as the pairings \(<\,,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \, ### \((\varphi_{L},\Gamma_{L})\)-modules As before we let \(L\subseteq K\subseteq\mathbb{C}_{p}\) be a complete intermediate field, and we denote by \(o_{K}\) its ring of integers. #### 4.3.1 The usual Robba ring In sections 4.2.2 and 4.2.3 we already had introduced the usual Robba ring \(\mathcal{R}=\mathcal{R}_{K}=\mathcal{R}_{K}(\mathbf{B})\) of the Stein space \(\mathbf{B}_{/K}\) in connection with Serre duality. We briefly review its construction in more detail. The ring of \(K\)-valued global holomorphic functions \(\mathcal{O}_{K}(\mathbf{B})\)15 on \(\mathbf{B}\) is the Frechet algebra of all power series in the variable \(Z\) with coefficients in \(K\) which converge on the open unit disk \(\mathbf{B}(\mathbb{C}_{p})\). The Frechet topology on \(\mathcal{O}_{K}(\mathbf{B})\) is given by the family of norms Footnote 15: In the notation from [Co2, §1.2] this is the ring \(\mathcal{R}^{+}\). \[|\sum_{i\geqslant 0}c_{i}Z^{i}|_{r}:=\max_{i}|c_{i}|r^{i}\qquad\text{for }0<r<1\.\] In the commutative integral domain \(\mathcal{O}_{K}(\mathbf{B})\) we have the multiplicative subset \(Z^{\mathbb{N}}=\{Z^{j}:j\in\mathbb{N}\}\), so that we may form the corresponding localization \(\mathcal{O}_{K}(\mathbf{B})_{Z^{\mathbb{N}}}\). Each norm \(|\ |_{r}\) extends to this localization \(\mathcal{O}_{K}(\mathbf{B})_{Z^{\mathbb{N}}}\) by setting \(|\sum_{i\geqslant-\infty}c_{i}Z^{i}|_{r}:=\max_{i}|c_{i}|r^{i}\). The Robba ring \(\mathcal{R}\supseteq\mathcal{O}_{K}(\mathbf{B})\) is constructed as follows. For any \(s>0\), resp. 
any \(0<r\leqslant s\), in \(p^{\mathbb{Q}}\) let \(\mathbf{B}_{[0,s]}\), resp. \(\mathbf{B}_{[r,s]}\), denote the affinoid disk around \(0\) of radius \(s\), resp. the affinoid annulus of inner radius \(r\) and outer radius \(s\), over \(K\). For \(I=[0,s]\) or \([r,s]\) we denote by \[\mathcal{R}^{I}:=\mathcal{R}^{I}_{K}(\mathbf{B}):=\mathcal{O}_{K}(\mathbf{B}_{ I})\] the affinoid \(K\)-algebra of \(\mathbf{B}_{I}\). The Frechet algebra \(\mathcal{R}^{[r,1)}:=\varprojlim_{r<s<1}\mathcal{R}^{[r,s]}\) is the algebra of (infinite) Laurent series in the variable \(Z\) with coefficients in \(K\) which converge on the half-open annulus \(\mathbf{B}_{[r,1)}:=\bigcup_{r<s<1}\mathbf{B}_{[r,s]}\). The Banach algebra \(\mathcal{R}^{[0,s]}\) is the completion of \(\mathcal{O}_{K}(\mathbf{B})\) with respect to the norm \(|\ |_{s}\). The Banach algebra \(\mathcal{R}^{[r,s]}\) is the completion of \(\mathcal{O}_{K}(\mathbf{B})_{Z^{\mathbb{N}}}\) with respect to the norm \(|\ |_{r,s}:=\max(|\ |_{r},|\ |_{s})\). It follows that the Frechet algebra \(\mathcal{R}^{[r,1)}\) is the completion of \(\mathcal{O}_{K}(\mathbf{B})_{Z^{\mathbb{N}}}\) in the locally convex topology defined by the family of norms \((|\ |_{r,s})_{r<s<1}\). Finally, the Robba ring is \(\mathcal{R}=\bigcup_{0<r<1}\mathcal{R}^{[r,1)}\). **Remark 4.3.1**.: \(\mathcal{O}_{K}(\mathbf{B})_{Z^{\mathbb{N}}}\) _is dense in \(\mathcal{R}_{K}(\mathbf{B})\)._ Let \(p^{-\frac{d}{(q-1)e}}<r\leqslant s<1\). Then we have a surjective map \[\mathbf{B}_{[r,s]} \to\mathbf{B}_{[r^{q},s^{q}]}\] \[z \mapsto[\pi_{L}](z) \tag{55}\] according to [FX, proof of Lem. 2.6]16 It induces a ring homomorphism Footnote 16: The proof there is only written for the special Lubin-Tate group, but generalizes easily to the general case by using the fact that \([\pi_{L}]=X^{q}+\pi_{L}Xf(X)\) with \(f(X)\in o_{L}[[X]]^{\times}\). \[\varphi_{L}^{[r^{q},s^{q}]}:\mathcal{R}^{[r^{q},s^{q}]}\to\mathcal{R}^{[r,s]} \tag{56}\] which is isometric with respect to the supremum norms, i.e., \(|\varphi_{L}^{[r^{q},s^{q}]}(f)|_{[r,s]}=|f|_{[r^{q},s^{q}]}\) for any \(f\in\mathcal{R}^{[r^{q},s^{q}]}\). In particular, by taking first inverse and then direct limits we obtain a continuous ring homomorphism \(\varphi_{L}:\mathcal{R}\to\mathcal{R}\). We shall often omit the interval in \(\varphi_{L}^{[r,s]}\) and just write \(\varphi_{L}\). Similarly, we obtain a continuous \(\Gamma_{L}\)-action on \(\mathcal{R}\): According to (loc. cit.) we have a bijective map \[\mathbf{B}_{[r,s]} \to\mathbf{B}_{[r,s]}\] \[z \mapsto[\![\chi_{LT}(\gamma)]\!](z) \tag{57}\] for any \(\gamma\in\Gamma_{L}\), whence we obtain an isometric isomorphism \[\gamma:\mathcal{R}^{[r,s]}\to\mathcal{R}^{[r,s]} \tag{58}\] with respect to the supremum norms, i.e., \(|\gamma(f)|_{[r,s]}=|f|_{[r,s]}\) for any \(f\in\mathcal{R}^{[r,s]}\). Finally, we extend the operator \(\psi_{L}\) to \(\mathcal{R}\) : For \(y\in\ker([\pi_{L}])\) we have the isomorphism \[\mathbf{B}_{[r,s]} \to\mathbf{B}_{[r,s]}\] \[z \mapsto z+_{LT}y \tag{59}\] of affinoid varieties, because \(|z+_{LT}y|=|z+y|=|z|\). The latter equality comes from \(|z|\geq r>p^{-\frac{d}{(q-1)\epsilon}}=q^{-\frac{1}{q-1}}=|y|\) for \(y\neq 0\). Setting \(tr(f):=\sum_{y\in\ker([\pi_{L}])}f(z+_{LT}y)\) we obtain a norm decreasing linear map \(tr:\mathcal{R}^{[r,s]}\to\mathcal{R}^{[r,s]}\). 
We claim that the image of \(tr\) is contained in the (closed) image of the isometry \(\varphi_{L}^{[r,s^{q}]}\), whence there is a norm decreasing map \[\psi_{Col}:\mathcal{R}^{[r,s]}\to\mathcal{R}^{[r^{q},s^{q}]},\] such that \(\varphi_{L}\circ\psi_{Col}=tr\). Indeed, by continuity it suffices to show that \(tr(Z^{i})\) belongs to the image for any \(i\in\mathbb{Z}\). For \(i\geq 0\), Coleman has shown that \(tr(Z^{i})=\varphi_{L}(\psi_{Col}(Z^{i}))\) with \(\psi_{Col}(Z^{i})\in o_{L}[[Z]]\subseteq\mathcal{R}^{[r^{q},s^{q}]}\), see [12, SS2]. For \(i<0\), we calculate \[\varphi_{L}(Z^{i}\psi_{Col}([\pi_{L}](Z)^{-i}Z^{i})) =\varphi_{L}(Z^{i})\left(\sum_{y\in\ker([\pi_{L}])}[\pi_{L}](Z)^ {-i}Z^{i}\right)\!(Z+_{LT}y)\] \[=\varphi_{L}(Z^{i})\left(\sum_{y\in\ker([\pi_{L}])}[\pi_{L}]((Z+_ {LT}y))^{-i}(Z+_{LT}y)^{i}\right)\] \[=\varphi_{L}(Z^{i})\left(\sum_{y\in\ker([\pi_{L}])}\varphi_{L}(Z )^{-i}(Z+_{LT}y)^{i}\right)\] \[=\sum_{y\in\ker([\pi_{L}])}(Z+_{LT}y)^{i}=tr(Z^{i}),\] whence the claim follows. We put \(\psi_{L}^{[r,s]}:=\frac{1}{\pi_{L}}\psi_{Col}:\mathcal{R}^{[r,s]}\to\mathcal{R }^{[r^{q},s^{q}]}\) which induces the continuous operator \(\psi_{L}:\mathcal{R}\to\mathcal{R}\) by taking first inverse limits and then direct limits. By definition of \(tr\) the operators \(\psi_{L}^{[r,s]}\) and hence \(\psi_{L}\) satisfy the projection formula. We shall often omit the interval in \(\psi_{L}^{[r,s]}\) and just write \(\psi_{L}\). As in section 4.1.4 we fix a generator \(\eta^{\prime}\) of the dual Tate module \(T^{\prime}_{\pi}\) and denote by \(\Omega\) the corresponding period. For the rest of this subsection we **assume** in addition that \(K\) contains \(\Omega\). Following Colmez in the notation we introduce the power series \(\eta(a,Z):=\exp(a\Omega\log_{LT}(Z))\in o_{K}[[Z]]\) for \(a\in o_{L}\). As noted in section 4.1.4 the power series \(\eta(a,Z)\) is nothing else than the image under the LT-isomorphism \(\kappa^{*}\) of the holomorphic function \(\mathrm{ev}_{a}\in\mathcal{O}_{K}(\mathfrak{X})\). Generalizing the equality (34) we have the following decompositions of Banach spaces \[\mathcal{R}^{[r,s]}=\bigoplus_{a\in o_{L}/\pi_{L}^{n}}\varphi_{L}^{n}(\mathcal{ R}^{[r^{q},s^{q}]})\eta(a,Z) \tag{60}\] and hence \[\mathcal{R}=\bigoplus_{a\in o_{L}/\pi_{L}^{n}}\varphi_{L}^{n}(\mathcal{R}) \eta(a,Z) \tag{61}\] of LF-spaces using the formula \[r=(\frac{\pi_{L}}{q})^{n}\sum_{a}\varphi_{L}^{n}\psi_{L}^{n}\left(\eta(-a,Z)r \right)\eta(a,Z). \tag{62}\] This can easily be reduced by induction on \(n\) to the case \(n=1\). Using the definition of \(tr\) and the orthogonality relations for the characters \(\kappa_{y}\) for \(y\in\ker([\pi_{L}])\), the formula follows and, moreover, defines a continuous inverse to the continuous map \[\mathbb{Z}[o_{L}]\otimes_{\mathbb{Z}[\pi_{L}o_{L}]} \mathcal{R}^{[r^{q},s^{q}]} \stackrel{{\cong}}{{\longrightarrow}} \mathcal{R}^{[r,s]}\] \[a\otimes f \longmapsto a\varphi_{L}(f).\] Inductively, we obtain canonical isomorphisms \[\mathbb{Z}[o_{L}]\otimes_{\mathbb{Z}[\pi_{L}^{n}o_{L}]} \mathcal{R}^{[r^{q^{n}},s^{q}]} \stackrel{{\cong}}{{\longrightarrow}} \mathcal{R}^{[r,s]}\] \[a\otimes f \longmapsto a\varphi_{L}^{n}(f). 
\tag{63}\] Moreover, immediately from the definitions we have \[\varphi_{L}(\eta(a,Z)) =\eta(\pi_{L}a,Z), \tag{65}\] \[\sigma(\eta(a,Z)) =\eta(\chi_{LT}(\sigma)a,Z)\text{ for }\sigma\in\Gamma_{L},\] (66) \[\psi_{L}(\eta(a,Z)) =\frac{q}{\pi_{L}}\eta(\frac{a}{\pi_{L}},Z)\text{ for }a\in\pi_{L}o_{L} \text{ and }=0\text{ otherwise.} \tag{64}\] **Remark 4.3.2**.: _We have \(\psi_{L}=\frac{1}{\pi_{L}}\varphi_{L}^{-1}\circ trace_{\mathcal{R}/\varphi_{ L}(\mathcal{R})}\)._ Proof.: Both maps \(tr\) and \(trace_{\mathcal{R}/\varphi_{L}(\mathcal{R})}\) are easily seen to be \(\varphi_{L}(\mathcal{R})\)-linear and to be multiplication by \(q\) on \(\varphi_{L}(\mathcal{R})\). Hence, by (61), it suffices to compare their values on the elements \(\eta(a,Z)\). By (66) we have \[tr(\eta(a,Z))=\begin{cases}q\varphi_{L}(\eta(\pi_{L}^{-1}a,Z))&\text{if }a\in\pi_{L}o_{L},\\ 0&\text{otherwise.}\end{cases}\] On the other hand a computation as in the proof of Remark 4.1.4 shows that \(trace_{\mathcal{R}/\varphi_{L}(\mathcal{R})}\) has exactly the same values. But \(\psi_{L}=\frac{1}{\pi_{L}}\varphi_{L}^{-1}\circ tr\). For uniformity of notation we put \(\psi_{L}^{\mathbf{B}}:=\frac{\pi_{L}}{q}\psi_{L}=\frac{1}{q}\varphi_{L}^{-1} \circ trace_{\mathcal{R}/\varphi_{L}(\mathcal{R})}\). #### 4.3.2 The LT-isomorphism, part 2 We **assume** throughout this subsection that \(\Omega\) is contained in \(K\). The map \(\kappa:\mathbf{B}\xrightarrow{\cong}\mathfrak{X}\) being an isomorphism of rigid varieties it preserves the systems of affinoid subdomains on both sides. Hence the LT-isomorphism (33) extends to a topological isomorphism \[\kappa^{*}:\mathcal{R}_{K}(\mathfrak{X})\xrightarrow{\cong}\mathcal{R}_{K}( \mathbf{B}). \tag{67}\] In order to have a uniform notation, we usually write from now on \[\mathcal{R}_{K}^{I}(\mathfrak{X}):=\mathcal{O}_{K}(\kappa(\mathbf{B}_{I}))\] for any closed interval \(I\subseteq(0,1)\) so that we have the isomorphism of Banach algebras \[\kappa^{*}:\mathcal{R}_{K}^{I}(\mathfrak{X})\xrightarrow{\cong}\mathcal{R}_{K} ^{I}(\mathbf{B}). \tag{68}\] We warn the reader that only for specific closed intervals \(I\) there is another closed interval \(I^{\prime}\) given by a complicated but explicit rule such that \(\kappa(\mathbf{B}_{I})=\mathfrak{X}_{I^{\prime}}\). The precise statement can be worked out from [2, Prop. 1.20]. In the following we list a few compatibilities under this extended LT-isomorphism. First of all under this isomorphism the \(\Gamma_{L}\cong o_{L}^{\times}\)-action and the maps \(\varphi_{L}\) on both sides correspond to each other (cf. [2, SS2.2]). Then it follows from Remarks 4.1.4 4.3.2 that the operators \(\psi_{L}^{\mathfrak{X}}\) (defined at the end of section 4.1.1) and \(\psi_{L}^{\mathbf{B}}\) (defined in previous section) also correspond under \(\kappa^{*}\). Secondly, as a consequence of [13, Thm. 3.7] we have the commutative diagram (69) This combined with Remark 4.2.9 implies the explicit formula \[\operatorname{res}_{\mathfrak{X}}(fd\log_{\mathfrak{X}})=\Omega\operatorname{ res}_{\mathbf{B}}(\kappa^{*}(f)g_{LT}dZ). \tag{70}\] #### 4.3.3 \(\varphi_{L}\)-modules Let \(\mathfrak{Y}\) be either \(\mathfrak{X}\) or \(\mathbf{B}\) and \(\mathcal{R}:=\mathcal{R}_{K}(\mathfrak{Y})\). Henceforth we will use the operator \(\psi_{L}:=\frac{q}{\pi_{L}}\psi_{L}^{\mathfrak{Y}}\) on \(\mathcal{R}\). 
We also put \[q_{\mathfrak{Y}}:=\begin{cases}p&\text{if }\mathfrak{Y}=\mathfrak{X},\\ q&\text{if }\mathfrak{Y}=\mathbf{B}.\end{cases}\] **Definition 4.3.3**.: _A \(\varphi_{L}\)-module \(M\) over \(\mathcal{R}\) is a finitely generated free \(\mathcal{R}\)-module \(M\) equipped with a semilinear endomorphism \(\varphi_{M}\) such that the \(\mathcal{R}\)-linear map_ \[\varphi_{M}^{lin}:\mathcal{R}\otimes_{\mathcal{R},\varphi_{L}}M \xrightarrow{\cong}M\] \[f\otimes m \longmapsto f\varphi_{M}(m)\] _is bijective._ Technically important is the following fact, which for \(\mathfrak{X}\) is part of the proof of [BSX, Prop. 2.24]. The proof for \(\mathbf{B}\) is entirely analogous. It allows to extend the above maps and decompositions from the previous sections to \(\varphi_{L}\)-modules. For \(r>0\) we introduce the intervals \[I(r,\mathfrak{Y}):=\begin{cases}(r,1)&\text{if }\mathfrak{Y}=\mathfrak{X},\\ \@@LTX@noalign{\vskip 6.0pt plus 2.0pt minus 2.0pt}\omit\cr[r,1)&\text{if }\mathfrak{Y}= \mathbf{B}.\end{cases}\] **Proposition 4.3.4**.: _Let \(M\) be a \(\varphi_{L}\)-module \(M\) over \(\mathcal{R}\). There exists a radius_ \[r_{0}\geq\begin{cases}p^{-\frac{dp}{p-1}}&\text{if }\mathfrak{Y}=\mathfrak{X}, \\ p^{-\frac{dq}{(q-1)e}}&\text{if }\mathfrak{Y}=\mathbf{B}\end{cases}\] _and a finitely generated free \(\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0},\mathfrak{Y}))}\)-module \(M_{0}\) equipped with a semilinear continuous homomorphism_ \[\varphi_{M_{0}}:M_{0}\longrightarrow\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0}, \mathfrak{Y})^{1/q_{\mathfrak{Y}}}})\otimes_{\mathcal{O}_{K}(\mathfrak{Y})_ {I(r_{0},\mathfrak{Y}))}}M_{0}\] _such that the induced \(\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0},\mathfrak{Y})^{1/q_{\mathfrak{Y}}}})\)-linear map_ \[\varphi_{M_{0}}^{lin}:\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0},\mathfrak{Y})^{1 /q_{\mathfrak{Y}}}})\otimes_{\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0},\mathfrak{ Y})),\varphi_{L}}}M_{0}\xrightarrow{\cong}\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0}, \mathfrak{Y})^{1/q_{\mathfrak{Y}}}})\otimes_{\mathcal{O}_{K}(\mathfrak{Y})_{ I(r_{0},\mathfrak{Y}))}}M_{0}\] _is an isomorphism and such that_ \[\mathcal{R}\otimes_{\mathcal{O}_{K}(\mathfrak{Y}_{I(r_{0},\mathfrak{Y}))}}M_ {0}=M\] _with \(\varphi_{L}\otimes\varphi_{M_{0}}\) and \(\varphi_{M}\) corresponding to each other._ The continuity condition for the \(\varphi_{M_{0}}\), of course, refers to the product topology on \(M_{0}\cong(\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0},\mathfrak{Y}))})^{d}\). In the following we fix a \(\varphi_{L}\)-module \(M\) over \(\mathcal{R}\) and a pair \((r_{0},M_{0})\) as in Prop. 4.3.4. For any \(r_{0}\leq r^{\prime}<1\) and any closed interval \(I=[r,s]\subseteq I(r^{\prime},\mathfrak{Y}))\) we then have the finitely generated free modules \[M^{I(r^{\prime},\mathfrak{Y})}:=\mathcal{O}_{K}(\mathfrak{Y})_{I(r^{\prime}, \mathfrak{Y}))}\otimes_{\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0},\mathfrak{Y})) }}M_{0}\quad\text{over }\mathcal{R}^{[r,1)}\] and \[M^{I}:=\mathcal{O}_{K}(\mathfrak{Y})_{I})\otimes_{\mathcal{O}_{K}(\mathfrak{Y }_{I(r^{\prime},\mathfrak{Y}))}}M^{I(r^{\prime},\mathfrak{Y}))}\quad\text{ over }\mathcal{O}_{K}(\mathfrak{Y})_{I})\.\] They satisfy \[M^{I(r^{\prime},\mathfrak{Y})}=\varprojlim_{s>r}M^{I}\qquad\text{and}\qquad M =\varprojlim_{r^{\prime}}M^{I(r^{\prime},\mathfrak{Y}))}. 
\tag{71}\] We equip \(M^{I}\) with the Banach norm \(|-|_{M^{I}}\) given by the maximum norm with respect to any fixed basis (the induced topology does not depend on the choice of basis) which is submultiplicative with respect to scalar multiplication and the norm \(|-|_{I}\) on \(\mathcal{O}_{K}(\mathfrak{Y}_{I})\). Furthermore, base change with \(\mathcal{O}_{K}(\mathfrak{Y})_{I^{1/q_{\mathfrak{Y}}}})\) over \(\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0},\mathfrak{Y})^{1/q_{\mathfrak{Y}}}})\) induces isomorphisms of Banach spaces \[\varphi_{lin}^{I}=\mathcal{O}_{K}(\mathfrak{Y})_{I^{1/q_{\mathfrak{Y}}}}) \otimes_{\mathcal{O}_{K}(\mathfrak{Y})_{I(r_{0},\mathfrak{Y})^{1/q_{ \mathfrak{Y}}})}}\varphi_{M_{0}}^{lin}:\mathcal{O}_{K}(\mathfrak{Y})_{I^{1/q_ {\mathfrak{Y}}})}\otimes_{\mathcal{O}_{K}(\mathfrak{Y}_{I}),\varphi_{L}}M^{I} \xrightarrow{\cong}M^{I^{1/q_{\mathfrak{Y}}}}\] and hence injective, continuous maps \[\varphi^{I}:M^{I}\to M^{I^{1/q_{\mathfrak{W}}}}\] by restriction. Assuming that \(I^{q_{\mathfrak{W}}}\subseteq I(r^{\prime},\mathfrak{Y})\) we define the additive, \(K\)-linear, continuous map \(\psi^{I}:M^{I}\to M^{I^{q_{\mathfrak{W}}}}\) as the composite \[\psi^{I}:M^{I}\xrightarrow{(\varphi^{I^{q_{\mathfrak{W}}}}_{lin})^{-1}} \mathcal{O}_{K}(\mathfrak{Y}_{I})\otimes_{\mathcal{O}_{K}(\mathfrak{Y}_{I^{q_ {\mathfrak{W}}}}),\varphi_{L}}M^{I^{q_{\mathfrak{W}}}}\to M^{I^{q_{\mathfrak{W} }}},\] where the last map sends \(f\otimes m\) to \(\psi^{I}(f)m.\) By construction, it satisfies the projection formulas \[\psi^{I}(\varphi^{I^{q_{\mathfrak{W}}}}(f)m)=f\psi^{I}(m)\qquad\text{and} \qquad\psi^{I}(g\varphi^{I^{q_{\mathfrak{W}}}}(m^{\prime}))=\psi^{I}(g)m^{ \prime}\, \tag{72}\] for any \(f\in\mathcal{O}_{K}(\mathfrak{Y}_{I^{q_{\mathfrak{W}}}}),\)\(g\in\mathcal{O}_{K}(\mathfrak{Y}_{I})\) and \(m\in M^{I},\)\(m^{\prime}\in M^{I^{q_{\mathfrak{W}}}}\) as well as the formula \[\psi^{I}\circ\varphi^{I^{q_{\mathfrak{W}}}}=\frac{q}{\pi_{L}}\cdot\mathrm{id} _{M^{I^{q_{\mathfrak{W}}}}}\ \.\] Using Prop. 4.1.14 in case \(\mathfrak{Y}=\mathfrak{X}\), resp. 
the decomposition (60) in case \(\mathfrak{Y}=\mathbf{B}\) (under the assumption that \(\Omega\) is contained in \(K\)), combined with (iterates of) \(\varphi^{I}_{lin}\), gives rise to decompositions \[M^{I^{1/q_{\mathfrak{Y}}^{n}}}=\begin{cases}\bigoplus_{a\in(o_{L}/\pi_{L}^{n})}\ \mathrm{ev}_{a}\,\varphi^{n}_{L}(M^{I})&\text{if }\mathfrak{Y}=\mathfrak{X},\\ \bigoplus_{a\in(o_{L}/\pi_{L}^{n})}\ \eta(a,Z)\varphi^{n}_{L}(M^{I})&\text{if }\mathfrak{Y}=\mathbf{B}\text{ and }\Omega\in K\end{cases} \tag{73}\] of Banach spaces and \[M=\begin{cases}\bigoplus_{a\in(o_{L}/\pi_{L}^{n})}\ \mathrm{ev}_{a}\,\varphi^{n}_{L}(M)&\text{if }\mathfrak{Y}=\mathfrak{X},\\ \bigoplus_{a\in(o_{L}/\pi_{L}^{n})}\ \eta(a,Z)\varphi^{n}_{L}(M)&\text{if }\mathfrak{Y}=\mathbf{B}\text{ and }\Omega\in K\end{cases} \tag{74}\] of LF-spaces, again given by the formula \[m=\begin{cases}(\frac{\pi_{L}}{q})^{n}\sum_{a}\varphi_{M}^{n}\psi_{M}^{n}\left(\mathrm{ev}_{-a}\,m\right)\mathrm{ev}_{a}&\text{if }\mathfrak{Y}=\mathfrak{X},\\ (\frac{\pi_{L}}{q})^{n}\sum_{a}\varphi_{M}^{n}\psi_{M}^{n}\left(\eta(-a,Z)m\right)\eta(a,Z)&\text{if }\mathfrak{Y}=\mathbf{B}\text{ and }\Omega\in K.\end{cases} \tag{75}\]

#### 4.3.4 The Robba ring of a group

Recall that \(L_{n}=L(\ker([\pi_{L}^{n}]))\). We set \[\Gamma_{n}:=G(L_{\infty}/L_{n})=\ker\left(\Gamma_{L}\xrightarrow{\chi_{LT}}o_{L}^{\times}\to(o_{L}/\pi_{L}^{n})^{\times}\right).\] Also recall from section 4.1.2 the notation \(U_{n}:=1+\pi_{L}^{n}o_{L}\) for \(n\geq 1\) and the isomorphisms \(\log:U_{n}\xrightarrow{\cong}\pi_{L}^{n}o_{L}\) and \(\ell_{n}=\pi_{L}^{-n}\log:U_{n}\xrightarrow{\cong}o_{L}\) for \(n\geq n_{0}\), where \(n_{0}\geq 1\) is minimal among the \(n\) such that \(\log:1+\pi_{L}^{n}o_{L}\to\pi_{L}^{n}o_{L}\) and \(\exp:\pi_{L}^{n}o_{L}\to 1+\pi_{L}^{n}o_{L}\) are mutually inverse isomorphisms. Obviously \(\chi_{LT}\) restricts to isomorphisms \(\Gamma_{n}\cong U_{n}\) for any \(n\geq 1\). Consider the composed maps \[\hat{\ell}:=\log\circ\chi_{LT}:\Gamma_{L}\to L\qquad\text{and}\qquad\hat{\ell}_{n}:=\ell_{n}\circ\chi_{LT}:\Gamma_{n}\xrightarrow{\cong}o_{L}\quad\text{for }n\geq n_{0}.\] The latter isomorphisms induce isomorphisms of Frechet algebras \(D(\Gamma_{n},K)\stackrel{{\cong}}{{\longrightarrow}}D(o_{L},K)\). Because of the isomorphisms \(\Gamma_{L}\cong o_{L}^{\times}\) and \(\Gamma_{n}\cong U_{n}\) the formalism of character varieties and corresponding Robba rings applies to the groups \(\Gamma_{L}\) and \(\Gamma_{n}\) as well, giving us the corresponding character varieties \(\mathfrak{X}_{\Gamma_{L}}\) and \(\mathfrak{X}_{\Gamma_{n}}\), and the results of section 4.1.2 transfer to this setting. To make a clear distinction we put \(\mathcal{R}_{K}(\Gamma_{L}):=\mathcal{R}_{K}(\mathfrak{X}_{\Gamma_{L}})\) and \(\mathcal{R}_{K}(\Gamma_{n}):=\mathcal{R}_{K}(\mathfrak{X}_{\Gamma_{n}})\) and call them the Robba rings of the groups \(\Gamma_{L}\) and \(\Gamma_{n}\). Clearly the Lubin-Tate character \(\chi_{LT}\) induces topological ring isomorphisms \[\mathcal{R}_{K}(\Gamma_{L})\stackrel{{\cong}}{{\longrightarrow}}\mathcal{R}_{K}(\mathfrak{X}^{\times})\qquad\text{and}\qquad\mathcal{R}_{K}(\Gamma_{n})\stackrel{{\cong}}{{\longrightarrow}}\mathcal{R}_{K}(\mathfrak{X}_{n}^{\times})\text{ for }n\geqslant 1. \tag{76}\] If \(\Gamma\) denotes any of these groups then we will very often view, via the Fourier isomorphism, \(K[\Gamma]\subseteq D(\Gamma,K)\) as subrings of \(\mathcal{R}_{K}(\Gamma)\).
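As a plausibility check of formula (75) in the \(\mathbf{B}\)-case, take \(n=1\) and \(m=\eta(b,Z)\varphi_{M}(y)\) with \(b\in o_{L}\) and \(y\in M\); the following sketch uses only the projection formula (72) together with the standard identities \(\psi_{L}(\eta(c,Z))=0\) for \(c\in o_{L}^{\times}\) and \(\psi_{L}(\varphi_{L}(f))=\frac{q}{\pi_{L}}f\) (presumably the content of the formulas (65) and (66) invoked in section 4.3.7, which are not reproduced here). For a representative \(a\) of a class in \(o_{L}/\pi_{L}\) one finds \[\frac{\pi_{L}}{q}\,\varphi_{M}\psi_{M}\big(\eta(-a,Z)m\big)\,\eta(a,Z)=\frac{\pi_{L}}{q}\,\varphi_{M}\psi_{M}\big(\eta(b-a,Z)\varphi_{M}(y)\big)\,\eta(a,Z)=\begin{cases}\eta(b,Z)\varphi_{M}(y)=m&\text{if }a\equiv b\bmod\pi_{L},\\ 0&\text{otherwise,}\end{cases}\] since for \(a\equiv b\bmod\pi_{L}\), writing \(b-a=\pi_{L}c\), one has \(\psi_{M}(\eta(\pi_{L}c,Z)\varphi_{M}(y))=\psi_{M}(\varphi_{M}(\eta(c,Z)y))=\frac{q}{\pi_{L}}\eta(c,Z)y\) and \(\varphi_{M}(\eta(c,Z)y)=\eta(b-a,Z)\varphi_{M}(y)\), while for \(a\not\equiv b\bmod\pi_{L}\) the element \(b-a\) is a unit. Summing over \(a\in o_{L}/\pi_{L}\) thus indeed recovers \(m\), in accordance with the decomposition (74).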
In particular we consider elements \(\gamma\in\Gamma\) as elements of the Robba ring, writing them in any of the forms \(\gamma=\delta_{\gamma}=\operatorname{ev}_{\gamma}\). Let \(n\geqslant m\geqslant 1\). The inclusions \(\iota_{n}:\Gamma_{n}\hookrightarrow\Gamma_{L}\) and \(\iota_{n,m}:\Gamma_{n}\hookrightarrow\Gamma_{m}\) induce, by the transfer of the results in section 4.1.2, ring monomorphisms \(\iota_{n*}:\mathcal{R}_{K}(\Gamma_{n})\hookrightarrow\mathcal{R}_{K}(\Gamma_{L})\) and \(\iota_{n,m*}:\mathcal{R}_{K}(\Gamma_{n})\hookrightarrow\mathcal{R}_{K}(\Gamma_{m})\). More precisely we have (Lemma 4.1.15 and Remark 4.1.18) topological ring isomorphisms \[\mathbb{Z}[\Gamma_{L}]\otimes_{\mathbb{Z}[\Gamma_{n}]}\mathcal{R}_{K}(\Gamma_{n})\stackrel{{\cong}}{{\longrightarrow}}\mathcal{R}_{K}(\Gamma_{L}) \tag{77}\] and \[\mathbb{Z}[\Gamma_{m}]\otimes_{\mathbb{Z}[\Gamma_{n}]}\mathcal{R}_{K}(\Gamma_{n})\stackrel{{\cong}}{{\longrightarrow}}\mathcal{R}_{K}(\Gamma_{m}). \tag{78}\] Here the left hand sides are viewed as free \(\mathcal{R}_{K}(\Gamma_{n})\)-modules endowed with the product topology. We also note that, for \(n\geqslant m\geqslant n_{0}\), the inclusion \(\Gamma_{n}\subseteq\Gamma_{m}\) induces the commutative diagrams \[\begin{CD}D(\Gamma_{n},K)@>{\hat{\ell}_{n*}}>{}>D(o_{L},K)@>{\cong}>{Fourier}>\mathcal{O}_{K}(\mathfrak{X})\\ @V{\iota_{n,m*}}VV@V{(\pi_{L}^{n-m})_{*}}VV@VV{\varphi_{L}^{n-m}}V\\ D(\Gamma_{m},K)@>{\hat{\ell}_{m*}}>{}>D(o_{L},K)@>{\cong}>{Fourier}>\mathcal{O}_{K}(\mathfrak{X})\end{CD} \tag{79}\] and \[\begin{CD}\mathcal{R}_{K}(\Gamma_{n})@>{\hat{\ell}_{n*}}>{\cong}>\mathcal{R}_{K}(\mathfrak{X})\\ @V{\iota_{n,m*}}VV@VV{\varphi_{L}^{n-m}}V\\ \mathcal{R}_{K}(\Gamma_{m})@>{\hat{\ell}_{m*}}>{\cong}>\mathcal{R}_{K}(\mathfrak{X}).\end{CD} \tag{80}\] For the rest of this subsection we **assume** that \(\Omega\) is contained in \(K\). Let \(n\geq n_{0}\). We then have the isomorphisms of rigid varieties \[\mathbf{B}\xrightarrow[\kappa]{\;\simeq\;}\mathfrak{X}\xrightarrow[\hat{\ell}_{n}^{*}]{\;\simeq\;}\mathfrak{X}_{\Gamma_{n}}\.\] For any closed interval \(I\subseteq(0,1)\) we therefore have the affinoid subdomain \(\hat{\ell}_{n}^{*}\circ\kappa(\mathbf{B}_{I})\) in \(\mathfrak{X}_{\Gamma_{n}}\) and we may introduce the Banach algebra \(\mathcal{R}_{K}^{I}(\Gamma_{n}):=\mathcal{O}_{K}(\hat{\ell}_{n}^{*}\circ\kappa(\mathbf{B}_{I}))\). By its very construction the diagram (81), for \(n\geq m\geq n_{0}\), is commutative. Together with (63) it implies the canonical isomorphism \[\mathbb{Z}[\Gamma_{m}]\otimes_{\mathbb{Z}[\Gamma_{n}]}\mathcal{R}_{K}^{I^{q^{n-m}}}(\Gamma_{n})\stackrel{{\cong}}{{\longrightarrow}}\mathcal{R}_{K}^{I}(\Gamma_{m}). \tag{82}\] We will denote the composite of the Fourier and the LT-isomorphism by \[\mathfrak{F}:D(o_{L},K)\stackrel{{\cong}}{{\longrightarrow}}\mathcal{O}_{K}(\mathfrak{X})\stackrel{{\cong}}{{\longrightarrow}}\mathcal{O}_{K}(\mathbf{B})\.\] Recall that \(\mathcal{O}_{K}(\mathbf{B})\) is a space of certain power series in the variable \(Z\). We put \[X:=\mathfrak{F}^{-1}(Z)\in D(o_{L},K)\qquad\text{and}\qquad Y_{n}:=\hat{\ell}_{n*}^{-1}(X)\in D(\Gamma_{n},K)\text{ for }n\geq n_{0}.\] In this way we can express elements in these distribution algebras as power series in these variables. This will later on be an important technical tool for our proofs. As an immediate consequence of Remark 4.3.1 we have the following. **Remark 4.3.5**.: 1. \(D(o_{L},K)_{X^{\mathbb{N}}}\) _is dense in_ \(\mathcal{R}_{K}(\mathfrak{X})\)_._ 2.
\(D(\Gamma_{n},K)_{Y_{n}^{\mathbb{N}}}\) _is dense in_ \(\mathcal{R}_{K}(\Gamma_{n})\) _for_ \(n\geq n_{0}\)_._

#### 4.3.5 Locally \(\mathbb{Q}_{p}\)-analytic versus locally \(L\)-analytic distribution algebras.

We fix a \(\mathbb{Z}_{p}\)-basis \(h_{1}=1,\ldots,h_{d}\) of \(o_{L}\) and set \(b_{i}:=h_{i}-1\) and, for any multiindex \(\mathbf{k}=(k_{1},\ldots,k_{d})\in\mathbb{N}_{0}^{d}\), \(\mathbf{b}^{\mathbf{k}}:=\prod_{i=1}^{d}b_{i}^{k_{i}}\in\mathbb{Z}_{p}[o_{L}]\). We write \(D_{\mathbb{Q}_{p}}(G,K)\) for the algebra of \(K\)-valued locally \(\mathbb{Q}_{p}\)-analytic distributions on a \(\mathbb{Q}_{p}\)-Lie group \(G\). Any \(\lambda\in D_{\mathbb{Q}_{p}}(o_{L},K)\) has a unique convergent expansion \(\lambda=\sum_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\alpha_{\mathbf{k}}\mathbf{b}^{\mathbf{k}}\) with \(\alpha_{\mathbf{k}}\in K\) such that, for any \(0<r<1\), the set \(\{|\alpha_{\mathbf{k}}|r^{\circ|\mathbf{k}|}\}_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\) is bounded, where \(\circ:=2\) if \(p=2\) and \(\circ:=1\) otherwise. The completion with respect to the norm \[\|\lambda\|_{\mathbb{Q}_{p},r}:=\sup_{\mathbf{k}\in\mathbb{N}_{0}^{d}}|\alpha_{\mathbf{k}}|r^{\circ|\mathbf{k}|}\] for \(0<r<1\) is denoted by \[D_{\mathbb{Q}_{p},r}(o_{L},K)=\{\sum_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\alpha_{\mathbf{k}}\mathbf{b}^{\mathbf{k}}\mid\alpha_{\mathbf{k}}\in K\text{ and }|\alpha_{\mathbf{k}}|r^{\circ|\mathbf{k}|}\to 0\text{ as }|\mathbf{k}|\rightarrow\infty\}.\] By [Sc1, Prop. 2.1] the group \(o_{L}\) satisfies the hypothesis \((HYP)\) of [ST] with a \(p\)-valuation \(\omega\) satisfying \(\omega(h_{i})=\circ\). Thus by [ST, Thm. 4.5], restricting to the subfamily \(q^{-e}<r<1\), \(r\in p^{\mathbb{Q}}\), the norms \(\|-\|_{\mathbb{Q}_{p},r}\) are multiplicative. If not otherwise specified, we denote by \(V\otimes_{K}W\) the projective tensor product of locally convex \(K\)-vector spaces \(V,W\). **Lemma 4.3.6**.: _Let_ \[0\longrightarrow V\longrightarrow W\longrightarrow X\longrightarrow 0\] _be a strict exact sequence of locally convex topological \(K\)-vector spaces with \(W\) metrizable and \(X\) Hausdorff. Then_ 1. _the sequence of the associated Hausdorff completed spaces_ \[0\longrightarrow\hat{V}\longrightarrow\hat{W}\longrightarrow\hat{X}\longrightarrow 0\] _is again strict exact;_ 2. _for a complete valued field extension_ \(F\) _of_ \(K\) _the associated sequence of completed base extensions_ \[0\longrightarrow F\widehat{\otimes}_{K}V\longrightarrow F\widehat{\otimes}_{K}W\longrightarrow F\widehat{\otimes}_{K}X\longrightarrow 0\] _is again strict exact;_ 3. _if_ \(W\) _is a_ \(K\)_-Banach space,_ \(V\) _a closed subspace with the induced norm and_ \(X=W/V\) _endowed with the quotient norm, then in (ii) the quotient norm coincides with the tensor product norm on_ \(F\widehat{\otimes}_{K}X\)_._ Proof.: By [B-TVS, I.17 §2], with \(W\) also \(V\), \(X\) and all their completions are metrizable. Hence the first statement follows from [B-TG, IX.26 Prop. 5]. For the second statement we first obtain the exact sequence of metrizable locally convex spaces ([PGS, Thm. 10.3.13]). The first non-trivial map is strict by Thm. 10.3.8 in (loc. cit.). Regarding the strictness of the second map one easily checks that \(F\otimes_{K}W/F\otimes_{K}V\), endowed with the quotient topology, satisfies the universal property of the projective tensor product \(F\otimes_{K}X\). Now apply (i). The third item is contained in [G, §3, \(n^{\circ}\) 2, Thm. 1], see also [vR, Thm. 4.28].
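To make the norms \(\|-\|_{\mathbb{Q}_{p},r}\) concrete, consider the simplest case \(L=\mathbb{Q}_{p}\), \(d=1\), \(p\) odd (so \(\circ=1\), \(h_{1}=1\) and \(b=\delta_{1}-1\)); this is only an illustration and not needed in the sequel. The Dirac distribution of \(a\in\mathbb{Z}_{p}\) has the Mahler-type expansion \[\delta_{a}=(1+b)^{a}=\sum_{k\geq 0}\binom{a}{k}b^{k},\qquad\text{so}\qquad\|\delta_{a}\|_{\mathbb{Q}_{p},r}=\sup_{k\geq 0}\Big{|}\binom{a}{k}\Big{|}\,r^{k}\leq 1,\] and in fact \(\|\delta_{a}\|_{\mathbb{Q}_{p},r}=1\), since \(\|1\|_{\mathbb{Q}_{p},r}=1\) and the norm is submultiplicative with \(\delta_{a}\delta_{-a}=1\).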
The kernel of the surjection of Frechet spaces \(D_{\mathbb{Q}_{p}}(o_{L},K)\twoheadrightarrow D(o_{L},K)\) is generated as a closed ideal by \(\mathfrak{a}:=\ker\big(L\otimes_{\mathbb{Q}_{p}}\operatorname{Lie}_{\mathbb{Q}_{p}}(o_{L})\xrightarrow{a\otimes\mathfrak{x}\mapsto a\mathfrak{x}}\operatorname{Lie}_{L}(o_{L})\big)\). For \(K=L\) this is [Sc1, Lemma 5.1]. As seen in the proof of Lemma 4.1.2 we have \(K\widehat{\otimes}_{L}D_{\mathbb{Q}_{p}}(o_{L},L)=D_{\mathbb{Q}_{p}}(o_{L},K)\) and \(K\widehat{\otimes}_{L}D(o_{L},L)=D(o_{L},K)\). Hence the assertion for general \(K\) follows from Lemma 4.3.6(ii). We write \(D_{r}(o_{L},K)\) for the completion of \(D(o_{L},K)\) with respect to the quotient norm \(\|-\|_{r}\) of \(\|-\|_{\mathbb{Q}_{p},r}\). By the proof of [ST, Prop. 3.7] we have the exact sequence of \(K\)-Banach algebras \[0\longrightarrow\hat{\mathfrak{a}}_{r}\longrightarrow D_{\mathbb{Q}_{p},r}(o_{L},K)\longrightarrow D_{r}(o_{L},K)\longrightarrow 0 \tag{83}\] where \(\hat{\mathfrak{a}}_{r}\) denotes the closed ideal generated by \(\mathfrak{a}\). Moreover, the \(K\)-Banach algebras \(D_{r}(o_{L},K)\) realize a Frechet-Stein structure on \(D(o_{L},K)\). For convenience we set \(\mathfrak{r}_{0}:=q^{-\circ e}\) and \(\mathfrak{r}_{m}:=q^{-\frac{\circ e}{p^{m}}}\) for \(m\geq 1\). We, of course, have \[D(o_{L},K)=\varprojlim_{m}D_{\mathfrak{r}_{m}}(o_{L},K)\.\] Moreover, according to [Sc2, Cor. 5.13] one has \(D_{\mathfrak{r}_{m}}(o_{L},K)=\mathbb{Z}[o_{L}]\otimes_{\mathbb{Z}[p^{m}o_{L}]}D_{\mathfrak{r}_{0}}(p^{m}o_{L},K)\). We have corresponding results, and will be using analogous notation, for groups isomorphic to \(o_{L}\). This applies, in particular, to \(\Gamma_{n}\) for any \(n\geq n_{0}\). Note that \(\Gamma_{n}^{p^{m}}=\Gamma_{n+me}\).

#### 4.3.6 \((\varphi_{L},\Gamma_{L})\)-modules

We recall the definition of, as well as a few known facts about, \((\varphi_{L},\Gamma_{L})\)-modules (cf. [BSX]). Let \(\mathfrak{Y}\) be either \(\mathfrak{X}\) or \(\mathbf{B}\) and \(\mathcal{R}:=\mathcal{R}_{K}(\mathfrak{Y})\). Any \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}\) is, by definition, in particular an \(\mathcal{R}\)-module with a semilinear action of the group \(\Gamma_{L}\). Our aim in this section is to show that these two structures on \(M\) give rise to a module structure on \(M\) over the 'group' Robba ring \(\mathcal{R}_{K}(\Gamma_{L})\). **Definition 4.3.7**.: _A \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}\) is a \(\varphi_{L}\)-module \(M\) (see Definition 4.3.3) equipped with a semilinear continuous action of \(\Gamma_{L}\) which commutes with the endomorphism \(\varphi_{M}\). We shall write \(\mathcal{M}(\mathcal{R})\) for the category of \((\varphi_{L},\Gamma_{L})\)-modules over \(\mathcal{R}\)._ The continuity condition for the \(\Gamma_{L}\)-action on \(M\), of course, refers to the product topology on \(M\cong\mathcal{R}^{d}\). According to [BSX, Prop. 2.25] the \(\Gamma_{L}\)-action on a \((\varphi_{L},\Gamma_{L})\)-module \(M\) is differentiable, so that the derived action of the Lie algebra \(\mathrm{Lie}(o_{L}^{\times})\) on \(M\) is available.
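Before turning to \(L\)-analyticity, it may help to record the basic example (standard, and stated here only for orientation): \(M=\mathcal{R}\) itself, with \(\varphi_{M}=\varphi_{L}\) and the natural \(\Gamma_{L}\)-action, is a \((\varphi_{L},\Gamma_{L})\)-module of rank one. In the \(\mathbf{B}\)-case, where \(\gamma\) acts by \(f\mapsto f([\chi_{LT}(\gamma)](Z))\), the derived action of \(1\in L=\mathrm{Lie}(\Gamma_{L})\) is the differential operator \[\nabla(f)=\frac{d}{ds}\,f\big([\exp(s)](Z)\big)\Big{|}_{s=0}=\log_{LT}(Z)\,g_{LT}(Z)^{-1}\,\frac{df}{dZ}\] (a sketch of the computation, using \(\log_{LT}'=g_{LT}\) as in section 4.4). Since the resulting map \(\mathrm{Lie}(\Gamma_{L})\to\mathrm{End}(\mathcal{R})\), \(\mathfrak{x}\mapsto\mathfrak{x}\nabla\), is then visibly \(L\)-linear, the module \(\mathcal{R}\) is \(L\)-analytic in the sense of the next definition.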
**Definition 4.3.8**.: _The \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}\) is called \(L\)-analytic if the derived action \(\mathrm{Lie}(\Gamma_{L})\times M\to M\) is \(L\)-bilinear, i.e., if the induced action \(\mathrm{Lie}(\Gamma_{L})\to\mathrm{End}(M)\) of the Lie algebra \(\mathrm{Lie}(\Gamma_{L})\) of \(\Gamma_{L}\) is \(L\)-linear (and not just \(\mathbb{Q}_{p}\)-linear). We shall write \(\mathcal{M}^{an}(\mathcal{R})\) for the category of \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-modules over \(\mathcal{R}\)._ In [BSX] a \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}\) is only required to be projective instead of free as in our definition. Since throughout this paper we are exclusively interested in \(L\)-analytic modules, this makes no difference, as by [BSX, Thm. 3.17] any \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-module \(M\) is actually a free \(\mathcal{R}\)-module. We have the following variant of Prop. 4.3.4 (cf. [BSX, Prop. 2.24]). **Proposition 4.3.9**.: _Let \(M\) be a \((\varphi_{L},\Gamma_{L})\)-module over \(\mathcal{R}\). Then there exists a model \((M_{0},r_{0})\) as in Prop. 4.3.4 equipped with a semilinear continuous action of \(\Gamma_{L}\) such that the identification_ \[\mathcal{R}\otimes_{\mathcal{R}^{[r_{0},1)}}M_{0}=M\] _respects the \(\Gamma_{L}\)-actions (acting diagonally on the left hand side)._ From now on in this subsection **we fix a \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}\) and a pair \((r_{0},M_{0})\) as in Prop. 4.3.9**. We then have available the objects introduced after Prop. 4.3.4. But now the finitely generated free modules \(M^{I(r^{\prime},\mathfrak{Y})}\) and \(M^{I}\) are each in addition equipped with a semilinear continuous \(\Gamma_{L}\)-action, compatible with the identities (71). Moreover, the \(\Gamma_{L}\)-action commutes with the \(\psi^{I}\)-operators, and the decompositions (74) and (73) are \(\Gamma_{L}\)-equivariant. **Assume** henceforth in this subsection that \(M\) is an \(L\)-_analytic_ \((\varphi_{L},\Gamma_{L})\)-module over \(\mathcal{R}\). **Proposition 4.3.10**.: _The \(\Gamma_{L}\)-action on \(M\) extends uniquely to a separately continuous action of the locally \(L\)-analytic distribution algebra \(D(\Gamma_{L},K)\) of \(\Gamma_{L}\) with coefficients in \(K\). If \(M\stackrel{{f}}{{\longrightarrow}}N\) is a homomorphism of \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-modules, then \(f\) is \(D(\Gamma_{L},K)\)-equivariant with regard to this action._ Proof.: First of all we observe that the Dirac distributions generate a dense \(L\)-subspace in \(D(\Gamma_{L},L)\) by [12, Lem. 3.1]. Since \(\Gamma_{L}\cong o_{L}^{\times}\) we have seen in the proof of Lemma 4.1.2 that \(D(\Gamma_{L},K)=K\widehat{\otimes}_{L}D(\Gamma_{L},L)\). Hence the Dirac distributions also generate a dense \(K\)-linear subspace of \(D(\Gamma_{L},K)\). Therefore the extended action is unique provided it exists. Our assertion is easily reduced to the analogous statement concerning the Banach spaces \(M^{I}\) for a closed interval \(I=[r,s]\). From [11, Prop. 2.16 and Prop. 2.17] we know that the \(\Gamma_{L}\)-action on \(M^{I}\) is locally \(\mathbb{Q}_{p}\)-analytic. But since we assume \(M\) to be \(L\)-analytic it is actually locally \(L\)-analytic (cf. the Addendum to Prop. 2.25 and the argument at the end of the proof of Prop. 2.17 in [11]).
For our purpose we show more generally the existence, for any \(K\)-Banach space \(W\), of a continuous \(K\)-linear map \[I:\mathcal{C}^{an}(\Gamma_{L},W)\to\mathcal{L}_{b}(D(\Gamma_{L},K),W)\] satisfying \(I(f)(\delta_{g})=f(g)\). Note that this map, if it exists, is unique by our initial observation. Recall (cf. [12, §12]) that the locally convex vector space \(\mathcal{C}^{an}(\Gamma_{L},W)\) is the locally convex inductive limit of finite products of Banach spaces of the form \(B\widehat{\otimes}_{K}W\) with a Banach space \(B\), and that its strong dual \(D(\Gamma_{L},K)\) is the corresponding projective limit of the finite sums of dual Banach spaces \(B^{\prime}\). We therefore may construct the map \(I\) as the inductive limit of finite products of maps of the form \[B\widehat{\otimes}_{K}W \longrightarrow\mathcal{L}_{b}(B^{\prime},W)\] \[x\otimes y \longmapsto[\ell\mapsto\ell(x)y]\.\] Since \(B\) as a Banach space is barrelled, this map is easily seen to be continuous (cf. the argument in the proof of [12, Lem. 9.9]). Now suppose that \(W\) carries a locally \(L\)-analytic \(\Gamma_{L}\)-action (e.g., \(W=M^{I}\)). For \(y\in W\) let \(\rho_{y}(g):=gy\) denote the orbit map in \(\mathcal{C}^{an}(\Gamma_{L},W)\). We then define \[D(\Gamma_{L},K)\times W \longrightarrow W\] \[(\mu,y) \longmapsto I(\rho_{y})(\mu)\.\] Due to our initial observation the proof of [12, Prop. 3.2], that the above is a separately continuous module structure, remains valid even though \(K\) is not assumed to be spherically complete. By [11, Rem. 2.20] the homomorphism \(f\) is continuous, and hence the \(D(\Gamma_{L},K)\)-equivariance of \(f\) follows from the \(\Gamma_{L}\)-equivariance by the first paragraph of this proof. Recall that \(M^{I}\), for each \(I=[r,s]\) with \(r\geq r_{0}\), bears a natural \(\Gamma_{L}\)-action. Now, for each \(n\geq 1\), we will define a different action of \(\Gamma_{n}\) on \(M^{[r,s]}\), which is motivated by Lemma 4.3.11 below and which is crucial for analysing the structure of \(M^{\psi_{M}=0}\) in the next subsection. To this end consider for each \(\gamma\in\Gamma_{n}\) the operator \(H_{n}(\gamma)\) on \(M^{[r,s]}\) defined by \[H_{n}(\gamma)(m):=\begin{cases}\operatorname{ev}_{\pi_{L}^{-n}(\chi_{LT}(\gamma)-1)}\gamma m&\text{if }\mathfrak{Y}=\mathfrak{X},\\ \eta(\pi_{L}^{-n}(\chi_{LT}(\gamma)-1),Z)\gamma m&\text{if }\mathfrak{Y}=\mathbf{B}\text{ and }\Omega\in K.\end{cases}\] Note that, since \(\Gamma_{n}\) acts on \(\mathcal{O}_{K}(\mathfrak{Y})\) via \(\chi_{LT}\) and the \(o_{L}^{\times}\)-action, we may form the skew group ring \(\mathcal{O}_{K}(\mathfrak{Y})[\Gamma_{n}]\), which due to the semilinear action of \(\Gamma_{L}\) on \(M\) maps into the \(K\)-Banach algebra \(\mathcal{E}nd_{K}(M^{I})\) of continuous \(K\)-linear endomorphisms of \(M^{I}\), endowed with the operator norm \(\|-\|_{M^{I}}\). Hence we obtain the ring homomorphism \[H_{n}:K[\Gamma_{n}] \longrightarrow\mathcal{O}_{K}(\mathfrak{Y})[\Gamma_{n}] \longrightarrow\mathcal{E}nd_{K}(M^{I})\] \[\gamma \longmapsto\begin{cases}\operatorname{ev}_{\pi_{L}^{-n}(\chi_{LT}(\gamma)-1)}\gamma&\text{if }\mathfrak{Y}=\mathfrak{X},\\ \eta(\pi_{L}^{-n}(\chi_{LT}(\gamma)-1),Z)\gamma&\text{if }\mathfrak{Y}=\mathbf{B}\text{ and }\Omega\in K.\end{cases}\] The next lemma holds true in both cases. We spell it out only in the \(\mathbf{B}\)-case since we technically need it only there. **Lemma 4.3.11**.: _Suppose that \(\Omega\) is contained in \(K\), and let \(n\geq m\geq 1\)._ 1.
_We have for all_ \(\sigma\in\Gamma_{n}\) \[\sigma\big{(}\eta(1,Z)\varphi_{L}^{n}(y)\big{)}=\eta(1,Z)\varphi_{L}^{n}(H_{n}(\sigma)(y)),\] _i.e., the isomorphisms_ \[M \xrightarrow{\cong}\eta(1,Z)\varphi_{L}^{n}(M),\] \[M^{[r,s]} \xrightarrow{\cong}\eta(1,Z)\varphi_{L}^{n}(M^{[r,s]})\] \[y \mapsto\eta(1,Z)\varphi_{L}^{n}(y)\] _are \(\Gamma_{n}\)-equivariant with respect to the natural action on the right hand side and the action via \(H_{n}\) on the left hand side._ 2. _The map_ \[\mathbb{Z}[\Gamma_{m}]\otimes_{\mathbb{Z}[\Gamma_{n}],H_{n}}M^{[r,s]} \to M^{[r^{1/q^{n-m}},s^{1/q^{n-m}}]}\] \[\gamma\otimes y \mapsto\eta(\frac{\chi_{LT}(\gamma)-1}{\pi_{L}^{m}},Z)\varphi_{M}^{n-m}(\gamma y)\] _is a homeomorphism of Banach spaces, where the left hand side is viewed as the direct sum of Banach spaces \(\bigoplus_{\gamma\in\Gamma_{m}/\Gamma_{n}}\gamma\otimes M^{[r,s]}\). Moreover, the map is \(\Gamma_{m}\)-equivariant with respect to the \(H_{m}\)-action on the right hand side._ 3. _If the homomorphism \(H_{n}:K[\Gamma_{n}]\rightarrow\mathcal{E}nd_{K}(M^{I})\) extends to a continuous homomorphism \(\mathcal{R}_{K}^{I}(\Gamma_{n})\rightarrow\mathcal{E}nd_{K}(M^{I})\), then \(H_{m}:K[\Gamma_{m}]\rightarrow\mathcal{E}nd_{K}(M^{I^{1/q^{n-m}}})\) extends to a continuous homomorphism \(\mathcal{R}_{K}^{I^{1/q^{n-m}}}(\Gamma_{m})\rightarrow\mathcal{E}nd_{K}(M^{I^{1/q^{n-m}}})\). If the first extension is unique, so is the second one._ Proof.: (i) Setting \(b:=\frac{\chi_{LT}(\sigma)-1}{\pi_{L}^{n}}\) we calculate \[\sigma\big{(}\eta(1,Z)\varphi_{L}^{n}(m)\big{)} =\sigma(\eta(1,Z))\varphi_{L}^{n}(\sigma m)\] \[=\eta(1+\pi_{L}^{n}b,Z)\varphi_{L}^{n}(\sigma m)\] \[=\eta(1,Z)\eta(\pi_{L}^{n}b,Z)\varphi_{L}^{n}(\sigma m)\] \[=\eta(1,Z)\varphi_{L}^{n}\left(\eta(b,Z)\sigma m\right),\] where we used the multiplicativity of \(\eta\) in the first variable in the third equality and (64) in the last one. (ii) By a straightforward computation one first checks that the map is well defined. The bijectivity follows from (73) using the bijection \((1+\pi_{L}^{m}o_{L})/(1+\pi_{L}^{n}o_{L})\xrightarrow{\simeq}o_{L}/\pi_{L}^{n-m}o_{L}\), \(\gamma\mapsto\frac{\chi_{LT}(\gamma)-1}{\pi_{L}^{m}}\), and that \(M^{[r,s]}=\gamma M^{[r,s]}\). (iii) Base change induces the \(\mathcal{R}_{K}^{I^{1/q^{n-m}}}(\Gamma_{m})\)-action on \[\begin{split}\mathcal{R}_{K}^{I^{1/q^{n-m}}}(\Gamma_{m})\otimes_{\mathcal{R}_{K}^{I}(\Gamma_{n}),H_{n}}M^{I}&\cong\mathbb{Z}[\Gamma_{m}]\otimes_{\mathbb{Z}[\Gamma_{n}]}\mathcal{R}_{K}^{I}(\Gamma_{n})\otimes_{\mathcal{R}_{K}^{I}(\Gamma_{n}),H_{n}}M^{I}\\ &\cong\mathbb{Z}[\Gamma_{m}]\otimes_{\mathbb{Z}[\Gamma_{n}],H_{n}}M^{I}\\ &\cong M^{I^{1/q^{n-m}}},\end{split} \tag{84}\] where we used (82) and (ii). The continuity is easily checked by considering 'matrix entries', which are built by composites of the original continuous map with other continuous transformations. Here we use that the identifications (82) and (84) are homeomorphisms when we endow the left hand side with the maximum norm. Finally, the claim regarding uniqueness follows from (84) as the action of \(\Gamma_{m}\) is already determined by the original \(H_{m}\). For the rest of this subsection we **assume that \(\Omega\) is contained in \(K\)** and we will work exclusively in the \(\mathbf{B}\)-case, i.e., \(\mathcal{R}=\mathcal{R}_{K}(\mathbf{B})\) and \(\mathcal{R}^{I}=\mathcal{R}_{K}^{I}(\mathbf{B})\).
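It is perhaps worth making explicit why the homomorphism \(H_{n}\) defined above is multiplicative on group elements (a one-line check in the skew group ring, included for convenience): abbreviating \(c_{\gamma}:=\pi_{L}^{-n}(\chi_{LT}(\gamma)-1)\) and using, in the \(\mathbf{B}\)-case, that \(\gamma(\eta(a,Z))=\eta(\chi_{LT}(\gamma)a,Z)\), one computes \[H_{n}(\gamma)H_{n}(\sigma)=\eta(c_{\gamma},Z)\,\gamma\,\eta(c_{\sigma},Z)\,\sigma=\eta\big{(}c_{\gamma}+\chi_{LT}(\gamma)c_{\sigma},Z\big{)}\,\gamma\sigma=\eta(c_{\gamma\sigma},Z)\,\gamma\sigma=H_{n}(\gamma\sigma),\] because \((\chi_{LT}(\gamma)-1)+\chi_{LT}(\gamma)(\chi_{LT}(\sigma)-1)=\chi_{LT}(\gamma\sigma)-1\); the \(\mathfrak{X}\)-case is formally identical with \(\operatorname{ev}\) in place of \(\eta\).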
There is a natural ring homomorphism \(\mathcal{R}^{I}\to\mathcal{E}nd_{K}(M^{I})\) given by assigning to \(f\in\mathcal{R}^{I}\) the multiplication-with-\(f\) operator, which we denote by the same symbol \(f\). Part (iii) of the following remark means that this ring homomorphism has operator norm \(1\). **Remark 4.3.12**.: (i) _We have_ \(\sup_{x\in o_{L}}|\eta(x,Z)-1|_{I}<1\) _and_ \(|\eta(x,Z)|_{I}=1\) _for all_ \(x\in o_{L}\)_._17 Footnote 17: \(|\eta(x,Z)-1|_{I}=|\eta(1,Z)-1|_{I}<1\) for all \(x\in o_{L}^{\times}\) because any \(x\in o_{L}^{\times}\) induces an isomorphism \([x](-)\) of \(\mathbf{B}_{I}\). (ii) \(|\eta(px,Z)-1|_{I}\leqslant\max\{|\eta(x,Z)-1|_{I}^{p},\frac{1}{q^{e}}|\eta(x,Z)-1|_{I}\}\) \((=|\eta(x,Z)-1|_{I}^{p}\) _if_ \(|\eta(x,Z)-1|_{I}\geqslant q^{-\frac{e}{p-1}})\)_._ (iii) \(|f|_{I}=|f|_{M^{I}}\) _for all_ \(f\in\mathcal{R}^{I}\)_._ Proof.: It is known ([ST2]) that \(\eta(x,Z)=\eta(1,[x](Z))\) belongs to \(1+Zo_{\mathbb{C}_{p}}[[Z]]\), whence we have, for any \(x\in o_{L}\), that \(|\eta(x,Z)-1|_{I}<1\) from the definition of \(|-|_{I}\), and (i) follows from the fact that the map \(o_{L}\to\mathbb{R}\), \(x\mapsto|\eta(x,Z)-1|_{I}\), is continuous with compact source. Affirmation (ii) is a consequence of the expansion \[\eta(px,Z)-1 =(\eta(x,Z)-1+1)^{p}-1 =(\eta(x,Z)-1)^{p}+\sum_{k=1}^{p-1}\binom{p}{k}\,(\eta(x,Z)-1)^{k}\] and \(|\,\binom{p}{k}\,|=q^{-e}\) for \(k=1,\ldots,p-1\). (iii) follows from the submultiplicativity of \(|-|_{I}\) plus the fact that \(1\in\mathcal{R}^{I}\), which implies the statement on \(M^{I}\cong(\mathcal{R}^{I})^{m}\). The above Remark allows us to fix a natural number \(m_{0}=m_{0}(r_{0})\) such that for all \(m\geqslant m_{0}\) we have \[|\eta(x,Z)-1|_{I}<\mathfrak{r}_{m}\text{ for all }x\in o_{L}\text{ and }|\eta(x,Z)-1|_{I}\leqslant\mathfrak{r}_{0}\text{ for all }x\in p^{m}o_{L}, \tag{85}\] \[r_{0}^{1/q}<\mathfrak{r}_{m}, \tag{86}\] for any of the intervals \(I=[r_{0},r_{0}]\), \([r_{0},r_{0}^{1/q}]\) and \([r_{0}^{1/q},r_{0}^{1/q}]\). In the following let \(I\) always denote one of those intervals. **Lemma 4.3.13**.: _Let \(\epsilon>0\) be arbitrary. Then there exists \(n_{1}\gg 0\) such that, for any \(n\geqslant n_{1}\), the operator norm \(\|-\|_{M^{I}}\) on \(M^{I}\) satisfies_ \[\|\gamma-1\|_{M^{I}}\leqslant\epsilon\text{ for all }\gamma\in\Gamma_{n}. \tag{87}\] Proof.: We first prove the statement for the module \(M=\mathcal{R}\). For the convenience of the reader we adapt the proof of [Ked, Lem. 5.2]. First note that for any fixed \(f\in\mathcal{R}^{I}\), by continuity of the action of \(\Gamma_{L}\) there exists an open normal subgroup \(H\) of \(\Gamma_{L}\) such that \[|(\gamma-1)f|_{I}<\epsilon|f|_{I} \tag{88}\] holds for all \(\gamma\in H\). So we may assume that the latter inequality holds for \(Z\) and \(Z^{-1}\) simultaneously. Using the twisted Leibniz rule \[(\gamma-1)(gf)=(\gamma-1)(g)f+\gamma(g)(\gamma-1)(f)\] and induction we get (88) for all powers \(Z^{k}\), \(k\in\mathbb{Z}\). Since the latter form an orthogonal basis, the claim follows using that \(|\gamma(g)|_{I}=|g|_{I}\) for any \(\gamma\in H\), \(g\in\mathcal{R}^{I}\).
If \(M\cong\bigoplus_{i=1}^{d}\mathcal{R}e_{i}\) and \(m=\sum f_{i}e_{i}\), we may assume that \[|(\gamma-1)e_{i}|_{M^{I}}<\epsilon|e_{i}|_{M^{I}} \tag{89}\] holds for \(1\leqslant i\leqslant d\), and apply the same Leibniz rule to \(f_{i}e_{i}\) instead of \(gf\), whence the result follows, noting that \(|e_{i}|_{M^{I}}=1\) by the definition of the maximum norm and that \(|\gamma(e_{i})|_{M^{I}}=|e_{i}|_{M^{I}}=1\) for any \(\gamma\in H\) and \(1\leqslant i\leqslant d\) as a consequence of (89). We fix \(n_{1}=n_{1}(r_{0})\geqslant n_{0}\) such that the Lemma holds for \(\epsilon=\mathfrak{r}_{0}\). Then, for any \(n\geqslant n_{1}\), \(m\geqslant m_{0}\), the above \(H_{n}\) extends to continuous ring homomorphisms \[\tilde{H}_{n}:D_{\mathbb{Q}_{p},\mathfrak{r}_{m}}(\Gamma_{n},K)\to\mathcal{E}nd_{K}(M^{I}),\] \[\sum_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\alpha_{\mathbf{k}}\hat{\ell}_{n,*}^{-1}(\mathbf{b})^{\mathbf{k}}\mapsto\sum_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\alpha_{\mathbf{k}}\prod_{i=1}^{d}H_{n}(\hat{\ell}_{n,*}^{-1}(b_{i}))^{k_{i}},\] and \[\tilde{\mathbb{H}}_{n}:=\tilde{H}_{n}\circ\hat{\ell}_{n,*}^{-1}:D_{\mathbb{Q}_{p},\mathfrak{r}_{m}}(o_{L},K)\xrightarrow{\hat{\ell}_{n,*}^{-1}}D_{\mathbb{Q}_{p},\mathfrak{r}_{m}}(\Gamma_{n},K)\to\mathcal{E}nd_{K}(M^{I}),\] \[\sum_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\alpha_{\mathbf{k}}\mathbf{b}^{\mathbf{k}}\mapsto\sum_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\alpha_{\mathbf{k}}\prod_{i=1}^{d}H_{n}(\hat{\ell}_{n,*}^{-1}(b_{i}))^{k_{i}}.\] Indeed, we have \[H_{n}(\hat{\ell}_{n,*}^{-1}(b_{i}))=\eta(\frac{\ell_{n}^{-1}(h_{i})-1}{\pi_{L}^{n}},Z)-1+\eta(\frac{\ell_{n}^{-1}(h_{i})-1}{\pi_{L}^{n}},Z)(\hat{\ell}_{n}^{-1}(h_{i})-1)\] and since \[\|\eta(\frac{\ell_{n}^{-1}(h_{i})-1}{\pi_{L}^{n}},Z)-1+\eta(\frac{\ell_{n}^{-1}(h_{i})-1}{\pi_{L}^{n}},Z)(\hat{\ell}_{n}^{-1}(h_{i})-1)\|_{M^{I}}\leq\\ \max\{\|\eta(\frac{\ell_{n}^{-1}(h_{i})-1}{\pi_{L}^{n}},Z)-1\|_{M^{I}},\|\eta(\frac{\ell_{n}^{-1}(h_{i})-1}{\pi_{L}^{n}},Z)(\hat{\ell}_{n}^{-1}(h_{i})-1)\|_{M^{I}}\}\leq\mathfrak{r}_{m}\] by (87), (85) and Remark 4.3.12 (i), the above defining sums converge with respect to the operator norm. Moreover, we have \[\|\tilde{\mathbb{H}}_{n}(\lambda)\|_{M^{I}}\leq\sup_{\mathbf{k}}|\alpha_{\mathbf{k}}|\mathfrak{r}_{m}^{|\mathbf{k}|}=\|\lambda\|_{\mathbb{Q}_{p},\mathfrak{r}_{m}} \tag{90}\] for all \(\lambda\in D_{\mathbb{Q}_{p},\mathfrak{r}_{m}}(o_{L},K)\), i.e., the operator norm of \(\tilde{\mathbb{H}}_{n}\) is less than or equal to \(1\). Since \(M\) is assumed to be \(L\)-analytic, \(\tilde{H}_{n}\) factorises over the desired ring homomorphism \[H_{n}:\bigg{(}D(\Gamma_{n},K)\subseteq\bigg{)}D_{\mathfrak{r}_{m}}(\Gamma_{n},K)\rightarrow\mathcal{E}nd_{K}(M^{I})\] and \(\tilde{\mathbb{H}}_{n}\) over \[\mathbb{H}_{n}:\bigg{(}D(o_{L},K)\subseteq\bigg{)}D_{\mathfrak{r}_{m}}(o_{L},K)\rightarrow\mathcal{E}nd_{K}(M^{I})\] by (83). As \(D_{\mathfrak{r}_{m}}(o_{L},K)\) carries the quotient norm of \(D_{\mathbb{Q}_{p},\mathfrak{r}_{m}}(o_{L},K)\), we obtain from (90) \[\|\mathbb{H}_{n}(\lambda)\|_{M^{I}}\leq\inf_{\tilde{\lambda},\,pr(\tilde{\lambda})=\lambda}\|\tilde{\lambda}\|_{\mathbb{Q}_{p},\mathfrak{r}_{m}}=\|\lambda\|_{\mathfrak{r}_{m}} \tag{91}\] for all \(\lambda\in D_{\mathfrak{r}_{m}}(o_{L},K)\), i.e., the operator norm of \(\mathbb{H}_{n}\) is again less than or equal to \(1\).
By a similar, but simpler, reasoning one shows the following. **Lemma 4.3.14**.: _The isomorphism (LT together with Fourier) \(\mathfrak{F}:D(o_{L},K)\cong\mathcal{O}_{K}(\mathbf{B})\), \(\delta_{a}\mapsto\eta(a,Z)\), induces, for all \(m\geq m_{0}\), a commutative diagram of continuous maps with operator norms less than or equal to \(1\). Moreover, the operator norm of the scalar action via \(\mathfrak{F}\)_ \[\text{scal}:(D(o_{L},K)\subseteq)D_{\mathfrak{r}_{m}}(o_{L},K)\overset{\mathfrak{F}}{\rightarrow}\mathcal{R}^{I}\rightarrow\mathcal{E}nd_{K}(M^{I}) \tag{92}\] _is also bounded by \(1\), see Remark 4.3.12 (iii)._ **Remark 4.3.15**.: _The maps \(\tilde{H}_{n}\) and \(\tilde{\mathbb{H}}_{n}\), as well as \(H_{n}\) and \(\mathbb{H}_{n}\), are uniquely determined by their restriction to \(K[\Gamma_{n}]\) and \(K[o_{L}]\), respectively, because these group algebras are dense in the sources \(D_{\mathbb{Q}_{p},\mathfrak{r}_{m}}(\Gamma_{n},K)\), \(D_{\mathfrak{r}_{m}}(\Gamma_{n},K)\) and \(D_{\mathbb{Q}_{p},\mathfrak{r}_{m}}(o_{L},K)\), \(D_{\mathfrak{r}_{m}}(o_{L},K)\), respectively._ Applying our convention from before Remark 4.3.12 we usually shall abbreviate \(\mathit{scal}(\mu)\) by \(\mathfrak{F}(\mu)\) below when we refer to this scalar action on \(M^{I}\). For the proof of Thm. 4.3.20 below it will be crucial to compare the two actions \(\mathit{scal}\) and \(\mathbb{H}_{n}\) of \(D(o_{L},K)\) on \(M^{I}\). Finally, for \(n\geqslant n_{1}\), we obtain similar maps for the original (multiplicative) action of \(\Gamma_{n}\subseteq\Gamma_{L}\) on \(M^{I}\): \[D_{\mathbb{Q}_{p},\mathfrak{r}_{m}}(\Gamma_{n},K)\rightarrow\mathcal{E}nd_{K}(M^{I}), \tag{93}\] \[\sum_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\alpha_{\mathbf{k}}\hat{\ell}_{n,*}^{-1}(\mathbf{b})^{\mathbf{k}}\mapsto\sum_{\mathbf{k}\in\mathbb{N}_{0}^{d}}\alpha_{\mathbf{k}}\prod_{i=1}^{d}\hat{\ell}_{n,*}^{-1}(b_{i})^{k_{i}}, \tag{94}\] with operator norm bounded by \(1\). A special case of the following lemma was pointed out to us by Rustam Steingart. **Lemma 4.3.16**.: _Let \(m\in\mathbb{N}\) be arbitrary. Setting \(u_{n}(a):=\frac{\exp(\pi_{L}^{n}a)-1}{\pi_{L}^{n}a}\) for \(a\in o_{L}\backslash\{0\}\) and \(u_{n}(0):=1\), there exists \(n_{2}=n_{2}(m)\) such that \(u_{n}(a)\in 1+\pi_{L}^{m}o_{L}\) for all \(a\in o_{L}\) and \(n\geqslant n_{2}\)._ Proof.: This is easily checked using \(v_{p}(k!)\leqslant\frac{k}{p-1}\). In order to distinguish Dirac distributions for elements \(\gamma\) in the _multiplicative_ group \(\Gamma_{n}\) from those for elements \(a\) in the _additive_ group \(o_{L}\) we often shall write \(\delta_{\gamma}^{\times}\) in contrast to \(\delta_{a}\). **Lemma 4.3.17**.: _Let \(0<\epsilon<1\) be arbitrary and \(\Delta=\sum_{k}c_{k}(\delta_{a_{k}}-1)\in D(o_{L},K)\) a finite sum with \(a_{k}\in o_{L}\). Then there exists \(n_{3}=n_{3}(\epsilon,\Delta,r_{0})\) such that for all \(n\geqslant n_{3}\) it holds that_ \[\left\|\mathfrak{F}(\Delta)-\mathbb{H}_{n}(\Delta)\right\|_{M^{I}}<\epsilon.\] Proof.: Put \(\xi:=\sup_{k}|c_{k}|\) and choose \(\epsilon^{\prime}\leqslant\epsilon\) such that \(\epsilon^{\prime}\xi<\epsilon\). Then choose \(m_{1}\geqslant m_{0}\) such that \[\left\|\delta_{\gamma}^{\times}-1\right\|_{M^{I}}<\epsilon^{\prime}\text{ and }\left\|\delta_{\gamma}^{\times}-1\right\|_{\mathcal{R}^{I}}<\epsilon^{\prime}\] for all \(\gamma\in\Gamma_{m_{1}}\) (see Lemma 4.3.13).
Now according to Lemma 4.3.16 we choose \(n_{3}:=n_{2}(m_{1})\geqslant m_{1}\). Observing that for \(a\in o_{L}\) \[\mathbb{H}_{n}(\delta_{a})=\mathfrak{F}(\delta_{u_{n}(a)a})\circ\delta_{\hat{\ell}_{n}^{-1}(a)}^{\times},\] we estimate, for \(n\geqslant n_{3}\), \[\left\|\mathfrak{F}(\Delta)-\mathbb{H}_{n}(\Delta)\right\|_{M^{I}} =\left\|\sum_{k}c_{k}\left\{\mathfrak{F}(\delta_{a_{k}})-1-\left(\mathfrak{F}(\delta_{u_{n}(a_{k})a_{k}})\circ\delta_{\hat{\ell}_{n}^{-1}(a_{k})}^{\times}-1\right)\right\}\right\|_{M^{I}}\] \[=\left\|\sum_{k}c_{k}\left\{\mathfrak{F}(\delta_{a_{k}})-1-\left(\mathfrak{F}(\delta_{u_{n}(a_{k})a_{k}})\circ(\delta_{\hat{\ell}_{n}^{-1}(a_{k})}^{\times}-1)+\mathfrak{F}(\delta_{u_{n}(a_{k})a_{k}})-1\right)\right\}\right\|_{M^{I}}\] \[=\left\|\sum_{k}c_{k}\left\{\mathfrak{F}(\delta_{a_{k}})-1-\left(\mathfrak{F}(\delta_{u_{n}(a_{k})a_{k}})\circ(\delta_{\hat{\ell}_{n}^{-1}(a_{k})}^{\times}-1)+\delta_{u_{n}(a_{k})}^{\times}(\mathfrak{F}(\delta_{a_{k}})-1)\right)\right\}\right\|_{M^{I}}\] \[=\left\|\sum_{k}c_{k}\left\{(1-\delta_{u_{n}(a_{k})}^{\times})(\mathfrak{F}(\delta_{a_{k}})-1)-\mathfrak{F}(\delta_{u_{n}(a_{k})a_{k}})\circ(\delta_{\hat{\ell}_{n}^{-1}(a_{k})}^{\times}-1)\right\}\right\|_{M^{I}}\] \[\leqslant\sup_{k}\left(|c_{k}|\max\left\{\left\|(1-\delta_{u_{n}(a_{k})}^{\times})(\mathfrak{F}(\delta_{a_{k}})-1)\right\|_{M^{I}},\left\|\mathfrak{F}(\delta_{u_{n}(a_{k})a_{k}})\circ(\delta_{\hat{\ell}_{n}^{-1}(a_{k})}^{\times}-1)\right\|_{M^{I}}\right\}\right)\] \[\leqslant\epsilon^{\prime}\sup_{k}|c_{k}|=\epsilon^{\prime}\xi<\epsilon,\] where we used for the last line the estimate \[\|(1-\delta_{u_{n}(a_{k})}^{\times})(\mathfrak{F}(\delta_{a_{k}})-1)\|_{M^{I}} =|(1-\delta_{u_{n}(a_{k})}^{\times})(\mathfrak{F}(\delta_{a_{k}})-1)|_{I}\] \[\leq\|1-\delta_{u_{n}(a_{k})}^{\times}\|_{\mathcal{R}^{I}}\,|\mathfrak{F}(\delta_{a_{k}})-1|_{I}\] \[\leq\epsilon^{\prime}|\eta(a_{k},Z)-1|_{I}\leq\epsilon^{\prime}\] (by Remark 4.3.12(i)/(iii) and due to the choice of \(m_{1}\) and \(n_{3}\)) for the first term, as well as the estimate \[\|\mathfrak{F}(\delta_{u_{n}(a_{k})a_{k}})\circ(\delta_{\hat{\ell}_{n}^{-1}(a_{k})}^{\times}-1)\|_{M^{I}} \leq|\eta(u_{n}(a_{k})a_{k},Z)|_{I}\,\|\delta_{\hat{\ell}_{n}^{-1}(a_{k})}^{\times}-1\|_{M^{I}}\] \[=\|\delta_{\hat{\ell}_{n}^{-1}(a_{k})}^{\times}-1\|_{M^{I}}<\epsilon^{\prime}\] for the second term (again by Remark 4.3.12(i)/(iii) and due to the choice of \(m_{1}\) and \(n_{3}\geq m_{1}\)). **Lemma 4.3.18**.: _Let \(0<\epsilon<1\) be arbitrary and \(\mu\in D(o_{L},K)\) be any element. Then there exists \(\Delta=\sum_{k}c_{k}(\delta_{a_{k}}-1)\in D(o_{L},K)\), a finite sum with \(a_{k}\in o_{L}\), such that \(\|\mu-\Delta\|_{\mathfrak{r}_{m_{0}}}<\epsilon\).
Moreover, for \(n_{3}=n_{3}(\epsilon,\Delta,r_{0})\) from the previous Lemma and all \(n\geq n_{3}\) we have_ \[\|\mathfrak{F}(\mu)-\mathbb{H}_{n}(\mu)\|_{M^{I}}<\epsilon.\] _In particular, if \(\mathfrak{F}(\mu)\) is invertible in \(\mathcal{E}nd_{K}(M^{I})\) or, equivalently, invertible as an element of \(\mathcal{R}^{I}\),18 then firstly there exists \(n_{4}=n_{4}(\mu,r_{0})\) such that \(\|\mathfrak{F}(\mu)-\mathbb{H}_{n}(\mu)\|_{M^{I}}<|\mathfrak{F}(\mu)^{-1}|_{I}^{-1}\) and \(\|\mathbb{H}_{n}(\mu)^{-1}-\mathfrak{F}(\mu)^{-1}\|_{M^{I}}<|\mathfrak{F}(\mu)^{-1}|_{I}\) for any \(n\geq n_{4}\), and secondly the operator \(\mathbb{H}_{n}(\mu)\) is invertible, too._ Footnote 18: \(M^{I}\) being a free \(\mathcal{R}^{I}\)-module on which \(\mathfrak{F}(\mu)\) acts via the diagonal matrix with all diagonal entries equal to \(\mathfrak{F}(\mu)\). Proof.: The existence of such a \(\Delta\) is clear because such elements form a dense subset of \(D(o_{L},K)\) in the Frechet topology (as noted at the beginning of the proof of Prop. 4.3.10). Consider the estimate, for \(n\geq n_{3}\), \[\|\mathfrak{F}(\mu)-\mathbb{H}_{n}(\mu)\|_{M^{I}}\leq\max\left(\|\mathfrak{F}(\mu-\Delta)\|_{M^{I}},\|\mathfrak{F}(\Delta)-\mathbb{H}_{n}(\Delta)\|_{M^{I}},\|\mathbb{H}_{n}(\mu-\Delta)\|_{M^{I}}\right)<\epsilon,\] where we use the estimate \[\|\mathfrak{F}(\mu-\Delta)\|_{M^{I}}=|\mathfrak{F}(\mu-\Delta)|_{I}\leq\|\mu-\Delta\|_{\mathfrak{r}_{m_{0}}}<\epsilon\] by (92) for the first, Lemma 4.3.17 for the second and (91) for the last term. Now suppose that \(\mathfrak{F}(\mu)\) as an operator on \(M^{I}\) is invertible. We choose \(\epsilon\leq\|\mathfrak{F}(\mu)^{-1}\|_{M^{I}}^{-1}\), \(\Delta\) accordingly, and put \(n_{4}=n_{3}(\epsilon,\Delta,r_{0})\). Then, for \(n\geq n_{4}\), we have \(\|1-\mathfrak{F}(\mu)^{-1}\mathbb{H}_{n}(\mu)\|_{M^{I}}=\|\mathfrak{F}(\mu)^{-1}(\mathfrak{F}(\mu)-\mathbb{H}_{n}(\mu))\|_{M^{I}}<1\), whence \(\sum_{k\geq 0}(1-\mathfrak{F}(\mu)^{-1}\mathbb{H}_{n}(\mu))^{k}\) converges in \(\mathcal{E}nd_{K}(M^{I})\) and \(\mathbb{H}_{n}(\mu)^{-1}:=\left(\sum_{k\geq 0}(1-\mathfrak{F}(\mu)^{-1}\mathbb{H}_{n}(\mu))^{k}\right)\mathfrak{F}(\mu)^{-1}\) is the inverse of \(\mathbb{H}_{n}(\mu)=\mathfrak{F}(\mu)(1-(1-\mathfrak{F}(\mu)^{-1}\mathbb{H}_{n}(\mu)))\). Furthermore, \[\|\mathbb{H}_{n}(\mu)^{-1}-\mathfrak{F}(\mu)^{-1}\|_{M^{I}} =\|\left(\sum_{k\geq 1}(1-\mathfrak{F}(\mu)^{-1}\mathbb{H}_{n}(\mu))^{k}\right)\mathfrak{F}(\mu)^{-1}\|_{M^{I}}\] \[\leq\sup_{k\geq 1}\|1-\mathfrak{F}(\mu)^{-1}\mathbb{H}_{n}(\mu)\|_{M^{I}}^{k}\,|\mathfrak{F}(\mu)^{-1}|_{I}<|\mathfrak{F}(\mu)^{-1}|_{I}.\] Note that the above lemma applies to the variable \(X\), and from now on we set \(n_{4}:=n_{4}(X,r_{0})\). In view of Lemma 4.3.11 (iii) the following lemma is crucial for the main result Thm. 4.3.20 of this section.
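For the case \(\mu=X\) just fixed, the hypotheses of Lemma 4.3.18 can be made completely explicit (a sketch; recall \(\mathfrak{F}(X)=Z\), and assume, as usual, that \(|-|_{I}\) is the sup-norm on \(\mathbf{B}_{I}\) for \(I=[r,s]\subseteq(0,1)\)): the function \(Z\) is invertible in \(\mathcal{R}^{I}\) with \[|Z|_{I}=s,\qquad|Z^{-1}|_{I}=r^{-1},\] so for \(n\geq n_{4}=n_{4}(X,r_{0})\) the lemma yields \(\|Z-\mathbb{H}_{n}(X)\|_{M^{I}}<r\), and the Neumann series \(\mathbb{H}_{n}(X)^{-1}=\big{(}\sum_{k\geq 0}(1-Z^{-1}\mathbb{H}_{n}(X))^{k}\big{)}Z^{-1}\) converges in \(\mathcal{E}nd_{K}(M^{I})\). These are exactly the bounds entering the convergence argument in the proof of Lemma 4.3.19(i) below.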
**Lemma 4.3.19**.: _For \(n\geqslant n_{4}\)_ (i) _the map \(\Theta_{n}:D(\Gamma_{n},K)\xrightarrow{H_{n}}\mathcal{E}nd_{K}(M^{I})\) extends uniquely to a continuous ring homomorphism_ \[\mathcal{R}^{I}_{K}(\Gamma_{n})\to\mathcal{E}nd_{K}(M^{I}).\] _If \(M\xrightarrow{f}N\) is a homomorphism of \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-modules, then \(f^{I}:M^{I}\to N^{I}\) is \(\mathcal{R}^{I}_{K}(\Gamma_{n})\)-equivariant with regard to this action._ (ii) \(M^{I}\) _is a free \(\mathcal{R}^{I}_{K}(\Gamma_{n})\)-module of rank \(\mathrm{rk}_{\mathcal{R}}M\). Any basis as \(\mathcal{R}^{I}\)-module also is a basis as \(\mathcal{R}^{I}_{K}(\Gamma_{n})\)-module._ (iii) _The natural maps_ \[\mathcal{R}^{[r_{0},r_{0}]}_{K}(\Gamma_{n})\otimes_{\mathcal{R}^{[r_{0},r_{0}^{1/q}]}_{K}(\Gamma_{n})}M^{[r_{0},r_{0}^{1/q}]}\xrightarrow{\simeq}M^{[r_{0},r_{0}]},\] \[\mathcal{R}^{[r_{0}^{1/q},r_{0}^{1/q}]}_{K}(\Gamma_{n})\otimes_{\mathcal{R}^{[r_{0},r_{0}^{1/q}]}_{K}(\Gamma_{n})}M^{[r_{0},r_{0}^{1/q}]}\xrightarrow{\simeq}M^{[r_{0}^{1/q},r_{0}^{1/q}]}\] _are isomorphisms._ Proof.: (i) Inductively, for \(n\geqslant n_{4}\), we obtain from Lemma 4.3.18 - by expressing \((\mathbb{H}_{n}(\mu)^{\pm 1})^{k}-(\mathfrak{F}(\mu)^{\pm 1})^{k}\) as \(\sum_{l=1}^{k}\binom{k}{l}(\mathbb{H}_{n}(\mu)^{\pm 1}-\mathfrak{F}(\mu)^{\pm 1})^{l}(\mathfrak{F}(\mu)^{\pm 1})^{k-l}\) - that \[\|\mathbb{H}_{n}(\mu)^{k}-\mathfrak{F}(\mu)^{k}\|_{M^{I}}<\begin{cases}|\mathfrak{F}(\mu)|_{I}^{k}&\text{for }k\geqslant 0,\\ |\mathfrak{F}(\mu)^{-1}|_{I}^{-k}&\text{for }k<0,\end{cases} \tag{95}\] for all \(k\in\mathbb{Z}\). It follows for \(\mu=X\) that, if \(\sum_{k\in\mathbb{Z}}a_{k}Z^{k}\in\mathcal{R}^{I}\) with \(a_{k}\in K\), then \(\sum_{k\in\mathbb{Z}}a_{k}\mathbb{H}_{n}(X)^{k}\) converges in \(\mathcal{E}nd_{K}(M^{I})\), because \[\|a_{k}\mathbb{H}_{n}(X)^{k}\|_{M^{I}}\leqslant\max\{\|a_{k}(\mathbb{H}_{n}(X)^{k}-\mathfrak{F}(X)^{k})\|_{M^{I}},\|a_{k}\mathfrak{F}(X)^{k}\|_{M^{I}}\}\leqslant\begin{cases}|a_{k}||Z|_{I}^{k}&\text{for }k\geqslant 0,\\ |a_{k}||Z^{-1}|_{I}^{-k}&\text{for }k<0,\end{cases}\] goes to zero for \(k\) going to \(\pm\infty\). In other words, we have extended the continuous ring homomorphism \(\Theta_{n}\) to a continuous ring homomorphism \[\mathcal{R}^{I}\to\mathcal{E}nd_{K}(M^{I}),\ \ Z\mapsto\mathbb{H}_{n}(X).\] As by definition \(\kappa^{*}\circ\hat{\ell}_{n,*}\) extends to a continuous ring isomorphism \(\mathcal{R}^{I}_{K}(\Gamma_{n})\xrightarrow{\simeq}\mathcal{R}^{I}_{K}(\mathbf{B})=\mathcal{R}^{I}\), we have constructed a continuous ring homomorphism \[\mathcal{R}^{I}_{K}(\Gamma_{n})\to\mathcal{E}nd_{K}(M^{I})\] as claimed. The uniqueness is a consequence of the fact that \(\mathcal{R}^{I}_{K}(\Gamma_{n})\) is the completion of the localization \(D(\Gamma_{n},K)_{Y_{n}^{\mathbb{N}}}\) with respect to a certain norm, for which the extended map is continuous. Concerning functoriality observe that the maps \(f\) and \(f^{I}\) are automatically continuous by [BSX, Rem. 2.20] (with respect to the canonical topologies). Without loss of generality we may assume that the estimates of Lemma 4.3.18 hold for \(M\) and \(N\) simultaneously.
By the invariance under the distribution algebra and the \(\mathcal{R}\)-linearity of \(f\), the map \(f^{I}\) is compatible with the operators \(\mathbb{H}_{n}(X)^{\pm 1}\) of \(M^{I}\) and \(N^{I}\). By continuity this extends to arbitrary elements of \(\mathcal{R}^{I}_{K}(\Gamma_{n})\). (ii) follows similarly as in [KPX]: Recall that \((e_{k})\) denotes an \(\mathcal{R}^{I}\)-basis of \(M^{I}\) and consider the maps \[\Phi:\bigoplus_{k=1}^{m}\mathcal{R}^{I}_{K}(\mathbf{B})\cong M^{I},\ (f_{k})\mapsto\sum_{k=1}^{m}f_{k}e_{k},\] \[\Phi^{\prime}:\bigoplus_{k=1}^{m}\mathcal{R}^{I}_{K}(\Gamma_{n})\to M^{I},\ (f_{k})\mapsto\sum_{k=1}^{m}f_{k}(e_{k}),\] and \[\Upsilon:\bigoplus_{k=1}^{m}\mathcal{R}^{I}_{K}(\mathbf{B})\xrightarrow{\cong}\bigoplus_{k=1}^{m}\mathcal{R}^{I}_{K}(\Gamma_{n}),\] which in each component is given by \((\kappa^{*}\circ\hat{\ell}_{n,*})^{-1}\). Then we have from (95) that \[|\Phi^{\prime}\circ\Upsilon\circ\Phi^{-1}(m)-m|_{M^{I}}<|m|_{M^{I}},\] i.e., \[\|\Phi^{\prime}\circ\Upsilon\circ\Phi^{-1}-\operatorname{id}\|_{M^{I}}<1,\] whence with \(\Phi\) and \(\Upsilon\) also \(\Phi^{\prime}\) is an isomorphism, because \(\Phi^{\prime}\circ\Upsilon\circ\Phi^{-1}\) is invertible by the usual argument using the geometric series. (iii) The base change property follows from the fact that \(\Phi^{\prime}\) is compatible with changing the interval. **Theorem 4.3.20**.: _Suppose that \(\Omega\) is contained in \(K\)._ 1. _Let_ \(J\) _be any of the intervals_ \[[r_{0},r_{0}]^{1/q^{n}}\text{ or }[r_{0},r_{0}^{1/q}]^{1/q^{n}}\text{ for }n\geqslant 0.\] _Then the_ \(\Gamma_{n_{4}}\)_-action on_ \(M^{J}\) _via_ \(H_{n_{4}}\) _extends uniquely to a continuous_ \(\mathcal{R}^{J}_{K}(\Gamma_{n_{4}})\)_-module structure. Moreover,_ \(M^{J}\) _is a finitely generated free_ \(\mathcal{R}^{J}_{K}(\Gamma_{n_{4}})\)_-module; any_ \(\mathcal{R}^{[r_{0},1)}\)_-basis of_ \(M_{0}\) _is also an_ \(\mathcal{R}^{J}_{K}(\Gamma_{n_{4}})\)_-basis of_ \(M^{J}\)_. If_ \(M\xrightarrow{f}N\) _is a homomorphism of_ \(L\)_-analytic_ \((\varphi_{L},\Gamma_{L})\)_-modules, then_ \(f^{J}:M^{J}\to N^{J}\) _is_ \(\mathcal{R}^{J}_{K}(\Gamma_{n_{4}})\)_-equivariant with regard to this action._ 2. _The_ \(\Gamma_{1}\)_-action on_ \(M\) _via_ \(H_{1}\) _extends uniquely to a separately continuous_ \(\mathcal{R}_{K}(\Gamma_{1})\)_-module structure. Moreover,_ \(M\) _is a finitely generated free_ \(\mathcal{R}_{K}(\Gamma_{1})\)_-module; any_ \(\mathcal{R}^{[r_{0},1)}\)_-basis of_ \(M_{0}\) _is also an_ \(\mathcal{R}_{K}(\Gamma_{1})\)_-basis of_ \(M\)_. If_ \(M\xrightarrow{f}N\) _is a homomorphism of_ \(L\)_-analytic_ \((\varphi_{L},\Gamma_{L})\)_-modules, then_ \(f\) _is_ \(\mathcal{R}_{K}(\Gamma_{1})\)_-equivariant with regard to this action._ Proof.: (i) From Lemma 4.3.19 we obtain, for any \(n\geqslant n_{4}\), the \(H_{n}\)-action of \(\mathcal{R}^{I}_{K}(\Gamma_{n})\) on \(M^{I}\) for the original three intervals \(I\). Using Lemma 4.3.11(iii) we deduce the \(H_{n_{4}}\)-action of \(\mathcal{R}_{K}^{I^{1/q^{n-n_{4}}}}(\Gamma_{n_{4}})\) on \(M^{I^{1/q^{n-n_{4}}}}\). The asserted properties of these actions follow from the same lemmas. (ii) By the uniqueness part in (i) we may glue the \(\mathcal{R}_{K}^{J}(\Gamma_{n_{4}})\)-actions on the \(M^{J}\) to a continuous \(\mathcal{R}_{K}^{[r_{0},1)}(\Gamma_{n_{4}})\)-action on \(M^{[r_{0},1)}\). By Remark 4.3.5(ii) it is uniquely determined by the \(\Gamma_{n_{4}}\)-action.
Therefore we may now vary \(r_{0}\) and obtain in the inductive limit a separately continuous \(H_{n_{4}}\)-action of \(\mathcal{R}_{K}(\Gamma_{n_{4}})\) on \(M\). Using (78) and Lemma 4.3.11(ii) we deduce the separately continuous \(H_{1}\)-action of \(\mathcal{R}_{K}(\Gamma_{1})\) on \(M\). Again by Remark 4.3.5 this action is uniquely determined by the \(\Gamma_{1}\)-action. The remaining assertions follow from the corresponding ones in (i).

#### 4.3.7 The structure of \(M^{\psi_{M}=0}\)

We still **assume that \(\Omega\) is contained in \(K\), and let \(M\) be an \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-module over \(\mathcal{R}=\mathcal{R}_{K}(\mathbf{B})\).** We want to show that \(M^{\psi=0}\) carries a natural \(\mathcal{R}_{K}(\Gamma_{L})\)-action extending the action of \(D(\Gamma_{L},K)\). From (74) and using the formulas (66) and (65) we have \[M^{\psi_{L}=0}=\bigoplus_{a\in(o_{L}/\pi_{L})^{\times}}\eta(a,Z)\varphi_{M}(M)=\mathbb{Z}[\Gamma_{L}]\otimes_{\mathbb{Z}[\Gamma_{1}]}\left(\eta(1,Z)\varphi_{M}(M)\right). \tag{96}\] **Theorem 4.3.21**.: _The \(\Gamma_{L}\)-action on \(M\) extends to a unique separately continuous \(\mathcal{R}_{K}(\Gamma_{L})\)-action on \(M^{\psi_{L}=0}\) (with respect to the \(LF\)-topology on \(\mathcal{R}_{K}(\Gamma_{L})\) and the subspace topology on \(M^{\psi_{L}=0}\)); moreover the latter is a free \(\mathcal{R}_{K}(\Gamma_{L})\)-module of rank \(\operatorname{rk}_{\mathcal{R}}M\). If \(e_{1},\ldots,e_{r}\) is a basis of \(M\) over \(\mathcal{R}\), an \(\mathcal{R}_{K}(\Gamma_{L})\)-basis of \(M^{\psi_{L}=0}\) is given by \(\eta(1,Z)\varphi_{M}(e_{1}),\ldots,\eta(1,Z)\varphi_{M}(e_{r})\). If \(M\xrightarrow{f}N\) is a homomorphism of \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-modules, then \(f^{\psi_{L}=0}:M^{\psi_{L}=0}\to N^{\psi_{L}=0}\) is \(\mathcal{R}_{K}(\Gamma_{L})\)-equivariant with regard to this action._ Proof.: By Lemma 4.3.11 (i) we transfer the \(\mathcal{R}_{K}(\Gamma_{1})\)-action on \(M\) from Thm. 4.3.20(ii) to the space \(\eta(1,Z)\varphi_{M}(M)\). Note that the resulting action is separately continuous for the subspace topology of \(\eta(1,Z)\varphi_{M}(M)\), because the map \(\varphi_{M}:M\to M\) is a homeomorphism onto its image. The latter is a consequence of the existence of the continuous operator \(\psi_{M}\) and the relation \(\psi_{M}\circ\varphi_{M}=\frac{q}{\pi_{L}}\operatorname{id}_{M}\). Finally, because of (77) and (96) the \(\mathcal{R}_{K}(\Gamma_{1})\)-action extends to the asserted \(\mathcal{R}_{K}(\Gamma_{L})\)-action. Similarly as before, since \(\Gamma_{L}\) spans a dense subspace of \(D(\Gamma_{L},K)\), the uniqueness of the action follows from Remark 4.3.5.

#### 4.3.8 Descent

For the proof of Thm. 4.3.21 we had to work over a field \(K\) containing the period \(\Omega\), since only then were we able to write elements in \(\mathcal{R}_{K}(\Gamma_{L})\), or rather \(\mathcal{R}_{K}(\Gamma_{n})\), as certain Laurent series in the one variable \(Y_{n}\) by means of the Lubin-Tate isomorphism \(\mathcal{R}_{K}(\mathfrak{X})\cong\mathcal{R}_{K}(\mathbf{B})\), which in general does not exist over \(L\). In this section we are going to explore to which extent the structure theorem over \(K\) descends to \(L\). We shall consider two situations, i.e., we now start with an \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}_{L}(\mathfrak{X})\) or \(\mathcal{R}_{L}(\mathbf{B})\), respectively. Thus, in what follows let \(\mathfrak{Y}\) be either \(\mathfrak{X}\) or \(\mathbf{B}\) and \(\mathcal{R}=\mathcal{R}_{L}(\mathfrak{Y})\).
Then we consider the functor \[\mathcal{M}^{an}(\mathcal{R}_{L}(\mathfrak{Y})) \rightarrow\mathcal{M}^{an}(\mathcal{R}_{K}(\mathfrak{Y})),\] \[M \mapsto M_{K}:=\mathcal{R}_{K}(\mathfrak{Y})\otimes_{\mathcal{R}_{L}(\mathfrak{Y})}M\cong K\widehat{\otimes}_{\iota,L}M,\] where the last isomorphism and the well-definedness of the functor are established in [BSX, Lem. 2.23]. Moreover, there is a natural action of \(G_{L}\) on both \(\mathcal{R}_{K}(\mathfrak{Y})\cong K\widehat{\otimes}_{\iota,L}\mathcal{R}_{L}(\mathfrak{Y})\) and \(M_{K}\) via the first tensor factor (and the identity on the second). We have \[\mathcal{R}_{K}(\mathfrak{Y})^{G_{L}}\cong\mathcal{R}_{L}(\mathfrak{Y}) \tag{97}\] by [BSX, Prop. 2.7 (iii)], whence also \[(M_{K})^{G_{L}}=M \tag{98}\] because \(M\) is finitely generated free over \(\mathcal{R}_{L}(\mathfrak{Y})\) (by definition or [BSX, Thm. 3.17]) and hence \(M_{K}\) has a \(G_{L}\)-invariant basis over \(\mathcal{R}_{K}(\mathfrak{Y})\). Since the \(\varphi_{L}\)-operator on \(\mathcal{R}_{K}(\mathfrak{Y})\) is induced from that on \(\mathcal{R}_{L}(\mathfrak{Y})\), it commutes with the action of \(G_{L}\). Similarly, one checks that this action commutes with the operator \(\psi_{L}\) of \(\mathcal{R}_{K}(\mathfrak{Y})\). Indeed, by Lemma 4.1.14 there exists a \(G_{L}\)-invariant basis of \(\mathcal{R}_{K}(\mathfrak{Y})\) over \(\varphi_{L}(\mathcal{R}_{K}(\mathfrak{Y}))\), whence the trace commutes with the \(G_{L}\)-action. From this and the construction of the operator \(\psi_{M}\), one derives easily that also \(\psi_{M}\) commutes with the \(G_{L}\)-action. As a consequence we obtain natural isomorphisms \[M^{\psi_{M}=0}\cong\left((M_{K})^{G_{L}}\right)^{\psi_{M}=0}\cong\left((M_{K})^{\psi_{M}=0}\right)^{G_{L}}. \tag{99}\] Since the Lubin-Tate isomorphism \(\mathcal{R}_{K}(\mathfrak{X})\cong\mathcal{R}_{K}(\mathbf{B})\) respects the \((\varphi_{L},\Gamma_{L})\)-module structure, Thm. 4.3.21 applies for both choices of \(\mathfrak{Y}\), i.e., we obtain a separately continuous action \[\mathcal{R}_{K}(\Gamma_{L})\times(M_{K})^{\psi=0}\rightarrow(M_{K})^{\psi=0}. \tag{100}\] Moreover, if \(M=\bigoplus_{i=1}^{r}\mathcal{R}_{L}(\mathfrak{Y})e_{i}\), then the families \((\eta(1,Z)\varphi_{M}(e_{i}))\) and \((\mathrm{ev}_{1}\varphi_{M}(e_{i}))\) form bases of \((M_{K})^{\psi=0}\) as \(\mathcal{R}_{K}(\Gamma_{L})\)-module in the cases \(\mathbf{B}\) and \(\mathfrak{X}\), respectively. Therefore, we consider next a natural \(G_{L}\)-action on \(\mathcal{R}_{K}(\Gamma_{L})\) and show that (100) is \(G_{L}\)-equivariant. For the first aim we use the canonical isomorphisms (77) and (80) \[\mathcal{R}_{K}(\Gamma_{L})\xleftarrow{\;\cong\;}\mathbb{Z}[\Gamma_{L}]\otimes_{\mathbb{Z}[\Gamma_{n}]}\mathcal{R}_{K}(\Gamma_{n})\xrightarrow[\;\cong\;]{\hat{\ell}_{n*}}\mathbb{Z}[\Gamma_{L}]\otimes_{\mathbb{Z}[\Gamma_{n}]}\mathcal{R}_{K}(\mathfrak{X})\] to extend the \(G_{L}\)-action from \(\mathcal{R}_{K}(\mathfrak{X})\) to \(\mathcal{R}_{K}(\Gamma_{L})\); clearly, we obtain from (97) and the fact that the isomorphism \(\mathcal{R}_{K}(\Gamma_{n})\xrightarrow[\;\cong\;]{\hat{\ell}_{n*}}\mathcal{R}_{K}(\mathfrak{X})\) is defined over \(L\), that \[\mathcal{R}_{K}(\Gamma_{L})^{G_{L}}\cong\mathcal{R}_{L}(\Gamma_{L}).
\tag{101}\] For the second aim we prove the following. **Lemma 4.3.22**.: _The action (100) is \(G_{L}\)-equivariant._ Proof.: We fix \(\sigma\in G_{L}\) and define a second separately continuous action \[\mathcal{R}_{K}(\Gamma_{L})\times(M_{K})^{\psi=0}\to(M_{K})^{\psi=0}\] by sending \((\lambda,x)\) to \(\sigma^{-1}\left(\sigma(\lambda)(\sigma(x))\right)\) (using that \(\sigma\) and \(\psi_{L}\) commute and that \(\sigma\) is a homeomorphism). By the uniqueness statement of Thm. 4.3.21, it suffices to show that the new and the original action coincide on \(\Gamma_{L}\times(M_{K})^{\psi=0}\). We shall show that these actions coincide even as actions \(\Gamma_{L}\times M_{K}\to M_{K}\): For \(\gamma\in\Gamma_{L}\), \(f\in\mathcal{R}_{K}(\mathfrak{Y})\) and \(m\in M\) we calculate \[\sigma^{-1}\left(\sigma(\gamma)(\sigma(f\otimes m))\right) =\sigma^{-1}\left(\gamma(\sigma(f)\otimes m)\right)\] \[=\sigma^{-1}\left(\gamma(\sigma(f))\otimes\gamma(m)\right)\] \[=\sigma^{-1}\left(\sigma(\gamma(f))\right)\otimes\gamma(m)\] \[=\gamma(f)\otimes\gamma(m)\] \[=\gamma(f\otimes m).\] Here we used firstly that \(\sigma\) acts trivially on \(\gamma\) (or rather \(\mathrm{ev}_{\gamma}\)), as these are already defined over \(L\) (via the Fourier transformation), and secondly that the \(G_{L}\)- and \(\Gamma_{L}\)-actions commute. Since this equality holds for all \(\sigma\in G_{L}\), the claim follows. Taking \(G_{L}\)-invariants of (100) therefore induces - upon using (99) and (101) - the following separately continuous action \[\mathcal{R}_{L}(\Gamma_{L})\times M^{\psi=0}\to M^{\psi=0} \tag{102}\] which extends the \(\Gamma_{L}\)-action. We thus obtain the following. **Theorem 4.3.23**.: 1. _The \(\Gamma_{L}\)-action on \(M\) (in \(\mathcal{M}^{an}(\mathcal{R}_{L}(\mathfrak{X}))\) or \(\mathcal{M}^{an}(\mathcal{R}_{L}(\mathbf{B}))\)) extends to a separately continuous \(\mathcal{R}_{L}(\Gamma_{L})\)-action on \(M^{\psi_{L}=0}\) (with respect to the \(LF\)-topology on \(\mathcal{R}_{L}(\Gamma_{L})\) and the subspace topology on \(M^{\psi_{L}=0}\)). If \(M\xrightarrow{f}N\) is a homomorphism of \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-modules, then \(f^{\psi_{L}=0}:M^{\psi_{L}=0}\to N^{\psi_{L}=0}\) is \(\mathcal{R}_{L}(\Gamma_{L})\)-equivariant with regard to this action._ 2. _If \(\mathfrak{Y}=\mathfrak{X}\) then \(M^{\psi_{L}=0}\) is a free \(\mathcal{R}_{L}(\Gamma_{L})\)-module of rank \(\mathrm{rk}_{\mathcal{R}}M\). More precisely, if \(e_{1},\ldots,e_{r}\) is a basis of \(M\) over \(\mathcal{R}_{L}(\mathfrak{X})\), then an \(\mathcal{R}_{L}(\Gamma_{L})\)-basis of \(M^{\psi_{L}=0}\) is given by \(\mathrm{ev}_{1}\varphi_{M}(e_{1}),\ldots,\mathrm{ev}_{1}\varphi_{M}(e_{r})\)._ Proof.: It is easy to check that also the \(\mathcal{R}_{L}(\Gamma_{L})\)-equivariance of \(f^{\psi_{L}=0}\) follows by descent. Therefore only (ii) remains to be shown. But this is an immediate consequence of the fact noted above that the family \((\mathrm{ev}_{1}\varphi_{M}(e_{i}))\) forms a \(G_{L}\)-invariant basis of \((M_{K})^{\psi=0}\) as \(\mathcal{R}_{K}(\Gamma_{L})\)-module, by just taking \(G_{L}\)-invariants again. **Remark 4.3.24**.: 1. _For each complete intermediate field \(L\subseteq K^{\prime}\subseteq\mathbb{C}_{p}\) we obtain an analogous structure theorem for \((M_{K^{\prime}})^{\psi=0}\) over \(\mathcal{R}_{K^{\prime}}(\Gamma_{L})\) by replacing \(L\) by \(K^{\prime}\) everywhere in the above reasoning._ 2.
_Since for \(\mathfrak{Y}=\mathbf{B}\) the basis \((\eta(1,Z)\varphi_{M}(e_{i}))\) of \((M_{K})^{\psi=0}\) as \(\mathcal{R}_{K}(\Gamma_{L})\)-module is visibly not \(G_{L}\)-invariant, we cannot conclude the analogue of Thm. 4.3.23(ii) in this case._

#### 4.3.9 The Mellin transform and twists

Extending the Mellin transform from Lemma 4.1.6 we introduce the map \[\mathfrak{M}:\mathcal{R}_{K}(\Gamma_{L})\stackrel{{\cong}}{{\longrightarrow}}\mathcal{R}_{K}(\mathfrak{X})^{\psi_{L}=0},\ \ \lambda\mapsto\lambda(\mathrm{ev}_{1})\,\] which is an isomorphism by Thm. 4.3.23. If \(\Omega\in K\), then its composite with the LT-isomorphism is the isomorphism \[\mathfrak{M}_{LT}:\mathcal{R}_{K}(\Gamma_{L})\stackrel{{\cong}}{{\longrightarrow}}\mathcal{R}_{K}(\mathbf{B})^{\psi_{L}=0},\ \ \lambda\mapsto\lambda(\eta(1,Z))\.\] Recall the twist operators \(Tw_{\chi}\) from section 4.1.3. **Lemma 4.3.25**.: _The diagram_ (103) _is commutative; in particular, the right hand vertical map is an isomorphism._ Proof.: The commutativity can be checked after base change. Assuming \(\Omega\in K\) the diagram corresponds by Remark 4.2.9 to the diagram (104). Now, the corresponding result with \(\mathcal{R}_{K}(\Gamma_{L})\) replaced by \(D(\Gamma_{L},K)\) is implicitly given in sections 4.1.3 and 4.1.4. [Co2, §1.2.4] establishes, for \(\gamma\in\Gamma_{L}\), the relation \(\partial_{\mathrm{inv}}\circ\gamma=\chi_{LT}(\gamma)\gamma\circ\partial_{\mathrm{inv}}\) as operators on \(\mathcal{R}_{K}(\mathbf{B})\). It follows by \(K\)-linearity and continuity that the relation of operators \(\partial_{\mathrm{inv}}\circ\lambda=Tw_{\chi_{LT}}(\lambda)\circ\partial_{\mathrm{inv}}\) holds for all \(\lambda\in D(\Gamma_{L},K)\). By continuity of the action of \(\mathcal{R}_{K}(\Gamma_{L})=\mathbb{Z}[\Gamma_{L}]\otimes_{\mathbb{Z}[\Gamma_{n}]}\mathcal{R}_{K}(\Gamma_{n})\) on \(\mathcal{R}_{K}(\mathbf{B})^{\psi_{L}=0}\) it suffices to check the compatibility for the element \(Y_{n}^{-1}\), where \(Y_{n}\in D(\Gamma_{n},K)\), for \(n\gg 0\), has been defined at the end of section 4.3.4. Using that \(Tw_{\chi_{LT}}\) is multiplicative and that \(\partial_{\mathrm{inv}}(\eta(1,Z))=\Omega\eta(1,Z)\), the claim follows from the relation \[Tw_{\chi_{LT}}(Y_{n}^{-1})\eta(1,Z) =Tw_{\chi_{LT}}(Y_{n})^{-1}\frac{1}{\Omega}\partial_{\mathrm{inv}}\left(Y_{n}Y_{n}^{-1}\eta(1,Z)\right)\] \[=Tw_{\chi_{LT}}(Y_{n})^{-1}Tw_{\chi_{LT}}(Y_{n})\frac{1}{\Omega}\partial_{\mathrm{inv}}\left(Y_{n}^{-1}\eta(1,Z)\right)\] \[=\frac{1}{\Omega}\partial_{\mathrm{inv}}\left(Y_{n}^{-1}\eta(1,Z)\right).\] **Lemma 4.3.26**.: _Assume \(\Omega\in K\) and let \(n_{1}\) be as in Lemma 4.3.13. Then, for \(n\geqslant n_{1}\), the map \(\mathfrak{M}_{LT}\) induces isomorphisms_ \[\mathcal{R}_{K}(\Gamma_{n})\cong\varphi_{L}^{n}(\mathcal{R}_{K}(\mathbf{B}))\eta(1,Z)\ \ (\subseteq\mathcal{R}_{K}(\mathbf{B})^{\psi_{L}=0})\] _of \(\mathcal{R}_{K}(\Gamma_{n})\)-modules and_ \[D(\Gamma_{n},K)\cong\varphi_{L}^{n}(\mathcal{O}_{K}(\mathbf{B}))\eta(1,Z)\ \ (\subseteq\mathcal{O}_{K}(\mathbf{B})^{\psi_{L}=0})\] _of \(D(\Gamma_{n},K)\)-modules._ Proof.: By taking limits the first isomorphism follows from Lemma 4.3.19(ii) in combination with Lemma 4.3.11(i), both applied to \(M^{I}=\mathcal{R}^{I}_{K}(\mathbf{B})\).
The isomorphism of the latter restricts visibly to the isomorphism \(\mathcal{O}_{K}(\mathbf{B}_{I})\cong\varphi_{L}^{n}(\mathcal{O}_{K}(\mathbf{B }_{I}))\eta(1,Z)\) while \(\mathcal{O}_{K}(\mathbf{B}_{I})\) is a free \(\mathcal{O}_{K}(\hat{\ell}_{n}^{*}\circ\kappa(\mathbf{B}_{I}))\)-module with basis \(1\) by an obvious analogue of the former reference. Hence we obtain the second isomorphism by the same reasoning. ### Explicit elements There are two sources for explicit elements in the distribution algebras \(D(o_{L}^{\times},L)\) and \(D(U_{n},L)\), where in this section we fix an \(n\geq n_{1}\), i.e., \(\log:U_{n}\xrightarrow{\cong}\pi_{L}^{n}o_{L}\) is an isomorphism. First of all we have, for any group element \(u\in o_{L}^{\times}\), resp. \(u\in U_{n}\), the Dirac distribution \(\delta_{u}\) in \(D(o_{L}^{\times},L)\), resp. in \(D(U_{n},L)\). As in section 4.1.1 the corresponding holomorphic function \(F_{\delta_{u}}=\operatorname{ev}_{u}\) is the function of evaluation in \(u\). **Lemma 4.4.1**.: 1. _Let_ \(u\in o_{L}^{\times}\) _be any element not of finite order; then the zeros of the function_ \(\operatorname{ev}_{u}-1\) _on_ \(\mathfrak{X}^{\times}\) _are exactly the characters_ \(\chi\) _of finite order such that_ \(\chi(u)=1\)_._ 2. _For any_ \(1\neq u\in U_{n}\) _the zeros of the function_ \(\operatorname{ev}_{u}-1\) _on_ \(\mathfrak{X}_{n}^{\times}\) _all have multiplicity one._ Proof.: i. Obviously the zeros of \(\operatorname{ev}_{u}-1\) are the characters \(\chi\) such that \(\chi(u)=1\). On the other hand consider any locally \(L\)-analytic character \(\chi:o_{L}^{\times}\to\mathbb{C}_{p}^{\times}\). Its kernel \(H:=\ker(\chi)\) is a closed locally \(L\)-analytic subgroup of \(o_{L}^{\times}\). Hence its Lie algebra \(\operatorname{Lie}(H)\) is an \(L\)-subspace of \(\operatorname{Lie}(o_{L}^{\times})=L\). We see that either \(\operatorname{Lie}(H)=L\), in which case \(H\) is open in \(o_{L}^{\times}\) and hence \(\chi\) is a character of finite order, or \(\operatorname{Lie}(H)=0\), in which case \(H\) is zero dimensional and hence is a finite subgroup of \(o_{L}^{\times}\). If \(\chi(u)=1\) then, by our assumption on \(u\), the second case cannot happen. ii. (We will recall the concept of multiplicity further below.) Because of the isomorphism \(\mathfrak{X}_{n}^{\times}\cong\mathfrak{X}\) it suffices to prove the corresponding assertion in the additive case. Let \(0\neq a\in o_{L}\) and let \(\chi\in\mathfrak{X}(\mathbb{C}_{p})\) be a character of finite order such that \(\chi(a)=1\). By [ST2] we have an isomorphism between \(\mathfrak{X}_{/\mathbb{C}_{p}}\) and the open unit disk \(\mathbf{B}_{/\mathbb{C}_{p}}\). Let \(z\in\mathbf{B}(\mathbb{C}_{p})\) denote the image of \(\chi\) under this isomorphism. By [ST2, Prop. 3.1] and formula (\(\infty\)) on p. 458, the function \(\operatorname{ev}_{a}-1\) corresponds under this isomorphism to the holomorphic function on \(\mathbf{B}(\mathbb{C}_{p})\) given by the formal power series \[F_{at_{0}^{\prime}}(Z)=\exp(g\Omega\log_{LT}(Z))-1\,\] where \(\Omega\neq 0\) is a certain period. By assumption we have \(F_{at_{0}^{\prime}}(z)=0\). On the other hand the formal derivative of this power series is \[\frac{d}{dZ}F_{gt_{0}^{\prime}}(Z)=g\Omega g_{LT}(Z)(F_{gt_{0}^{\prime}}(Z)+1)\.\] Since \(g_{LT}(Z)\) is a unit in \(o_{L}[[Z]]\) we see that \(z\) is not a zero of this derivative. It follows that \(z\) has multiplicity one as a zero of \(F_{at_{0}^{\prime}}(Z)\). 
The other source comes from the Lie algebra \(\operatorname{Lie}(U_{n})=\operatorname{Lie}(o_{L}^{\times})=L\). We have the element \[\nabla:=1\in\operatorname{Lie}(o_{L}^{\times})=L\.\] On the other hand there is the \(L\)-linear embedding ([ST1, SS2]) \[\operatorname{Lie}(U_{n}) \longrightarrow D(U_{n},L)\] \[\mathfrak{x} \longmapsto[f\mapsto\frac{d}{dt}f(\exp_{U_{n}}(t\mathfrak{x}))_{| t=0}]\,\] which composed with the Fourier isomorphism becomes the map \[\operatorname{Lie}(U_{n}) \longrightarrow\mathcal{O}_{L}(\mathfrak{X}_{n}^{\times})\] \[\mathfrak{x} \longmapsto[\chi\mapsto d\chi(\mathfrak{x})]\.\] On the one hand we therefore may and will view \(\nabla\) always as a distribution on \(U_{n}\) or \(\sigma_{L}^{\times}\). On the other hand, using the formula before [BSX, Lem. 1.28], one checks that the function \(F_{\nabla}\) (corresponding to \(\nabla\) via the Fourier isomorphism) on \(\mathfrak{X}_{n}^{\times}\) is explicitly given by \[F_{\nabla}(\chi)=\pi_{L}^{-n}\log(\chi(\exp(\pi_{L}^{n}))). \tag{105}\] **Lemma 4.4.2**.: _The zeros of the function \(F_{\nabla}\) on \(\mathfrak{X}_{n}^{\times}\) are precisely the characters of finite order each with multiplicity one._ Proof.: Once again because of the isomorphism \(\mathfrak{X}_{n}^{\times}\simeq\mathfrak{X}\) it suffices to prove the corresponding assertion in the additive case. This is done in [BSX, Lem. 1.28]. To recall from [BSX, SS1.1] the concept of multiplicity used above and to explain a divisibility criterion in these rings of holomorphic functions we let \(\mathfrak{Y}\) be any one dimensional smooth rigid analytic quasi-Stein space over \(L\) such that \(\mathcal{O}_{L}(\mathfrak{Y})\) is an integral domain. Under these assumptions the local ring in a point \(y\) of the structure sheaf \(\mathcal{O}_{\mathfrak{Y}}\) is a discrete valuation ring. Let \(\mathfrak{m}_{y}\) denote its maximal ideal. The divisor \(\operatorname{div}(f)\) of any nonzero function \(f\in\mathcal{O}_{L}(\mathfrak{Y})\) is defined to be the function \(\operatorname{div}(f):\mathfrak{Y}\to\mathbb{Z}_{\geqslant 0}\) given by \(\operatorname{div}(f)(y)=n\) if and only if the germ of \(f\) in \(y\) lies in \(\mathfrak{m}_{y}^{n}\backslash\mathfrak{m}_{y}^{n+1}\). By Lemma 1.1 in (loc. cit.) for any affinoid subdomain \(\mathfrak{Z}\subseteq\mathfrak{Y}\) the set \[\{x\in\mathfrak{Z}|\operatorname{div}(f)>0\}\text{ is finite.} \tag{106}\] **Lemma 4.4.3**.: _For any two nonzero functions \(f_{1},f_{2}\in\mathcal{O}_{L}(\mathfrak{Y})\) we have \(f_{2}\in f_{1}\mathcal{O}_{L}(\mathfrak{Y})\) if and only if \(\operatorname{div}(f_{2})\geqslant\operatorname{div}(f_{1})\)._ Proof.: We consider the principal ideal \(f_{1}\mathcal{O}_{L}(\mathfrak{Y})\). As a consequence of [BSX, Prop. 1.6 and Prop. 1.4] we have \[f_{1}\mathcal{O}_{L}(\mathfrak{Y})=\{f\in\mathcal{O}_{L}(\mathfrak{Y}) \backslash\{0\}:\operatorname{div}(f)\geqslant\operatorname{div}(f_{1})\} \cup\{0\}.\] We now apply these results to exhibit a few more explicit elements in the distribution algebra \(D(U_{n},L)\), which will be used later on. **Lemma 4.4.4**.: _For any \(1\neq u\in U_{n}\) the fraction \(\frac{\nabla}{\partial_{u}-1}\) is a well defined element in the integral domain \(D(U_{n},L)\)._ Proof.: By the Fourier isomorphism we may equivalently establish that the fraction \(\frac{F_{\nabla}}{\operatorname{ev}_{u}-1}\) exists in \(\mathcal{O}_{L}(\mathfrak{X}_{u}^{\times})\). But for this we only need to combine the Lemmas 4.4.1, 4.4.2, and 4.4.3. 
The next elements will only lie in the Robba ring of \(U_{n}\). Since \(\mathfrak{X}_{n}^{\times}\cong\mathfrak{X}\) we deduce from Prop. 4.1.7 and the subsequent discussion that there is an admissible covering \(\mathfrak{X}_{n}^{\times}=\bigcup_{j\geqslant 1}\mathfrak{Y}_{n,j}\) by an increasing sequence \(\mathfrak{Y}_{n,1}\subseteq\ldots\subseteq\mathfrak{Y}_{n,j}\subseteq\ldots\) of affinoid subdomains \(\mathfrak{Y}_{n,j}\) with the following properties: * The system \((\mathfrak{X}_{n}^{\times}\backslash\mathfrak{Y}_{n,j})/_{\mathbb{C}_{p}}\) is isomorphic to an increasing system of one dimensional annuli. This implies: * \(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\) is the increasing union of the rings \(\mathcal{O}_{L}(\mathfrak{X}_{n}^{\times}\backslash\mathfrak{Y}_{n,j})\) and contains \(\mathcal{O}_{L}(\mathfrak{X}_{n}^{\times})\); * each \(\mathcal{O}_{L}(\mathfrak{X}_{n}^{\times}\backslash\mathfrak{Y}_{n,j})\) as well as \(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\) are integral domains. * Each \(\mathfrak{X}_{n}^{\times}\backslash\mathfrak{Y}_{n,j}\) is a one dimensional smooth quasi-Stein space. In particular, the \(\mathcal{O}_{L}(\mathfrak{X}_{n}^{\times}\backslash\mathfrak{Y}_{n,j})\) are naturally Frechet algebras, and we may view \(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\) as their locally convex inductive limit. We also conclude that Lemma 4.4.3 applies to each \(\mathfrak{X}_{n}^{\times}\backslash\mathfrak{Y}_{n,j}\). We now fix a basis \(b=(b_{1},\ldots,b_{d})\) of \(U_{n}\) as a \(\mathbb{Z}_{p}\)-module such that \(b_{i}\neq 1\) for any \(1\leqslant i\leqslant d\). **Proposition 4.4.5**.: _The fraction_ \[\Xi_{b}:=\frac{F_{\nabla}^{d-1}}{\prod_{i=1}^{d}(\operatorname{ev}_{b_{i}}-1)}\] _is well defined in the Robba ring \(\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\)._ Proof.: The zeros of the fraction \(\frac{F_{\nabla}}{\operatorname{ev}_{b_{i}}-1}\in\mathcal{O}_{L}(\mathfrak{X }_{n}^{\times})\) are precisely those finite order characters which are nontrivial on \(b_{i}\). Hence, if we fix a \(1\leqslant j\leqslant d\), then the product \(\prod_{i\neq j}\frac{F_{\nabla}}{\operatorname{ev}_{b_{i}}-1}\) still has a zero in any finite order character which is nontrivial on \(b_{i}\) for at least one \(i\neq j\). On the other hand the zeros of \(\operatorname{ev}_{b_{j}}-1\) are those finite order characters which are trivial on \(b_{j}\) (and they have multiplicity one). Since only the trivial character is trivial on all \(b_{1},\ldots,b_{d}\) we see that all zeros of \(\operatorname{ev}_{b_{j}}-1\) with the exception of the trivial character occur also as zeros of the product \(\frac{F_{\nabla}^{d-1}}{\prod_{i\neq j}(\operatorname{ev}_{b_{i}}-1)}\). It follows that the asserted fraction \(\Xi_{b}\) exists in \(\mathcal{O}_{L}(\mathfrak{X}_{n}^{\times}\backslash\mathfrak{Y}_{n,j})\) provided \(j\) is large enough so that the trivial character is a point in \(\mathfrak{Y}_{n,j}\). Since \((\prod_{i=1}^{d}(\operatorname{ev}_{b_{i}}-1))\Xi_{b}=F_{\nabla}^{d-1}\) and \(\mathcal{R}_{L}(\mathfrak{X}_{\Gamma}^{\times})\) is an integral domain, we see the independence of \(j\). In fact, the proof of Prop. 4.4.5 shows that \(\Xi_{b}\) is a meromorphic function on \(\mathfrak{X}_{n}^{\times}\) with a single pole at the trivial character, which moreover is a simple pole. We abbreviate \(\ell(b):=\prod_{i=1}^{d}\log(b_{i})\). 
**Proposition 4.4.6**.: _For any other basis \(b^{\prime}=(b^{\prime}_{1},\ldots,b^{\prime}_{d})\) of \(U_{n}\) as a \(\mathbb{Z}_{p}\)-module with \(b^{\prime}_{i}\neq 1\) we have_ \[\ell(b^{\prime})\Xi_{b^{\prime}}-\ell(b)\Xi_{b}\in\mathcal{O}_{L}(\mathfrak{X }_{n}^{\times})\.\] Proof.: We only have to check that the asserted difference does not any longer have a pole at the trivial character. Both, \(\Xi_{b^{\prime}}^{-1}\) and \(\Xi_{b}^{-1}\), are uniformizers in the local ring \(\mathcal{O}_{1}\) of \(\mathfrak{X}_{n}^{\times}\) in the trivial character. Hence we have in \(\mathcal{O}_{1}\) an equality of the form \[\frac{\Xi_{b^{\prime}}}{\Xi_{b}}=x+\Xi_{b}^{-1}\cdot G\] with some \(x\in L\) and \(G\in\mathcal{O}_{1}\). Our assertion amounts to the claim that \(x=\prod_{i}\frac{\log(b_{i})}{\log(b^{\prime}_{i})}\). To compute \(x\) we use (27) which leads to the open embedding \[\mathbf{B}(r_{0})_{/L}\xrightarrow{\Xi}\mathfrak{X}(r_{0})\xrightarrow{\Xi} \mathfrak{X}\xrightarrow{\ell^{\bullet}}\mathfrak{X}_{n}^{\times}\] which maps \(y\) to the character \(\chi_{y}(u):=\exp(\pi_{L}^{-n}\log(u)y)\) ( and, in particular, \(0\) to the trivial character). Using (105) we see that \(F_{\nabla}\) pulls back to the function \(y\mapsto\pi_{L}^{-n}y\) on \(\mathbf{B}(r_{0})\). On the other hand \(\operatorname{ev}_{b_{i}}-1\) pulls back to \(y\mapsto\exp(\pi_{L}^{-n}\log(b_{i})y)-1\). Hence \(\Xi_{b}\) pulls back to the meromorphic function \[y\longmapsto\frac{\pi_{L}^{-n(d-1)}y^{d-1}}{\prod_{i}(\exp(\pi_{L}^{-n}\log(b_ {i})y)-1)}\.\] Its germ at zero lies in \(\frac{1}{\pi_{L}^{-n}(\prod_{i}\log(b_{i}))y}(1+y\mathcal{O}_{0})\), where \(\mathcal{O}_{0}\) denotes the local ring of \(\mathbf{B}(r_{0})_{/L}\) in zero. It follows that the germ of the pull back of \(\frac{\Xi_{b}}{\Xi_{b}}\) lies in \((\prod_{i}\frac{\log(b_{i})}{\log(b_{i}^{\prime})})(1+y\mathcal{O}_{0})\). By Lemma 4.4.2 the function \(F_{\nabla}\Xi_{b}\) is holomorphic on \(\mathfrak{X}_{n}^{\times}\) and has no zero in the trivial character. **Lemma 4.4.7**.: _The value of \(F_{\nabla}\Xi_{b}\) at the trivial character is \(\ell(b)^{-1}\)._ Proof.: We use the same strategy as in the previous proof. The function \(\Theta_{b}\) pulls back to the function \[y\longmapsto\frac{\pi_{L}^{-nd}y^{d}}{\prod_{i}(\exp(\pi_{L}^{-n}\log(b_{i})y )-1)}\] on \(\mathbf{B}(r_{0})_{/L}\), and we have to compute its value at \(0\). But visibly the above right hand side is a power series in \(y\) with constant term \(\frac{1}{\prod_{i=1}^{d}\log(b_{i})}\). These last two facts suggest to renormalize our functions by setting \[\overline{\Xi}_{b}:=\ell(b)\Xi_{b}\qquad\text{and}\qquad\Theta_{b}:=F_{\nabla }\overline{\Xi}_{b}\.\] Choosing a field \(K\) containing \(\Omega\) we also let \(\widetilde{\Xi}_{b}\) denote the image of \(\overline{\Xi}_{b}\) under the composite map \[\mathcal{R}_{L}(\mathfrak{X}_{n}^{\times})\xrightarrow{\ell_{n*}}\mathcal{R} _{L}(\mathfrak{X})\subseteq\mathcal{R}_{K}(\mathfrak{X})\xrightarrow{\kappa^{ *}}\mathcal{R}_{K}(\mathbf{B}). \tag{107}\] **Remark 4.4.8**.: _Suppose that \(K\) contains \(\Omega\). We have_ \[\widetilde{\Xi}_{b}=\frac{\ell(b)(\frac{\Omega}{\pi_{L}^{n}}\log_{LT}(Z))^{d- 1}}{\prod_{j}(\exp(\log(b_{j})\frac{\Omega}{\pi_{L}^{n}}\log_{LT}(Z))-1)}\,\] _and it follows from the proof of Prop. 
4.4.5 that \(Z\widetilde{\Xi}_{b}\) belongs to \(\mathcal{O}_{K}(\mathbf{B})\) with constant term \((\frac{\Omega}{\pi_{L}^{n}})^{-1}\), whence_ \[\widetilde{\Xi}_{b}\equiv\frac{\pi_{L}^{n}}{\Omega Z}\bmod\mathcal{O}_{K}( \mathbf{B}).\] Proof.: One checks that the map (107) sends a distribution \(\mu\) to the map \[g_{\mu}(z)=\mu(\exp(\Omega\frac{\log(-)}{\pi_{L}^{n}}\log_{LT}(z)))\.\] In particular, a Dirac distribution \(\delta_{a}\) is sent to \(\exp(\Omega\frac{\log(a)}{\pi_{L}^{n}}\log_{LT}(z))\). Recall that the action of \(\nabla\) as distribution sends a locally \(L\)-analytic function \(f\) to \(-\left(\frac{d}{dt}f(\exp(-t))\right)_{|t=0},\) whence \(\nabla\) is sent to \[\nabla\left(\exp(\Omega\frac{\log(-)}{\pi_{L}^{n}}\log_{LT}(z))\right)=-\left( \frac{d}{dt}\exp(\Omega\frac{\log(\exp(-t))}{\pi_{L}^{n}}\log_{LT}(z))\right)_ {|t=0}=\frac{\Omega}{\pi_{L}^{n}}\log_{LT}(z).\] **Remark 4.4.9**.: _Recall that \(\Theta_{b}\) lies in \({\mathcal{O}}_{L}({\mathfrak{X}}_{n}^{\times})\) and therefore can be viewed, via the Fourier transform, as a distribution in \(D(U_{n},L)\subseteq D(o_{L}^{\times},L)\). If \(K\) contains \(\Omega\) and for sufficiently large \(n\) the Mellin transform \({\mathfrak{M}}\) in Lemma 4.1.6 then satisfies_ \[\kappa^{*}\circ{\mathfrak{M}}(\Theta_{b})=\varphi_{L}^{n}(\xi_{b})\eta(1,Z)\] _with_ \[\xi_{b}\equiv\frac{\log_{LT}(Z)}{Z}\bmod\log_{LT}(Z){\mathcal{O}}_{K}({\bf B })\.\] Proof.: Consider the the element \[F(X)=\frac{X}{\exp(X)-1}=1+XQ(X)\] with \(Q(X)\in{\mathbb{Q}}_{p}[[X]]\) and let \(r>0\) be such that \(Q(X)\) converges on \(|X|\leq r\). We shall proof the claim within the Banach algebra \({\mathcal{R}}_{K}^{I}({\bf B})\) for \(I=[0,r]\) (which contains \({\mathcal{O}}_{K}({\bf B})\) and using that the actions on both rings are compatible). We assume for the operator norm that \(\|\delta_{b_{i}}-1\|_{I}<\min(p^{-\frac{1}{p-1}},r)\) for all \(i\) (otherwise we enlarge \(n\) according to Lemma 4.3.13). From [BSX, Cor. 2.3.2, proof of Lem. 2.3.1] it follows that \(\nabla=\frac{\log(\delta_{b_{i}})}{\log(b_{i})}\) as operators in the Banach algebra \(A\) of continuous linear endomorphisms of \({\mathcal{R}}_{K}^{I}({\bf B})\) and \[\exp(\log(b_{i})\nabla)=\exp(\log(\delta_{b_{i}}))=\delta_{b_{i}} \tag{108}\] in \(A\). Moreover, \[\|\log(\delta_{b_{i}})\|_{I}<\min(p^{-\frac{1}{p-1}},r) \tag{109}\] for all \(i\), whence \(\|\nabla\|_{I}<\min(p^{-\frac{1}{p-1}},r)|\log(b_{i})|\). Then, as operators in \(A\) we have \[\log(b_{i})^{-1}+\nabla Q(\log(b_{i})\nabla)=\log(b_{i})^{-1}F(\log(b_{i}) \nabla)=\frac{\nabla}{\exp(\log(b_{i})\nabla)-1}=\frac{\nabla}{\delta_{b_{i}} -1}. \tag{110}\] Hence \[\Theta_{b}=\frac{\ell(b)\nabla^{d}}{\prod(\exp(\log(b_{j})\nabla)-1)}=1+\ell( b)\nabla g(\log(b_{j})\nabla)\] for some power series \(g\in{\mathcal{R}}_{K}^{I}({\bf B})\). It follows that \[\kappa^{*}\circ{\mathfrak{M}}(\Theta_{b})=\big{(}1+\ell(b)\Omega\log_{LT}(Z)f( Z)\big{)}\eta(1,Z). \tag{111}\] for some \(f(Z)\in{\mathcal{R}}_{K}^{I}({\bf B})\). Indeed, concerning the derived action we have \[\nabla\left(\eta(1,Z)\right)=\left(\frac{d}{dt}\exp(\Omega\exp(t)\log_{LT}(Z) )\right)_{|t=0}=\Omega\log_{LT}(Z)\eta(1,Z)\] (cf. 
also [BSX, end of SS2.3] for the fact that \[\nabla\ \text{acts as}\ \log_{LT}(Z)\partial_{\text{inv}}\ \text{on}\ {\mathcal{O}}_{K}({\bf B})\ ) \tag{112}\] and \[\nabla(\Omega\log_{LT}(Z))=\Omega\log_{LT}(Z).\] Furthermore, we obtain inductively that \[\nabla^{i}\eta(1,Z)=\left(\prod_{k=0}^{i-1}(\Omega\log_{LT}(Z)+k)\right)\eta(1,Z)\] for all \(i\geq 0.\) The convergence of \(f(Z)\) can be deduced using the operator norm (109). On the other hand, according to [BF, Lem. 2.4.2] we have \[\Theta_{b}\eta(1,Z)=\frac{\ell(b)\log_{LT}(Z)}{\varphi_{L}^{n}(Z)}g(Z)\] for some \(g(Z)\in\mathcal{O}_{K}(\mathbf{B}).\) Since the element \(\Theta_{b}\eta(1,Z)\) lies in \((\mathcal{O}_{K}(\mathbf{B}))^{\psi_{L}=0},\) we conclude from \[0=\psi_{L}(\frac{\pi_{L}^{-1}\varphi_{L}(\log_{LT}(Z))}{\varphi_{L}^{n}(Z)}g(Z ))=\frac{\pi_{L}^{-1}\log_{LT}(Z)}{\varphi_{L}^{n-1}(Z)}\psi_{L}(g(Z))\] that \(g(Z)\) belongs to \((\mathcal{O}_{K}(\mathbf{B}))^{\psi_{L}=0},\) whence it is of the form \(\sum_{a\in(o_{L}/\pi_{L})^{\times}}\varphi_{L}(g_{a}(Z))\eta(a,Z)\) for some \(g_{a}\in\mathcal{O}_{K}(\mathbf{B})\) by the analogue of (96) for \(\mathcal{O}_{K}(\mathbf{B}).\) From Lemma 4.3.26 we derive that, for some \(a(Z)\in\mathcal{O}_{K}(\mathbf{B}),\) we have \[\Theta_{b}\eta(1,Z)=\ell(b)\varphi_{L}^{n}(a(Z))\eta(1,Z).\] Since the decomposition in (96) is direct, we conclude that \(g(Z)=\varphi_{L}(g_{1}(Z))\eta(1,Z)\) and \(\frac{\log_{LT}(Z)}{\varphi_{L}^{n}(Z)}\varphi_{L}(g_{1}(Z))=\varphi_{L}^{n} (a(Z)),\) whence \(\log_{LT}(Z)\) divides \(\varphi_{L}^{n}(a(Z)Z).\) Since \(\varphi_{L}^{n}\) sends the zeroes of \(\log_{LT}(Z),\) i.e., the points in \(LT(\pi_{L})=\bigcup_{k}LT[\pi_{L}^{k}],\) surjectively onto itself, we conclude by Lemma 4.4.3 that \(\log_{LT}(Z)\) divides also \(a(Z)Z\) in \(\mathcal{O}_{K}(\mathbf{B})\) and that there exists \(c(Z)\in\mathcal{O}_{K}(\mathbf{B})\) such that \[\kappa^{*}\circ\mathfrak{M}(\Theta_{b})=\ell(b)\varphi_{L}^{n}\left(\frac{ \log_{LT}(Z)}{Z}c(Z)\right)\eta(1,Z). \tag{113}\] Comparing (113) with the first description (111) gives the claim as \(c(0)=\ell(b)^{-1}\) because evaluation at \(0\) is compatible with the embedding \(\mathcal{O}_{K}(\mathbf{B})\subseteq\mathcal{R}_{K}^{I}(\mathbf{B})\) and \(\frac{\log_{LT}(Z)}{Z}(0)=1\) by (1). ### Pairings In this section we discuss various kinds of pairings. The starting point is Serre duality on \(\mathfrak{X}\) which induces a (residue) pairing \[<\,\ >_{\mathfrak{X}}:\mathcal{R}_{L}(\mathfrak{X})\times\mathcal{R}_{L}( \mathfrak{X})\to L,\] as we have seen in (47). Similarly, Serre duality on \(\mathfrak{X}^{\times}\) induces a pairing \[<\,\ >_{\Gamma_{L}}:\mathcal{R}_{L}(\Gamma_{L})\times\mathcal{R}_{L}(\Gamma_{ L})\to L \tag{114}\] for the Robba ring of \(\Gamma_{L}\), which by definition is the Robba ring of its character variety \(\mathfrak{X}_{\Gamma_{L}}\cong\mathfrak{X}^{\times}\) (induced by the isomorphism \(\chi_{LT}:\Gamma_{L}\to o_{L}^{\times}\)) as constructed in (51). This pairing, as defined in subsection 4.5.2, is actually already characterized by its restriction to \(\mathcal{R}_{L}(\Gamma_{n})\) for any \(n\geq n_{0}\) and thus is by construction and the functoriality properties of section 4.2 closely related to the pairing \(<\,\ >_{\mathfrak{X}}\) using the 'logarithm' \(\mathcal{R}_{L}(\Gamma_{n})\xrightarrow{(\ell_{n})*}\mathcal{R}_{L}(\mathfrak{ X})\), see diagram (49). In contrast, the commutative diagram from Thm. 
4.5.12 in subsection 4.5.3 relates the pairing \(<\,\ >_{\Gamma_{L}}\) to the pairing \(<\,\ >_{\mathfrak{X}}\)_in a highly non-trivial, non-obvious way_. The resulting description of \(<\,\ >_{\Gamma_{L}}\) in (128) forms one main ingredient in the proof of the abstract reciprocity formula 4.5.32 below in subsection 4.5.5. Based on the (generalized) residue pairings (116) in subsection 4.5.1 \[\{\,\ \}:\tilde{M}\times M\to L,\] with \(\tilde{M}:=Hom_{\mathcal{R}}(M,\Omega^{1}_{\mathcal{R}})\) the pairing (114) induces for any (analytic) \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}_{L}(\mathfrak{X})\) an Iwasawa pairing (132) \[\{\,\ \}_{Iw}:\tilde{M}^{\psi_{L}=\frac{q}{\pi_{L}}}\times M^{\psi_{L}=1} \to D(\Gamma_{L},L)\] in subsection 4.5.4, which behaves well with twisting (cf. Lemma 4.5.22). By construction and the comparison isomorphism (135) for Kisin-Ren modules - the second main ingredient - the pairing \(\{\,\ \}_{Iw}\) is closely related to a pairing \[[\,\ ]:\mathcal{R}_{L}(\mathfrak{X})^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V^{*} (1))\times\mathcal{R}_{L}(\mathfrak{X})^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V( \tau^{-1}))\to\mathcal{R}_{L}(\Gamma_{L})\] induced from the natural pairing for \(D_{cris,L}\). The precise relationship is the content of an abstract form of a reciprocity formula, see Thm. 4.5.32. As a consequence we shall later derive a concrete reciprocity formula, i.e., the adjointness of Berger's and Fourquaux' big exponential map with our regulator map, see Thm. 5.2.1. #### 4.5.1 The residuum pairing for modules Throughout our coefficient field \(K\) is a complete intermediate extension \(L\subseteq K\subseteq\mathbb{C}_{p}.\) Let \(\mathfrak{Y}\) be either \(\mathfrak{X}\) or \(\mathbf{B}\) and \(\mathcal{R}=\mathcal{R}_{K}(\mathfrak{Y}).\) Consider the residuum map \(\operatorname{res}_{\mathfrak{X}}\) defined after (46) and the residuum map \[\operatorname{res}_{\mathbf{B}}:\Omega^{1}_{\mathcal{R}}\to K,\ \ \sum_{i}a_{i}Z^{i}dZ\mapsto a_{-1}.\] Recall that we are using the operator \(\psi_{L}:=\frac{q}{\pi}\psi_{L}^{\mathfrak{Y}}\) on \(\mathcal{R}.\) Moreover, we define \(\mathfrak{iota}_{*}:\mathcal{R}_{K}(\Gamma_{L})\to\mathcal{R}_{K}(\Gamma_{L})\) to be the map which is induced by sending \(\gamma\in\Gamma_{L}\) to its inverse \(\gamma^{-1},\) i.e., the involution of the group induces an isomorphism on the multiplicative character variety, which in turn gives rise to \(\mathfrak{iota}_{*}.\) The corresponding involution on \(\mathcal{R}_{K}(\Gamma_{n_{0}}),\) also denoted by \(\mathfrak{iota}_{*},\) satisfies the commutative diagram (115) \[\begin{array}{c}\mathcal{R}_{K}(\Gamma_{n_{0}})\stackrel{{ \hat{\ell}_{n_{0}}*}}{{\simeq}}\mathcal{R}_{K}(\mathfrak{X})\\ \stackrel{{\mathfrak{iota}_{*}}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ 
\cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\simeq}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ 
\cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{ \cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{\cong}}{{ \simeq}}\stackrel{{\cong}}{{\simeq}}\stackrel{{\cong}}{{\simeq}} \stackrel{{\cong} **Remark 4.5.3**.: _If we assume \(\Omega\in K\), then these pairings can be compared via the LT-isomorphism \(\kappa\). By (70) we have_ \[\Omega<\kappa^{*}(f),\kappa^{*}(g)>_{\mathbf{B}}=<f,g>_{\mathfrak{X}}\] _for \(f,g\in\mathcal{R}_{K}(\mathfrak{X})\)._ **Assume** henceforth that \(M\) is an analytic \((\varphi_{L},\Gamma_{L})\)-module over \(\mathcal{R}\) and recall from Proposition 4.3.10 that the \(\Gamma_{L}\)-action on \(M\) extends continuously to a \(D(\Gamma_{L},K)\)-module structure. **Corollary 4.5.4**.: _The isomorphism \(\tilde{M}\cong\mathrm{Hom}_{K,cts}(M,K)\) (induced by \(\{\,\ \}\)) is \(D(\Gamma_{L},K)\)-linear._ Proof.: This follows from Lemma 4.5.1(iii) since \(\Gamma_{L}\) generates a dense subspcae of \(D(\Gamma_{L},K)\). Since \(\frac{\pi_{L}}{q}\psi_{L}\circ\varphi_{L}=\mathrm{id}_{M}\) we have a canonical decomposition \(M=\varphi_{L}(M)\oplus M^{\psi_{L}=0}\). By Lemma 4.5.1 we see that \(M^{\psi_{L}=0}\) is the exact orthogonal complement of \(\varphi_{L}(\tilde{M})\), i.e., we obtain a canonical isomorphism \[\tilde{M}^{\psi_{L}=0}\cong\mathrm{Hom}_{K,cts}(M^{\psi_{L}=0},K). \tag{117}\] **Lemma 4.5.5**.: _The isomorphism (117) is \(\mathcal{R}_{K}(\Gamma_{L})\)-equivariant, i.e., we have for all \(\tilde{m}\in\tilde{M}^{\psi_{L}=0}\), \(m\in M^{\psi_{L}=0}\), and \(\lambda\in\mathcal{R}_{K}(\Gamma_{L})\) that_ \[\{\lambda\tilde{m},m\}=\{\tilde{m},\mathfrak{l}_{*}(\lambda)m\}\.\] Proof.: This is clear for \(D(\Gamma_{L},K)\) by Cor. 4.5.4. Without loss of generality we may and do assume that \(\Omega\) belongs to \(K\). It then follows for the localization \(D(\Gamma_{L},K)_{Y_{n_{1}}^{\mathbb{N}}}\), where we use the notation and considerations from subsection 4.3.6, especially Lemma 4.3.19 and its proof. 
Since \(D(\Gamma_{L},K)_{Y_{n_{1}}^{\mathbb{N}}}\) is dense in \(\mathcal{R}_{K}(\Gamma_{L})\) by 4.3.5 the assertion now is a consequence of the continuity property in Thm. 4.3.21. #### 4.5.2 The duality pairing \(<,>_{\Gamma_{L}}\) for the group Robba ring Using the isomorphisms (76) induced by the Lubin-Tate character \(\chi_{LT}\) we now carry over structures concerning the (multiplicative) character varieties \(\mathfrak{X}^{\times},\mathfrak{X}^{\times}_{n}\) to those of the groups \(\Gamma_{L},\Gamma_{n}\). In particular, we use analogous notation \(\mathrm{res}_{\Gamma_{L}},\mathrm{res}_{\Gamma_{n}},\log_{\Gamma_{L}},\log_{ \Gamma_{n}}\) for corresponding objects. In this sense we introduce and recall from (51) the pairing \[<\,\ >_{\Gamma_{L}}:\mathcal{R}_{K}(\Gamma_{L})\times\mathcal{R}_{K}(\Gamma_{L} )\xrightarrow{}K \tag{118}\] and similarly \(<\,\ >_{\Gamma_{n}}\) from (49). This pairing is of the form \[<\,\ >_{\Gamma_{L}}:\mathcal{R}_{K}(\Gamma_{L})\times\mathcal{R}_{K}(\Gamma_{L} )\xrightarrow{mult}\mathcal{R}_{K}(\Gamma_{L})\xrightarrow{\varrho}K, \tag{119}\] where \[\varrho=<1,->_{\Gamma_{L}}:\mathcal{R}_{K}(\Gamma_{L}) \xrightarrow{}\Omega^{1}_{\mathcal{R}_{K}(\Gamma_{L})} \xrightarrow{\mathrm{res}_{\Gamma_{L}}}K\] \[f \mapsto fd\log_{\Gamma_{L}}\mapsto\mathrm{res}_{\Gamma_{L}}(fd \log_{\Gamma_{L}})\] has also the following description - writing \(pr_{n,m}\) and similarly \(pr_{L,m}\) for the projection maps induced by (77),(78) - \[\varrho:\mathcal{R}_{K}(\Gamma_{L}) =\mathbb{Z}[\Gamma_{L}]\otimes_{\mathbb{Z}[\Gamma_{n_{0}}]} \mathcal{R}(\Gamma_{n_{0}})\to K \tag{120}\] \[f \mapsto\frac{q-1}{q}(\frac{q}{\pi_{L}})^{n_{0}}\mathrm{res}_{ \mathfrak{X}}(\hat{\ell}_{n_{0}*}\circ pr_{L,n_{0}}(f)d\log_{\mathfrak{X}})\] with \(n_{0}\) as defined at the beginning of subsection 4.3.4. Indeed, using (50), (49) we obain \[<1,f>_{\Gamma_{L}} =\mathrm{res}_{\Gamma_{L}}(fd\log_{\Gamma_{L}})\] \[=\frac{q-1}{q}q^{n_{0}}\mathrm{res}_{\Gamma_{n_{0}}}(pr_{L,n_{0}} (f)d\log_{\Gamma_{n_{0}}})\] \[=\frac{q-1}{q}(\frac{q}{\pi_{L}})^{n_{0}}\mathrm{res}_{\mathfrak{ X}}(\hat{\ell}_{n_{0}*}\circ pr_{L,n_{0}}(f)d\log_{\mathfrak{X}})\] because \((\ell_{n}^{*})^{*}(1)=1\). 
The following properties follow immediately from the definition: **Lemma 4.5.6**.: _We have for all \(f,\lambda,\mu\in\mathcal{R}_{K}(\Gamma_{L})\) that_ * \(<\lambda,f\mu>_{\Gamma_{L}}=<f\lambda,\mu>_{\Gamma_{L}},\)__ * \(<\lambda,\mu>_{\Gamma_{L}}=<\mu,\lambda>_{\Gamma_{L}}\)_._ **Remark 4.5.7**.: _For \(n\geqslant n_{0}\) we have the projection formula \(pr_{L,n}(\iota_{n,*}(x)y)=xpr_{L,n}(y)\) and (52) translates into_ \[<\iota_{n,*}(x),y>_{\Gamma_{L}}=\,(q-1)q^{n-1}<x,pr_{L,n}(y)>_{\Gamma_{n}} \tag{121}\] _for \(x\in\mathcal{R}(\Gamma_{n})\), \(y\in\mathcal{R}(\Gamma_{L})\) and the canonical inclusion \(\mathcal{R}(\Gamma_{n})\xrightarrow{\iota_{n,*}}\mathcal{R}(\Gamma_{L}).\) Analogous formulae hold for \(\Gamma_{m}\) with \(n\geqslant m\geqslant n_{0}\)instead of \(\Gamma_{L}\) by 4.2.14 (ii)._ **Remark 4.5.8** (Frobenius reciprocity).: _The projection map \(pr_{\Gamma_{L},\Gamma_{n}}:\mathcal{R}_{K}(\Gamma_{L})\to\mathcal{R}_{K}( \Gamma_{n})\) induces an isomorphism_ \[\mathrm{Hom}_{\mathcal{R}_{K}(\Gamma_{L})}(N,\mathcal{R}_{K}(\Gamma_{L})) \cong\mathrm{Hom}_{\mathcal{R}_{K}(\Gamma_{n})}(N,\mathcal{R}_{K}(\Gamma_{n}))\] _for any \(\mathcal{R}_{K}(\Gamma_{L})\)-module \(N\); the inverse sends \(f\) to the homomorphism \(x\mapsto\sum_{g\in\Gamma_{L}/U}g\iota_{n,*}\circ f(g^{-1}x)\)._ The following proposition translates the results at the end of subsection 4.2.3 into the present setting. **Proposition 4.5.9**.: _The pairing \(<\,\ >_{\Gamma_{L}}\)_: \(\mathcal{R}_{K}(\Gamma_{L})\times\mathcal{R}_{K}(\Gamma_{L})\to K\) induces topological isomorphisms_ \[\mathrm{Hom}_{K,cts}(\mathcal{R}_{K}(\Gamma_{L}),K)\cong\mathcal{R}_{K}( \Gamma_{L})\text{ and }\mathrm{Hom}_{K,cts}(\mathcal{R}_{K}(\Gamma_{L})/D(\Gamma_{L},K),K)\cong D( \Gamma_{L},K).\] **Proposition 4.5.10**.: _Assume \(\Omega\in K\) and \(M\) in \(\mathcal{M}(\mathcal{R})\). Then the map_ \[\mathrm{Hom}_{\mathcal{R}_{K}(\Gamma_{L})}(M^{\psi_{L}=0},\mathcal{R}_{K}( \Gamma_{L}))^{\iota}\xrightarrow{\cong}\mathrm{Hom}_{K,cts}(M^{\psi_{L}=0},K )\xrightarrow{\cong}\tilde{(\ref{eq:M})}\tilde{M}^{\psi_{L}=0} \tag{122}\] \[F\longmapsto\rho\circ F\] _is an isomorphism of \(\mathcal{R}_{K}(\Gamma_{L})\)-modules, where the superscript "\(\iota\)" on the left hand side indicates that \(\mathcal{R}_{K}(\Gamma_{L})\) acts through the involution \(\iota_{*}\)._ Proof.: According to Thm. 4.3.21 the \(\mathcal{R}_{K}(\Gamma_{L})\)-module \(M^{\psi_{L}=0}\) is finitely generated free. Hence it suffices to show that the map \[\operatorname{Hom}_{\mathcal{R}_{K}(\Gamma_{L})}(\mathcal{R}_{K}( \Gamma_{L}),\mathcal{R}_{K}(\Gamma_{L})) \longrightarrow\operatorname{Hom}_{K,cts}(\mathcal{R}_{K}(\Gamma_{L }),K)\] \[F \longmapsto\rho\circ F\] is bijective. But this map is nothing else than the duality isomorphism in Prop. 4.5.9. The following twist invariance is just Lemma 4.2.15 **Proposition 4.5.11**.: _Let \(U\) be \(\Gamma_{L}\) or \(\Gamma_{n}\) for \(n\geq n_{0}\). Then, for all \(\lambda,\mu\in\mathcal{R}(U)\) we have_ \[<Tw_{\chi_{LT}}(\mu),Tw_{\chi_{LT}}(\lambda)>_{U}=<\mu,\lambda>_{U}.\] #### 4.5.3 A residuum identity and an alternative description of \(<\,,\,\,>_{\Gamma_{L}}\) Let \(\sigma_{-1}\in\Gamma_{L}\) be the element with \(\chi_{LT}(\sigma_{-1})=-1\). 
Consider the continuous \(K\)-linear map \[\varsigma:\mathcal{R}_{K}(\Gamma_{L}) \to K,\] \[\lambda \mapsto\operatorname{res}_{\mathfrak{X}}(\mathfrak{M}(\sigma_{- 1})\mathfrak{M}^{\Omega^{1}}(\lambda))\] where \(\mathfrak{M}^{\Omega^{1}}:\mathcal{R}_{K}(\Gamma_{L})\xrightarrow{\cong} \Omega^{1}_{\mathcal{R}(\mathfrak{X})}\operatorname{\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \, The proof of this theorem requires some preparation. **Lemma 4.5.14**.: _For all \(\lambda\in\mathcal{R}_{K}(\Gamma_{L})\) and \(j\in\mathbb{Z}\) we have_ \[\varsigma(Tw_{\chi^{j}_{LT}}(\lambda))=\varsigma(\lambda).\] Proof.: For the proof we may and do assume that \(\Omega\) belongs to \(K.\) Since then \[\operatorname{res}_{\mathfrak{X}}(\widehat{c}^{\mathfrak{X}}_{\operatorname{ inv}}(f)d\log_{\mathfrak{X}})=\Omega\text{res}_{\mathbf{B}}(\frac{1}{\Omega} \partial_{\operatorname{inv}}(\kappa^{*}(f))d\log_{LT})=\text{res}_{\mathbf{ B}}(d\kappa^{*}(f))=0\] for any \(f\) by Remark 4.2.9, (70) and [FX, Prop. 2.12], the case \(j=1\) follows directly from the relation (124) using with \(g:=Tw_{\chi_{LT}}(\lambda)\) that \[\partial_{\operatorname{inv}}^{\mathfrak{X}}(\mathfrak{M}(\sigma _{-1})\mathfrak{M}(g)) =\partial_{\operatorname{inv}}^{\mathfrak{X}}(\mathfrak{M}( \sigma_{-1}))\mathfrak{M}(g)+\mathfrak{M}(\sigma_{-1})\partial_{ \operatorname{inv}}^{\mathfrak{X}}(\mathfrak{M}(g))\] \[\stackrel{{\eqref{eq:201}}}{{=}}\mathfrak{M}(Tw_{ \chi_{LT}}(\sigma_{-1}))\mathfrak{M}(g)+\mathfrak{M}(\sigma_{-1})\mathfrak{M }(Tw_{\chi_{LT}}(g))\] \[=-\mathfrak{M}(\sigma_{-1})\mathfrak{M}(g)+\mathfrak{M}(\sigma_ {-1})\mathfrak{M}(Tw_{\chi_{LT}}(g)).\] From this the general case is immediate. **Lemma 4.5.15**.: _Let \(\lambda\in D(\Gamma_{L},\mathbb{C}_{p})\) with \(\operatorname{ev}_{\chi^{j}_{LT}}(\lambda)=0\) for infinitely many \(j,\) then \(\lambda=0.\)_ Proof.: On the character variety the characters \(\chi^{j}_{LT}\) corresponds to points which converge to the trivial character. It follows that \(\lambda\) corresponds to the trivial function, since otherwise its divisor of zeroes would have only finitely many zeroes in any disk with fixed radius strictly smaller than \(1\) by (106), which would contradict the assumptions. Now fix a \(\mathbb{Z}_{p}\)-basis \(b=(b_{1},\ldots,b_{d})\) of \(U_{n_{0}}\) with all \(b_{i}\neq 1\) and set \(\ell^{*}(b):=\ell^{*}_{\Gamma}(b):=q^{-n_{0}}\ell(b)\in o_{L}^{\times}\) with \(\Gamma:=\Gamma_{n_{0}}\). 
According to section 4.4 we may define the operator \[\widehat{\Xi_{b}}:= q^{-n_{0}}\chi^{*}_{LT}(\overline{\Xi}_{b})=\ell^{*}(b)\chi^{*}_{ LT}(\overline{\Xi}_{b})\] in \(\mathcal{R}_{K}(\Gamma).\) Let \(\text{aug}:D(\Gamma,K)\to K\) denote the augmentation map, induced by the trivial map \(\Gamma\to\{1\}.\) **Lemma 4.5.16**.: _The element \(\widehat{\Xi_{b}}\) induces - up to the constant \(q^{-n_{0}}\) - the augmentation map_ \[<\widehat{\Xi_{b}},->_{\Gamma_{n_{0}}}=q^{-n_{0}}\text{aug}:D(\Gamma_{n_{0}},K)\to K. \tag{125}\] _Moreover, we have_ \[\varsigma(\widehat{\Xi_{b}})=1=\frac{q}{q-1}\varrho(\widehat{\Xi_{b}}). \tag{126}\] Proof.: We may and do assume \(\Omega\in K\) by compatibility of res with respect to (complete) base change (54). Since \(\kappa^{*}(\hat{\ell}_{n_{0}*}(\widehat{\Xi_{b}}))=q^{-n_{0}}\widehat{\Xi}_{b }\equiv\frac{\pi_{0}^{n_{0}}}{q^{n_{0}}\Omega}\frac{1}{Z}\mod\mathcal{O}_{K}( \mathbf{B})\) by Remark 4.4.8, one has for every \(\lambda\in D(\Gamma,K)\) \[<\widehat{\Xi_{b}},\lambda>_{\Gamma} \stackrel{{\eqref{eq:201}}}{{=}}\frac{1}{(q-1)q^{n_{0 }-1}}<\widehat{\Xi_{b}},\lambda>_{\Gamma_{L}}\] \[\stackrel{{\eqref{eq:201}}}{{=}}q^{-n_{0}}\text{ res}_{\mathfrak{X}}((\frac{q}{\pi_{L}})^{n_{0}}\kappa^{*}(\hat{\ell}_{n_{0}*}( \widehat{\Xi_{b}}\lambda))d\log_{\mathfrak{X}})\] \[\stackrel{{\eqref{eq:201}}}{{=}}q^{-n_{0}}\text{ res}_{\mathbf{B}}(\Omega(\frac{q}{\pi_{L}})^{n_{0}}\kappa^{*}(\hat{\ell}_{n_{0}*}( \widehat{\Xi_{b}}\lambda))g_{LT}dZ)\] \[=q^{-n_{0}}\text{res}_{\mathbf{B}}(\frac{1}{Z}\kappa^{*}(\hat{ \ell}_{n_{0}*}(\lambda))g_{LT}dZ)\] \[=q^{-n_{0}}\text{aug}(\lambda),\] where we use for the last equation that \(g_{LT}(Z)\) has constant term \(1\) and the fact that the augmentation map corresponds via Fourier theory and the LT-isomorphism to the 'evaluation at \(Z=0\)' map. Taking \(\lambda=1\) we see that \(\varrho(\widehat{\Xi_{b}})=<\widehat{\Xi_{b}},1>_{\Gamma_{L}}=\frac{q-1}{q}\). For the other equation of the second claim one has by definition of \(\varsigma\) \[\varsigma(\widehat{\Xi_{b}}) \stackrel{{\eqref{eq:10}}}{{=}}\Omega\ell^{*}(b) \mathrm{res}_{\mathbf{B}}(\kappa^{*}\Big{(}\mathfrak{M}(\sigma_{-1}) \mathfrak{M}(Tw_{\chi LT}(\chi_{LT}^{*}(\Xi_{b})))\Big{)}d\log_{LT})\] \[=\Omega\ell^{*}(b)\mathrm{res}_{\mathbf{B}}(\mathfrak{M}_{LT}( \sigma_{-1})\mathfrak{M}_{LT}(Tw_{\chi LT}(\chi_{LT}^{*}(\Xi_{b})))d\log_{LT})\] \[=\ell^{*}(b)\mathrm{res}_{\mathbf{B}}(\mathfrak{M}_{LT}(\sigma_{ -1})\log_{LT}(Z)\hat{c}_{\mathrm{inv}}\mathfrak{M}_{LT}(\chi_{LT}^{*}(\Xi_{b} ))\frac{d\log_{LT}}{\log_{LT}(Z)})\] \[=\ell^{*}(b)\mathrm{res}_{\mathbf{B}}(\mathfrak{M}_{LT}(\sigma_{ -1})\mathfrak{M}_{LT}(\nabla\chi_{LT}^{*}(\Xi_{b}))\frac{d\log_{LT}}{\log_{LT }(Z)})\] \[=\ell^{*}(b)\mathrm{res}_{\mathbf{B}}(\mathfrak{M}_{LT}(\sigma_{ -1})\frac{\pi_{L}^{n_{0}}\log_{LT}(Z)}{\varphi_{L}^{n_{0}}(Z\ell(b))}\eta(1,Z) \frac{d\log_{LT}}{\log_{LT}(Z)})\] \[=\ell^{*}(b)\pi_{L}^{n_{0}}\mathrm{res}_{\mathbf{B}}(\eta(-1,Z) \frac{1}{\varphi_{L}^{n_{0}}(Z\ell(b))}\eta(1,Z)d\log_{LT})\] \[=\ell^{*}(b)\pi_{L}^{n_{0}}\mathrm{res}_{\mathbf{B}}(\eta(1-1,Z) \frac{1}{\varphi_{L}^{n_{0}}(Z\ell(b))}d\log_{LT})\] \[=\ell^{*}(b)\pi_{L}^{n_{0}}\mathrm{res}_{\mathbf{B}}(\varphi_{L}^ {n_{0}}\left(\eta(0,Z)\frac{1}{Z\ell(b)}d\log_{LT}\right))\] \[=\frac{\ell^{*}(b)}{\ell(b)}\pi_{L}^{n_{0}}(\frac{q}{\pi_{L}})^{n _{0}}\mathrm{res}_{\mathbf{B}}(\frac{1}{Z}g_{LT}dZ)\] \[=1,\] where we use (104) in the third equation, the fact that \(\nabla\) acts on \(\mathcal{R}\) as \(\log_{LT}(Z)\hat{c}_{\mathrm{inv}}\) (cp. 
(112)) in the fourth equation, Remark 4.4.9 for the fifth equation, Lemma 4.5.1 (iv) with \(\psi_{L}(1)=\frac{q}{\pi_{L}}\) for the penultimate equation and finally for the last equation that \(g_{LT}(Z)\) has constant term \(1\). Proof of Thm. 4.5.12.: Since the equality can also be checked after base change by (54) we may and do assume that \(\Omega\) belongs to \(K\). Due to Prop. 4.5.9 there exists \(g\in D(\Gamma_{L},K)\) such that \(\varsigma(\lambda)=<g,\lambda>_{\Gamma_{L}}\) for all \(\lambda\in\mathcal{R}_{K}(\Gamma_{L})\), because \(\varsigma\) sends \(D(\Gamma_{L},K)\) to zero. We claim that \[Tw_{\chi_{LT}^{j}}(g)=g \tag{127}\] for all \(j\in\mathbb{Z}:\) By Prop. 4.5.11 and Lemma 4.5.14 we have \[<Tw_{\chi_{LT}^{j}}(g),f>_{\Gamma_{L}} =<g,Tw_{\chi_{LT}^{-j}}(f)>_{\Gamma_{L}}\] \[=\varsigma(Tw_{\chi_{LT}^{-j}}(f))\] \[=\varsigma(f)\] \[=<g,f>_{\Gamma_{L}}\] for all \(f\in\mathcal{R}_{K}(\Gamma_{L})\). Now it follows from (127) combined with Lemma 4.5.15 that \(g\) is constant (and equal to \(\mathrm{ev}_{\chi_{LT}^{0}}(g)\)), i.e., \(\varsigma(-)=g<1,->_{\Gamma_{L}}=g\varrho(-)\). Finally, it follows from (126) that \(g=\frac{q}{q-1}\) **Corollary 4.5.17**.: _The pairing \(\frac{q}{q-1}<,>_{\Gamma_{L}}\) makes the following diagram commutative_ (128) _i.e., we have_ \[\begin{split}\frac{q}{q-1}<\mu,\lambda>_{\Gamma_{L}}& =\{\mathfrak{M}(\sigma_{-1}\mathfrak{u}_{*}(\mu)),\mathfrak{M}^{ \Omega^{1}}(\lambda)\}_{\Omega^{1}}\\ &=\operatorname{res}_{\mathfrak{X}}(\sigma_{-1}\mathfrak{M}( \mathfrak{u}_{*}(\mu))\mathfrak{M}^{\Omega^{1}}(\lambda))\\ &=\operatorname{res}_{\mathfrak{X}}(\mathfrak{M}(\sigma_{-1} \mathfrak{u}_{*}(\mu))\mathfrak{M}(Tw_{\chi_{LT}}(\lambda))d\log_{\mathfrak{X }})\\ &=\operatorname{res}_{\mathfrak{X}}(\mathfrak{M}(\mathfrak{u}_{*}( \mu))\mathfrak{M}(Tw_{\chi_{LT}}(\sigma_{-1}\lambda))d\log_{\mathfrak{X}}). \end{split} \tag{129}\] Proof.: By Thm. 4.5.12, the definition of \(\varsigma\) and of \(<\;,\;>_{\Gamma_{L}}\) we have \[\begin{split}\frac{q}{q-1}<\mu,\lambda>_{\Gamma_{L}}& =\frac{q}{q-1}<1,\mu\lambda>_{\Gamma_{L}}\\ &=\{\mathfrak{M}(\sigma_{-1}),\mathfrak{M}^{\Omega^{1}}(\mu \lambda)\}_{\Omega^{1}}\\ &=\{\mathfrak{M}(\sigma_{-1}\mathfrak{u}_{*}(\mu)),\mathfrak{M}^ {\Omega^{1}}(\lambda)\}_{\Omega^{1}}\end{split} \tag{130}\] where we use Lemma 4.5.5 for the last equation. **Lemma 4.5.18**.: _We have for all \(\lambda,\mu\in\mathcal{R}_{K}(\Gamma_{L})\) that \(<\lambda,\mu>_{\Gamma_{L}}=-<\mathfrak{u}_{*}(\lambda),\mathfrak{u}_{*}(\mu) >_{\Gamma_{L}}.\)_ Proof.: Using (129) for the first and third equation, property 3. in Subsection 4.1.3 applied to \(\mathfrak{l}\) and the fact that \(Tw_{\chi_{LT}}(\sigma_{-1})=-\sigma_{-1}\) for the second equation and Prop. 
4.5.11 for the last one, we see that \[\begin{split}\frac{q}{q-1}<\mu,\lambda>_{\Gamma_{L}}& =\operatorname{res}_{\mathfrak{X}}\bigl{(}\mathfrak{M}(Tw_{\chi_{LT}}( \sigma_{-1}\lambda))\mathfrak{M}(\mathfrak{u}_{*}(\mu))d\log_{\mathfrak{X}} \bigr{)}\\ &=-\text{res}_{\mathfrak{X}}\bigl{(}\mathfrak{M}(\sigma_{-1} \mathfrak{u}_{*}(Tw_{\chi_{LT}^{-1}}(\mathfrak{u}_{*}(\lambda))))\mathfrak{M}( \mathfrak{u}_{*}(\mu))d\log_{\mathfrak{X}}\bigr{)}\\ &=-\frac{q}{q-1}<Tw_{\chi_{LT}^{-1}}(\mathfrak{u}_{*}(\lambda)), Tw_{\chi_{LT}^{-1}}(\mathfrak{u}_{*}(\mu))>_{\Gamma_{L}}\\ &=-\frac{q}{q-1}<\mathfrak{u}_{*}(\lambda),\mathfrak{u}_{*}(\mu) >_{\Gamma_{L}}.\end{split}\] #### 4.5.4 The Iwasawa pairing for \((\varphi_{L},\Gamma_{L})\)-modules over the Robba ring As before let \(\mathfrak{Y}\) be either \(\mathfrak{X}\) or \(\mathbf{B}\) and \(\mathcal{R}=\mathcal{R}_{K}(\mathfrak{Y})\) and \(M\) be in \(\mathcal{M}^{an}(\mathcal{R})\), where \(K\) is any complete intermediate extension \(L\subseteq K\subseteq\mathbb{C}_{p}\). Using Prop. 4.5.9 we define the pairing \[\{\;,\;\}_{tw}^{0}:=\{\;,\;\}_{M,tw}^{0}:\check{M}^{\psi_{L}=0}\times M^{\psi _{L}=0}\rightarrow\mathcal{R}_{K}(\Gamma_{L})\,\] which is \(\mathcal{R}_{K}(\Gamma_{L})\)-\(\mathfrak{u}_{*}\)-sesquilinear in the sense that \[\lambda\{\check{m},m\}_{tw}^{0}=\{\lambda\check{m},m\}_{tw}^{0}=\{\check{m},\mathfrak{u}_{*}(\lambda)m\}_{tw}^{0} \tag{131}\] for all \(\lambda\in{\cal R}_{K}(\Gamma_{L})\) and \(\check{m}\in\check{M}^{\psi_{L}=0},\ m\in M^{\psi_{L}=0}.\) This requires the commutativity of the diagram in which the upper line sends \((f,x,y)\) to \(\{f(x),y\}_{M},\) where the latter pairing is (116). Indeed, the property \[\{\lambda\check{m},m\}_{Iw}^{0}=\{\check{m},\iota_{\bullet}(\lambda)m\}_{Iw}^ {0}\] follows from the corresponding property for \(\{,\}_{M}\) by Lemma 4.5.5, while with regard to the second one \[\lambda\{\check{m},m\}_{Iw}^{0}=\{\lambda\check{m},m\}_{Iw}^{0}\] we have for all \(f\in{\cal R}_{K}(\Gamma_{L})\) \[<f,\{\lambda\check{m},m\}_{Iw}^{0}>_{\Gamma_{L}} =\{f\cdot\lambda\check{m},m\}\] \[=<\lambda f,\{\check{m},m\}_{Iw}^{0}>_{\Gamma_{L}}\] \[=<f,\lambda\{\check{m},m\}_{Iw}^{0}>_{\Gamma_{L}}\] by Lemma 4.5.6. Note that the pairing \(\{\,\ \}_{Iw}^{0}\) induces the isomorphism (122). We set \[{\cal C}:=(\frac{\pi_{L}}{q}\varphi_{L}-1)M^{\psi_{L}=1}\ \ \text{and}\ \check{\cal C}:=(\varphi_{L}-1)\check{M}^{\psi_{L}=\frac{q}{\pi_{L}}}\] and we shall need the following **Lemma 4.5.19**.: _For \(f\in D(\Gamma_{L},K)\) we have \(\{f\cdot(\varphi_{L}-1)x,(\frac{\pi_{L}}{q}\varphi_{L}-1)y\}=0\) for all \(x\in\check{M}^{\psi_{L}=\frac{q}{\pi_{L}}}\) and \(y\in M^{\psi_{L}=1}.\)_ Proof.: Straightforward calculation using Lemma 4.5.1 above, cp. [KPX, Lem. 4.2.7]. This Lemma combined with the second statement of Prop. 
4.5.9 implies that the restriction of \(\{\,\ \}_{Iw}^{0}\) to \(\check{\cal C}\times{\cal C},\) which by abuse of notation we denote by the same symbol, is characterized by the commutativity of the diagram in which the upper line sends \((x,y,f)\) to \(\{f(x),y\}_{M}.\) In particular, it takes values in \(D(\Gamma_{L},K).\) Finally, we obtain a \(D(\Gamma_{L},K)\)-\(\iota_{\bullet}\)-sesquilinear pairing \(\{\,\ \}_{Iw}:=\{\,\ \}_{M,Iw}\) which by definition fits into the following commutative diagram Altogether we obtain the following **Theorem 4.5.20**.: _There is a \(D(\Gamma_{L},K)\)-\(\mathfrak{t}_{*}\)-sesquilinear pairing_ \[\{\,\ \}_{Iw}:\check{M}^{\psi_{L}=\frac{q}{\pi_{L}}}\times M^{\psi_{L}=1} \to D(\Gamma_{L},K). \tag{132}\] _It is characterized by the following property_ \[<f,\{\check{m},m\}_{Iw}>_{\Gamma_{L}}=\{f\cdot(\varphi_{L}-1)\check{m},(\frac{ \pi_{L}}{q}\varphi_{L}-1)m\}\text{ for all }f\in\mathcal{R}_{K}(\Gamma_{L}),\ \check{m}\in\check{M},\ m\in M. \tag{133}\] **Remark 4.5.21**.: _For any \(n\geqslant n_{0},\) we obtain similarly as in (132) \(D(\Gamma_{n},K)\)-\(\mathfrak{t}_{*}\)-sesquilinear pairings_ \[\{\,\ \}_{Iw,\Gamma_{n}}:\check{M}^{\psi_{L}=\frac{q}{\pi_{L}}}\times M^{\psi_{ L}=1}\to D(\Gamma_{n},K).\] _It follows immediately from the definitions, the projection formulae (121) and Frobenius reciprocity 4.5.8 that_ \[\{\,\ \}_{Iw,\Gamma_{n}}:=(q-1)q^{n-1}pr_{L,n}\circ\{\,\ \}_{Iw}.\] If \(\chi:\Gamma_{L}\longrightarrow o_{L}^{\times}\) is any continuous character with representation module \(W_{\chi}=o_{L}t_{\chi}\) then, for any \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R},\) we have the twisted \((\varphi_{L},\Gamma_{L})\)-module \(M(\chi)\) where \(M(\chi):=M\otimes_{o_{L}}W_{\chi}\) as \(\mathcal{R}\)-module, \(\varphi_{M(\chi)}(m\otimes w):=\varphi_{M}(m)\otimes w\), and \(\gamma|M(\chi)(m\otimes w):=\gamma|M(m)\otimes\gamma|W_{\chi}(w)=\chi(\gamma) \cdot\gamma|M(m)\otimes w\) for \(\gamma\in\Gamma_{L}.\) It follows that \(\psi_{M(\chi)}(m\otimes w)=\psi_{M}(m)\otimes w\). For the character \(\chi_{LT}\) we take \(W_{\chi_{LT}}=T=o_{L}\eta\) and \(W_{\chi_{LT}^{-1}}=T^{*}=o_{L}\eta^{*}\) as representation module, where \(T^{*}\) denotes the \(o_{L}\)-dual with dual basis \(\eta^{*}\) of \(\eta\). Consider the \(\mathcal{R}_{K}\)-linear (but of course not \(\mathcal{R}_{K}(\Gamma_{L})\)-linear) map \[tw_{\chi}:M\to M(\chi),\ m\mapsto m\otimes t_{\chi}.\] **Lemma 4.5.22**.: _There is a commutative diagram_ Proof.: We have for all \(f\in\mathcal{R}_{K}(\Gamma_{L})\), \[<f,\{tw_{\chi_{LT}^{-j}}(\check{m}),tw_{\chi_{LT}^{j}}(m)\}_{Iw}> _{\Gamma_{L}} =\{f\cdot\left((\varphi_{L}-1)\check{m}\otimes\eta^{\otimes-j} \right),(\frac{\pi_{L}}{q}\varphi_{L}-1)m\otimes\eta^{\otimes j}\}\] \[=\{\left(Tw_{\chi_{LT}^{-j}}(f)\cdot(\varphi_{L}-1)\check{m} \right)\otimes\eta^{\otimes-j},(\frac{\pi_{L}}{q}\varphi_{L}-1)m\otimes\eta^ {\otimes j}\}\] \[=\{\left(Tw_{\chi_{LT}^{-j}}(f)\cdot(\varphi_{L}-1)\check{m} \right),(\frac{\pi_{L}}{q}\varphi_{L}-1)m\}\] \[=<Tw_{\chi_{LT}^{-j}}(f),\{\check{m},m\}_{Iw}>_{\Gamma_{L}}\] \[=<f,Tw_{\chi_{LT}^{j}}(\{\check{m},m\}_{Iw})>_{\Gamma_{L}}\] where we used Corollary 4.5.11 for the last equation. The second equation is clear for \(\delta\)-distributions and hence extends by the uniqueness result of Thm. 4.3.21, cf. the proof of Thm. 4.3.20. #### 4.5.5 The abstract reciprocity formula We keep the notation from the preceding subsection and set \(t_{\mathfrak{D}}:=\log_{\mathfrak{D}}\). 
Compatibility of the Iwasawa pairing under comparison isomorphismsLet \(M,N\) be (not necessarily etale) \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-modules over \(\mathcal{R}\). We extend the action of \(\Gamma_{L}\), \(\varphi_{L}\) and \(\psi_{L}\) to the \(\mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]\)-module \(M[\frac{1}{t_{\mathfrak{D}}}]\) (and in the same way to \(N[\frac{1}{t_{\mathfrak{D}}}]\)) as follows:19 Footnote 19: Since \(t_{\mathfrak{D}}^{k}=\varphi_{L}(\frac{t_{\mathfrak{D}}^{k}}{\pi_{L}^{k}})\) one checks that \(\psi_{L}(t_{\mathfrak{D}}^{k}m)=\frac{t_{\mathfrak{D}}^{k}}{\pi_{L}^{k}}\psi_{ L}(m)\) by the projection formula. In particular, the definition is independent of the chosen denominator. \[\gamma\frac{m}{t_{\mathfrak{D}}^{k}} :=\frac{\gamma m}{\gamma t_{\mathfrak{D}}^{k}}=\frac{\frac{ \gamma m}{\gamma t_{\mathfrak{D}}^{k}}}{t_{\mathfrak{D}}^{k}},\] \[\varphi_{L}(\frac{m}{t_{\mathfrak{D}}^{k}}) :=\frac{\varphi_{L}(m)}{\varphi_{L}(t_{\mathfrak{D}}^{k})}=\frac {\frac{\varphi_{L}(m)}{\pi_{L}^{k}}}{t_{\mathfrak{D}}^{k}}\text{ and }\] \[\psi_{L}(\frac{m}{t_{\mathfrak{D}}^{k}}) :=\frac{\pi_{L}^{k}\psi_{L}(m)}{t_{\mathfrak{D}}^{k}}.\] Now we assume that there is an isomorphism \[c:\mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]\otimes_{\mathcal{R}}M\xrightarrow{ \cong}\mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]\otimes_{\mathcal{R}}N\] of \((\varphi_{L},\Gamma_{L})\)-modules over \(\mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]\). **Lemma 4.5.23**.: 1. \((M[\frac{1}{t_{\mathfrak{D}}}])^{\psi_{L}=0}=(M^{\psi_{L}=0})[\frac{1}{t_{ \mathfrak{D}}}]:=\{\frac{m}{t_{\mathfrak{D}}^{k}}|m\in M^{\psi_{L}=0},\,k\geq 0\}.\)__ 2. _The (separatedly continuous)_ \(\mathcal{R}_{K}(\Gamma_{L})\)_-action on_ \(M^{\psi_{L}=0}\) _extends to a (separatedly continuous with respect to direct limit topology) action of_ \(\mathcal{R}_{K}(\Gamma_{L})\) _on_ \((M[\frac{1}{t_{\mathfrak{D}}}])^{\psi_{L}=0}.\)__ Proof.: For (i) note that \(0=\psi_{L}(\frac{m}{t_{\mathfrak{D}}^{k}})=\frac{\pi_{L}^{k}\psi_{L}(m)}{t_{ \mathfrak{D}}^{k}}\) if and only if \(\psi_{L}(m)=0\). For (ii) take for any \(f\in\mathcal{R}_{K}(\Gamma_{L})\) the direct limit of the following commutative diagram This defines a (separatedly continuous) action. Consider the composite map \[\tilde{c}:\mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]\otimes_{ \mathcal{R}}\tilde{M} \cong\mathrm{Hom}_{\mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]}( \mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]\otimes_{\mathcal{R}}M,\mathcal{R}[ \frac{1}{t_{\mathfrak{D}}}]\otimes_{\mathcal{R}}\Omega_{\mathcal{R}}^{1})\] \[\cong\mathrm{Hom}_{\mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]}( \mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]\otimes_{\mathcal{R}}N,\mathcal{R}[ \frac{1}{t_{\mathfrak{D}}}]\otimes_{\mathcal{R}}\Omega_{\mathcal{R}}^{1})\] \[\cong\mathcal{R}[\frac{1}{t_{\mathfrak{D}}}]\otimes_{\mathcal{R}} \tilde{N}\] where the second isomorphism is \((c^{-1})^{*}\). **Lemma 4.5.24**.: \(c^{\psi_{L}=0}\) _and \(\bar{c}^{\psi_{L}=0}\) are \(\mathcal{R}_{K}(\Gamma_{L})\)-equivariant._ Proof.: Consider, for \(n\in\mathbb{Z},\) the \((\varphi_{L},\Gamma_{L})\)-modules (!) \(M_{n}:=t_{\mathfrak{T}}^{-n}M\) over \(\mathcal{R}\) and note that the inclusion \((M_{n})^{\psi_{L}=0}\subseteq(M[\frac{1}{t_{\mathfrak{T}}}])^{\psi_{L}=0}\) is \(\mathcal{R}_{K}(\Gamma_{L})\)-equivariant by construction of the action. 
Now, since \(M,N\) are finitely generated over \(\mathcal{R},\) there exists \(n_{0}\geqslant 0\) such that \(c\) restricts to a homomorphism \(c_{0}:M\to N_{n_{0}}\) of \((\varphi_{L},\Gamma_{L})\)-modules over \(\mathcal{R},\) whence \(c_{0}^{\psi_{L}=0}:M^{\psi_{L}=0}\to N_{n_{0}}^{\psi_{L}=0}\subseteq(N[\frac{1 }{t_{\mathfrak{T}}}])^{\psi_{L}=0}\) is \(\mathcal{R}_{K}(\Gamma_{L})\)-equivariant by the functoriality of Thm. 4.3.23 and similarly for the induced maps \(c_{n}:M_{n}\to N_{n_{0}+n}\) for all \(n\geqslant 0.\) The equivariance for \(c^{\psi_{L}=0}\) follows by taking direct limits. Similarly, for some \(n_{0}\geqslant 0,\) the inverse \(b\) of \(c\) induces homomorphisms \(b_{n}:N_{-n_{0}-n}\to M_{-n}\) of \((\varphi_{L},\Gamma_{L})\)-modules over \(\mathcal{R}\) all \(n\in\mathbb{Z}.\) We obtain homomorphisms of \((\varphi_{L},\Gamma_{L})\)-modules over \(\mathcal{R}\) \[\begin{split}\tilde{c}_{n}:(\check{M})_{n}&=\mathrm{ Hom}_{\mathcal{R}}(M,t_{\mathfrak{T}}^{-n}\Omega_{\mathcal{R}}^{1})\\ &\cong\mathrm{Hom}_{\mathcal{R}}(M_{-n},\Omega_{\mathcal{R}}^{1}) \\ &\cong\mathrm{Hom}_{\mathcal{R}}(N_{-n_{0}-n},\Omega_{\mathcal{R}}^ {1})\\ &\cong(\check{N})_{n_{0}+n}\end{split}\] where the third isomorphism is \((b_{n})^{*}.\) As above \((\tilde{c}_{n})^{\psi_{L}=0}\) is \(\mathcal{R}_{K}(\Gamma_{L})\)-equivariant and the claim follows by taking direct limits. **Lemma 4.5.25**.: _The following diagram commutes on the vertical intersections_ _i.e., if \(\check{m}\in\check{M},m\in M,\check{n}\in\check{N},n\in N\) with \(\check{c}(\check{m})=\check{n}\) and \(c(m)=n,\) then_ \[\{\check{m},m\}_{M,Iw}^{0}=\{\check{n},n\}_{N,Iw}^{0}.\] Proof.: By definition of the Iwasawa pairings we have for all \(f\in\mathcal{R}_{K}(\Gamma_{L})\) \[<f,\{\check{n},n\}_{N,Iw}^{0}>_{\Gamma_{L}} =\{f\cdot\check{n},n\}_{N}\] \[=\{f\cdot\check{c}(\check{m}),c(m)\}_{N}\] \[=\{\check{c}(f\cdot\check{m}),c(m)\}_{N}\] \[=\operatorname{res}_{\mathfrak{H}}(\check{c}(f\cdot\check{m})(c( m))\] \[=\operatorname{res}_{\mathfrak{H}}(((f\cdot\check{m})\circ c^{-1 })(c(m))\] \[=\operatorname{res}_{\mathfrak{H}}((f\cdot\check{m})(m))\] \[=\{f\cdot\check{m},m\}_{M}\] \[=<f,\{\check{m},m\}_{M,Iw}^{0}>_{\Gamma_{L}}\] whence the claim. Here we use the \(\mathcal{R}_{K}(\Gamma_{L})\)-equivariance of \(\check{c}\) in the third equality. Now let \(D\) be any \(\varphi_{L}\)-module over \(L\) of finite dimension, say \(d\), (with trivial \(\Gamma_{L}\)-action) and consider the \((\varphi_{L},\Gamma_{L})\)-module \(N:=\mathcal{R}\otimes_{L}D\) over \(\mathcal{R}\) (with diagonal actions) Since \(N\cong\mathcal{R}^{d}\) as \(\Gamma_{L}\)-module, it is \(L\)-analytic. Moreover, we have \(\check{N}\cong\Omega^{1}_{\mathcal{R}}\otimes D^{*}\) with \(D^{*}=\operatorname{Hom}_{L}(D,L)\) being the dual \(\varphi_{L}\)-module. 
We set

\[\tilde{\Omega}:=\left\{\begin{array}{ll}1,&\text{if }\mathfrak{Y}=\mathfrak{X};\\ \Omega,&\text{if }\mathfrak{Y}=\mathbf{B}\text{ (and }\Omega\in K).\end{array}\right.\]

**Lemma 4.5.26**.: _If \(\mathfrak{Y}=\mathbf{B}\), we assume \(\Omega\in K\). There is a commutative diagram_

\[\begin{array}{ccccc}(\Omega^{1}_{\mathcal{R}}\otimes D^{*})^{\psi_{L}=0}&\times&(\mathcal{R}\otimes_{L}D)^{\psi_{L}=0}&\xrightarrow{\{\,,\,\}^{0}_{Iw}}&\mathcal{R}_{K}(\Gamma_{L})\\ \mathfrak{M}^{\Omega^{1}}\otimes\operatorname{id}\,\uparrow\cong&&\sigma_{-1}\mathfrak{M}\circ\mathfrak{i}_{*}\otimes\operatorname{id}\,\uparrow\cong&&\uparrow\,\frac{q}{q-1}\tilde{\Omega}\\ \mathcal{R}_{K}(\Gamma_{L})\otimes_{L}D^{*}&\times&\mathcal{R}_{K}(\Gamma_{L})\otimes_{L}D&\longrightarrow&\mathcal{R}_{K}(\Gamma_{L})\end{array}\]

_where the right vertical map is multiplication by \(\frac{q}{q-1}\tilde{\Omega}\) and the bottom line is the \(\mathcal{R}_{K}(\Gamma_{L})\)-linear extension of the canonical pairing between \(D^{*}\) and \(D\), i.e., it maps \((\lambda\otimes l,\mu\otimes d)\) to \(\lambda\mu l(d)\)._

Proof.: Let \(\check{d}_{j}\) and \(d_{i}\) be bases of \(D^{*}\) and \(D\), respectively, and \(x=\sum_{j}\lambda_{j}\cdot\check{d}_{j}\) and \(y=\sum_{i}\mu_{i}\cdot d_{i}\). Then, by definition of \(\{\,,\,\}_{Iw}^{0}\) we have for all \(\lambda\in\mathcal{R}_{K}(\Gamma_{L})\)

\[<\lambda,\{(\mathfrak{M}^{\Omega^{1}}\otimes\operatorname{id})(x),(\sigma_{-1}\mathfrak{M}\circ\mathfrak{i}_{*}\otimes\operatorname{id})(y)\}_{Iw}^{0}>_{\Gamma_{L}} =\{(\lambda\mathfrak{M}^{\Omega^{1}}\otimes\operatorname{id})(x),(\sigma_{-1}\mathfrak{M}\circ\mathfrak{i}_{*}\otimes\operatorname{id})(y)\}\]
\[=\{\sum_{j}(\lambda\lambda_{j})\cdot(\operatorname{ev}_{1}d\log_{\mathfrak{Y}}\otimes\check{d}_{j}),\sum_{i}\mathfrak{i}_{*}(\mu_{i})\cdot\operatorname{ev}_{-1}\otimes d_{i}\}\]
\[=\sum_{i,j}\{(\lambda\lambda_{j}\mu_{i})\cdot(\operatorname{ev}_{1}d\log_{\mathfrak{Y}})\otimes\check{d}_{j},\operatorname{ev}_{-1}\otimes d_{i}\}\]
\[=\sum_{i,j}\operatorname{res}_{\mathfrak{Y}}\bigg{(}\check{d}_{j}\big{(}d_{i}\big{)}\operatorname{ev}_{-1}(\lambda\lambda_{j}\mu_{i})\cdot(\operatorname{ev}_{1}d\log_{\mathfrak{Y}})\bigg{)}\]
\[=\sum_{i,j}\check{d}_{j}\big{(}d_{i}\big{)}\operatorname{res}_{\mathfrak{Y}}\bigg{(}\operatorname{ev}_{-1}(\lambda\lambda_{j}\mu_{i})\cdot(\operatorname{ev}_{1}d\log_{\mathfrak{Y}})\bigg{)}.\]

Here, for the third equation we used property (iii) in Lemma 4.5.1. On the other hand we can pair the image \(\sum_{i,j}\lambda_{j}\mu_{i}\check{d}_{j}(d_{i})\) of \((x,y)\) under the bottom pairing with \(\lambda\) using the description (130)

\[\frac{q}{q-1}<\lambda,\sum_{i,j}\lambda_{j}\mu_{i}\check{d}_{j}(d_{i})>_{\Gamma_{L}}=\sum_{i,j}\check{d}_{j}(d_{i})\{\mathfrak{M}(\sigma_{-1}),\mathfrak{M}^{\Omega^{1}}(\lambda\lambda_{j}\mu_{i})\}=\sum_{i,j}\check{d}_{j}(d_{i})\operatorname{res}_{\mathfrak{X}}\bigg{(}\operatorname{ev}_{-1}(\lambda\lambda_{j}\mu_{i})\cdot(\operatorname{ev}_{1}d\log_{\mathfrak{X}})\bigg{)},\]

whence comparing with the above gives the result for \(\mathfrak{Y}=\mathfrak{X}\), using Prop. 4.5.9. If \(\mathfrak{Y}=\mathbf{B}\), we obtain the factor \(\Omega\) due to Remark 4.5.3.
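To fix ideas, the simplest case \(d=1\), \(D=L\) (with basis \(d_{1}=1\) and dual basis \(\check{d}_{1}=\operatorname{id}\)) amounts to the identity

\[\{\mathfrak{M}^{\Omega^{1}}(\lambda),\sigma_{-1}\mathfrak{M}(\mathfrak{i}_{*}(\mu))\}^{0}_{Iw}=\frac{q}{q-1}\,\tilde{\Omega}\,\lambda\mu\qquad\text{for all }\lambda,\mu\in\mathcal{R}_{K}(\Gamma_{L}),\]

i.e., precisely the description (130) of the pairing \(<\,,\,>_{\Gamma_{L}}\), combined with Prop. 4.5.9 (resp. Remark 4.5.3 for \(\mathfrak{Y}=\mathbf{B}\)); this is a direct unwinding of the proof above, nothing more.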
**Definition 4.5.27**.: _An \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-module \(M\) over \(\mathcal{R}\) is called étale, if it is semistable and of slope \(0\). We write \(\mathfrak{M}^{an,\text{\'et}}(\mathcal{R})\) for the category of étale, analytic \((\varphi_{L},\Gamma_{L})\)-modules over \(\mathcal{R}\)._

Crucial is the following

**Theorem 4.5.28**.: _There are equivalences of categories_

\[Rep_{L}^{an}(G_{L})\longleftrightarrow\mathfrak{M}^{an,\text{\'et}}(\mathcal{R}_{L}(\mathbf{B})),\qquad V\mapsto D_{\mathrm{rig}}^{\dagger}(V),\]

_and_

\[Rep_{L}^{an}(G_{L})\longleftrightarrow\mathfrak{M}^{an,\text{\'et}}(\mathcal{R}_{L}(\mathfrak{X})),\qquad V\mapsto D_{\mathrm{rig}}^{\dagger}(V)_{\mathfrak{X}},\]

_where the latter functor is defined in the proof below._

Proof.: Thm. D in [1] and Thm. 3.27 in [2], which states an equivalence of categories

\[\mathfrak{M}^{an,\text{\'et}}(\mathcal{R}_{L}(\mathbf{B}))\longleftrightarrow\mathfrak{M}^{an,\text{\'et}}(\mathcal{R}_{L}(\mathfrak{X})),\qquad M\mapsto M_{\mathfrak{X}}. \tag{134}\]

We recall the definition of the subring \(\mathbf{B}_{L}^{\dagger}\) of \(\mathcal{R}_{L}(\mathbf{B})\) by defining first \(\tilde{\mathbf{A}}:=W(\mathbb{C}_{p}^{\flat})_{L}\) and

\[\tilde{\mathbf{A}}^{\dagger}:=\{x=\sum_{n\geq 0}\pi_{L}^{n}[x_{n}]\in\tilde{\mathbf{A}}:|\pi_{L}|^{n}|x_{n}|_{\flat}^{r}\xrightarrow{n\to\infty}0\text{ for some }r>0\}.\]

Then we set \(\mathbf{A}^{\dagger}:=\tilde{\mathbf{A}}^{\dagger}\cap\mathbf{A}\), \(\mathbf{B}^{\dagger}:=\mathbf{A}^{\dagger}[\frac{1}{\pi_{L}}]\) as well as \(\mathbf{A}_{L}^{\dagger}:=(\mathbf{A}^{\dagger})^{H_{L}}\) and \(\mathbf{B}_{L}^{\dagger}:=(\mathbf{B}^{\dagger})^{H_{L}}\). It follows from the proof of [1, Thm. 10.1] that for \(V\in Rep_{L}^{an}(G_{L})\) we have \(D_{\mathrm{rig}}^{\dagger}(V)=\mathcal{R}_{L}(\mathbf{B})\otimes_{\mathbf{B}_{L}^{\dagger}}D^{\dagger}(V)\), where \(D^{\dagger}(V)\) belongs to \(\mathfrak{M}^{\text{\'et}}(\mathbf{B}_{L}^{\dagger})\). From the theory of Wach modules we actually know that \(D_{LT}(V)\) is even of finite height, if \(V\) is in addition crystalline:

\[D^{\dagger}(V)=\mathbf{B}_{L}^{\dagger}\otimes_{\mathbf{A}_{L}^{+}}N(T)=\mathbf{B}_{L}^{\dagger}\otimes_{\mathbf{B}_{L}^{+}}N(V)\]

for any Galois stable \(o_{L}\)-lattice \(T\subseteq V\). From the big diagram in section 3.1 we thus obtain the following diagram, in which the horizontal maps are equivalences of categories.
Here \(\mathfrak{M}^{\text{\'et},cris}(\mathbf{B}_{L})\) denotes the essential image of \(\mathrm{Rep}_{L}^{cris,an}(G_{L})\) under \(D_{LT}(-)\) in \(\mathfrak{M}^{\text{\'et}}(\mathbf{B}_{L})\) with \(\mathbf{B}_{L}:=\mathbf{A}_{L}[\frac{1}{\pi_{L}}]\). Now let \(T\) be an \(o_{L}\)-lattice in an \(L\)-linear continuous representation \(V\) of \(G_{L}\) such that \(V^{*}(1)\) (and hence \(V(\tau^{-1})\)) is \(L\)-analytic and crystalline. Then it follows from [KR] and the discussion above that

\[M:=D_{\mathrm{rig}}^{\dagger}(V(\tau^{-1}))=\mathcal{R}_{L}(\mathbf{B})\otimes_{\mathcal{O}_{K}(\mathbf{B})}\mathcal{M}(D_{cris,L}(V(\tau^{-1})))=\mathcal{R}_{L}(\mathbf{B})\otimes_{\mathbf{A}_{L}^{+}}N(T(\tau^{-1}))\]

as well as

\[\check{M}=D_{\mathrm{rig}}^{\dagger}(V^{*}(1))=\mathcal{R}_{L}(\mathbf{B})\otimes_{\mathcal{O}_{K}(\mathbf{B})}\mathcal{M}(D_{cris,L}(V^{*}(1)))=\mathcal{R}_{L}(\mathbf{B})\otimes_{\mathbf{A}_{L}^{+}}N(T^{*}(1))\]

and the comparison isomorphism (20) induces isomorphisms

\[\mathrm{comp}_{M}:M[\frac{1}{t_{\mathfrak{Y}}}]\cong\mathcal{R}_{L}(\mathbf{B})[\frac{1}{t_{\mathfrak{Y}}}]\otimes_{L}D_{cris,L}(V(\tau^{-1}))\]

and

\[\mathrm{comp}_{\check{M}}:\check{M}[\frac{1}{t_{\mathfrak{Y}}}]\cong\mathcal{R}_{L}(\mathbf{B})[\frac{1}{t_{\mathfrak{Y}}}]\otimes_{L}D_{cris,L}(V^{*}(1)).\]

By [BSX, §§3.4/5] an analogue of Kisin-Ren modules exists for \(\mathfrak{Y}=\mathfrak{X}\), i.e., if we take \(M:=D_{\mathrm{rig}}^{\dagger}(V(\tau^{-1}))_{\mathfrak{X}}\) and \(\check{M}=D_{\mathrm{rig}}^{\dagger}(V^{*}(1))_{\mathfrak{X}}\) we obtain analogous comparison isomorphisms

\[\mathrm{comp}_{M}:M[\frac{1}{t_{\mathfrak{Y}}}]\cong\mathcal{R}_{L}(\mathfrak{X})[\frac{1}{t_{\mathfrak{Y}}}]\otimes_{L}D_{cris,L}(V(\tau^{-1})) \tag{135}\]

and

\[\mathrm{comp}_{\check{M}}:\check{M}[\frac{1}{t_{\mathfrak{Y}}}]\cong\mathcal{R}_{L}(\mathfrak{X})[\frac{1}{t_{\mathfrak{Y}}}]\otimes_{L}D_{cris,L}(V^{*}(1)),\]

which this time stem from [BSX, Prop. 3.42] by base change \(\mathcal{R}_{L}(\mathfrak{X})\otimes_{\mathcal{O}_{K}(\mathfrak{X})}-\) using the inclusion \(\mathcal{O}_{L}(\mathfrak{X})[Z^{-1}]\subseteq\mathcal{R}_{L}(\mathfrak{X})[\frac{1}{t_{\mathfrak{Y}}}]\). Moreover, these comparison isomorphisms for \(\mathbf{B}\) and \(\mathfrak{X}\) are compatible with regard to the equivalence of categories (134) by [BSX, Thm. 3.48]. Note that for \(c=\mathrm{comp}_{M}\) and \(D=D_{cris,L}(V(\tau^{-1}))\) we have

\[\mathrm{comp}_{\check{M}}=(\mathrm{comp}_{\Omega_{\mathcal{R}}^{1}}\otimes_{L}\mathrm{id}_{D^{*}})\circ\check{c} \tag{136}\]

using the identifications \(\Omega^{1}_{\mathcal{R}}\cong\mathcal{R}(\chi_{LT})\) and

\[D_{cris,L}(V^{*}(1))\cong D^{*}\otimes D_{cris,L}(L(\chi_{LT})).\]

We set \(b:=\mathrm{comp}_{\Omega^{1}_{\mathcal{R}}}(t^{-1}_{\mathfrak{Y}}d\log_{LT})=\frac{1}{t_{\mathfrak{Y}}}\otimes\eta\in D_{0}:=D_{cris,L}(L(\chi_{LT}))\) and

\[\tilde{\nabla}:=\left\{\begin{array}{ll}\nabla,&\text{if }\mathfrak{Y}=\mathfrak{X};\\ \frac{\nabla}{\Omega},&\text{if }\mathfrak{Y}=\mathbf{B}\ (\text{and }\Omega\in K).\end{array}\right.\]
**Remark 4.5.29**.: _As operators on \(\mathcal{R}\) we have the equalities_

\[\nabla=t_{\mathfrak{Y}}\partial^{\mathfrak{Y}}_{\mathrm{inv}}\text{ and }\tilde{\nabla}=t_{\mathfrak{Y}}\tilde{\partial}^{\mathfrak{Y}}_{\mathrm{inv}},\]

_where we define \(\partial^{\mathbf{B}}_{\mathrm{inv}}:=\partial_{\mathrm{inv}}\) and_

\[\tilde{\partial}^{\mathfrak{Y}}_{\mathrm{inv}}:=\left\{\begin{array}{ll}\partial^{\mathfrak{X}}_{\mathrm{inv}},&\text{if }\mathfrak{Y}=\mathfrak{X};\\ \frac{\partial_{\mathrm{inv}}}{\Omega},&\text{if }\mathfrak{Y}=\mathbf{B}\ (\text{and }\Omega\in K).\end{array}\right.\]

_Indeed, for \(\mathfrak{Y}=\mathbf{B}\) the fact (112) grants these equalities of operators on the subring \(\mathcal{O}_{K}(\mathbf{B})\). Concerning the ring \(\mathcal{R}_{K}(\mathbf{B})\) we note that \(\nabla\) acts as a continuous derivation, as can be shown similarly as in [KR, Lem. 2.1.2], while for the operator \(t_{\mathbf{B}}\partial_{\mathrm{inv}}\) this is clear anyway. Thus the same equalities hold for \(\mathcal{R}\) by 4.3.1. Indeed, on the localisation \(\mathcal{O}_{K}(\mathbf{B})_{Z^{\mathbb{N}}}\) the derivation extends uniquely by the derivation property and then it extends uniquely by continuity to \(\mathcal{R}_{K}(\mathbf{B})\). Regarding \(\mathfrak{Y}=\mathfrak{X}\) note that all operators are defined over \(K\). Since the equality can be checked over \(\mathbb{C}_{p}\), the claim follows from Remark 4.2.9 and the previous case \(\mathfrak{Y}=\mathbf{B}\)._

**Lemma 4.5.30**.: _The following diagram commutes, assuming \(\Omega\in K\), if \(\mathfrak{Y}=\mathbf{B}\)._

Proof.: We first give the proof for \(\mathfrak{Y}=\mathbf{B}\). Observe, since on \(D^{*}\) we have the identity throughout, that the commutativity of the above diagram follows from the commutativity of (137), where the map \(\tilde{\partial}^{\mathfrak{Y}}_{\mathrm{inv}}\otimes t_{\mathfrak{Y}}:\mathcal{R}\otimes_{L}D_{cris,L}(L(\chi_{LT}))\rightarrow\mathcal{R}(\chi_{LT})\) sends \(f\otimes\frac{1}{t_{\mathfrak{Y}}}\otimes\eta\) to \(\tilde{\partial}^{\mathfrak{Y}}_{\mathrm{inv}}f\otimes\eta\), and the composite with the natural identification \(\mathcal{R}(\chi_{LT})\cong\Omega^{1}\), which sends \(\eta\) to \(d\log_{LT}\), is the map \(\frac{d}{\Omega}:\mathcal{R}\rightarrow\Omega^{1}_{\mathcal{R}}\) upon identifying \(\mathcal{R}\otimes_{L}D_{cris,L}(L(\chi_{LT}))\) with \(\mathcal{R}\) by sending \(f\otimes\frac{1}{t_{\mathfrak{Y}}}\otimes\eta\) to \(f\). Remark 4.5.29 implies the commutativity of the left lower corner, while for the upper left corner it follows from (104), the easily checked identity \(\tilde{\partial}^{\mathfrak{Y}}_{\mathrm{inv}}\eta(1,Z)=\eta(1,Z)\) and (123):

\[\mathfrak{M}^{\Omega^{1}}(\lambda)=\big{(}Tw_{\chi_{LT}}(\lambda)\cdot\eta(1,Z)\big{)}d\log_{LT}=\big{(}Tw_{\chi_{LT}}(\lambda)\cdot\tilde{\partial}^{\mathfrak{Y}}_{\mathrm{inv}}\eta(1,Z)\big{)}d\log_{LT}=\tilde{\partial}^{\mathfrak{Y}}_{\mathrm{inv}}(\lambda\cdot\eta(1,Z))d\log_{LT}.\]

Finally, since \(\eta(1,Z)\otimes b\in\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(L(\chi_{LT}))\) is sent up to \(\eta(1,Z)d\log_{LT}\) and down to \(t_{\mathfrak{Y}}\eta(1,Z)\otimes b\), the compatibility with \(\operatorname{comp}_{\Omega^{1}_{\mathcal{R}}}\) is easily checked. The same proof works for \(\mathfrak{Y}=\mathfrak{X}\) by using (103) instead of (104) and replacing \(\eta(1,Z)\) and \(\frac{d}{\Omega}\) by \(\operatorname{ev}_{1}\) and \(d\), respectively.
Now we introduce a pairing (if \(\mathfrak{Y}=\mathbf{B}\) assuming \(\Omega\in K\) as usual)

\[[\;,\;]:=[\;,\;]_{D_{cris,L}(V(\tau^{-1}))}:\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V^{*}(1))\times\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V(\tau^{-1}))\rightarrow\mathcal{R}_{K}(\Gamma_{L})\]

by requiring that the diagram (138) becomes commutative, where the bottom line sends \((\lambda\otimes l\otimes\beta b,\mu\otimes d)\) to \(\lambda\mu\beta l(d)\). Combining the Lemmata 4.5.26 and 4.5.30 we obtain for \(N=\mathcal{R}\otimes_{L}D_{cris,L}(V(\tau^{-1}))\)

**Lemma 4.5.31**.: \([-,-]_{D_{cris,L}(V(\tau^{-1}))}=\frac{q-1}{q}\{\nabla(\operatorname{comp}_{\Omega^{1}_{\mathcal{R}}}\otimes_{L}\operatorname{id}_{D^{*}})^{-1}(-),-\}_{N,Iw}^{0}\)_._

Setting

\[M^{\prime}:=\operatorname{comp}^{-1}(\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V(\tau^{-1})))\text{ and }\check{M}^{\prime}:=\operatorname{comp}^{-1}(\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V^{*}(1)))\]

we obtain

**Theorem 4.5.32**.: _Assume \(\Omega\in K\), if \(\mathfrak{Y}=\mathbf{B}\). For all \(x\in\check{M}^{\prime}\cap(\check{M}^{\psi_{L}=0})\) and \(y\in M^{\prime}\cap(M^{\psi_{L}=0})\) it holds_

\[\frac{q-1}{q}\{\nabla x,y\}_{Iw}^{0}=[x,y],\]

_i.e., the pairings \(\frac{q-1}{q}\{\nabla\,\cdot\,,\,\cdot\,\}^{0}_{Iw}\) on \(\check{M}^{\psi_{L}=0}\times M^{\psi_{L}=0}\) and \([\,\cdot\,,\,\cdot\,]\) on \((\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V^{*}(1)))\times(\mathcal{R}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V(\tau^{-1})))\) agree on the vertical intersections, when both sides are compared via \(\operatorname{comp}_{\check{M}}\) and \(\operatorname{comp}_{M}\) inside \((\check{M}[\frac{1}{t_{\mathfrak{Y}}}])^{\psi_{L}=0}\times(M[\frac{1}{t_{\mathfrak{Y}}}])^{\psi_{L}=0}\). Here \(M=D^{\dagger}_{\mathrm{rig}}(V(\tau^{-1}))\) for \(\mathfrak{Y}=\mathbf{B}\), while for \(\mathfrak{Y}=\mathfrak{X}\) one has to decorate the \(D^{\dagger}_{\mathrm{rig}}\)s with the index \(\mathfrak{X}\)._

**Question:** Can one extend the definition of \([\,,\,]\) and \(\{\,,\,\}\) to \((\check{M}[\frac{1}{t_{\mathfrak{Y}}}])^{\psi_{L}=0}\times(M[\frac{1}{t_{\mathfrak{Y}}}])^{\psi_{L}=0}\) by perhaps enlarging the target \(\mathcal{R}_{K}(\Gamma_{L})\) by an appropriate localisation, which reflects the inversion of \(t_{\mathfrak{Y}}\) somehow?
## Application

### The regulator map

Recall that we write \(\tau^{-1}=\chi_{LT}\chi_{cyc}^{-1}\). Let \(T\) be in \(\operatorname{Rep}^{cris}_{o_{L},f}(G_{L})\) such that \(T(\tau^{-1})\) belongs to \(\operatorname{Rep}^{cris,an}_{o_{L},f}(G_{L})\) with all Hodge-Tate weights in \([0,r]\), and such that \(V:=L\otimes_{o_{L}}T\) does not have any quotient isomorphic to \(L(\tau)\). Then we define the regulator maps

\[\mathbf{L}_{V}:H^{1}_{Iw}(L_{\infty}/L,T)\to D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V(\tau^{-1})),\]
\[\mathcal{L}_{V}^{0}:H^{1}_{Iw}(L_{\infty}/L,T)\to\mathcal{O}_{L}(\mathbf{B})^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V(\tau^{-1})),\]
\[\mathcal{L}_{V}:H^{1}_{Iw}(L_{\infty}/L,T)\to D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V)\]

as (part of) the composite

\[H^{1}_{Iw}(L_{\infty}/L,T)\cong D_{LT}(T(\tau^{-1}))^{\psi_{L}=1}=N(T(\tau^{-1}))^{\psi_{D_{LT}(T(\tau^{-1}))}=1}\xrightarrow{1-\frac{\pi_{L}}{q}\varphi_{L}}\varphi_{L}^{*}(N(V(\tau^{-1})))^{\psi_{L}=0} \tag{141}\]
\[\hookrightarrow\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V(\tau^{-1}))\subseteq\mathcal{O}_{\mathbb{C}_{p}}(\mathbf{B})^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V(\tau^{-1}))\]
\[\xrightarrow{\mathfrak{M}^{-1}\otimes\operatorname{id}}D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V(\tau^{-1}))\to D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V)\]

using [12, Thm. 5.13], Lemma 3.3.6 and the inclusion (22), and where the last map sends \(\mu\otimes d\in D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V(\tau^{-1}))\) to \(\mu\otimes d\otimes\mathbf{d}_{1}\in D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V(\tau^{-1}))\otimes_{L}D_{cris,L}(L(\tau))\cong D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V)\). Note that \(D:=D_{cris,L}(L(\tau))=D^{0}_{dR,L}(L(\tau))=L\mathbf{d}_{1}\) with \(\mathbf{d}_{1}=t_{LT}t_{\mathbb{Q}_{p}}^{-1}\otimes(\eta^{\otimes-1}\otimes\eta^{cyc})\), where \(L(\chi_{LT})=L\eta\) and \(L(\chi_{cyc})=L\eta^{cyc}\). Alternatively, in order to stress that the regulator is essentially the map \(1-\varphi_{L}\), one can rewrite this as

\[H^{1}_{Iw}(L_{\infty}/L,T)\cong D_{LT}(T(\tau^{-1}))^{\psi_{L}=1}=N(T(\tau^{-1}))^{\psi_{D_{LT}(T(\tau^{-1}))}=1}\hookrightarrow N(V(\tau^{-1}))^{\psi_{D_{LT}(V(\tau^{-1}))}=1}\otimes_{L}D\]
\[\xrightarrow{1-\varphi_{L}}\varphi_{L}^{*}(N(V(\tau^{-1})))^{\psi_{L}=0}\otimes_{L}D\hookrightarrow\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V(\tau^{-1}))\otimes_{L}D\subseteq\mathcal{O}_{\mathbb{C}_{p}}(\mathbf{B})^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V)\]
\[\xrightarrow{\mathfrak{M}^{-1}\otimes\operatorname{id}}D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V) \tag{142}\]

where the \(\hookrightarrow\) in the first line sends \(n\) to \(n\otimes\mathbf{d}_{1}\) and the \(\varphi_{L}\) now acts diagonally. By construction, this regulator map \(\mathcal{L}_{V}\) takes values in \(D(\Gamma_{L},K)^{G_{L},*}\otimes_{L}D_{cris,L}(V)\), where the twisted action of \(G_{L}\) on the distribution algebra is induced by the Mellin-transform as in (ii) of Prop. 4.1.25.
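Implicit in (141) is the following eigenvalue bookkeeping, which we make explicit as a minimal check; we assume the normalisation \(\psi_{L}\circ\varphi_{L}=\frac{q}{\pi_{L}}\cdot\operatorname{id}\) underlying the projection formula (cf. footnote 19). For \(y\) with \(\psi_{L}(y)=y\) one computes

\[\psi_{L}\Big{(}\big{(}1-\tfrac{\pi_{L}}{q}\varphi_{L}\big{)}y\Big{)}=\psi_{L}(y)-\tfrac{\pi_{L}}{q}\cdot\tfrac{q}{\pi_{L}}\,y=y-y=0,\]

so that \(1-\frac{\pi_{L}}{q}\varphi_{L}\) indeed carries the \(\psi_{L}=1\)-eigenspace into the \(\psi_{L}=0\)-eigenspace, as used in (141).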
We write \(\nabla_{\operatorname{Lie}}\in\operatorname{Lie}(\Gamma_{L})\) for the element in the Lie algebra of \(\Gamma_{L}\) corresponding to \(1\) under the identification \(\operatorname{Lie}(\Gamma_{L})=L\).

**Proposition 5.1.1**.: _The regulator maps for \(V\) and \(V(\chi_{LT})\) (assuming that both representations satisfy the conditions above) are related by_

\[\mathcal{L}_{V(\chi_{LT})}(x\otimes\eta)=\nabla_{\operatorname{Lie}}\cdot\left(\frac{1}{\Omega}\mathrm{Tw}_{\chi_{LT}^{-1}}(\mathcal{L}_{V}(x))\otimes t_{LT}^{-1}\eta\right),\]

_i.e., the \(\Gamma_{L}\)-equivariant diagram whose rows are_

\[H^{1}_{Iw}(L_{\infty}/L,V(\chi_{LT}))\xrightarrow{\mathcal{L}_{V(\chi_{LT})}}D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V(\chi_{LT}))\]

_and_

\[H^{1}_{Iw}(L_{\infty}/L,V)\otimes_{o_{L}}L(\chi_{LT})\xrightarrow{\mathcal{L}_{V}\otimes L(\chi_{LT})}D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V)\otimes_{L}L(\chi_{LT})\]

_commutes, with the vertical maps given by the canonical identification on the left and by the displayed twisting on the right._

Proof.: Analogous to [21, Prop. 3.1.4]. Note that the period \(\Omega\) enters due to (104).

This twisting property can be used to drop the condition concerning the Hodge-Tate weights in the definition of the regulator map, i.e., upon replacing \(D(\Gamma_{L},\mathbb{C}_{p})\) in the target by its total ring of quotients, one can extend the regulator map as usual to all \(T\) in \(\operatorname{Rep}_{o_{L},f}^{cris}(G_{L})\) such that \(T(\tau^{-1})\) belongs to \(\operatorname{Rep}_{o_{L},f}^{cris,an}(G_{L})\). In order to better understand the effect of twisting we have the following

**Lemma 5.1.2**.: _For \(\mu\in D(\Gamma_{L},K)\) we have_

\[\frac{1}{\Omega}(\nabla_{\operatorname{Lie}}Tw_{\chi^{-1}})(\mu)=\mathfrak{M}^{-1}(t_{LT}\mathfrak{M}(\mu))\]

_and for all \(n\geq 1\)_

\[(\nabla_{\operatorname{Lie}}Tw_{\chi^{-1}}(\mu))(\chi_{LT}^{n})=n\mu(\chi_{LT}^{n-1}).\]

Proof.: The first claim follows by combining (146) with (104), while the second claim is just Lemma 4.1.22 applied to the first.

One significance of regulator maps is that they should interpolate (dual) Bloch-Kato exponential maps. We shall prove such interpolation formulae in subsection 5.2.4 by means of a reciprocity formula.

#### 5.1.1 The basic example

Setting \(U:=\varprojlim_{n}o_{L_{n}}^{\times}\) with transition maps given by the norm, we are looking for a map

\[\mathcal{L}:U\otimes_{\mathbb{Z}}T_{\pi}^{*}\to D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(L(\tau))\]

such that

\[\frac{\Omega^{r}}{r!}\frac{1-\pi_{L}^{-r}}{1-\frac{\pi_{L}^{r}}{q}}\mathcal{L}(u\otimes a\eta^{*})(\chi_{LT}^{r})\otimes(t_{LT}^{r-1}\otimes\eta^{\otimes-r+1})=CW(u\otimes a\eta^{\otimes-r}) \tag{143}\]

for all \(r\geq 1,u\in U,a\in o_{L}\), where \(CW\) denotes the diagonal map in

**Theorem 5.1.3** (A special case of Kato's explicit reciprocity law, [21, Cor.
8.7]).: _For \(r\geq 1\) the diagram commutes, i.e., the diagonal map sends \(u\otimes a\eta^{\otimes-r}\) to_

\[a(1-\pi_{L}^{-r})r\psi_{CW}^{r}(u)\mathbf{d}_{r}=a\frac{1-\pi_{L}^{-r}}{(r-1)!}\partial_{\operatorname{inv}}^{r}\log g_{u,\eta}(Z)_{|Z=0}\mathbf{d}_{r}\]

_with \(\mathbf{d}_{r}:=t_{LT}^{r}t_{\mathbb{Q}_{p}}^{-1}\otimes(\eta^{\otimes-r}\otimes\eta^{cyc})\)._

We set \(\mathcal{L}=\mathfrak{L}\otimes\mathbf{d}_{1}\) with \(\mathfrak{L}\) given as follows

\[\mathfrak{L}:U\otimes T^{*}_{\pi}\xrightarrow{\nabla}o_{L}[[\omega_{LT}]]^{\psi_{L}=1}\xrightarrow{(1-\frac{\pi_{L}}{q}\varphi)}\mathcal{O}_{\mathbb{C}_{p}}(\mathbf{B})^{\psi_{L}=0}\xrightarrow{\log_{LT}\cdot}\mathcal{O}_{\mathbb{C}_{p}}(\mathbf{B})^{\psi_{L}=0}\xrightarrow{\mathfrak{M}^{-1}}D(\Gamma_{L},\mathbb{C}_{p}),\]

where the map \(\nabla\) has been defined in [12, §6] as the homomorphism

\[\nabla:U\otimes_{\mathbb{Z}}T^{*}\longrightarrow o_{L}[[\omega_{LT}]]^{\psi=1},\qquad u\otimes a\eta^{*}\longmapsto a\frac{\partial_{\mathrm{inv}}(g_{u,\eta})}{g_{u,\eta}}(\omega_{LT})\ .\]

Note that due to the multiplication by \(\log_{LT}\) the maps \(\mathcal{L},\ \mathfrak{L}\) are not \(\Gamma_{L}\)-equivariant. Using Lemmata 4.1.22, 4.1.21 we obtain

\[\mathfrak{L}(u\otimes a\eta^{*})(\chi_{LT}^{r})=a\mathfrak{M}^{-1}(\log_{LT}(1-\frac{\pi_{L}}{q}\varphi)\partial_{\mathrm{inv}}\log g_{u,\eta})(\chi_{LT}^{r})\]
\[=a\Omega^{-1}r\mathfrak{M}^{-1}((1-\frac{\pi_{L}}{q}\varphi)\partial_{\mathrm{inv}}\log g_{u,\eta})(\chi_{LT}^{r-1})\]
\[=ar\Omega^{-r}(1-\frac{\pi_{L}}{q}\pi_{L}^{r-1})(\partial_{\mathrm{inv}}^{r-1}\partial_{\mathrm{inv}}\log g_{u,\eta})_{|Z=0}\]
\[=ar\Omega^{-r}(1-\frac{\pi_{L}^{r}}{q})(\partial_{\mathrm{inv}}^{r-1}\partial_{\mathrm{inv}}\log g_{u,\eta})_{|Z=0}, \tag{144}\]

i.e., \(\mathcal{L}\) satisfies (143), indeed. By construction and Proposition 4.1.25 the image of \(\mathcal{L}\) actually lies in the \(G_{L}\)-invariants:

\[\mathcal{L}:U\otimes_{\mathbb{Z}}T^{*}_{\pi}\to D(\Gamma_{L},K)^{G_{L}}\otimes_{L}D_{cris,L}(L(\tau)).\]

We claim that

\[U\otimes_{\mathbb{Z}}T^{*}_{\pi}\xrightarrow{-\kappa\otimes T^{*}_{\pi}}H^{1}_{Iw}(L_{\infty}/L,o_{L}(\tau))\xrightarrow{\mathcal{L}_{L(\tau\chi_{LT})}\otimes o_{L}(\chi_{LT}^{-1})\otimes t_{LT}}D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(L(\tau)) \tag{145}\]

coincides with

\[\mathcal{L}:U\otimes_{\mathbb{Z}}T^{*}_{\pi}\to D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(L(\tau)).\]

Indeed, from the commutativity of the following diagram (cp. with [13, Appendix C] for \(L=\mathbb{Q}_{p}\)), in which \(\mathcal{L}_{L(\tau\chi_{LT})}\otimes\mathbf{d}_{1}^{\otimes-1}\) or more generally \(\mathcal{L}_{L(\tau\chi_{LT}^{r})}\otimes\mathbf{d}_{1}^{\otimes-1}\), \(r\geq 1\), shows up at \(\diamondsuit\), the above claim immediately follows by tensoring the diagram for \(r=1\) with \(o_{L}(\chi_{LT}^{-1})\) and then composing with the multiplication by \(t_{LT}\); we set \(e_{r}:=t_{LT}^{-r}\otimes\eta^{\otimes r}\in D_{cris,L}(L(\chi_{LT}^{r}))\) and \(\mathfrak{l}_{i}:=t_{LT}\partial_{\rm inv}-i\), \(\partial_{\rm inv}=\frac{d}{dt_{LT}}\). Note that we have

\[\mathfrak{M}^{-1}(\mathfrak{l}_{0}f)=\lim_{\gamma\to 1}\frac{\delta_{\gamma}(\mathfrak{M}^{-1}(f))-\mathfrak{M}^{-1}(f)}{\ell(\gamma)}=\nabla_{\rm Lie}\mathfrak{M}^{-1}(f), \tag{146}\]

see [KR, Lem. 2.1.4] for the fact that \(\nabla_{\rm Lie}=t_{LT}\partial_{\rm inv}\) as operators on \(\mathcal{O}\).
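As a consistency check, one can verify the interpolation property (143) directly at \(r=1\) from (144) and Theorem 5.1.3 (recall \(\mathcal{L}=\mathfrak{L}\otimes\mathbf{d}_{1}\) and that the twist \(t_{LT}^{r-1}\otimes\eta^{\otimes-r+1}\) is trivial for \(r=1\)):

\[\frac{\Omega}{1!}\,\frac{1-\pi_{L}^{-1}}{1-\frac{\pi_{L}}{q}}\,\mathfrak{L}(u\otimes a\eta^{*})(\chi_{LT})=\Omega\,\frac{1-\pi_{L}^{-1}}{1-\frac{\pi_{L}}{q}}\cdot a\,\Omega^{-1}\big{(}1-\tfrac{\pi_{L}}{q}\big{)}\big{(}\partial_{\mathrm{inv}}\log g_{u,\eta}\big{)}_{|Z=0}=a(1-\pi_{L}^{-1})\big{(}\partial_{\mathrm{inv}}\log g_{u,\eta}\big{)}_{|Z=0},\]

which is precisely the coefficient of \(\mathbf{d}_{1}\) in \(CW(u\otimes a\eta^{\otimes-1})\) given by Theorem 5.1.3 for \(r=1\).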
In view of (146), by abuse of notation we also write \(\mathfrak{l}_{i}=\nabla_{\rm Lie}-i\) for the corresponding element in \(D(\Gamma_{L},K)\); compare [ST1, §2.3] for the action of \({\rm Lie}(\Gamma_{L})\) on, and its embedding into, \(D(\Gamma_{L},K)\). Moreover we set \(\mathfrak{l}_{L(\chi^{r}_{LT})}=\prod_{i=0}^{r-1}\mathfrak{l}_{i}\). Note that \(\partial_{\rm inv}\) is invertible on \(\mathcal{O}^{\psi_{L}=0}\) by [FX, Prop. 3.12]. Finally the map

\[comp:\varphi^{*}(N(o_{L}(\chi^{r}_{LT})))^{\psi_{L}=0}\to\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(L(\chi^{r}_{LT}))\]

is (22). Inspired by Proposition 5.1.1, we define \(\mathcal{L}_{L(\tau)}\) (since \(L(\tau)\) does not satisfy the conditions from the beginning of this chapter, while \(L(\tau\chi_{LT})\) does) as a twist of \(\mathcal{L}_{L(\tau\chi_{LT})}\) by requiring the commutativity of the corresponding diagram, which is possible due to the commutativity of the diagram above. Then

\[\mathcal{L}:U\otimes_{\mathbb{Z}}T^{*}_{\pi}\to D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(L(\tau))\]

also coincides with

\[U\otimes_{\mathbb{Z}}T_{\pi}^{*}\xrightarrow{-\kappa\otimes T_{\pi}^{*}}H^{1}_{Iw}(L_{\infty}/L,o_{L}(\tau))\xrightarrow{(\frac{1}{\Omega}\nabla_{\text{Lie}}Tw_{\chi^{-1}}\otimes\mathrm{id})\circ\mathcal{L}_{L(\tau)}}D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(L(\tau)) \tag{147}\]

by Proposition 5.1.1. We refer the interested reader to §5 of [10] for an example of a CM elliptic curve \(E\) with supersingular reduction at \(p\), in which they attach to a norm-compatible sequence of elliptic units \(e(\mathfrak{a})\) (in the notation of [11, II 4.9]) a distribution \(\mu(\mathfrak{a})\in D(\Gamma_{L},K)\) in [10, Prop. 5.2] satisfying a certain interpolation property with respect to the values of the attached (partial) Hecke \(L\)-function. Without going into any detail concerning their setting, and instead referring the reader to the notation in (loc. cit.), we just want to point out that up to twisting this distribution is the image of \(\kappa(e(\mathfrak{a}))\otimes\eta^{-1}\) under the regulator map \(\mathcal{L}_{L(\tau)}\):

\[\mathcal{L}_{L(\tau)}(\kappa(e(\mathfrak{a}))\otimes\eta^{-1})=\Omega\,Tw_{\chi_{LT}}(\mu(\mathfrak{a}))\otimes\mathbf{d}_{1}.\]

Here, \(L=\mathbf{K}_{p}=\mathbf{F}_{\wp}\) (in their notation) is the unique unramified extension of \(\mathbb{Q}_{p}\) of degree \(2\), \(\pi_{L}=p\), \(q=p^{2}\), and the Lubin-Tate formal group is \(\hat{E}_{\wp}\) while \(K=\widehat{L_{\infty}}\). Indeed, we have a commutative diagram (148), where the Coleman map \(Col\) is given as the composite in the upper line of the commutative diagram (149), in which the second line is just \(\mathfrak{L}\). Then the commutativity of (148) follows by comparing (149) with (147). Finally, \(Col(e(\mathfrak{a}))=\mu(\mathfrak{a})\) (\(=\mathfrak{M}^{-1}(g_{\mathfrak{a}}(Z))\) in their notation) holds by construction in (loc. cit.), upon noting that on \(\mathcal{O}^{\psi_{L}=\frac{1}{\pi_{L}}}\) the operator \(1-\frac{\pi}{p^{2}}\varphi_{L}\circ\psi_{L}\), which is used implicitly to define \(g_{\mathfrak{a}}(Z)\) (\(=(1-\frac{\pi}{p^{2}}\varphi_{L}\circ\psi_{L})\log Q_{\mathfrak{a}}(Z)\)), equals \(1-\frac{\varphi_{L}^{2}}{p^{2}}\).

### Relation to Berger's and Fourquaux' big exponential map

Let \(V\) denote an \(L\)-analytic representation of \(G_{L}\) and take an integer \(h\geq 1\) such that \(\text{Fil}^{-h}D_{cris,L}(V)=D_{cris,L}(V)\) and such that \(D_{cris,L}(V)^{\varphi_{L}=\pi_{L}^{-h}}=0\) holds.
Under these conditions in [10] a big exponential map à la Perrin-Riou

\[\Omega_{V,h}:\left(\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V)\right)^{\Delta=0}\to D_{\text{rig}}^{\dagger}(V)^{\psi_{L}=\frac{q}{\pi_{L}}}\]

is constructed as follows: According to [BF, Lem. 3.5.1] there is an exact sequence

\[0\to\bigoplus_{k=0}^{h}t_{LT}^{k}D_{cris,L}(V)^{\varphi_{L}=\pi_{L}^{-k}}\to(\mathcal{O}\otimes_{L}D_{cris,L}(V))^{\psi_{L}=\frac{q}{\pi_{L}}}\xrightarrow{1-\varphi_{L}}\]
\[\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V)\xrightarrow{\Delta}\bigoplus_{k=0}^{h}D_{cris,L}(V)/(1-\pi_{L}^{k}\varphi_{L})D_{cris,L}(V)\to 0,\]

where, for \(f\in\mathcal{O}\otimes_{L}D_{cris,L}(V)\), \(\Delta(f)\) denotes the image of \(\bigoplus_{k=0}^{h}(\partial_{\mathrm{inv}}^{k}\otimes\mathrm{id}_{D_{cris,L}(V)})(f)(0)\) in \(\bigoplus_{k=0}^{h}D_{cris,L}(V)/(1-\pi_{L}^{k}\varphi_{L})D_{cris,L}(V)\). Hence, if \(f\in\big{(}\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V)\big{)}^{\Delta=0}\) there exists \(y\in(\mathcal{O}\otimes_{L}D_{cris,L}(V))^{\psi_{L}=\frac{q}{\pi_{L}}}\) such that \(f=(1-\varphi_{L})y\). Setting \(\nabla_{i}:=\nabla-i\) for any integer \(i\), one observes that \(\nabla_{h-1}\circ\ldots\circ\nabla_{0}\) annihilates \(\bigoplus_{k=0}^{h-1}t_{LT}^{k}D_{cris,L}(V)^{\varphi_{L}=\pi_{L}^{-k}}\), whence \(\Omega_{V,h}(f):=\nabla_{h-1}\circ\ldots\circ\nabla_{0}(y)\) is well-defined and belongs under the comparison isomorphism (20) to \(D_{\mathrm{rig}}^{\dagger}(V)^{\psi_{L}=\frac{q}{\pi_{L}}}\) by Prop. 3.1.13. Note that \(\big{(}\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V)\big{)}^{\Delta=0}=\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V)\) if \(D_{cris,L}(V)^{\varphi_{L}=\pi_{L}^{-k}}=0\) for all \(0\leq k\leq h\). If this does not hold for \(V\) itself, it does hold for \(V(\chi_{LT}^{-r})\) for \(r\) sufficiently large (with respect to the same \(h\)).

In the case \(L=\mathbb{Q}_{p}\) the above map specialises to the exponential map due to Perrin-Riou and satisfies the following adjointness property with Loeffler's and Zerbes' regulator map, see [LVZ15, A.2.2], where the upper pairing and notation are introduced. In fact this is a variant of Perrin-Riou's reciprocity law comparing \(\Omega_{V,h}\) with \(\Omega_{V^{*}(1),1-h}\). For \(L\neq\mathbb{Q}_{p}\) the issue of \(L\)-analyticity requires that \(V^{*}(1)\) is \(L\)-analytic for the construction of \(\Omega_{V^{*}(1),1-h}\), which then implies that \(V\) is not \(L\)-analytic. Instead our regulator map is available, and the purpose of this subsection is to prove an analogue of the above adjointness for arbitrary \(L\).

**Theorem 5.2.1** (Reciprocity formula/Adjointness of big exponential and regulator map).: _Assume that \(V^{*}(1)\) is \(L\)-analytic with \(\mathrm{Fil}^{-1}D_{cris,L}(V^{*}(1))=D_{cris,L}(V^{*}(1))\) and \(D_{cris,L}(V^{*}(1))^{\varphi_{L}=\pi_{L}^{-1}}=D_{cris,L}(V^{*}(1))^{\varphi_{L}=1}=0\). Then the following diagram consisting of \(D(\Gamma_{L},K)\)-\(\mathfrak{i}_{*}\)-sesquilinear pairings (in the sense of (131)) commutes:_ (150) _Note that the terms on the right hand side of the pairings are all defined over \(L\)!_

Proof.: This follows from the abstract reciprocity formula 4.5.32 (with \(M:=D_{\mathrm{rig}}^{\dagger}(V(\tau^{-1}))\) as before) by construction.
Indeed, assuming that \(z\in\mathcal{O}^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V^{*}(1))\) and \(y\in D(V(\tau^{-1}))^{\psi_{L}=1}\), we have that \((1-\frac{\pi_{L}}{q}\varphi_{L})y\in M^{\prime}\cap(M^{\psi_{L}=0})\) (see (141)) and \(\mathrm{comp}^{-1}((1-\varphi_{L})x)\in\check{M}^{\prime}\) for \(x\in(\mathcal{O}\otimes_{L}D_{cris,L}(V^{*}(1)))^{\psi_{L}=\frac{q}{\pi_{L}}}\) such that \(z=(1-\varphi_{L})x\). Moreover, \(\mathrm{comp}^{-1}((1-\varphi_{L})x)\in\check{M}^{\psi_{L}=0}\) by Prop. 3.1.13 as \(V^{*}(1)\) is positive by assumption. Recall that \(\mathrm{comp}^{-1}(\nabla x)\) is an element in \(D^{\dagger}_{\mathrm{rig}}(V^{*}(1))^{\psi_{L}=\frac{q}{\pi_{L}}}\) again by Prop. 3.1.13. We thus obtain

\[\frac{q-1}{q}\{\mathrm{comp}^{-1}(\nabla x),y\}_{Iw}=\frac{q-1}{q}\{\nabla\mathrm{comp}^{-1}((1-\varphi_{L})x),(1-\tfrac{\pi_{L}}{q}\varphi_{L})y\}_{Iw}^{0}=[(1-\varphi_{L})x,\mathrm{comp}((1-\tfrac{\pi_{L}}{q}\varphi_{L})y)].\]

By definition of the big exponential and regulator map the latter is equivalent to

\[\{\Omega_{V^{*}(1),1}(z),y\}_{Iw}=[z,\mathcal{L}^{0}_{V}(y)].\]

We could also consider the following variant of the big exponential map (under the assumptions of the theorem)

\[\mathbf{\Omega}_{V,h}:D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V^{*}(1))\to D^{\dagger}_{\mathrm{rig}}(V)^{\psi_{L}=\frac{q}{\pi_{L}}}\]

by extending scalars from \(L\) to \(\mathbb{C}_{p}\) and composing the original one with \(\Omega^{-h}\) times20 the map

\[D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V^{*}(1))\xrightarrow{\mathfrak{M}\otimes\mathrm{id}}(\mathcal{O}_{K}(\mathbf{B}))^{\psi_{L}=0}\otimes_{L}D_{cris,L}(V^{*}(1)).\]

Footnote 20: This means to replace \(\nabla\) by \(\frac{\nabla}{\Omega}\) in order to achieve twist invariance of the big exponential map, see the remark below.

**Corollary 5.2.2** (Reciprocity formula/Adjointness of big exponential and regulator map).: _Under the assumptions of the theorem the following diagram of \(D(\Gamma_{L},K)\)-\(\mathfrak{i}_{*}\)-sesquilinear pairings commutes:_ (151) _where \([-,-]^{0}=[\mathfrak{M}\otimes\mathrm{id}(-),\sigma_{-1}\mathfrak{M}\otimes\mathrm{id}(-)]\), i.e.,_

\[[\lambda\otimes\check{d},\mu\otimes d]^{0}\cdot\eta(1,Z)\otimes(t_{LT}^{-1}\otimes\eta)=\lambda\,\mathfrak{i}_{*}(\mu)\cdot\eta(1,Z)\otimes[\check{d},d]_{cris}, \tag{152}\]

_where \(D_{cris,L}(V^{*}(1))\times D_{cris,L}(V(\tau^{-1}))\xrightarrow{[\;,\;]_{cris}}D_{cris,L}(L(\chi_{LT}))\) is the canonical pairing._

**Remark 5.2.3**.: _By [BF, Cor. 3.5.4] we have \(\Omega_{V,h}(x)\otimes\eta^{\otimes j}=\Omega_{V(\chi_{LT}^{j}),h+j}(\partial_{\mathrm{inv}}^{-j}x\otimes t_{LT}^{-j}\eta^{\otimes j})\) and \(\mathfrak{l}_{h}\circ\Omega_{V,h}=\Omega_{V,h+1}\), whence we obtain \(\mathbf{\Omega}_{V,h}(x)\otimes\eta^{\otimes j}=\mathbf{\Omega}_{V(\chi_{LT}^{j}),h+j}(Tw_{\chi_{LT}^{-j}}(x)\otimes t_{LT}^{-j}\eta^{\otimes j})\) and \(\mathfrak{l}_{h}\circ\mathbf{\Omega}_{V,h}=\mathbf{\Omega}_{V,h+1}\)._

#### 5.2.1 Some homological algebra

Let \(X\xrightarrow{f}Y\) be a morphism of cochain complexes. Its mapping cone \(\operatorname{cone}(f)\) is defined as \(X[1]\oplus Y\) with differential \(d^{i}_{\operatorname{cone}(f)}:=\begin{pmatrix}d^{i}_{X[1]}&0\\ f[1]^{i}&d^{i}_{Y}\end{pmatrix}\) (using column notation) and we define the mapping fibre of \(f\) as \(\operatorname{Fib}(f):=\operatorname{cone}(f)[-1]\). Here the translation \(X[n]\) of a complex \(X\) is given by \(X[n]^{i}:=X^{i+n}\) and \(d^{i}_{X[n]}:=(-1)^{n}d^{i+n}_{X}\).
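As a quick check that the sign conventions fit together, note that \(\operatorname{cone}(f)\) is a complex precisely because \(f\) is a chain map: using \(d^{i}_{X[1]}=-d^{i+1}_{X}\) and \(f[1]^{i}=f^{i+1}\) one computes

\[d^{i+1}_{\operatorname{cone}(f)}\circ d^{i}_{\operatorname{cone}(f)}=\begin{pmatrix}d^{i+1}_{X[1]}d^{i}_{X[1]}&0\\ f[1]^{i+1}d^{i}_{X[1]}+d^{i+1}_{Y}f[1]^{i}&d^{i+1}_{Y}d^{i}_{Y}\end{pmatrix}=\begin{pmatrix}0&0\\ -f^{i+2}d^{i+1}_{X}+d^{i+1}_{Y}f^{i+1}&0\end{pmatrix}=0.\]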
Alternatively, we may consider \(f\) as a double cochain complex concentrated horizontally in degrees \(0\) and \(1\) and form the total complex (as in [SP, Def. 18.3/tag 012Z]). Then the associated total complex coincides with \(\operatorname{Fib}(-f)\). For a complex \((X^{\bullet},d_{X})\) of topological \(L\)-vector spaces we define its \(L\)-dual \(((X^{*})^{\bullet},d_{X^{*}})\) to be the complex with

\[(X^{*})^{i}:=\operatorname{Hom}_{L,cts}(X^{-i},L)\]

and

\[d_{X^{*}}(f):=(-1)^{\operatorname{deg}(f)-1}f\circ d_{X}.\]

More generally, for two complexes \((X^{\bullet},d_{X})\) and \((Y^{\bullet},d_{Y})\) of topological \(L\)-vector spaces we define the complex \(\operatorname{Hom}^{\bullet}_{L,cts}(X^{\bullet},Y^{\bullet})\) by

\[\operatorname{Hom}^{n}_{L,cts}(X^{\bullet},Y^{\bullet})=\prod_{i\in\mathbb{Z}}\operatorname{Hom}_{L,cts}(X^{i},Y^{i+n})\]

with differentials \(df=d\circ f+(-1)^{\operatorname{deg}(f)-1}f\circ d\). Note that the canonical isomorphism

\[\operatorname{Hom}^{\bullet}(X^{\bullet},Y^{\bullet})[n]\xrightarrow{\cong}\operatorname{Hom}^{\bullet}(X^{\bullet},Y^{\bullet}[n])\]

does not involve any sign, i.e., it is given by the identity map in all degrees. Also we recall that the tensor product of two complexes \(X^{\bullet}\) and \(Y^{\bullet}\) is given by

\[(X^{\bullet}\otimes_{L}Y^{\bullet})^{i}:=\bigoplus_{n}X^{n}\otimes_{L}Y^{i-n}\]

and

\[d(x\otimes y)=dx\otimes y+(-1)^{\operatorname{deg}(x)}x\otimes dy.\]

The adjunction morphism on the level of complexes

\[\operatorname{adj}:\operatorname{Hom}^{\bullet}_{L,cts}(X^{\bullet}\otimes_{L}Y^{\bullet},Z^{\bullet})\to\operatorname{Hom}^{\bullet}_{L,cts}(Y^{\bullet},\operatorname{Hom}^{\bullet}_{L,cts}(X^{\bullet},Z^{\bullet}))\]

sends \(u\) to \((y\mapsto(x\mapsto(-1)^{\operatorname{deg}(x)\operatorname{deg}(y)}u(x\otimes y)))\). It is well-defined and continuous with respect to the projective tensor product topology and the strong topology for the Homs. Furthermore, by definition we have the following commutative diagram (153) where \(ev_{2}\) sends \((x,f)\) to \((-1)^{\operatorname{deg}(x)\operatorname{deg}(f)}f(x)\).

**Lemma 5.2.4**.: _Let \((\mathcal{C}^{\bullet},d^{\bullet})\) be a complex in the category of locally convex topological \(L\)-vector spaces._

* _If_ \(\mathcal{C}\) _consists of Fréchet spaces and_ \(h^{i}(\mathcal{C}^{\bullet})\) _is finite-dimensional over_ \(L\)_, then_ \(d^{i-1}\) _is strict and has closed image._
* _If_ \(d^{i}\) _is strict, then_ \(h^{-i}(\mathcal{C}^{*})\cong h^{i}(\mathcal{C})^{*}\)_._

Proof.: (i) Apply the argument from [BW, § IX, Lem. 3.4] and use the open mapping theorem [NFA, Prop. 8.8]. (ii) If \(A\xrightarrow{\alpha}B\xrightarrow{\beta}C\) forms part of the complex with \(B\) in degree \(i\), one immediately obtains a map

\[\ker(\alpha^{*})/\mathrm{im}(\beta^{*})\to\left(\ker(\beta)/\mathrm{im}(\alpha)\right)^{*},\]

where \(\ker(\beta)\) carries the subspace topology and \(\ker(\beta)/\mathrm{im}(\alpha)\) the quotient topology. Now use the Hahn-Banach theorem [NFA, Cor. 9.4] for the strict maps \(B/\ker(\beta)\hookrightarrow C\) (induced from \(\beta\)) and \(\ker(\beta)\hookrightarrow B\) in order to show that this map is an isomorphism.
**Definition 5.2.5**.: _A locally convex topological vector space is called an LF-space, if it is the direct limit of a countable family of Fréchet spaces, the limit being formed in the category of locally convex vector spaces._

**Remark 5.2.6**.:

* _If_ \(V\xrightarrow{\alpha}W\) _is a continuous linear map of Hausdorff_ \(LF\)_-spaces with finite dimensional cokernel, then_ \(\alpha\) _is strict and has closed image by the same argument used in (i) of the previous lemma. However, since a closed subspace of an LF-space need not be an LF-space, we cannot achieve the same conclusion for complexes by this argument, as_ \(\ker(d^{i})\) _may fail to be an LF-space, whence one cannot apply the open mapping theorem, in general. But consider the following special situation. Assume that the complex_ \(\mathcal{C}^{\bullet}\) _consists of LF-spaces and_ \(h^{i}(\mathcal{C}^{\bullet})\) _is finite-dimensional. If moreover_ \(\mathcal{C}^{i+1}=0\)_, i.e.,_ \(\mathcal{C}^{i}=\ker(d^{i})\)_, then_ \(d^{i-1}\) _is strict and_ \(h^{1-i}(\mathcal{C}^{*})\cong h^{i-1}(\mathcal{C})^{*}\)_._
* _If_ \(d^{i}\) _is not strict, the above proof still shows that we obtain a surjection_ \(h^{-i}(\mathcal{C}^{*})\twoheadrightarrow h^{i}(\mathcal{C})^{*}\)_._

However, for a special class of LF-spaces and under certain conditions we can say more about how forming duals and cohomology interacts.

**Lemma 5.2.7**.: _Let \((\mathcal{C}^{\bullet},d^{\bullet})=\varinjlim_{r}(\mathcal{C}^{\bullet}_{r},d^{\bullet}_{r})\) be a complex in the category of locally convex topological \(L\)-vector spaces arising as regular inductive limit of complexes of Fréchet spaces, i.e., in each degree \(i\) the transition maps in the countable sequence \((\mathcal{C}^{i}_{r})_{r}\) are injective and for each bounded subset \(B\subseteq\mathcal{C}^{i}\) there exists an \(r\geq 1\) such that \(B\) is contained in \(\mathcal{C}^{i}_{r}\) and is bounded as a subset of the Fréchet space \(\mathcal{C}^{i}_{r}\). Then,_

* _we have topological isomorphisms_ \((\mathcal{C}^{\bullet})^{*}\cong\varprojlim_{r}(\mathcal{C}^{\bullet}_{r})^{*},\)
* _if, in addition,_ \({\varprojlim_{r\geq 0}}^{1}h^{i}((\mathcal{C}^{\bullet}_{r})^{*})=0\) _for all_ \(i\)_, we have a long exact sequence_
\[\dots\xrightarrow{}h^{i}((\mathcal{C}^{\bullet})^{*})\xrightarrow{}\varprojlim_{r\geq 0}h^{i}((\mathcal{C}^{\bullet}_{r})^{*})\xrightarrow{}h^{i-1}({\varprojlim_{r\geq 0}}^{1}(\mathcal{C}^{\bullet}_{r})^{*})\xrightarrow{}h^{i+1}((\mathcal{C}^{\bullet})^{*})\xrightarrow{}\dots,\]
* _if, in addition to (ii), the differentials_ \(d^{\bullet}_{r}\) _are strict, e.g., if all_ \(h^{i}(\mathcal{C}^{\bullet}_{r})\) _have finite dimension over_ \(L\)_, and_ \({\varprojlim_{r\geq 0}}^{1}(\mathcal{C}^{\bullet}_{r})^{*}=0,\) _we have isomorphisms_
\[h^{i}((\mathcal{C}^{\bullet})^{*})\cong\varprojlim_{r\geq 0}h^{-i}(\mathcal{C}^{\bullet}_{r})^{*}.\]

Proof.: (i) is [PGS, Thm. 11.1.13], while (ii), (iii) follow from (i) and [Lu, Ch. 3, Prop. 1] applied to the inverse system \(((\mathcal{C}^{\bullet}_{r})^{*})_{r}\) combined with Lemma 5.2.4.

#### 5.2.2 Koszul complexes

In this paragraph we restrict to the situation \(U\cong\mathbb{Z}_{p}^{d}\), fix topological generators \(\gamma_{1},\ldots,\gamma_{d}\) of \(U\) and set \(\Lambda:=\Lambda(U)\). Furthermore, let \(M\) be any complete linearly topologised \(o_{L}\)-module with a continuous \(U\)-action. Then by [12, Thm.
II.2.2.6] this action extends to a continuous \(\Lambda\)-action and one has \(\operatorname{Hom}_{\Lambda,cts}(\Lambda,M)=\operatorname{Hom}_{\Lambda}(\Lambda,M)\). Consider the (homological) complexes \(K_{\bullet}(\gamma_{i}):=[\Lambda\xrightarrow{\gamma_{i}-1}\Lambda]\) concentrated in degrees \(1\) and \(0\) and define

\[K_{\bullet}:=K_{\bullet}^{U}:=K_{\bullet}(\gamma):=\bigotimes_{i=1}^{d}K_{\bullet}(\gamma_{i}),\]
\[K^{\bullet}(M):=K_{U}^{\bullet}(M):=\operatorname{Hom}_{\Lambda}^{\bullet}(K_{\bullet},M)\cong\operatorname{Hom}_{\Lambda}^{\bullet}(K_{\bullet},\Lambda)\otimes_{\Lambda}M=K^{\bullet}(\Lambda)\otimes_{\Lambda}M,\]
\[K_{\bullet}(M):=K_{\bullet}\otimes_{\Lambda}M\text{ (homological complex)},\]
\[K_{\bullet}(M)^{\bullet}:=(K_{\bullet}\otimes_{\Lambda}M)^{\bullet}\text{ (the associated cohomological complex)}.\]

If we want to indicate the dependence on \(\gamma=(\gamma_{1},\ldots,\gamma_{d})\) we also write \(K^{\bullet}(\gamma,M)\) instead of \(K^{\bullet}(M)\) and similarly for other notation; moreover, we shall use the notation \(\gamma^{-1}=(\gamma_{1}^{-1},\ldots,\gamma_{d}^{-1})\) and \(\gamma^{p^{n}}=(\gamma_{1}^{p^{n}},\ldots,\gamma_{d}^{p^{n}})\). Note that in each degree these complexes consist of a direct sum of finitely many copies of \(M\) and will be equipped with the corresponding direct product topology. The complex \(K_{\bullet}\) will be identified with the exterior algebra complex \(\bigwedge_{\Lambda}^{\bullet}\Lambda^{d}\) of the free \(\Lambda\)-module with basis \(e_{1},\ldots,e_{d}\), for which the differential \(d_{q}:\bigwedge_{\Lambda}^{q}\Lambda^{d}\to\bigwedge_{\Lambda}^{q-1}\Lambda^{d}\) with respect to the standard basis \(e_{i_{1},\ldots,i_{q}}=e_{i_{1}}\wedge\cdots\wedge e_{i_{q}}\), \(1\leq i_{1}<\cdots<i_{q}\leq d\), is given by the formula

\[d_{q}(e_{i_{1},\ldots,i_{q}})=\sum_{k=1}^{q}(-1)^{k+1}(\gamma_{i_{k}}-1)e_{i_{1},\ldots,\widehat{i_{k}},\ldots,i_{q}}.\]

Then the well-known selfduality (compare [1, Prop. 17.15], although the claim there is not precisely the same) of the Koszul complex, i.e., the isomorphism of complexes

\[K_{\bullet}(\Lambda)^{\bullet}\cong K^{\bullet}(\Lambda)[d] \tag{154}\]

can be explicitly described in degree \(-q\) as follows (by identifying \(\bigwedge_{\Lambda}^{d}\Lambda^{d}=\Lambda e_{1}\wedge\cdots\wedge e_{d}=\Lambda\)):

\[\bigwedge_{\Lambda}^{q}\Lambda^{d}\xrightarrow{\alpha_{-q}}\operatorname{Hom}_{\Lambda}(\bigwedge_{\Lambda}^{d-q}\Lambda^{d},\Lambda),\qquad e_{i_{1},\ldots,i_{q}}\mapsto\operatorname{sign}(I,J)e_{j_{1},\ldots,j_{d-q}}^{*},\]

where \(e_{1}^{*},\ldots,e_{d}^{*}\) denotes the dual basis of \(e_{1},\ldots,e_{d}\), the elements \(e_{j_{1},\ldots,j_{d-q}}^{*}=e_{j_{1}}^{*}\wedge\cdots\wedge e_{j_{d-q}}^{*}\), \(1\leq j_{1}<\cdots<j_{d-q}\leq d\), form a (dual) basis of \(\operatorname{Hom}_{\Lambda}(\bigwedge_{\Lambda}^{d-q}\Lambda^{d},\Lambda)\), the indices \(J=(j_{k})_{k}\) are complementary to \(I=(i_{n})_{n}\) in the sense that \(\{i_{1},\ldots,i_{q}\}\cup\{j_{1},\ldots,j_{d-q}\}=\{1,\ldots,d\}\), and \(\operatorname{sign}(I,J)\) denotes the sign of the permutation \([i_{1},\ldots,i_{q},j_{1},\ldots,j_{d-q}]\). Indeed, the verification that the induced diagram involving the differentials from cohomological degree \(-q\) to \(-q+1\) commutes21 relies on the observation that

Footnote 21: The signs \((-1)^{d}\) and \((-1)^{d-q-1}\) result from the shift by \(d\) and the sign rule for complex-homomorphisms, respectively.
\[\operatorname{sign}(I,J)\operatorname{sign}(I_{\widehat{k}},J_{k})^{-1}=(-1)^{q-k+l-1},\]

where \(I_{\widehat{k}}:=(i_{1},\ldots,\widehat{i_{k}},\ldots,i_{q})\) denotes the sequence which results from \(I\) by omitting \(i_{k}\), while \(J_{k}=(j_{1},\ldots,j_{l-1},i_{k},j_{l},\ldots,j_{d-q})\) denotes the sequence which arises from \(J\) by inserting \(i_{k}\) at position \(l\) with regard to the strictly increasing ordering: the permutations \([i_{1},\ldots,i_{q},j_{1},\ldots,j_{d-q}]\) and \([i_{1},\ldots,\widehat{i_{k}},\ldots,i_{q},j_{1},\ldots,j_{l-1},i_{k},j_{l},\ldots,j_{d-q}]\) differ visibly by \(q-k+l-1\) transpositions.

Now we assume that \(M\) is any complete locally convex \(L\)-vector space with continuous \(U\)-action such that its strong dual is again complete with continuous \(U\)-action. Then we obtain isomorphisms of complexes

\[\begin{split} K^{\bullet}(\gamma,M)^{*}&=\operatorname{Hom}_{L,cts}^{\bullet}(\operatorname{Hom}_{\Lambda}^{\bullet}(K_{\bullet}(\gamma),\Lambda)\otimes_{\Lambda}M,L)\\ &\cong\operatorname{Hom}_{\Lambda}^{\bullet}(\operatorname{Hom}_{\Lambda}^{\bullet}(K_{\bullet}(\gamma^{-1}),\Lambda),\operatorname{Hom}_{L,cts}(M,L))\\ &\cong\operatorname{Hom}_{\Lambda}^{\bullet}(\operatorname{Hom}_{\Lambda}^{\bullet}(K_{\bullet}(\gamma^{-1}),\Lambda),\Lambda)\otimes_{\Lambda}\operatorname{Hom}_{L,cts}(M,L)\\ &\cong K_{\bullet}(\gamma^{-1},\Lambda)^{\bullet}\otimes_{\Lambda}\operatorname{Hom}_{L,cts}(M,L)\\ &\cong K^{\bullet}(\gamma^{-1},\Lambda)[d]\otimes_{\Lambda}M^{*}\\ &\cong K^{\bullet}(\gamma^{-1},M^{*})[d],\end{split} \tag{155}\]

where in the second line we use the adjunction morphism; the isomorphism in the fourth line being the biduality morphism (according to [Ne, (1.2.8)])

\[K_{\bullet}(\Lambda)^{\bullet}\xrightarrow{\cong}\operatorname{Hom}_{\Lambda}^{\bullet}(\operatorname{Hom}_{\Lambda}^{\bullet}(K_{\bullet},\Lambda),\Lambda),\qquad x\mapsto(-1)^{i}x^{**},\]

which, compared with the usual biduality of modules

\[K_{\bullet}(\Lambda)^{i}\xrightarrow{\cong}\operatorname{Hom}_{\Lambda}(\operatorname{Hom}_{\Lambda}(K_{-i},\Lambda),\Lambda),\qquad x\mapsto(x^{**}:f\mapsto f(x)),\]

involves a sign, while the isomorphism in the third last line stems from (154) together with Lemma 4.5.1 (i). Note that the isomorphism in the second last line does not involve any further signs by [Ne, (1.2.15)].

We finish this subsection by introducing restriction and corestriction maps concerning the change of group for Koszul complexes. To this end let \(U_{1}\subseteq U\) be the open subgroup generated by \(\gamma_{1}^{p^{n}},\ldots,\gamma_{d}^{p^{n}}\). Then \(\operatorname{Hom}_{\Lambda}^{\bullet}(-,M)\) applied to the tensor product of the corresponding diagrams gives a map \(cor_{U}^{U_{1}}:K_{U_{1}}^{\bullet}(\gamma^{p^{n}},M)\to K_{U}^{\bullet}(\gamma,M)\), which we call the corestriction map and which is compatible under (169) below with the corestriction map on cocycles (for appropriate choices of representatives in the definition of the latter). Using the other diagram instead, one obtains the restriction map \(res_{U_{1}}^{U}:K_{U}^{\bullet}(\gamma,M)\to K_{U_{1}}^{\bullet}(\gamma^{p^{n}},M)\), again compatible under (169) with the restriction map on cocycles.
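For illustration of the sign identity above, take \(d=4\), \(q=2\), \(I=(1,3)\), \(J=(2,4)\) and \(k=2\) (so \(i_{k}=3\), which is re-inserted into \(J\) at position \(l=2\)). Then

\[\operatorname{sign}(I,J)=\operatorname{sign}[1,3,2,4]=-1,\qquad\operatorname{sign}(I_{\widehat{2}},J_{2})=\operatorname{sign}[1,2,3,4]=+1,\]

and indeed \(\operatorname{sign}(I,J)\operatorname{sign}(I_{\widehat{2}},J_{2})^{-1}=-1=(-1)^{2-2+2-1}=(-1)^{q-k+l-1}\).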
#### 5.2.3 Continuous and analytic cohomology

For any profinite group \(G\) and topological abelian group \(M\) with continuous \(G\)-action we write \(\mathcal{C}^{\bullet}:=\mathcal{C}^{\bullet}(G,M)\) for the continuous (inhomogeneous) cochain complex of \(G\) with coefficients in \(M\) and \(H^{\bullet}(G,M):=h^{\bullet}(\mathcal{C}^{\bullet}(G,M))\) for continuous group cohomology. Note that \(\mathcal{C}^{0}(G,M)=M\). Assume moreover that \(G\) is an \(L\)-analytic group and that \(M=\varinjlim_{s}\varprojlim_{r}M^{[r,s]}\), with Banach spaces \(M^{[r,s]}\), is an LF-space with a pro-\(L\)-analytic action of \(G\), i.e., a locally analytic action on each \(M^{[r,s]}\); this means that for all \(m\in M^{[r,s]}\) there exists an open \(L\)-analytic subgroup \(\Gamma_{n}\subseteq\Gamma\) in the notation of subsection 4.3.4 such that the orbit map of \(m\) restricted to \(\Gamma_{n}\) is a power series of the form \(g(m)=\sum_{k\geq 0}\ell(g)^{k}m_{k}\) for a sequence \((m_{k})\) of elements in \(M^{[r,s]}\) with \(\pi_{L}^{nk}m_{k}\) converging to zero. Following [Co2, §5] we write \(\mathcal{C}^{\bullet}_{an}:=\mathcal{C}^{\bullet}_{an}(G,M)\) for the locally \(L\)-analytic cochain complex of \(G\) with coefficients in \(M\) and \(H^{\bullet}_{an}(G,M):=h^{\bullet}(\mathcal{C}^{\bullet}_{an}(G,M))\) for locally \(L\)-analytic group cohomology. More precisely, if \(\operatorname{Maps}_{locL-an}(G,M^{[r,s]})\) denotes the space of locally \(L\)-analytic maps from \(G\) to \(M^{[r,s]}\), then

\[C^{n}_{an}(G,M)=\varinjlim_{s}\varprojlim_{r}\operatorname{Maps}_{locL-an}(G,M^{[r,s]})\]

is the space of locally \(L\)-analytic functions (locally with values in \(\varprojlim_{r}M^{[r,s]}\) for some \(s\) and such that the composite with the projection onto \(M^{[r,s]}\) is locally \(L\)-analytic for all \(r\)). Note that again \(\mathcal{C}^{0}_{an}(G,M)=M\) and that there are canonical homomorphisms

\[\mathcal{C}^{\bullet}_{an}(G,M)\hookrightarrow\mathcal{C}^{\bullet}(G,M), \tag{156}\]
\[H^{\bullet}_{an}(G,M)\to H^{\bullet}(G,M). \tag{157}\]

Let \(f\) be any continuous endomorphism of \(M\) which commutes with the \(G\)-action. We define

\[H^{0}(f,M):=M^{f=1}\qquad\text{and}\qquad H^{1}(f,M):=M_{f=1} \tag{158}\]

as the kernel and cokernel of the map \(M\xrightarrow{f-1}M\), respectively. The endomorphism \(f\) induces an operator on \(\mathcal{C}^{\bullet}\) or \(\mathcal{C}^{\bullet}_{an}\) and we denote by \(\mathcal{T}:=\mathcal{T}_{f,G}(M)\) and \(\mathcal{T}^{an}:=\mathcal{T}^{an}_{f,G}(M)\) the mapping fibre of \(\mathcal{C}^{\bullet}(G,f)\) and \(\mathcal{C}^{\bullet}_{an}(G,f)\), respectively. Again there are canonical homomorphisms

\[\mathcal{T}^{an}_{f,G}(M)\hookrightarrow\mathcal{T}_{f,G}(M), \tag{159}\]
\[h^{\bullet}(\mathcal{T}^{an}_{f,G}(M))\to h^{\bullet}(\mathcal{T}_{f,G}(M)). \tag{160}\]

For ? either empty or \(an\), one of the corresponding double complex spectral sequences is

\[{}_{I}E_{2}^{i,j}=H^{i}(f,H_{?}^{j}(G,M))\Longrightarrow h^{i+j}(\mathcal{T}^{?})\ .22 \tag{161}\]

Footnote 22: Naively, one would expect that the second corresponding double complex spectral sequence looks like \({}_{II}E_{2}^{i,j}=H_{?}^{i}(G,H^{j}(f,M))\Longrightarrow h^{i+j}(\mathcal{T}^{?})\). But this would require to first of all give sense to the required structure of \(H^{j}(f,M)\) as topological/analytic \(G\)-module!
In low degrees this can be achieved and we obtain an exact sequence

\[0\longrightarrow H_{?}^{1}(G,M^{f=1})\longrightarrow h^{1}(\mathcal{T}^{?})\longrightarrow(M_{f=1})^{G}\stackrel{{\delta}}{{\longrightarrow}}H_{?}^{2}(G,M^{f=1})\ .\]

See [Th]. If \(M^{f=1}\) is again an LF-space with pro-\(L\)-analytic \(G\)-operation, one might be able to interpret the second spectral sequence in low degrees. The first spectral sequence (161) degenerates into the short exact sequences

\[0\longrightarrow H_{?}^{i-1}(G,M)_{f=1}\longrightarrow h^{i}(\mathcal{T}^{?}_{f,G}(M))\longrightarrow H_{?}^{i}(G,M)^{f=1}\longrightarrow 0.\]

In (loc. cit.) as well as in [BF] analytic cohomology is also defined for the semigroups \(\Gamma_{L}\times\Phi\) and \(\Gamma_{L}\times\Psi\) with \(\Phi=\{\varphi_{L}^{n}\,|\,n\geq 0\}\) and \(\Psi=\{(\frac{\pi}{q}\psi_{L})^{n}\,|\,n\geq 0\}\), if \(M\) denotes an \(L\)-_analytic_ \((\varphi_{L},\Gamma_{L})\)-module over the Robba ring \(\mathcal{R}\).

**Remark 5.2.8**.: _Any \(L\)-analytic \((\varphi_{L},\Gamma_{L})\)-module \(M\) over the Robba ring \(\mathcal{R}\) is a pro-\(L\)-analytic \(\Gamma_{L}\)-module by the discussion at the end of the proof of [BSX, Prop. 2.25], whence it is also an \(L\)-analytic \(\Gamma_{L}\times\Phi\)- and \(\Gamma_{L}\times\Psi\)-module as \(\Phi\) and \(\Psi\) possess the discrete structure as \(L\)-analytic manifolds._

**Proposition 5.2.9**.: _We have canonical isomorphisms_

\[h^{i}(\mathcal{T}^{an}_{\varphi_{L},\Gamma_{L}}(M))\cong H^{i}_{an}(\Gamma_{L}\times\Phi,M)\cong H^{i}_{an}(\Gamma_{L}\times\Psi,M)\cong h^{i}(\mathcal{T}^{an}_{\frac{\pi}{q}\psi_{L},\Gamma_{L}}(M))\]

_and an exact sequence_

\[0\longrightarrow H^{1}_{an}(\Gamma_{L},M^{\frac{\pi}{q}\psi_{L}=1})\longrightarrow h^{1}(\mathcal{T}^{an}_{\frac{\pi}{q}\psi_{L},\Gamma_{L}}(M))\longrightarrow(M_{\frac{\pi}{q}\psi_{L}=1})^{\Gamma_{L}}\longrightarrow H^{2}_{an}(\Gamma_{L},M^{\frac{\pi}{q}\psi_{L}=1})\longrightarrow h^{2}(\mathcal{T}^{an}_{\frac{\pi}{q}\psi_{L},\Gamma_{L}}(M)). \tag{162}\]

Proof.: The isomorphism in the middle is [BF, Cor. 2.2.3]. For the two outer isomorphisms we refer the reader to [Th, 3.7.6]. The exact sequence is the extension [Th, Thm. 5.1.5] of [BF, Thm. 2.2.4].

Note that, for \(U\subseteq U^{\prime}\), the restriction and corestriction homomorphisms \(\mathcal{C}^{\bullet}(U^{\prime},M)\stackrel{{\rm res}}{{\longrightarrow}}\mathcal{C}^{\bullet}(U,M)\) and \(\mathcal{C}^{\bullet}(U,M)\stackrel{{\rm cor}}{{\longrightarrow}}\mathcal{C}^{\bullet}(U^{\prime},M)\) induce maps \(\mathcal{T}_{f,U^{\prime}}(M)\stackrel{{\rm res}}{{\longrightarrow}}\mathcal{T}_{f,U}(M)\) and \(\mathcal{T}_{f,U}(M)\stackrel{{\rm cor}}{{\longrightarrow}}\mathcal{T}_{f,U^{\prime}}(M)\), respectively. We write \({\rm Ext}^{1}_{\mathfrak{C}}(A,B)\) for the isomorphism classes of extensions of \(B\) by \(A\) in any abelian category \(\mathfrak{C}\). Furthermore, we denote by \(\mathfrak{M}_{U}(R)\) (resp. \(\mathfrak{M}^{\text{\'et}}_{U}(R)\), \(\mathfrak{M}^{\dagger}_{U}(R)\)) the category of all (resp. étale, overconvergent) \((\varphi_{L},U)\)-modules over \(R\), and by \({\rm Rep}^{\dagger}_{L}(G_{L^{U}_{\infty}})\) the category of overconvergent representations of \(G_{L^{U}_{\infty}}\), consisting of those representations \(V\) of \(G_{L^{U}_{\infty}}\) such that \(\dim_{\mathbf{B}^{\dagger}_{L}}D^{\dagger}(V)=\dim_{L}V\) with \(D^{\dagger}(V):=(\mathbf{B}^{\dagger}\otimes_{L}V)^{H_{L}}\).

**Theorem 5.2.10**.: _Let \(V\) be in \({\rm Rep}_{L}(G_{L})\) and \(U\subseteq\Gamma_{L}\) be any open subgroup._

1.
_For_ \(D(V)\) _the corresponding_ \((\varphi_{L},\Gamma_{L})\)_-module over_ \({\bf B}_{L}\) _we have canonical isomorphisms_ (163) \[h^{*}=h^{*}_{U,V}:H^{*}(L^{U}_{\infty},V)\stackrel{{\cong}}{{ \longrightarrow}}h^{*}(\mathcal{T}_{\varphi_{L},U}(D(V)))\] _which are functorial in_ \(V\) _and compatible with restriction and corestriction._ 2. _If_ \(V\) _is in addition overconvergent there are isomorphisms_ (164) \[h^{0}(\mathcal{T}_{\varphi_{L},U}(D^{\dagger}_{rig}(V))) \cong V^{G_{L^{U}_{\infty}}},\] (165) \[h^{1}(\mathcal{T}_{\varphi_{L},U}(D^{\dagger}_{rig}(V))) \cong H^{1}_{\dagger}(L^{U}_{\infty},V),\] _which are functorial in_ \(V\) _and compatible with restriction and corestriction and where by definition_ \(H^{1}_{\dagger}(L^{U}_{\infty},V)\subseteq H^{1}(L^{U}_{\infty},V)\) _classifies the overconvergent extensions of_ \(L\) _by_ \(V\)_. In particular, these_ \(L\)_-vector spaces have finite dimension._ 3. _If_ \(V\) _is in addition_ \(L\)_-analytic, then we have_ (166) \[H^{1}_{an}(L^{U}_{\infty},V)\stackrel{{\cong}}{{ \longrightarrow}}h^{1}(\mathcal{T}^{an}_{\varphi_{L},U}(D^{\dagger}_{rig}(V)))\] _where by definition_ 23__\(H^{1}_{an}(L^{U}_{\infty},V)\subseteq H^{1}_{\dagger}(L^{U}_{\infty},V)\subseteq H ^{1}(L^{U}_{\infty},V)\) _classifies the_ \(L\)_-analytic extensions of_ \(L\) _by_ \(V\)_._ Footnote 23: Note that the absolute Galois group of \(L^{U}_{\infty}\) is not \(L\)-analytic, so this group has not been defined earlier. Proof.: (i) is [Ku, Thm. 5.1.11.] or [KV, Thm. 5.1.1.]. The statement (iii) is [BF, Prop. 2.2.1] combined with Prop. 5.2.9 while (ii) follows from [FX] (the reference literally only covers the case \(U=\Gamma_{L}\), but the same arguments allow to extend the result to general \(U\)) as follows: Firstly, by Lemma 5.2.11 below one has an isomorphism \(h^{1}(\mathcal{T}_{\varphi_{L},U}(D^{\dagger}_{rig}(V)))\cong\operatorname{ Ext}^{1}_{\mathfrak{M}_{U}(\mathcal{R}_{L})}(\mathcal{R}_{L},D^{\dagger}_{rig}(V)).\) Then use the HN-filtration a la Kedlaya to see that any extension of etale \((\varphi_{L},U)\)-modules is etale again, whence \[\operatorname{Ext}^{1}_{\mathfrak{M}_{U}(\mathcal{R}_{L})}(\mathcal{R}_{L},D^ {\dagger}_{rig}(V))=\operatorname{Ext}^{1}_{\mathfrak{M}^{\text{\tiny eff}}_{ U}(\mathcal{R}_{L})}(\mathcal{R}_{L},D^{\dagger}_{rig}(V))\] and the latter group equals \[\operatorname{Ext}^{1}_{\mathfrak{M}^{\text{\tiny eff}}_{U}(\mathcal{R}_{L})}( \mathcal{R}_{L},D^{\dagger}_{rig}(V))\cong\operatorname{Ext}^{1}_{\operatorname {Rep}^{\dagger}_{L}(G_{L^{U}_{\infty}})}(L,V)=H^{1}_{\dagger}(L^{U}_{\infty},V)\] by Prop. 1.5 and 1.6 in (loc. cit.). For the claim in degree 0 one has to show that the inclusion \(D^{\dagger}(V)\subseteq D^{\dagger}_{rig}(V)\) induces an isomorphism on \(\varphi_{L}\)-invariants, which follows from [Ked08, Hypothesis 1.4.1, Prop. 1.2.6].24 Footnote 24: Since the _strong hypothesis_ holds by [Ked08, Hypothesis 1.4.1, Prop. 1.2.6] we also obtain an isomorphism on the \(\varphi_{L}\)-coinvariants \(H^{1}(\varphi_{L},-)\). Then the second spectral sequence above or a similar argument via the Koszul complexes as in Prop. A.0.8 implies that the canonical base change map induces an isomorphism \(h^{*}(\mathcal{T}_{\varphi_{L},U}(D^{\dagger}(V)))\cong h^{*}(\mathcal{T}_{ \varphi_{L},U}(D^{\dagger}_{rig}(V))).\) Cp. [Li, proof of Prop. 2.7]. **Lemma 5.2.11**.: _Let \(M\) be in \(\mathfrak{M}_{U}(\mathcal{R})\). 
Then we have a canonical isomorphism_ \[h^{1}(\mathcal{T}_{\varphi_{L},U}(M))\cong\operatorname{Ext}^{1}_{\mathfrak{M} _{U}(\mathcal{R}_{L})}(\mathcal{R}_{L},M).\] Proof.: Starting with a class \(z=[(c_{1},-c_{0})]\) in \(h^{1}(\mathcal{T}_{\varphi_{L},U}(M))\) with \(c_{1}\in C^{1}(M)\) and \(c_{0}\in C^{0}(M)=M\) (i.e., we work with _inhomogeneous_ continuous cocycles) satisfying the cocycle property \[c_{1}(\sigma\tau)=\sigma c_{1}(\tau)+c_{1}(\sigma)\text{ for all }\sigma,\tau\in U,\quad\text{and}\quad(\varphi_{L}-1)c_{1}(\tau)=(\tau-1)c_{0}\text{ for all }\tau\in U, \tag{167}\] we define an extension of \((\varphi_{L},U)\)-modules with \(E_{c}:=M\times\mathcal{R}_{L}\) as \(\mathcal{R}_{L}\)-module, \(g(m,r):=(gm+gr\cdot c_{1}(g),gr)\) for \(g\in U\) and \(\varphi_{E_{c}}((m,r)):=(\varphi_{M}(m)+\varphi_{L}(r)c_{0},\varphi_{L}(r))\); note that this defines a (continuous) group-action by the first identity in (167), while the \(U\)- and \(\varphi_{L}\)-action commute by the second identity in (167). If we change the representatives \((c_{1},-c_{0})\) by the coboundary induced by \(m_{0}\in M\), then sending \((0,1)\) to \((-m_{0},1)\) induces an isomorphism of extensions from the first to the second one, whence our map is well-defined. Conversely, if \(E\) is any such extension, choose a lift \(e\in E\) of \(1\in\mathcal{R}_{L}\) and define \[c_{1}(\tau):=(\tau-1)e\in M,\ \ c_{0}:=(\varphi_{E}-1)e,\] which evidently satisfy the cocycle conditions (167). Choosing another lift \(\tilde{e}\) leads to a cocycle which differs from the previous one by the coboundary induced by \(\tilde{e}-e\in M\), whence the inverse map is well-defined. One easily verifies that these maps are mutually inverse to each other. **Question 5.2.12**.: _Can one show that \(h^{2}(\mathcal{T}_{\varphi_{L},U}(D^{\dagger}_{rig}(V)))\) is finite-dimensional (and related to \(H^{2}(L^{U}_{\infty},V)\)) and that the groups \(h^{i}(\mathcal{T}_{\varphi_{L},U}(D^{\dagger}_{rig}(V)))\) vanish for \(i\geq 3\)?_ **Remark 5.2.13**.: _By [FX, Thm. 0.2, Rem. 5.21] it follows that the inclusions_ \[H^{1}_{an}(L^{U}_{\infty},V)\subseteq H^{1}_{\dagger}(L^{U}_{\infty},V) \subseteq H^{1}(L^{U}_{\infty},V)\] _are in general strict. More precisely, the codimension for the left one equals \(([L^{U}_{\infty}:\mathbb{Q}_{p}]-1)\dim_{L}V^{G_{L^{U}_{\infty}}}\)._ Let us recall Tate's local duality in this context. **Proposition 5.2.14** (Local Tate duality).: _Let \(V\) be an object in \(\operatorname{Rep}_{L}(G_{L})\), and \(K\) any finite extension of \(L\). Then the cup product and the local invariant map induce perfect pairings of finite dimensional \(L\)-vector spaces_ \[H^{i}(K,V)\times H^{2-i}(K,\operatorname{Hom}_{\mathbb{Q}_{p}}(V,\mathbb{Q}_{p }(1)))\longrightarrow H^{2}(K,\mathbb{Q}_{p}(1))=\mathbb{Q}_{p}\] _and_ \[H^{i}(K,V)\times H^{2-i}(K,\operatorname{Hom}_{L}(V,L(1)))\longrightarrow H^{2 }(K,L(1))=L\] _where \(-(1)\) denotes the Galois twist by the cyclotomic character. In other words, there are canonical isomorphisms_ \[H^{i}(K,V)\cong H^{2-i}(K,V^{*}(1))^{*}\.\] Proof.: This is well known. For lack of a reference (with proof) we sketch the second claim (the first being proved similarly). Choose a Galois stable \(o_{L}\)-lattice \(T\subseteq V\) and denote by \({}_{\pi_{L}^{n}}A\) the kernel of multiplication by \(\pi_{L}^{n}\) on any \(o_{L}\)-module \(A\). Observe that we have short exact sequences \[0\longrightarrow H^{i}(K,T)/\pi_{L}^{n}\longrightarrow H^{i}(K,T/\pi_{L}^{n}T)\longrightarrow{}_{\pi_{L}^{n}}H^{i+1}(K,T)\longrightarrow 0\] for \(i\geqslant 0\), arising from the long exact cohomology sequence of \(0\to T\xrightarrow{\pi_{L}^{n}}T\to T/\pi_{L}^{n}T\to 0\), and similarly for \(T\) replaced by \(T^{*}(1)=\operatorname{Hom}_{o_{L}}(T,o_{L}(1))\). By [13, Prop.
5.7] (remember the normalisation given there!) the cup product induces isomorphisms \[H^{i}(K,T/\pi_{L}^{n}T)\cong\operatorname{Hom}_{o_{L}}(H^{2-i}(K,T^{*}(1)/\pi_{ L}^{n}T^{*}(1)),o_{L}/\pi_{L}^{n})\] such that we obtain altogether canonical maps \[H^{i}(K,T)/\pi_{L}^{n}\to\operatorname{Hom}_{o_{L}}(H^{2-i}(K,T^{*}(1))/\pi_{L }^{n},o_{L}/\pi_{L}^{n})\cong\operatorname{Hom}_{o_{L}}(H^{2-i}(K,T^{*}(1)),o _{L})/\pi_{L}^{n}.\] Using that the cohomology groups are finitely generated \(o_{L}\)-modules and isomorphic to the inverse limits of the corresponding cohomology groups with coefficients modulo \(\pi_{L}^{n}\) we see that the inverse limit of the above maps induces a surjective map \[H^{i}(K,T)\twoheadrightarrow\operatorname{Hom}_{o_{L}}(H^{2-i}(K,T^{*}(1)),o_ {L})\] with finite kernel, whence the claim after tensoring with \(L\) over \(o_{L}\) using the isomorphism \(H^{i}(K,T)\otimes_{o_{L}}L\cong H^{i}(K,V)\) and analogously for \(T^{*}(1)\). Now let \(W\) be an \(L\)-analytic representation of \(G_{L}\) and set \[H^{1}_{/\dagger}(L^{U}_{\infty},W^{*}(1)):=H^{1}_{\dagger}(L^{U}_{\infty},W)^{*},\] which, by local Tate duality and Thm. 5.2.10, is a quotient of \(H^{1}(L^{U}_{\infty},W^{*}(1))\). By definition, the local Tate pairing induces a non-degenerate pairing \[<,>_{Tate,L,\dagger}:H^{1}_{\dagger}(L^{U}_{\infty},W)\times H^{1}_{/\dagger}(L^{U}_{\infty},W^{*}(1))\longrightarrow H^{2}(L,L(1))\cong L. \tag{168}\] In order to compute this pairing more explicitly in certain situations we shall use Koszul-complexes. For this we have to assume first that \(U\) is torsionfree. Following [12, §4.2] we obtain for any complete linearly topologised \(o_{L}\)-module \(M\) with continuous \(U\)-action a quasi-isomorphism 25 Footnote 25: (unique up to homotopy, i.e., unique in the derived category of \(o_{L}\)-linear topological \(U\)-modules.) We have not yet defined any topology on the cocycles nor do we know whether the reference says anything about it! \(M\) is allowed to be any complete linearly topologised \(o_{L}\)-module with continuous \(U\)-action by [10, V.1.2.6] \[K^{\bullet}_{U}(M)\xrightarrow{\simeq}\mathcal{C}^{\bullet}(U,M) \tag{169}\] which arises as follows: Let \(X_{\bullet}:=X_{\bullet}(U)\) and \(Y_{\bullet}=Y_{\bullet}(U)\) denote the completed standard complex [10, V.1.2.1], i.e., \(X_{n}=\mathbb{Z}_{p}[\![U]\!]^{\otimes(n+1)}\), and the standard complex computing group cohomology, i.e., \(Y_{n}=\mathbb{Z}_{p}[U]^{\otimes(n+1)}.\) Then, by [100, Lem. V.1.1.5.1] we obtain a diagram of complexes (170) which commutes up to homotopy (of filtered \(\Lambda\)-modules). Here the maps \(\Delta\) are induced by the diagonal maps \(U\to U\times U,\) e.g., \(\mathbb{Z}_{p}[[U]]\to\mathbb{Z}_{p}[[U\times U]]\cong\mathbb{Z}_{p}[[U]] \widehat{\otimes}_{\mathbb{Z}_{p}}\mathbb{Z}_{p}[[U]].\) The first column induces a morphism \[\operatorname{Hom}_{\Lambda}(K^{U}_{\bullet},M)\to\operatorname{Hom}_{ \Lambda,cts}(X_{\bullet}(U),M)\to\operatorname{Hom}_{\mathbb{Z}_{p}[U],cts}(Y_ {\bullet}(U),M),\] which is (169).
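For orientation we spell out the smallest case (a sketch under the sign conventions just fixed, not taken from the references): for \(d=1\), say \(U^{\prime}=\overline{\langle\gamma\rangle}\cong\mathbb{Z}_{p}\), the Koszul complex is the two-term complex \(K^{\bullet}_{U^{\prime}}(M)\colon M\xrightarrow{\gamma-1}M\), and the mapping fibre of \(f-1\) on it becomes

\[K_{f,U^{\prime}}(M):\quad M\xrightarrow{\;m\mapsto((\gamma-1)m,\,(f-1)m)\;}M\oplus M\xrightarrow{\;(a,b)\mapsto(f-1)a-(\gamma-1)b\;}M,\]

so that \(h^{1}\) is represented by pairs \((a,b)\) with \((f-1)a=(\gamma-1)b\) modulo the coboundaries \(((\gamma-1)m,(f-1)m)\); up to signs this is the usual two-term Herr-type complex, matching the cocycle description (167) above.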
The upper line induces as usual the cup product on continuous group cohomology \[H^{r}(U,M)\times H^{s}(U,N)\stackrel{{\cup_{U}}}{{\longrightarrow }}H^{r+s}(U,M\otimes N)\] via \[\operatorname{Hom}_{\mathbb{Z}_{p}[U],cts}(Y_{\bullet}(U),M)\times \operatorname{Hom}_{\mathbb{Z}_{p}[U],cts}(Y_{\bullet}(U),N)\\ \stackrel{{\times}}{{\longrightarrow}} \operatorname{Hom}_{\mathbb{Z}_{p}[U]\otimes\mathbb{Z}_{p}[U],cts}(Y_{ \bullet}(U)\otimes_{\mathbb{Z}_{p}}Y_{\bullet}(U),M\otimes N)\stackrel{{ \Delta^{*}}}{{\longrightarrow}}\operatorname{Hom}_{\mathbb{Z}_{p}[U],cts}(Y_ {\bullet}(U),M\otimes N).\] The lower line induces analogously the Koszul-product \[K^{r}_{U}(M)\times K^{s}_{U}(N)\stackrel{{\cup_{K}}}{{ \longrightarrow}}K^{r+s}_{U}(M\otimes N).\] By diagram (170) both products are compatible with each other. Let \(f\) be any continuous endomorphism of \(M\) which commutes with the \(U\)-action; it induces an operator on \(K^{\bullet}(M)\) and we denote by \(K_{f,U}(M):=\operatorname{cone}\left(K^{\bullet}(M)\stackrel{{ f-\operatorname{id}}}{{\longrightarrow}}K^{\bullet}(M)\right)[ -1]\) the mapping fibre of \(K^{\bullet}(f).\) Then the quasi-isomorphism (169) induces a quasi-isomorphism \[K_{\varphi,U}(M)\stackrel{{\simeq}}{{\longrightarrow}}\mathcal{ T}_{\varphi,U}(M). \tag{171}\] **Remark 5.2.15**.: _By a standard procedure cup products can be extended to hyper-cohomology (defined via total complexes); we follow [11, (3.4.5.2)], but for the special case of a cone, see also [11, Prop. 3.1]. In particular, we obtain compatible cup products \(\cup_{K}\) and \(\cup_{U}\) for \(K_{\varphi,U}(M)\) and \(\mathcal{T}_{\varphi,U}(M),\) respectively._ Now we allow some arbitrary open subgroup \(U\subseteq\Gamma_{L}\) and let \(L^{\prime}=L^{U}_{\infty}.\) Note that we obtain a decomposition \(U\cong\Delta\times U^{\prime}\) with a subgroup \(U^{\prime}\cong\mathbb{Z}_{p}^{d}\) of \(U\) and \(\Delta\) the torsion subgroup of \(U.\) By Lemma A.0.1 we obtain a canonical isomorphism \[K_{\varphi,U^{\prime}}(M^{\Delta})\xrightarrow{\simeq}\mathcal{T}_{\varphi,U }(M). \tag{172}\] Now let \(M\) be a finitely generated projective \(\mathcal{R}\)-module with continuous \(U\)-action. Then \(\check{M}=M^{*}\) is again a finitely generated projective \(\mathcal{R}\)-module with continuous \(U\)-action by Lemma 4.5.1 (i). Hence \(M\) as well as \(M^{\Delta}\) satisfies the assumptions of (155) and we have isomorphisms 26 Footnote 26: For \(X\xrightarrow{f}Y\) we have \(\mathrm{cone}(f)^{*}\cong\mathrm{cone}(f^{*})[-1]\), the isomorphism being realized by multiplying with \((-1)^{i}\) on \((X^{\bullet})^{i}\), and \(\mathrm{cone}(f[n])=\mathrm{cone}((-1)^{n}f)[n]\). \[\begin{split} K_{\varphi,U}(M^{\Delta})^{*}&\cong \mathrm{cone}\left(K^{\bullet}(M^{\Delta})^{*}\xrightarrow{\varphi^{*}-1}K^{ \bullet}(M^{\Delta})^{*}\right)\\ &=\mathrm{cone}\left(K^{\bullet}((M^{\Delta})^{*})[d] \xrightarrow{\varphi^{*}-1}K^{\bullet}((M^{\Delta})^{*})[d]\right)\\ &=K_{\varphi^{*},U}((M^{*})_{\Delta})[d+1]\\ &=K_{\psi,U}(\check{M}_{\Delta})[d+1]\\ &=K_{\psi,U}(\check{M}^{\Delta})[d+1].\end{split} \tag{173}\] The last isomorphism is induced by the canonical isomorphism \(\check{M}^{\Delta}\cong\check{M}_{\Delta}\). Now note that \[D^{\dagger}_{\mathrm{rig}}(W)^{{}^{\vee}}\cong D^{\dagger}_{\mathrm{rig}}(W^{*} (\chi_{LT})) \tag{174}\] for any \(L\)-analytic representation \(W\) by the fact that the functor \(D^{\dagger}_{\mathrm{rig}}\) respects inner homs (cp.
[SV, Remark 5.6] for the analogous case \(D_{LT}\)). Hence the tautological pairing \(ev_{2}\) from (153) together with the above isomorphism (173) induces the following pairing: \[\cup_{K,\psi}\ :h^{1}(K_{\varphi,U^{\prime}}(D^{\dagger}_{rig}(W)^{\Delta}))\quad \times\quad h^{1}(K_{\psi,U^{\prime}}(D^{\dagger}_{rig}(W^{*}(\chi_{LT}))^{ \Delta})[d-1])\longrightarrow L \tag{175}\] **Remark 5.2.16**.: _For \(U=U^{\prime}\) and \(M=D^{\dagger}_{rig}(W)\), on the level of cochains this pairing is given as follows:_ \[(\check{M}\oplus K^{d-1}(\check{M}))\times(K^{1}(M)\oplus M)\to L,\quad((x,y),(x^{ \prime},y^{\prime}))\mapsto\{y^{\prime},x\}-y(x^{\prime}),\] _where we again use that \(K^{d-1}(\check{M})\cong K^{1}(M)^{*}\) and where \(\{\ ,\ \}\) denotes the pairing (116). More generally, we have a corresponding diagram (176) on the level of the full complexes \(K_{\varphi,U}(M)\) and their duals; its body is garbled in the source and therefore omitted here._ Recall that \(W=V^{*}(1)\) is \(L\)-analytic and set \(M=D^{\dagger}_{rig}(W)\) as well as \(\check{M}=D^{\dagger}_{rig}(V(\tau^{-1}))=D^{\dagger}_{rig}(W^{*}(\chi_{LT}))\). We obtain a Fontaine-style, explicit map \[pr_{U}:D^{\dagger}_{rig}(V(\tau^{-1}))^{\psi=1}\to h^{1}(K_{\psi,U^{\prime}}( \check{M}^{\Delta})[d-1]),\;m\mapsto[(\bar{m},0)], \tag{177}\] where \(\bar{m}=\frac{1}{\#\Delta}\sum_{\delta\in\Delta}\delta m\) denotes the image of \(m\) under the map \(\check{M}\twoheadrightarrow\check{M}_{\Delta}\cong\check{M}^{\Delta}\). **Remark 5.2.17**.: _Let \(U_{1}\subseteq U\) be an open subgroup with torsion subgroups \(\Delta_{1}\) and \(\Delta\), respectively. Assume that the torsionfree parts \(U^{\prime}_{1}\) and \(U^{\prime}\) are generated by \(\gamma_{1}^{p^{n}},\dots,\gamma_{d}^{p^{n}}\) and \(\gamma_{1},\dots,\gamma_{d}\), respectively. Then, for \(M\) any complete locally convex \(L\)-vector space with continuous \(U\)-action, the restriction and corestriction maps of Koszul-complexes from section 5.2.2 extend by functoriality to the mapping fibre_ \[cor_{U}^{U_{1}}:= cor_{U^{\prime}}^{U^{\prime}_{1}}\circ K_{\varphi,U^{\prime}_{1}}(N_ {\Delta/\Delta_{1}}):K_{\varphi,U^{\prime}_{1}}(M^{\Delta_{1}})\to K_{ \varphi,U^{\prime}}(M^{\Delta})\] \[res_{U_{1}}^{U}:= K_{\varphi,U^{\prime}_{1}}(\iota)\circ res_{U^{\prime}_{1}}^ {U^{\prime}}:K_{\varphi,U^{\prime}}(M^{\Delta})\to K_{\varphi,U^{ \prime}_{1}}(M^{\Delta_{1}})\] _Here \(N_{\Delta/\Delta_{1}}:M^{\Delta_{1}}\to M^{\Delta}\) denotes the norm/trace map sending \(m\) to \(\sum_{\delta\in\Delta/\Delta_{1}}\delta m\) while \(\iota:M^{\Delta}\to M^{\Delta_{1}}\) is the inclusion.
Taking duals as in (173) we also obtain_ \[cor_{U}^{U_{1}}:= (res_{U_{1}}^{U})^{*}[1-d]:K_{\psi,U^{\prime}_{1}}(M^{\Delta_{1} })\to K_{\psi,U^{\prime}}(M^{\Delta})\] \[res_{U_{1}}^{U}:= (cor_{U}^{U_{1}})^{*}[1-d]:K_{\psi,U^{\prime}}(M^{\Delta})\to K _{\psi,U^{\prime}_{1}}(M^{\Delta_{1}})\] _as (co)restriction maps for the \(\psi\)-Herr complexes._ _Since inflation is compatible with restriction and corestriction one checks that the above maps are compatible under the isomorphism (163) with the usual maps in Galois cohomology. Moreover, they define such maps on \(H^{1}_{\dagger}\) and \(H^{1}_{/\dagger}\) via (165) and \(h^{1}(K_{\psi,U^{\prime}}(D^{\dagger}_{rig}(W^{*}(\chi_{LT}))^{\Delta})[d-1]) \cong H^{1}_{/\dagger}(L^{\prime},W^{*}(1))\)._ _By the discussion at the end of section 5.2.2 the restriction map \(K_{\varphi,U^{\prime}}(M^{\Delta})\xrightarrow{\operatorname{res}_{U_{1}}^{ U}}K_{\varphi,U^{\prime}_{1}}(M^{\Delta_{1}})\) and corestriction map \(K_{\varphi,U^{\prime}_{1}}(M^{\Delta_{1}})\xrightarrow{\operatorname{cor}_{U} ^{U_{1}}}K_{\varphi,U^{\prime}}(M^{\Delta})\) in degree \(0\) are given as inclusion \(M^{\Delta}\hookrightarrow M^{\Delta_{1}}\) and norm \(M^{\Delta_{1}}\xrightarrow{N_{U^{\prime},U^{\prime}_{1}}\circ N_{\Delta/ \Delta_{1}}}M^{\Delta}\), respectively, where_ \[N_{U^{\prime},U^{\prime}_{1}}:=\prod_{i=1}^{d}\sum_{k=0}^{p^{n}-1}\gamma_{i}^{ k}\in\Lambda(U^{\prime}).\] _Hence, by duality the restriction map \(K_{\psi,U}(\check{M}^{\Delta})[d-1]^{2}\xrightarrow{\operatorname{res}_{U_{1}}^{ U}}K_{\psi,U_{1}}(\check{M}^{\Delta_{1}})[d-1]^{2}\) and corestriction map \(K_{\psi,U_{1}}(\check{M}^{\Delta_{1}})[d-1]^{2}\xrightarrow{\operatorname{ cor}_{U}^{U_{1}}}K_{\psi,U}(\check{M}^{\Delta})[d-1]^{2}\) are given by the norm \(\check{M}^{\Delta}\xrightarrow{(\Delta:\Delta_{1})\,\iota(N_{U^{\prime},U^{\prime}_{1 }})}\check{M}^{\Delta_{1}}\) and the projection map \(\check{M}^{\Delta_{1}}\xrightarrow{\frac{1}{(\Delta:\Delta_{1})}N_{\Delta/ \Delta_{1}}}\check{M}^{\Delta}\), respectively. Here \(\iota\) denotes the involution of \(\Lambda(U)\) sending \(u\) to \(u^{-1}\). Note that the latter two descriptions also hold for the first components of \(K_{\psi,U}(\check{M}^{\Delta})[d-1]^{1}\xrightarrow{\operatorname{res}_{U_{1}}^ {U}}K_{\psi,U_{1}}(\check{M}^{\Delta_{1}})[d-1]^{1}\) and \(K_{\psi,U_{1}}(\check{M}^{\Delta_{1}})[d-1]^{1}\xrightarrow{\operatorname{ cor}_{U}^{U_{1}}}K_{\psi,U}(\check{M}^{\Delta})[d-1]^{1}\), respectively. Hence, we obtain_ \[\operatorname{cor}_{U}^{U_{1}}\circ pr_{U_{1}}=pr_{U}\text{ and }\operatorname{res}_{U_{1}}^{U}\circ pr_{U}=pr_{U_{1}}\circ N_{\Delta/\Delta_{1}} \circ\iota(N_{U^{\prime},U^{\prime}_{1}}).\] Berger and Fourquaux in contrast define a different Fontaine-style map in [BF, Thm. 2.5.8] for an \(L\)-analytic representation \(Z\) and \(N=D^{\dagger}_{\rm rig}(Z)\)27 Footnote 27: We do not know whether this map coincides with the following composite we had used in older versions and which uses the shuffle maps from Proposition 5.2.9: \[h^{1}_{L^{U}_{\infty},Z}:D^{\dagger}_{\rm rig}(Z)^{\psi_{L}= \frac{q}{\pi_{L}}}\to H^{1}_{an}(U,D^{\dagger}_{\rm rig}(Z)^{\psi_{L}=\frac{q} {\pi_{L}}})\to h^{1}(\mathcal{T}_{\varphi_{L},U}(N))\cong h^{1}(K_{\varphi_{L},U^{\prime}}(N^{\Delta})),\] \[y\mapsto\qquad\qquad\qquad\qquad[c_{b}(y)]\mapsto\qquad[(c_{b}(y ),-m_{c})]\mapsto[(\tilde{c_{b}}(y),-\tilde{m_{c}})], \tag{179}\] in which the cocycle \(h^{1}_{L^{U}_{\infty},Z}(y)\) is given in terms of the pair \((c_{b}(y),-m_{c})\) in the notation of Thm. 2.5.8 in (loc.
cit.): \(m_{c}\) is the unique element in \(D^{\dagger}_{\rm rig}(Z)^{\psi_{L}=0}\) such that \[(\varphi_{L}-1)c_{b}(y)(\gamma)=(\gamma-1)m_{c} \tag{180}\] for all \(\gamma\in U\) and this pair defines the extension class in the sense of Lemma 5.2.11. Here, the first map is implicitly given by Prop. 2.5.1 in (loc. cit.), the second one is the composite of maps arising in Cor. 2.2.3 and Thm. 2.2.4 of (loc. cit.) with the natural map from analytic to continuous cohomology \[H^{1}_{an}(U,D^{\dagger}_{\rm rig}(Z)^{\psi_{L}=\frac{q}{\pi_{L}}})\to H^{1}_{ an}(U\times\Psi,D^{\dagger}_{\rm rig}(Z))\cong H^{1}_{an}(U\times\Phi,D^{ \dagger}_{\rm rig}(Z))\to H^{1}(U\times\Phi,D^{\dagger}_{\rm rig}(Z))\] combined with the interpretation of extension classes (see §1.4 in (loc. cit.) and Lemma 5.2.11), while the last one is (172) (the concrete image \((\tilde{c_{b}}(y),-\tilde{m_{c}})\) will be of interest for us only in the situation where \(\Delta\) is trivial, when \(\tilde{m_{c}}=m_{c}\)). According to [BF, Prop. 2.5.6, Rem. 2.5.7] this map also satisfies \[cor^{U}_{U^{\prime}}\circ h^{1}_{L^{U^{\prime}}_{\infty},Z}=h^{1}_{L^{U}_{ \infty},Z}. \tag{181}\] Since \(D^{\dagger}_{\rm rig}(V(\tau^{-1}))^{\vee}\cong D^{\dagger}_{\rm rig}(V^{*}(1))\) by (174), concerning the Iwasawa-pairing we have the following **Proposition 5.2.18**.: _For a \(G_{L}\)-representation \(V\) such that \(V^{*}(1)\) is \(L\)-analytic the following diagram consisting of \(D(\Gamma_{L},K)\)-\(\iota\)-sesquilinear pairings (in the sense of (131)) is commutative_ _taking \(U^{\prime}\times\Delta=U=\Gamma_{L}\)._ Proof.: By Lemma 4.5.22, it suffices to show the case \(j=0\), i.e., the trivial character \(\chi_{triv}\). Furthermore, it suffices to show the following statement for any subgroup of the form \(\Gamma_{n}\) without any \(p\)-torsion: \[q^{-n}ev_{L_{n},\chi_{triv|\Gamma_{n}}}\circ\{x,y\}_{Iw,\Gamma_{n}}=h^{1}_{L_{n},V^{*}(1)}(x)\cup_{K,\psi}pr_{\Gamma_{n}}(y) \tag{182}\] for \(x\in D^{\dagger}_{\mathrm{rig}}(V^{*}(1))^{\psi_{L}=\frac{q}{\pi_{L}}},y\in D^{ \dagger}_{\mathrm{rig}}(V(\tau^{-1}))^{\psi_{L}=1}\). Indeed, by Remark 5.2.17, for every such \(n\), we have the commutative diagram Hence we obtain using (181) \[h^{1}_{L^{\prime},V^{*}(1)}(x)\cup_{K,\psi}pr_{U}(y) =(\mathrm{cor}\circ h^{1}_{L_{n},V^{*}(1)}(x))\cup_{K,\psi}pr_{U}(y)\] \[=h^{1}_{L_{n},V^{*}(1)}(x)\cup_{K,\psi}(\mathrm{res}\circ pr_{U}(y))\] \[=h^{1}_{L_{n},V^{*}(1)}(x)\cup_{K,\psi}(pr_{\Gamma_{n}}(N_{ \Delta}\circ\iota(N_{U^{\prime},\Gamma_{n}})y)),\] where we use Remark 5.2.17 for the last equality. On the other hand one easily checks that28 Footnote 28: This is obvious if you decompose \(D(U,\mathbb{C}_{p})=\bigoplus_{g\in U/\Gamma_{n}}D(\Gamma_{n},\mathbb{C}_{p})g\) with respect to the inverses of the representatives used in the definition of \(N_{\Delta}\circ\iota(N_{U^{\prime},\Gamma_{n}})\).
\[\begin{split}ev_{L_{n},\chi_{triv}}\circ\frac{q-1}{q}\{x,y\}_{Iw,U} &=\frac{q-1}{q}ev_{L_{n},\chi_{triv|\Gamma_{n}}}\circ pr_{U,\Gamma _{n}}(N_{\Delta}\circ\iota(N_{U^{\prime},\Gamma_{n}})\{x,y\}_{Iw,U})\\ &=\frac{q-1}{q}ev_{L_{n},\chi_{triv|\Gamma_{n}}}\circ pr_{U,\Gamma _{n}}(\{x,N_{\Delta}\circ\iota(N_{U^{\prime},\Gamma_{n}})y\}_{Iw,U})\\ &=\frac{q-1}{q}[U:\Gamma_{n}]^{-1}ev_{L_{n},\chi_{triv|\Gamma_{n}} }\circ\{x,N_{\Delta}\circ\iota(N_{U^{\prime},\Gamma_{n}})y\}_{Iw, \Gamma_{n}}\\ &=q^{-n}ev_{L_{n},\chi_{triv|\Gamma_{n}}}\circ\{x,N_{\Delta}\circ \iota(N_{U^{\prime},\Gamma_{n}})y\}_{Iw,\Gamma_{n}}\end{split}\] where we have used Remark 4.5.21 for the last equation. In order to prove (182) choose \(n=n_{0}\) (see section 4.3.4). As recalled in (179) the map \[h^{1}_{L_{n_{0}},V^{*}(1)}:D^{\dagger}_{\mathrm{rig}}(V^{*}(1))^{\psi_{L}= \frac{q}{\pi_{L}}}\to h^{1}(K_{\varphi_{L},\Gamma_{n_{0}}}(D^{\dagger}_{ \mathrm{rig}}(V^{*}(1))))\] is given by the cocycle \(h^{1}_{L_{n_{0}},V^{*}(1)}(x)\) in terms of the pair \((\tilde{c_{b}}(x),-m_{c})\). Note that we have \[m_{c}=\widehat{\Xi_{b}}(\varphi_{L}-1)x.\] Indeed, by [BF, Thm. 2.5.8] we have \(c_{b}(x)(b^{k}_{j})=(b^{k}_{j}-1)\widehat{\Xi_{b}}x\) for all \(j,k\geq 0\), which together with (180) and the uniqueness of \(m_{c}\) (loc. cit.) implies the claim. On the other hand we have the map (177) \[pr_{\Gamma_{n_{0}}}:D^{\dagger}_{rig}(V(\tau^{-1}))^{\psi_{L}=1}\to h^{1}(K_{ \psi,\Gamma_{n_{0}}}(D^{\dagger}_{rig}(V(\tau^{-1})))[d-1]),\ \ y\mapsto\ \text{class of}\ (y,0).\] Thus the pairing \(\cup_{K,\psi}\) sends by construction (see diagram (176)) the above classes to \[\begin{split}h^{1}_{L_{n},V^{*}(1)}(x)\cup_{K,\psi}pr_{\Gamma_{n}}(y) &=0(\tilde{c_{b}}(x))+\{-\widehat{\Xi_{b}}(\varphi_{L}-1)x,y\}\\ &=\{\widehat{\Xi_{b}}(\varphi_{L}-1)x,(\tfrac{\pi_{L}}{q}\varphi_{L} -1)y\}\\ &=<\widehat{\Xi_{b}},\{x,y\}_{Iw,\Gamma_{n}}>_{\Gamma_{n}}\\ &=q^{-n}\text{aug}(\{x,y\}_{Iw,\Gamma_{n}}).\end{split}\] Here the second equality holds due to Lemma 4.5.1 because the left hand side belongs to \(D^{\dagger}_{\text{rig}}(V^{*}(1))^{\psi_{L}=0}\), the third one is (133) while the last one comes from (125). **Proposition 5.2.19**.: _For \(W\) an \(L\)-analytic representation we have a canonical commutative diagram_ \[\begin{array}{ccccc}h^{1}(K_{\varphi,U^{\prime}}(D^{\dagger}_{rig}(W)^{\Delta}))&\times&h^{1}(K_{\psi,U^{\prime}}(D^{\dagger}_{rig}(W^{*}(\chi_{LT}))^{\Delta})[d-1])&\xrightarrow{\ \cup_{K,\psi}\ }&L\\ \cong\Big\downarrow&&\cong\Big\downarrow&&\Big\|\\ H^{1}_{\dagger}(L^{\prime},W)&\times&H^{1}_{/\dagger}(L^{\prime},W^{*}(1))&\xrightarrow{\ <,>_{Tate,L,\dagger}\ }&L\end{array}\] _in which the vertical isomorphisms come from (165) and Corollary A.0.4, and in which, under the inclusion \(H^{1}_{\dagger}(L^{\prime},W)\subseteq H^{1}(L^{\prime},W)\) and the projection \(H^{1}(L^{\prime},W^{*}(1))\twoheadrightarrow H^{1}_{/\dagger}(L^{\prime},W^{*}(1))\), the lower pairing is induced by the cup product \(H^{1}(L^{\prime},W)\times H^{1}(L^{\prime},W^{*}(1))\to H^{2}(L^{\prime},L(1))\cong L\) as in (168). [The original diagram is garbled in the source; we restate only its recoverable content.]_ Proof.: This is proved in Appendix A. [The statement of Corollary 5.2.20 and the opening of the following remark are likewise garbled in the source.] **Remark 5.2.21**.: _It_
_would be compatible with the cyclotomic situation taking \(L=\mathbb{Q}_{p}\), \(\pi_{L}=p=q\), i.e., the upper pairing in Proposition 5.2.18 would coincide - at least up to a sign - with the cup product pairing of Galois cohomology_ \[H^{1}(\mathbb{Q}_{p},V^{*}(j+1))\quad\times\quad H^{1}(\mathbb{Q}_{p},V(-j))\longrightarrow H^{2}(\mathbb{Q}_{p},\mathbb{Q}_{p}(1))\xrightarrow{\ inv\ }\mathbb{Q}_{p}\] _using Tate's trace map_ \[inv:H^{2}(\mathbb{Q}_{p},\mathbb{Q}_{p}(1))\cong\mathbb{Q}_{p}\] _given by class field theory, if one chooses \(Z=\gamma-1\) for a topological generator \(\gamma\) of \(\Gamma_{\mathbb{Q}_{p}}\) satisfying \(\log\chi_{cyc}(\gamma)=1\). This follows from [Ben, Prop. 1.3.4, Thm. 2.2.6], [He, Thm. 5.2, Rem. 5.3] and [KPX, Rem. 2.3.11/12]: they claim that \(-\frac{p-1}{p}inv\) corresponds to the trace map from the second cohomology group of the \(\varphi\)-Herr complex induced by sending \(f\otimes\eta\) to \(\frac{1}{\log\chi_{cyc}(\gamma)}\text{res}_{\omega_{cyc}}(f\frac{\omega_{cyc}} {1+\omega_{cyc}})\)._ With respect to evaluating at a character we have the following analogue of Corollary 5.2.20. **Proposition 5.2.22**.: _For a \(G_{L}\)-representation \(V\) such that \(V^{*}(1)\) is \(L\)-analytic the following diagram is commutative_ _where, for the identification in the right upper corner, we choose \(t_{LT}^{-1}\otimes\eta\) as a basis._ Proof.: Using Lemma 5.2.23 below the statement is reduced to \(j=0\). Evaluation of (152) implies the claim in this case. **Lemma 5.2.23**.: _There is a commutative diagram_ Proof.: The claim follows immediately from (152), the compatibility of the usual \(D_{cris}\)-pairing with twists and the fact that \(Tw_{\chi_{LT}^{j}}(\lambda\,t_{*}(\mu))=Tw_{\chi_{LT}^{j}}(\lambda)\,t_{*}(Tw_{ \chi_{LT}^{-j}}(\mu))\) holds.

#### 5.2.4 The interpolation formula for the regulator map

In this subsection we are going to prove the interpolation property for \(\mathcal{L}_{V}\). First recall that we introduced in section 3.1 the notation \(D_{dR,L^{\prime}}(V):=(B_{dR}\otimes_{\mathbb{Q}_{p}}V)^{G_{L^{\prime}}}\). Since \(B_{dR}\) contains the algebraic closure \(\overline{L}\) of \(L\) we have the isomorphism \[B_{dR}\otimes_{\mathbb{Q}_{p}}V=(B_{dR}\otimes_{\mathbb{Q}_{p}}L)\otimes_{L}V \xrightarrow{\cong}\prod_{\sigma\in G_{\mathbb{Q}_{p}}/G_{L}}B_{dR}\otimes_{ \sigma,L}V\] which sends \(b\otimes v\) to \((b\otimes v)_{\sigma}\). The tensor product in the factor \(B_{dR}\otimes_{\sigma,L}V\) is formed with respect to \(L\) acting on \(B_{dR}\) through \(\sigma\). With respect to the \(G_{L}\)-action the right hand side decomposes according to the double cosets in \(G_{L}\backslash G_{\mathbb{Q}_{p}}/G_{L}\). It follows, in particular, that \(D^{\rm id}_{dR}(V):=(B_{dR}\otimes_{L}V)^{G_{L}}\) is a direct summand of \(D_{dR,L}(V)\) and we denote by \(pr^{\rm id}\) the corresponding projection. Similarly, \(tan_{L,{\rm id}}(V):=(B_{dR}/B_{dR}^{+}\otimes_{L}V)^{G_{L}}\) is a direct summand of \(tan_{L}(V):=(B_{dR}/B_{dR}^{+}\otimes_{\mathbb{Q}_{p}}V)^{G_{L}}\). More generally, also the filtration \(D^{i}_{dR,L}(V)\) decomposes into direct summands.
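For instance (a simple illustration, assuming \(L/\mathbb{Q}_{p}\) Galois of degree \(2\) with \(\operatorname{Gal}(L/\mathbb{Q}_{p})=\{\mathrm{id},\sigma\}\), so that all double cosets are singletons): the above isomorphism reads

\[B_{dR}\otimes_{\mathbb{Q}_{p}}V\;\cong\;(B_{dR}\otimes_{\mathrm{id},L}V)\times(B_{dR}\otimes_{\sigma,L}V),\]

both factors are \(G_{L}\)-stable, and taking \(G_{L}\)-invariants yields the decomposition \(D_{dR,L}(V)=D^{\rm id}_{dR}(V)\oplus D^{\sigma}_{dR}(V)\), with \(pr^{\rm id}\) the projection onto the first summand.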
According to [10, Appendix A] the dual Bloch-Kato exponential map is uniquely determined by the commutativity of the following diagram, in which all pairings are perfect: (183) In the Lubin-Tate setting we can also consider the dual of the identity component \(\exp_{L^{\prime},W^{*}(1),{\rm id}}\) of \(\exp_{L^{\prime},W^{*}(1)}\): 29 Footnote 29: We have the compatibility of the following pairings (184) Upon noting that under the identifications \(D_{dR,L^{\prime}}(\mathbb{Q}_{p}(1))\cong L^{\prime}\) and \(D^{\rm id}_{dR,L^{\prime}}(\mathbb{Q}_{p}(1))\cong L^{\prime}\) the elements \(t_{\mathbb{Q}_{p}}\otimes\eta_{cyc}\) and \(t_{LT}\otimes\eta\) are sent to \(1\), one easily checks that, if \(W^{*}(1)\) is \(L\)-analytic (in which case the inclusion \(tan_{L^{\prime},\mathrm{id}}(W^{*}(1))\subseteq tan_{L^{\prime}}(W^{*}(1))\) is an equality and \(\exp_{L^{\prime},W^{*}(1),\mathrm{id}}=\exp_{L^{\prime},W^{*}(1)}\)), it holds \[\mathbb{T}_{\tau^{-1}}\circ\exp_{L^{\prime},W}^{*}=\widetilde{\exp}_{L^{\prime },W,\mathrm{id}}^{*}, \tag{185}\] where \(\mathbb{T}_{\tau^{-1}}:D^{0}_{dR,L^{\prime}}(W)\to D^{\mathrm{id},0}_{dR,L^{ \prime}}(W(\tau^{-1}))\) is the isomorphism which sends \(b\otimes v\) to \(b\frac{t_{\mathbb{Q}_{p}}}{t_{LT}}\otimes v\otimes\eta\otimes\eta_{cyc}^{\otimes-1}\); note that \(\frac{t_{\mathbb{Q}_{p}}}{t_{LT}}\in(B^{+}_{dR})^{\times}\), whence the filtration is preserved. Now let \(W\) be an \(L\)-analytic, crystalline \(L\)-linear representation of \(G_{L}\). Recall that \(\eta=(\eta_{n})_{n}\) denotes a fixed generator of \(T_{\pi}\) and that the map \(tw_{\chi_{LT}^{j}}:D^{\dagger}_{rig}(W)\to D^{\dagger}_{rig}(W(\chi_{LT }^{j}))\) has been defined before Lemma 4.5.22. For \(D_{cris}\) we have the corresponding twisting map \(D_{cris,L}(W)\to D_{cris,L}(W(\chi^{j}_{LT}))\), \(d\mapsto d\otimes e_{j}\). [The remainder of this passage, including the definition of the evaluation maps \(\mathrm{Ev}_{W,n}\), is garbled in the source.] By abuse of notation we also use \(\mathrm{Ev}_{W,0}\) for the analogous map \(\mathcal{O}_{K}\otimes_{L}D_{cris,L}(W)\to K\otimes_{L}D_{cris,L}(W)\). For \(x\in D(\Gamma_{L},K)\otimes_{L}D_{cris,L}(W)\) we denote by \(x(\chi^{j}_{LT})\) the image under the map \(D(\Gamma_{L},K)\otimes_{L}D_{cris,L}(W)\to K\otimes_{L}D_{cris,L}(W)\), \(\lambda\otimes d\mapsto\lambda(\chi^{j}_{LT})\otimes d\). **Lemma 5.2.24**.: _Assume that \(\Omega\) is contained in \(K\). Then there are commutative diagrams_ _and_ Proof.: For the upper diagram note that \(\eta_{0}=0\) and \(\left(\delta_{g}\cdot\eta(1,Z)\right)_{|Z=0}=1\), from which the claim follows for Dirac distributions, whence in general. For the right square we observe that \(\varphi_{L}(g(Z))_{|Z=0}=g(0)\). Regarding the lower diagram we use 4.3.25 and the relation \(\partial_{\mathrm{inv}}\circ\varphi_{L}=\pi_{L}\varphi_{L}\circ\partial_{\mathrm{inv}}\). With this notation Berger's and Fourquaux' interpolation property reads as follows: **Theorem 5.2.25** (Berger-Fourquaux [BF, Thm.
3.5.3]).: _Let \(W\) be \(L\)-analytic and \(h\geq 1\) such that \(\mathrm{Fil}^{-h}D_{cris,L}(W)=D_{cris,L}(W)\). For any \(f\in\left(\mathcal{O}^{\psi=0}\otimes_{L}D_{cris,L}(W)\right)^{\Delta=0}\) and \(y\in\left(\mathcal{O}\otimes_{L}D_{cris,L}(W)\right)^{\psi=\frac{q}{\pi_{L}}}\) with \(f=(1-\varphi_{L})y\) we have: If \(h+j\geq 1\), then_ \[h^{1}_{L_{n},W(\chi^{j}_{LT})}\Big{(}tw_{\chi^{j}_{LT}}(\Omega_{W,h}(f))\Big{)}=(-1)^{h+j-1}(h+j-1)!\left\{\begin{array}{ll} \exp_{L_{n},W(\chi^{j}_{LT})}\Big{(}q^{-n}\mathrm{Ev}_{W(\chi^{j}_{LT}),n}( \partial_{\mathrm{inv}}^{-j}y\otimes e_{j})\Big{)}&\text{if $n\geq 1$;}\\ \exp_{L,W(\chi^{j}_{LT})}\Big{(}(1-q^{-1}\varphi_{L}^{-1})\mathrm{Ev}_{W(\chi^ {j}_{LT}),0}(\partial_{\mathrm{inv}}^{-j}y\otimes e_{j})\Big{)},&\text{if $n=0$.}\end{array}\right. \tag{188}\] _If \(h+j\leq 0\), then_ \[\exp_{L_{n},W(\chi^{j}_{LT})}^{*}\Big{(}h^{1}_{L_{n },W(\chi^{j}_{LT})}(tw_{\chi^{j}_{LT}}(\Omega_{W,h}(f)))\Big{)}=\frac{1}{(-h-j)!}\left\{\begin{array}{ll}q^{-n}\mathrm{Ev}_{W( \chi^{j}_{LT}),n}(\partial_{\mathrm{inv}}^{-j}y\otimes e_{j})&\text{if $n\geq 1$;}\\ (1-q^{-1}\varphi_{L}^{-1})\mathrm{Ev}_{W(\chi^{j}_{LT}),0}(\partial_{\mathrm{ inv}}^{-j}y\otimes e_{j}),&\text{if $n=0$.}\end{array}\right. \tag{189}\] By abuse of notation we shall denote the base change \(K\otimes_{L}-\) of the (dual) Bloch-Kato exponential map by the same expression. Using Lemma 5.2.24 we deduce the following interpolation property for the modified big exponential map with \(x\in D(\Gamma_{L},K)\otimes_{L}D_{cris,L}(W)\): If \(j\geqslant 0\), then \[h^{1}_{L,W(\chi^{j}_{LT})}(tw_{\chi^{j}_{LT}}(\mathbf{\Omega}_{W,1 }(x)))=(-1)^{j}j!\Omega^{-j-1}\exp_{L,W(\chi^{j}_{LT})}\Big{(}(1-q^{-1} \varphi_{L}^{-1})(1-\varphi_{L})^{-1}\left(x(\chi^{-j}_{LT})\otimes e_{j}\right) \Big{)}; \tag{190}\] if \(j<0\), then \[h^{1}_{L,W(\chi^{j}_{LT})}(tw_{\chi^{j}_{LT}}(\mathbf{ \Omega}_{W,1}(x)))=\frac{1}{(-1-j)!}\Omega^{-j-1}\log^{*}_{L,W(\chi^{j}_{LT})}\Big{(} (1-q^{-1}\varphi_{L}^{-1})(1-\varphi_{L})^{-1}\left(x(\chi^{-j}_{LT})\otimes e _{j}\right)\Big{)}, \tag{191}\] assuming in both cases that the operator \(1-\varphi_{L}\) is invertible on \(D_{cris,L}(W(\chi^{j}_{LT}))\) and for \(j<0\) also that the operator \(1-q^{-1}\varphi_{L}^{-1}\) is invertible on \(D_{cris,L}(W(\chi^{j}_{LT}))\) (in order to guarantee the existence of \(\log_{L,W(\chi^{j}_{LT})}\)). Recall that the generalized Iwasawa cohomology of \(T\in Rep_{o_{L}}(G_{L})\) is defined by \[H^{*}_{Iw}(L_{\infty}/L,T):=\varprojlim_{K}H^{*}(K,T)\] where \(K\) runs through the finite Galois extensions of \(L\) contained in \(L_{\infty}\) and the transition maps in the projective system are the cohomological corestriction maps. For \(V:=T\otimes_{o_{L}}L\in Rep_{L}(G_{L})\) we define \[H^{*}_{Iw}(L_{\infty}/L,V):=H^{*}_{Iw}(L_{\infty}/L,T)\otimes_{o_{L}}L,\] which is independent of the choice of \(T\). As usual we denote by \(cor:H^{*}_{Iw}(L_{\infty}/L,T)\to H^{*}(L^{\prime},T)\) the projection map and analogously for rational coefficients. Similarly as in (177) we have a map \[pr_{U}:D(V(\tau^{-1}))^{\psi=1}\to h^{1}(K_{\psi,U^{\prime}}(D(V(\tau^{-1}))^ {\Delta})[d-1])\cong H^{1}(L^{\prime},V),\;m\mapsto[(\bar{m},0)], \tag{192}\] where \(\bar{m}\) denotes the image of \(m\) under the map \(\check{M}\twoheadrightarrow\check{M}_{\Delta}\cong\check{M}^{\Delta}\). Note that under the assumptions of Lemma 3.3.6 for \(V(\tau^{-1})\) there is a commutative diagram (193) where the right vertical map is induced by (177).
Indeed, for the commutativity of the left rectangle and the right rectangle we refer the reader to (B.0.5) and (200), respectively. Let \(y_{\chi^{-j}_{LT}}\) denote the image of \(y\) under the map \[H^{1}_{Iw}(L_{\infty}/L,T)\longrightarrow H^{1}_{Iw}(L_{\infty }/L,T(\chi^{-j}_{LT}))\xrightarrow{\mathrm{cor}}H^{1}(L,T(\chi^{-j}_{LT})) \to H^{1}(L,V(\chi^{-j}_{LT})).\] The following result generalizes [11, Thm. A.2.3] and [12, Thm. B.5] from the cyclotomic case. **Theorem 5.2.26**.: _Assume that \(V^{*}(1)\) is \(L\)-analytic with \(\mathrm{Fil}^{-1}D_{cris,L}(V^{*}(1))=D_{cris,L}(V^{*}(1))\) and \(D_{cris,L}(V^{*}(1))^{\varphi_{L}=\pi_{L}^{-1}}=D_{cris,L}(V^{*}(1))^{\varphi_{L }=1}=0\). Then it holds that for \(j\geq 0\)_ \[\begin{split}\Omega^{j}\mathbf{L}_{V}(y)(\chi_{LT}^{j}) &=j!\Big{(}(1-\pi_{L}^{-1}\varphi_{L}^{-1})^{-1}(1-\frac{\pi_{L}}{ q}\varphi_{L})\widetilde{\exp}_{L,V(\chi_{LT}^{-j}),\mathrm{id}}^{*}(y_{\chi_{LT}^ {-j}})\Big{)}\otimes e_{j}\\ &=j!(1-\pi_{L}^{-1-j}\varphi_{L}^{-1})^{-1}(1-\frac{\pi_{L}^{j+1}}{ q}\varphi_{L})\Big{(}\widetilde{\exp}_{L,V(\chi_{LT}^{-j}),\mathrm{id}}^{*}(y_{ \chi_{LT}^{-j}})\otimes e_{j}\Big{)}\end{split}\] _and for \(j\leq-1\):_ \[\begin{split}\Omega^{j}\mathbf{L}_{V}(y)(\chi_{LT}^{j}) &=\frac{(-1)^{j}}{(-1-j)!}\Big{(}(1-\pi_{L}^{-1}\varphi_{L}^{-1}) ^{-1}(1-\frac{\pi_{L}}{q}\varphi_{L})\widetilde{\log}_{L,V(\chi_{LT}^{-j}), \mathrm{id}}(y_{\chi_{LT}^{-j}})\Big{)}\otimes e_{j}\\ &=\frac{(-1)^{j}}{(-1-j)!}(1-\pi_{L}^{-1-j}\varphi_{L}^{-1})^{-1}( 1-\frac{\pi_{L}^{j+1}}{q}\varphi_{L})\Big{(}\widetilde{\log}_{L,V(\chi_{LT}^{ -j}),\mathrm{id}}(y_{\chi_{LT}^{-j}})\otimes e_{j}\Big{)},\end{split}\] _if the operators \(1-\pi_{L}^{-1-j}\varphi_{L}^{-1},1-\frac{\pi_{L}^{j+1}}{q}\varphi_{L}\) or equivalently \(1-\pi_{L}^{-1}\varphi_{L}^{-1},1-\frac{\pi_{L}}{q}\varphi_{L}\) are invertible on \(D_{cris,L}(V(\tau^{-1}))\) and \(D_{cris,L}(V(\tau^{-1}\chi_{LT}^{j}))\), respectively._ Proof.: From the reciprocity formula in Corollary 5.2.2 and Propositions 5.2.20 and 5.2.22 we obtain for \(x\in D(\Gamma_{L},\mathbb{C}_{p})\otimes_{L}D_{cris,L}(V^{*}(1))\), \(y\in D(V(\tau^{-1}))^{\psi_{L}=1}\) and \(j\geq 0\) using (193) \[\begin{split}[x(\chi_{LT}^{-j})&\otimes e_{j},(-1)^{j}\mathbf{L}_{V}(y)(\chi_{LT}^{j})\otimes e_{ -j}]_{cris}\\ &=\Omega[x,\frac{\sigma_{-1}\mathbf{L}_{V}(y)}{\Omega}]^{0} (\chi_{LT}^{-j})\\ &=\Omega\frac{q-1}{q}\{\mathbf{\Omega}_{V^{*}(1),1}(x),y\}_{Iw}( \chi_{LT}^{-j})\\ &=\Omega<h_{L}^{1}\circ tw_{\chi_{LT}^{j}}\left(\mathbf{\Omega}_{V ^{*}(1),1}(x)\right),y_{\chi_{LT}^{-j}}>_{Tate}\\ &=\Omega<(-1)^{j}j!\Omega^{-j-1}\exp_{L,V^{*}(1)(\chi_{LT}^{j})}\Big{(}( 1-q^{-1}\varphi_{L}^{-1})(1-\varphi_{L})^{-1}(x(\chi_{LT}^{-j})\otimes e_{j})\Big{)}, y_{\chi_{LT}^{-j}}>_{Tate}\\ &=(-1)^{j}\Omega^{-j}j![(1-q^{-1}\varphi_{L}^{-1})(1-\varphi_{L})^ {-1}(x(\chi_{LT}^{-j})\otimes e_{j}),\widetilde{\exp}_{L,V(\chi_{LT}^{-j}), \mathrm{id}}^{*}(y_{\chi_{LT}^{-j}})]_{cris}\\ &=[x(\chi_{LT}^{-j})\otimes e_{j},(-1)^{j}\Omega^{-j}j!(1-\pi_{L}^{ -1}\varphi_{L}^{-1})^{-1}(1-\frac{\pi_{L}}{q}\varphi_{L})\widetilde{\exp}_{L, V(\chi_{LT}^{-j}),\mathrm{id}}^{*}(y_{\chi_{LT}^{-j}})]_{cris}\end{split}\] Here we used (190) in the fourth equation for the interpolation property of \(\mathbf{\Omega}_{V^{*}(1),1}\). The fifth equation is the defining equation for the dual exponential map resulting from (184). Furthermore, for the last equality we use that \(\pi_{L}^{-1}\varphi_{L}^{-1}\) is adjoint to \(\varphi_{L}\) under the lower pairing.
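Explicitly, the adjunction used in the last step can be spelled out as follows (a direct check, granting only that \(\pi_{L}^{-1}\varphi_{L}^{-1}\) is adjoint to \(\varphi_{L}\) under \([\ ,\ ]_{cris}\)): from \([\pi_{L}^{-1}\varphi_{L}^{-1}z,w]_{cris}=[z,\varphi_{L}w]_{cris}\) one gets \([q^{-1}\varphi_{L}^{-1}z,w]_{cris}=\frac{\pi_{L}}{q}[z,\varphi_{L}w]_{cris}\), whence

\[[(1-q^{-1}\varphi_{L}^{-1})(1-\varphi_{L})^{-1}z,\;w]_{cris}=[z,\;(1-\pi_{L}^{-1}\varphi_{L}^{-1})^{-1}(1-\tfrac{\pi_{L}}{q}\varphi_{L})w]_{cris},\]

i.e., the operator \((1-q^{-1}\varphi_{L}^{-1})(1-\varphi_{L})^{-1}\) on the left corresponds precisely to the operator \((1-\pi_{L}^{-1}\varphi_{L}^{-1})^{-1}(1-\frac{\pi_{L}}{q}\varphi_{L})\) appearing in the theorem.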
The claim follows since the evaluation map is surjective and \([\ ,\ ]_{cris}\) is non-degenerate. Now assume that \(j<0\): \[\begin{split}[x(\chi_{LT}^{-j})&\otimes e_{j},(-1)^{j}\mathbf{L}_{V}(y)(\chi_{LT}^{j})\otimes e_{-j }]_{cris}\\ &=\Omega[x,\frac{\sigma_{-1}\mathbf{L}_{V}(y)}{\Omega}]^{0}(\chi_{ LT}^{-j})\\ &=\Omega\frac{q-1}{q}\{\mathbf{\Omega}_{V^{*}(1),1}(x),y\}_{Iw}( \chi_{LT}^{-j})\\ &=\Omega<h_{L}^{1}\circ tw_{\chi_{LT}^{j}}\left(\mathbf{\Omega}_{V ^{*}(1),1}(x)\right),y_{\chi_{LT}^{-j}}>_{Tate}\\ &=\Omega<\frac{1}{(-1-j)!}\Omega^{-j-1}\log^{*}_{L,V^{*}(1)(\chi_{LT}^{j})}\Big{(}(1-q^{-1}\varphi_{L}^{-1})(1-\varphi_{L})^{-1}\left(x(\chi_{LT}^ {-j})\otimes e_{j}\right)\Big{)},y_{\chi_{LT}^{-j}}>_{Tate}\\ &=\frac{\Omega^{-j}}{(-1-j)!}[(1-q^{-1}\varphi_{L}^{-1})(1-\varphi_{ L})^{-1}(x(\chi_{LT}^{-j})\otimes e_{j}),\widetilde{\log}_{L,V(\chi_{LT}^{-j}), \mathrm{id}}(y_{\chi_{LT}^{-j}})]_{cris}\\ &=[x(\chi_{LT}^{-j})\otimes e_{j},\frac{\Omega^{-j}}{(-1-j)!}(1- \pi_{L}^{-1}\varphi_{L}^{-1})^{-1}(1-\frac{\pi_{L}}{q}\varphi_{L})\widetilde{ \log}_{L,V(\chi_{LT}^{-j}),\mathrm{id}}(y_{\chi_{LT}^{-j}})]_{cris}\end{split}\] Now consider \(V=L(\tau\chi_{LT})\) and \(W=V(\chi_{LT})\). Then the latter satisfies the condition of the Theorem and using Proposition 5.1.1 and Lemma 5.1.2 one easily derives the following interpolation property concerning the former for \(y=-\kappa(u)\), \(u\in U\), and for all \(r\geq 1\): \[\begin{split}\mathbf{L}_{V}(y)(\chi_{LT}^{r}) &=\frac{r!}{\Omega^{r}}\Big{(}(1-\pi_{L}^{-1}\varphi_{L}^{-1})^{-1 }(1-\frac{\pi_{L}}{q}\varphi_{L})\widetilde{\exp}_{L,V(\chi_{LT}^{-r}),\mathrm{ id}}^{*}(y_{\chi_{LT}^{-r}})\Big{)}\otimes e_{r}\\ &=\frac{r!}{\Omega^{r}}(1-\pi_{L}^{-r})^{-1}(1-\frac{\pi_{L}^{r}}{q })\widetilde{\exp}_{L,V(\chi_{LT}^{-r}),\mathrm{id}}^{*}(y_{\chi_{LT}^{-r}}) \otimes e_{r}.\end{split}\] On the other hand we have \(\mathbf{L}_{V}(y)\otimes\mathbf{d}_{1}=\mathcal{L}_{V}(y)\) and hence by the claim concerning (145) \[\mathbf{L}_{V}(y)(\chi_{LT}^{r})\otimes\mathbf{d}_{1}\otimes \eta^{-1}\otimes t_{LT}=\mathcal{L}_{V}(y)(\chi_{LT}^{r})\otimes\eta^{-1}\otimes t_{LT}=\mathcal{L}(-\kappa(u)\otimes\eta^{-1})(\chi_{LT}^{r}),\] whence \[\mathcal{L}(-\kappa(u)\otimes\eta^{-1})(\chi_{LT}^{r})\otimes e_{1-r}=\frac{r!}{\Omega^{r}}(1-\pi_{L}^{-r})^{-1}(1-\frac{\pi_{L}^{r}}{q})\mathrm{exp}_{L,V( \chi_{LT}^{-r}),\mathrm{id}}^{*}(y_{\chi_{LT}^{-r}}).\] This is (143), i.e., together with (144) we have just obtained a new proof of Kato's reciprocity law 5.1.3 and we may consider Theorem 5.2.26 as a vast generalisation of it.

## Appendix A Cup products and local Tate duality

The aim of this appendix is to discuss cup products and to prove Prop. 5.2.19. We fix some open subgroup \(U\subseteq\Gamma_{L}\) and let \(L^{\prime}=L^{U}_{\infty}\). Note that we obtain a decomposition \(U\cong\Delta\times U^{\prime}\) with a subgroup \(U^{\prime}\cong\mathbb{Z}_{p}^{d}\) of \(U\) and \(\Delta\) the torsion subgroup of \(U\). **Lemma A.0.1**.: _Let \(M_{0}\) be a complete linearly topologised \(o_{L}\)-module with continuous \(U\)-action and with a continuous \(U\)-equivariant endomorphism \(f\). Then there is a canonical quasi-isomorphism_ \[\mathcal{T}_{f,U}(M_{0})[\frac{1}{\pi_{L}}]\simeq K^{\bullet}_{f,U^{\prime}}( M_{0}[\frac{1}{\pi_{L}}]^{\Delta}).\] _If \(M_{0}\) is an \(L\)-vector space, the inversion of \(\pi_{L}\) can be omitted on both sides._ Proof.: Let \(\mathcal{C}^{\bullet}_{n}(U,M_{0})\subseteq\mathcal{C}^{\bullet}(U,M_{0})\) denote the subcomplex of normalized cochains.
Since \(\Delta\) is finite, [Th, Thm. 3.7.6] gives a canonical quasi-isomorphism: \[\mathcal{C}^{\bullet}_{n}(U,M_{0})=\mathcal{C}^{\bullet}_{n}(\Delta\times U^{ \prime},M_{0})\xrightarrow{\simeq}\mathcal{C}^{\bullet}_{n}(\Delta,\mathcal{ C}^{\bullet}_{n}(U^{\prime},M_{0})).\] Here we understand the above objects in the sense of hypercohomology as total complexes of the obvious double complexes involved. After inverting \(\pi_{L}\) we may compute the right hand side further as \[\mathcal{C}^{\bullet}_{n}(\Delta,\mathcal{C}^{\bullet}_{n}(U^{\prime},M_{0})) [\frac{1}{\pi_{L}}]=\mathcal{C}^{\bullet}_{n}(\Delta,\mathcal{C}^{\bullet}_{n }(U^{\prime},M_{0})[\frac{1}{\pi_{L}}])\stackrel{{\simeq}}{{ \leftarrow}}\mathcal{C}^{\bullet}_{n}(U^{\prime},M_{0})[\frac{1}{\pi_{L}}]^{ \Delta}=\mathcal{C}^{\bullet}_{n}(U^{\prime},M_{0}^{\Delta})[\frac{1}{\pi_{L} }]\.\] Here the middle quasi-isomorphism comes from the fact that a finite group has no cohomology in characteristic zero. The right hand equality is due to the fact that \(\Delta\) acts on the cochains through its action on \(M_{0}\). Altogether we obtain a natural quasi-isomorphism \[\mathcal{C}^{\bullet}_{n}(U,M_{0})[\frac{1}{\pi_{L}}]\cong\mathcal{C}^{ \bullet}_{n}(U^{\prime},M_{0}^{\Delta})[\frac{1}{\pi_{L}}]\.\] By using [Th, Prop. 3.3.3] we may replace the normalized cochains again by general cochains obtaining the left hand quasi-isomorphism in \[\mathcal{C}^{\bullet}(U,M_{0})[\frac{1}{\pi_{L}}]\cong\mathcal{C}^{\bullet}( U^{\prime},M_{0}^{\Delta})[\frac{1}{\pi_{L}}]\cong K^{\bullet}_{U^{\prime}}(M_{0}^ {\Delta})[\frac{1}{\pi_{L}}]=K^{\bullet}_{U^{\prime}}(M_{0}[\frac{1}{\pi_{L}}] ^{\Delta})\.\] The middle quasi-isomorphism is (169). The claim follows by taking mapping fibres of the attached map \(f-1\) of complexes. **Proposition A.0.2**.: _Let \(M\) be a \(\varphi_{L}\)-module over \(\mathcal{R}=\mathcal{R}_{K}\) (cf. 4.3.3) and \(c\in K^{\times}\). Then \(M/(\psi-c)(M)\) is finite-dimensional over \(K\)._ Proof.: (The proof follows closely the proof of [KPX, Prop. 3.3.2] in the cyclotomic situation) We set \(\psi_{c}:=c^{-1}\psi\) and show that \(M/(\psi_{c}-1)(M)\) is finite-dimensional over \(K\). Choose a model \(M^{[r_{0},1)}\) of \(M\) with \(1>r_{0}>p^{\frac{-1}{(q-1)c}}\) and put \(r=r_{0}^{\frac{1}{q^{2}}}\). Recall that for all \(1>s\geq r\) we have maps \(M^{[s,1)}\xrightarrow{\psi_{c}-1}M^{[s,1)}\) (where strictly speaking we mean \(\psi_{c}\) followed by the corresponding restriction). We first show that it suffices to prove that \(\operatorname{coker}\left(M^{[r,1)}\xrightarrow{\psi_{c}-1}M^{[r,1)}\right)\) has finite dimension over \(K\). Indeed, given any \(m\in M\) we have \(m\in M^{[s,1)}\) for some \(1>s\geqslant r.\) Then there exists \(k\geqslant 0\) such that \(r\geqslant s^{q^{k}}\geqslant r_{0},\) whence \(\psi_{c}^{k}(m)\) belongs to \(M^{[r,1)}\) and represents the same class in \(M/(\psi_{c}-1)(M)\) as \(m.\) Choose a basis \(\mathbf{e}_{1}^{\prime},\ldots,\mathbf{e}_{n}^{\prime}\) of \(M^{[r_{0},1)}\) and take \(\mathbf{e}_{i}:=\varphi(\mathbf{e}_{i}^{\prime})\in M^{[r^{q},1)};\) by the \(\varphi\)-module property the latter elements also form a basis of \(M^{[r^{q},1)}\). 
Note that by base change these two bases also give rise to bases in \(M^{[s,1)}\) for \(1>s\geqslant r^{q}\). Thus we find a matrix \(F^{\prime}\) with entries in \(\mathcal{R}^{[r^{q},1)}\) such that \(\mathbf{e}_{j}=\sum_{i}F_{ij}^{\prime}\mathbf{e}_{i}^{\prime}\) and we put \(F=\varphi(F^{\prime})\) with entries in \(\mathcal{R}^{[r,1)}\), i.e., \(\varphi(\mathbf{e}_{j})=\sum F_{ij}\mathbf{e}_{i}\). Similarly let \(G\) be a matrix with values in \(\mathcal{R}^{[r^{q},1)}\subseteq\mathcal{R}^{[r,1)}\) such that \(\mathbf{e}_{j}^{\prime}=\sum_{i}G_{ij}\mathbf{e}_{i}\) and hence \(\mathbf{e}_{j}=\varphi\left(\sum_{i}G_{ij}\mathbf{e}_{i}\right)\). We identify \(M^{[r,1)}\) with \(\left(\mathcal{R}^{[r,1)}\right)^{n}\) by sending \((\lambda_{i})_{i}\) to \(\sum_{i}\lambda_{i}\mathbf{e}_{i}\) and endow it for each \(r\leqslant s<1\) with the norm given by \(\max_{i}|\lambda_{i}|_{s}\). Note that then the "semi-linear" map \(\psi_{c}\) (followed by the corresponding restriction) on \(\left(\mathcal{R}^{[r,1)}\right)^{n}\) is given by the matrix \(G\) as follows from the projection formula (72): \[\psi_{c}(\sum_{j}\lambda_{j}\mathbf{e}_{j})=\sum_{j}\psi_{c}(\lambda_{j} \varphi(\sum_{i}G_{ij}\mathbf{e}_{i}))=\sum_{i,j}\psi_{c}(\lambda_{j})G_{ij} \mathbf{e}_{i}.\] Moreover, the restriction of \(\varphi:M^{[r,1)}\to M^{[r^{\frac{1}{q}},1)}\) to \(\sum_{i}\mathcal{O}_{K}(\mathbf{B})\mathbf{e}_{i}\) becomes the semi-linear map \((\mathcal{O}_{K}(\mathbf{B}))^{n}\rightarrow\left(\mathcal{R}^{[r,1)}\right) ^{n}\) given by the matrix \(F\). Consider, for \(I\) any subset of the reals \(\mathbf{R}\), the \(K\)-linear map \(P_{I}:\mathcal{R}^{[r,1)}\rightarrow\mathcal{R}^{[r,1)},\sum a_{i}Z^{i} \mapsto\sum_{i\in\mathbf{Z}\cap I}a_{i}Z^{i}\). We then introduce \(K\)-linear operators \(P_{I}\) and \(Q_{k}\), \(k\geqslant 0\), on \(M^{[r,1)}\) by \[\begin{split}P_{I}((\lambda_{i})_{i})&:=(P_{I}(\lambda_{i}))_{i},\\ Q_{k}&:=P_{(-\infty,-k)}-\frac{c\pi_{L}}{q}\varphi\circ P_{(k,\infty)},\ \text{i.e.,}\\ Q_{k}((\lambda_{i})_{i})&=(P_{(-\infty,-k)}(\lambda_{i}))_{i}-\frac{c\pi_{L}}{q}F\cdot( \varphi(P_{(k,\infty)}(\lambda_{i})))_{i},\end{split}\] which makes sense because \(P_{(k,\infty)}\) factorises through \(\mathcal{O}_{K}(\mathbf{B})\). Then the \(K\)-linear operator \(\Psi_{k}:=\operatorname{id}-P_{[-k,k]}+(\psi_{c}-1)Q_{k}\) of \(M^{[r,1)}\) satisfies \[\begin{split}\Psi_{k}&=\psi_{c}\circ P_{(-\infty,-k)}+\frac{c\pi_{L}}{q}\varphi\circ P_{ (k,\infty)},\ \text{i.e.,}\\ \Psi_{k}((\lambda_{i})_{i})&=G\cdot(\psi_{c}(P_{(-\infty,-k)}(\lambda_{i})))_{i}+F\cdot(\frac{ c\pi_{L}}{q}\varphi(P_{(k,\infty)}(\lambda_{i})))_{i},\end{split}\] whence its operator norm satisfies \[\|\Psi_{k}\|_{s}\leqslant\max\{\|G\|_{s}\|\psi_{c}\circ P_{(-\infty,-k)}\|_{s },|\tfrac{c\pi_{L}}{q}|\,\|F\|_{s}\|\varphi\circ P_{(k,\infty)}\|_{s}\}.\] It is easy to check that, for \(1>s>q^{\frac{-1}{q-1}}\), we have \(\|\varphi\circ P_{(k,\infty)}\|_{s}\leqslant|Z|_{s}^{(q-1)k}=s^{(q-1)k}\) (using the norm relation after (56)) and \(\|\psi_{c}\circ P_{(-\infty,-k)}\|_{s}\leqslant C_{s}s^{k(1-q^{-1})}\) for some constant \(C_{s}>0\). E.g.
for the latter we have for \(\lambda=\sum_{i}a_{i}Z^{i}\in\mathcal{R}^{[r,1)}\) \[\begin{split}|\sum_{i<-k}\psi_{c}(a_{i}Z^{i})|_{s} &\leqslant\sup_{i<-k}|a_{i}|\,|\psi_{c}(Z^{i})|_{s}\\ &\leqslant\sup_{i<-k}|a_{i}|\,C_{s}s^{\frac{i}{q}}\\ &\leqslant C_{s}\sup_{i<-k}|a_{i}|\,|Z|_{s}^{i}\,s^{i(q^{-1}-1)}\\ &\leqslant C_{s}|\lambda|_{s}s^{-k(q^{-1}-1)},\end{split}\] where we use that by continuity of \(\psi_{c}\) there exists \(C_{s}\) such that \[|\psi_{c}(Z^{i})|_{s}\leq C_{s}|Z^{i}|_{s^{\frac{1}{q}}}=C_{s}s^{\frac{i}{q}}.\] Thus we may and do choose \(k\) sufficiently big such that \(\|\Psi_{k}\|_{r}\leq\frac{1}{2}\). Given \(m_{0}\in M^{[r,1)}\) we define inductively \(m_{i+1}:=\Psi_{k}(m_{i})\). This sequence obviously converges to zero with respect to the \(r\)-Gauss-norm. We shall show below that also for all \(s\in(r^{\frac{1}{q}},1)\) the sequence \((m_{i})_{i}\) tends to zero with respect to the Gauss norm \(|\ |_{s}\), i.e., by cofinality the sum \(m:=\sum_{i\geq 0}m_{i}\) converges in \(M^{[r,1)}\) for the Fréchet topology and satisfies \[m-m_{0}=m-P_{[-k,k]}(m)+(\psi_{c}-1)Q_{k}(m),\] i.e. \(P_{[-k,k]}(m)\) represents the same class as \(m_{0}\) in \(M^{[r,1)}/(\psi_{c}-1)(M)\). Since the image of \(P_{[-k,k]}\) has finite dimension, the proposition follows, once we have shown the following _Claim:_ For all \(s\in(r^{\frac{1}{q}},1)\) we have \[|\Psi_{k}(m)|_{s}\leq\max\{\frac{1}{2}|m|_{s},\;C_{s}\|G\|_{s}\left(\frac{s^{ \frac{1}{q}}}{r}\right)^{-k}|m|_{r},\;|\tfrac{c\pi_{L}}{q}|\,\|F\|_{s}\left(\frac{s^{q} }{r}\right)^{k^{\prime}}|m|_{r}\}.\] Indeed, we fix such \(s\) and may choose \(k^{\prime}\geq k\) such that \(\|\Psi_{k^{\prime}}\|_{s}\leq\frac{1}{2}\). Then \(\Psi_{k}=\Psi_{k^{\prime}}+\psi_{c}\circ P_{[-k^{\prime},-k)}+\frac{c\pi_{L}}{q} \varphi\circ P_{(k,k^{\prime}]}\), whence the claim as for \(\lambda\in\mathcal{R}^{[r,1)}\) \[\begin{split}|\psi_{c}\circ P_{[-k^{\prime},-k)}(\lambda)|_{s} &\leq C_{s}\left(\frac{s^{\frac{1}{q}}}{r}\right)^{-k}|\lambda|_{r}\\ |\varphi\circ P_{(k,k^{\prime}]}(\lambda)|_{s} &\leq\left(\frac{s^{q}}{r}\right)^{k^{\prime}}|\lambda|_{r}\end{split}\] by similar estimates as above. **Remark A.0.3**.: _This result answers the expectation from [BF, Remark 2.3.7] positively._ **Corollary A.0.4**.: _Let \(V^{*}(1)\) be \(L\)-analytic and \(M:=D^{\dagger}_{rig}(V^{*}(1))\)._ 1. _The cohomology group_ \(h^{2}(K^{\bullet}_{\psi,U^{\prime}}(M^{\Delta})[d-1])\) _is finite dimensional over_ \(L\)_._ 2. _We have isomorphisms_ \[h^{1}(K^{\bullet}_{\psi,U^{\prime}}(D^{\dagger}_{rig}(V(\tau^{-1 }))^{\Delta})[d-1])^{*} \cong h^{1}(K^{\bullet}_{\varphi,U^{\prime}}(M^{\Delta}))\] \[\cong H^{1}_{\dagger}(L^{\prime},V^{*}(1)),\] _and_ \[h^{2}(K^{\bullet}_{\psi,U^{\prime}}(D^{\dagger}_{rig}(V(\tau^{-1 }))^{\Delta})[d-1])^{*} \cong h^{0}(K^{\bullet}_{\varphi,U^{\prime}}(M^{\Delta}))\] \[=(V^{*}(1))^{G_{L^{\prime}}}.\] Proof.: (i) Since \(h^{2}(K^{\bullet}_{\psi,U^{\prime}}(M^{\Delta})[d-1])\) is a quotient of \((M/(\psi-1)(M))^{\Delta}\) by (176) this follows from the Proposition. (ii) We are in the situation of Remark 5.2.6 (i) with regard to \(\mathcal{C}=K^{\bullet}_{\psi,U^{\prime}}(D^{\dagger}_{rig}(V(\tau^{-1}))^{ \Delta})[d-1]\) and \(i=2,3\) in the notation of the remark. Indeed, \(h^{3}(\mathcal{C})=\mathcal{C}^{4}=0\) by construction and \(\mathcal{C}^{3}=0\) as well, and \(h^{2}(\mathcal{C})\) is finite by (i). Hence the first isomorphism follows in both cases from (173) using the reflexivity of \(M\). The second isomorphisms arise by Lemma A.0.1 together with (165) and (164), respectively.
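For the reader's convenience we also record the verification of the alternative expression for \(\Psi_{k}\) used in the proof of Proposition A.0.2 above (a direct computation, assuming the normalisation \(\psi\circ\varphi=\frac{q}{\pi_{L}}\cdot\mathrm{id}\) underlying the projection formula (72), so that \(\frac{c\pi_{L}}{q}\,\psi_{c}\circ\varphi=\mathrm{id}\)):

\[\begin{split}\Psi_{k}&=\operatorname{id}-P_{[-k,k]}+(\psi_{c}-1)Q_{k}\\ &=P_{(-\infty,-k)}+P_{(k,\infty)}+\psi_{c}\circ P_{(-\infty,-k)}-P_{(-\infty,-k)}-\tfrac{c\pi_{L}}{q}\,\psi_{c}\varphi\circ P_{(k,\infty)}+\tfrac{c\pi_{L}}{q}\,\varphi\circ P_{(k,\infty)}\\ &=\psi_{c}\circ P_{(-\infty,-k)}+\tfrac{c\pi_{L}}{q}\,\varphi\circ P_{(k,\infty)},\end{split}\]

using \(\operatorname{id}-P_{[-k,k]}=P_{(-\infty,-k)}+P_{(k,\infty)}\) in the second line and the above cancellation in the third.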
We quickly discuss the analogues of some results of §I.6 in [ChCo2]. First we remind the reader of the definition of \(\tilde{\mathbf{A}}:=W(\mathbb{C}_{p}^{\flat})_{L}\), \[\tilde{\mathbf{A}}^{\dagger}:=\{x=\sum_{n\geq 0}\pi_{L}^{n}[x_{n}]\in\tilde{ \mathbf{A}}:|\pi_{L}|^{n}|x_{n}|_{\flat}^{r}\xrightarrow{n\to\infty}0\text{ for some }r>0\},\] \(\mathbf{A}^{\dagger}:=\tilde{\mathbf{A}}^{\dagger}\cap\mathbf{A}\)30 and \(\mathbf{A}^{\dagger}_{L}:=(\mathbf{A}^{\dagger})^{H_{L}}\). **Remark A.0.5**.: _There is also the following more concrete description for \(\mathbf{A}^{\dagger}_{L}\) in terms of Laurent series in \(\omega_{LT}\):_ \[\mathbf{A}^{\dagger}_{L}=\{F(\omega_{LT})\in\mathbf{A}_{L}\,|\,F(Z)\text{ converges on }\rho\leq|Z|<1\text{ for some }\rho\in(0,1)\}\subseteq\mathbf{A}_{L}.\] _Indeed this follows from the analogue of [ChCo1, Lem. II.2.2] upon noting that the latter holds with and without the integrality condition: "\(rv_{p}(a_{n})+n\geq 0\) for all \(n\in\mathbb{Z}\)" (for \(r\in\overline{\mathbf{R}}\backslash\mathbf{R}\)) in the notation of that article. 31 In particular we obtain canonical embeddings \(\mathbf{A}^{\dagger}_{L}\subseteq\mathbf{B}^{\dagger}_{L}\hookrightarrow \mathcal{R}_{L}\) of rings._ Footnote 30: In the literature one also finds the subring \(\tilde{\mathbf{A}}^{\dagger}_{\leq 1}:=\bigcup_{r>0}W^{r}_{\leq 1}( \mathbb{C}_{p}^{\flat})_{L}\) of \(\tilde{\mathbf{A}}^{\dagger}\) where \(W^{r}_{\leq 1}(\mathbb{C}_{p}^{\flat})_{L}=\{x\in W^{r}(\mathbb{C}_{p}^{\flat})_{L} \ |\ |x|_{r}\leq 1\}\) consists of those \(x\in\tilde{\mathbf{A}}\) such that \(|\pi_{L}|^{n}|x_{n}|_{\flat}^{r}\xrightarrow{n\to\infty}0\) and \(|\pi_{L}|^{n}|x_{n}|_{\flat}^{r}\leq 1\) for all \(n\). Denoting by \(\tilde{\mathbf{A}}^{\dagger,s}_{St}\) the ring defined in [Ste, Def. 3.4] we have the equality \(W^{r}_{\leq 1}(\mathbb{C}_{p}^{\flat})_{L}=\tilde{\mathbf{A}}^{\dagger,\frac{ \alpha}{q^{n}}}_{St}\) corresponding to \(\tilde{\mathbf{A}}^{\dagger,s}_{(\frac{q-1}{q^{n}})}\) in the notation of [ChCo1, §II.1] for \(q=p\). For these relations use the following normalisations compatible with [Ste]: \(|\pi_{L}|=\frac{1}{q}\), \(v_{\mathbb{C}_{p}^{\flat}}(\omega)=\frac{q}{q-1}\), \(v_{\pi_{L}}(\pi_{L})=1\), \(|x|_{\flat}=q^{-v_{\mathbb{C}_{p}^{\flat}}(x)}\), \(|\omega|_{\flat}=q^{-\frac{q}{q-1}}=|\pi_{L}|^{\frac{q}{q-1}}\), where \(\omega=\omega_{LT}\mod\pi_{L}\) as in section 3.1. Furthermore, \(|x|_{r}=|\pi_{L}|^{V(x,\frac{q-1}{rq})}\) and \(|\omega_{LT}|_{r}=|\pi_{L}|^{\frac{q}{q-1}}=|\omega|_{r}^{\frac{q}{q-1}}\), where \(V(x,r):=\inf_{k}\left(v_{\mathbb{C}_{p}^{\flat}}(x_{k})\frac{q-1}{rq}+k\right)\) for \(x=\sum_{k\geq 0}\pi_{L}^{k}[x_{k}]\in\tilde{\mathbf{A}}\). For \(x\in\tilde{\mathbf{A}}^{\dagger}\) we have \(V(x,r)=\frac{q-1}{rq}V_{St}(x,r)\) where \(V_{St}(x,r)\) uses the notation in [Ste]. Note also that \(\omega_{LT}^{-1}\) is contained in \(W^{\frac{q-1}{q}}_{\leq 1}(\widehat{L_{\infty}}^{\flat})_{L}\) by [Ste, Lem. 3.10] (in analogy with [ChCo1, Cor. II.1.5]). Now consider the subring \(A=\mathbf{A}^{+}_{L}[[\frac{\pi_{L}}{Z^{q-1}}]]=\{x=\sum_{k}a_{k}Z^{k}\in \mathbf{A}_{L}\,|\,v_{\pi_{L}}(a_{k})\geq-\frac{k}{q-1}\}\subseteq\mathbf{A}_{L}\). For \(x\in\mathbf{A}_{L}\) and each integer \(n\geq 0\), we define \(w_{n}(x)\) to be the smallest integer \(k\geq 0\) such that \(x\in Z^{-k}A+\pi_{L}^{n+1}\mathbf{A}_{L}\).
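For instance (a small sanity check of the definition, not taken from [ChCo2]): for \(x=Z^{-k_{1}}\) with \(k_{1}\geq 0\) one has

\[w_{n}(Z^{-k_{1}})=k_{1}\quad\text{for every }n\geq 0,\]

since \(Z^{-k_{1}}\in Z^{-k}A\) if and only if \(Z^{k-k_{1}}\in A\), i.e. if and only if \(k\geq k_{1}\), and subtracting an element of \(\pi_{L}^{n+1}\mathbf{A}_{L}\) cannot lower the valuation of the coefficient at \(Z^{-k_{1}}\) below \(0\). Accordingly \(w_{m}(Z^{-k_{1}})-l(m,n)=k_{1}-l(m,n)\to-\infty\) as \(m\to\infty\), consistent with \(Z^{-k_{1}}\in\mathbf{A}^{\dagger,n}_{L}\) and the criterion of Lemma A.0.6 (ii) below.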
It satisfies \(w_{n}(x+y)\leq\max\{w_{n}(x),w_{n}(y)\}\) and \(w_{n}(xy)\leq w_{n}(x)+w_{n}(y)\) (since \(A\) is a ring) and \(w_{n}(\varphi(x))\leq qw_{n}(x)\) (use that \(\frac{\varphi(Z)}{Z^{q}}\in A^{\times}\), whence \(\varphi(Z^{-k})A=Z^{-qk}A\)). Set for \(n\geq 2,m\geq 0\) the integers \(r(n):=(q-1)q^{n-1}\), \(l(m,n)=m(q-1)(q^{n-1}-1)=m(r(n)-(q-1))\) and define \(\mathbf{A}^{\dagger,n}_{L}=\{x=\sum_{k}a_{k}Z^{k}\in\mathbf{A}_{L}\,|\,v_{\pi_{L}}(a _{k})+\frac{k}{r(n)}\to\infty\text{ for }k\to-\infty\}\). Then, by Remark A.0.5 and the footnotes 30 and 31 we obtain that \(\mathbf{A}^{\dagger}_{L}=\bigcup_{n\geq 2}\mathbf{A}^{\dagger,n}_{L}\). **Lemma A.0.6**.: _Let \(x=\sum_{k}a_{k}Z^{k}\in\mathbf{A}_{L}\) and \(l\geqslant 0,n\geqslant 2\). Then_ 1. _we have_ (194) \[w_{m}(x)\leqslant l\Leftrightarrow v_{\pi_{L}}(a_{k})\geqslant\min\{m+1,-\frac{ k+l}{q-1}\}\text{ for }k<-l.\] 2. \(x\in\mathbf{A}_{L}^{\dagger,n}\) _if and only if_ \(w_{m}(x)-l(m,n)\) _goes to_ \(-\infty\) _when_ \(m\) _runs to_ \(\infty\)_._ Item (ii) of the Lemma is an analogue of [13, Prop. III 2.1 (ii)] for \(\mathbf{A}_{L}^{\dagger,n}\) instead of \(\mathbf{A}_{L,\leqslant 1}^{\dagger,n}=\{x=\sum_{k}a_{k}Z^{k}\in\mathbf{A}_{L}^{ \dagger,n}\,|\,v_{\pi_{L}}(a_{k})+\frac{k}{r(n)}\geqslant 0\text{ for all }k\leqslant 0\}\). Proof.: (i) follows from the fact that \(x\in Z^{-l}A\) if and only if \(v_{\pi_{L}}(a_{k})\geqslant-\frac{k+l}{q-1}\) for \(k<-l\). (ii) Let \(M,N=M(q-1)\gg 0\) be arbitrarily large integers and assume first that \(x\in\mathbf{A}_{L}^{\dagger,n}\). Then \[w_{m}(x)-l(m,n)\leqslant-N \tag{195}\] is equivalent to \[v_{\pi_{L}}(a_{k})\geqslant\min\{m+1,-\frac{k+l(m,n)-N}{q-1}\}\text{ for }k<-l(m,n)+N \tag{196}\] by (i). To verify this relation for \(m\) sufficiently large, we choose a \(k_{0}\in\mathbb{Z}\) such that \(v_{\pi_{L}}(a_{k})+\frac{k}{r(n)}\geqslant N\geqslant 0\) for all \(k\leqslant k_{0}\). Now choose \(m_{0}\) with \(-l(m_{0},n)<k_{0}\) and fix \(m\geqslant m_{0}\). For \(-\frac{k}{r(n)}>m\) we obtain \(v_{\pi_{L}}(a_{k})\geqslant m+1\), because \(k<k_{0}\) holds. For \(k\) with \[k\geqslant-r(n)m\Leftrightarrow\frac{k}{r(n)}-\frac{k+l(m,n)}{q-1}\leqslant 0 \tag{197}\] we obtain \(v_{\pi_{L}}(a_{k})\geqslant-\frac{k+l(m,n)-N}{q-1}\). Thus the above relation holds true. Conversely, choose \(m_{0}\) such that (195) holds for all \(m\geqslant m_{0}\), and fix \[k\leqslant k_{0}:=-r(n)\max\{Mq^{n-1},m_{0}\}.\] Let \(m_{1}\) be the unique integer satisfying \(r(n)M-k\geqslant r(n)m_{1}\geqslant r(n)M-k-r(n)\). In particular, we have \(m_{1}+1+\frac{k}{r(n)}\geqslant M\) and \(k\geqslant-r(n)m_{1}\), which implies \(-\frac{k+l(m_{1},n)-N}{q-1}+\frac{k}{r(n)}\geqslant M\) by (197). Moreover, it holds \(m_{1}\geqslant m_{0}\) and \(k<-l(m_{1},n)+N\) (using \(k\leqslant r(n)M-r(n)m_{1}=-l(m_{1},n)+q^{n-1}N-(q-1)m_{1}\) and \(m_{1}>(q^{n-1}-1)M\) by our assumption on \(k\)). Hence we can apply (196) to conclude \(v_{\pi_{L}}(a_{k})+\frac{k}{r(n)}\geqslant M\) as desired. The analogue of Lemma 6.2 in (loc. cit.) holds by the discussion in [14] after Remark 2.1.
This can be used to show the analogue of Corollary 6.3, viz \(w_{n}(\psi(x))\leqslant 1+\frac{w_{n}(x)}{q}.\) Now fix a basis \((e_{1},\ldots,e_{d})\) of \(D(T)\) over \(\mathbf{A}_{L}\) and denote by \(\Phi=(a_{ij})\) the matrix defined by \(e_{j}=\sum_{i=1}^{d}a_{ij}\varphi(e_{i}).\) The proof of Lemma 6.4 then carries over to show that for \(x=\psi(y)-y\) with \(x,y\in D(T)\) we have \[w_{n}(y)\leqslant\max\{w_{n}(x),\frac{q}{q-1}\left(w_{n}(\Phi)+1\right)\}, \tag{198}\] where \(w_{n}(\Phi)=\max_{ij}w_{n}(a_{ij})\) and \(w_{n}(a)=\max_{i}w_{n}(a_{i})\) for \(a=\sum_{i=1}^{d}a_{i}\varphi(e_{i})\) with \(a_{i}\in\mathbf{A}_{L}.\) **Lemma A.0.7**.: _Let \(T\in\mathrm{Rep}_{o_{L}}(G_{L})\) such that \(T\) is free over \(o_{L}\) and \(V=T\otimes_{o_{L}}L\) is overconvergent. Then the canonical map \(D^{\dagger}(T)\to D(T)\) induces an isomorphism \(D^{\dagger}(T)/(\psi-1)(D^{\dagger}(T))\cong D(T)/(\psi-1)(D(T)).\)_ Proof.: We follow closely the proof of [Li, Lem. 2.6], but note that he claims the statement for \(D^{\dagger}_{\leqslant 1}(T).\) Choose a basis \(e_{1},\ldots,e_{d}\) of the \(\mathbf{A}^{\dagger}_{L}\)-module \(D^{\dagger}(T),\) which is free because \(\mathbf{A}^{\dagger}_{L}\) is a henselian discrete valuation ring with respect to the uniformiser \(\pi_{L},\) compare with [Ked15, Def. 2.1.4]. Since \(V\) is overconvergent it is also a basis of \(D(T).\) Due to étaleness and since \(\mathbf{A}^{\dagger}_{L}\cap\mathbf{A}^{\times}_{L}=(\mathbf{A}^{\dagger}_{L})^{\times}\), also \(\varphi(e_{1}),\ldots,\varphi(e_{d})\) is a basis of all these modules. Given \(x=\psi(y)-y\) with \(x\in D^{\dagger}(T)\) and \(y\in D(T)\), there is an \(m>0\) such that all \(x_{i},a_{ij}\) lie in \(\mathbf{A}^{\dagger,m}_{L}.\) Since \(q\geqslant 2\) it follows from the criterion in Lemma A.0.6 (ii) combined with (198) that all \(y_{i}\) belong to \(\mathbf{A}^{\dagger,m+1}_{L},\) whence \(y\in D^{\dagger}(T).\) This shows injectivity. In order to show surjectivity we apply Nakayama's Lemma with regard to the ring \(o_{L}\), upon recalling that \(D(T)/(\psi-1)\) is of finite type over it.
Indeed, by left exactness of \(D^{\dagger}\) we obtain \(D^{\dagger}(T)/\pi_{L}D^{\dagger}(T)\subseteq D^{\dagger}(T/\pi_{L}T)=D(T/\pi_{L}T).\) Since these are vector spaces over \(\mathbf{E}_{L}\) of the same dimension, they are equal, whence \[(D^{\dagger}(T)/(\psi-1))/(\pi_{L})=(D^{\dagger}(T)/(\pi_{L}))/(\psi-1)=(D(T)/(\pi_{L}))/(\psi-1)=(D(T)/(\psi-1))/(\pi_{L}).\] **Corollary A.0.8**.: _Under the assumption of Lemma 3.3.6 for \(V(\tau^{-1})\), the inclusion of complexes_ \[K^{\bullet}_{\psi,U^{\prime}}(D^{\dagger}(V(\tau^{-1}))^{\Delta})\subseteq K^{\bullet}_{\psi,U^{\prime}}(D(V(\tau^{-1}))^{\Delta})\] _is a quasi-isomorphism._ Proof.: Forming Koszul complexes with regard to \(U^{\prime}\) we obtain the following diagram of (double) complexes with exact columns, in which the bottom line is an isomorphism of complexes because under the assumptions \(\psi-1\) induces an automorphism of \(D(V(\tau^{-1}))/D^{\dagger}(V(\tau^{-1}))\) and the action of \(\Delta\) commutes with \(\psi\):

[diagram missing in the source]

Hence, going over to total complexes gives an exact sequence \[0\to K^{\bullet}_{\psi,U^{\prime}}(D^{\dagger}(V(\tau^{-1}))^{\Delta})\to K^{\bullet}_{\psi,U^{\prime}}(D(V(\tau^{-1}))^{\Delta})\to K^{\bullet}_{\psi,U^{\prime}}((D(V(\tau^{-1}))/D^{\dagger}(V(\tau^{-1})))^{\Delta})\to 0,\] in which \(K^{\bullet}_{\psi,U^{\prime}}((D(V(\tau^{-1}))/D^{\dagger}(V(\tau^{-1})))^{\Delta})\) is acyclic, whence the statement follows. **Remark A.0.9**.: _Instead of using Lemma 3.3.6 (for crystalline, analytic representations) one can probably show by the same techniques as in [ChCo2, Prop. III.3.2(ii)] that for any overconvergent representation \(V\) we have \(D^{\dagger}(V)^{\psi=1}=D(V)^{\psi=1}.\)_ The interest in the following diagram, the commutativity of which is shown before Lemma B.0.5, stems from the discrepancy that the reciprocity law has been formulated and proved in the setting of \(K_{\psi,U^{\prime}}^{\bullet}(D_{rig}^{\dagger}(V(\tau^{-1}))^{\Delta})[d-1]\) while the regulator map originally lives in the setting of \(K_{\psi,U^{\prime}}^{\bullet}(D(V(\tau^{-1}))^{\Delta})[d-1]\):

(199) [Diagram (199) is garbled beyond recovery in the source; it is a diagram of cup-product pairings \(\cup_{G_{L^{\prime}}}\) whose first entry is \(\mathcal{C}^{\bullet}(G_{L^{\prime}},V^{*}(1))\). The text leading up to the commutative diagram (201), referred to below, is likewise lost; judging from those references, the upper line of (201) is the duality isomorphism for \(R\Gamma_{Iw}(L_{\infty}/L^{\prime},T)\) and its lower line a cup product pairing.]

By [FK, Prop. 1.6.5 (3)] (see also [Ne, (8.4.8.1)]) we have a canonical isomorphism \[o_{L}\otimes^{\mathbb{L}}_{\Lambda(U)}R\Gamma(L^{\prime},\mathbb{T})\cong R\Gamma(L^{\prime},o_{L}\otimes_{\Lambda(U)}\mathbb{T})\cong R\Gamma(L^{\prime},T) \tag{202}\] where we denote by \(R\Gamma(L^{\prime},-)\) the complex \(\mathcal{C}^{\bullet}(G_{L^{\prime}},-)\) regarded as an object of the derived category. Dually, by a version of Hochschild-Serre, there is a canonical isomorphism \[R\operatorname{Hom}_{\Lambda}(o_{L},R\Gamma(L^{\prime},\mathbb{T}^{\vee}(1)))\cong R\Gamma(L^{\prime},T^{\vee}(1)). \tag{203}\] It follows that the isomorphism \[R\Gamma_{Iw}(L_{\infty}/L^{\prime},T)\cong R\operatorname{Hom}_{o_{L}}(R\Gamma(L^{\prime},\mathbb{T}^{\vee}(1)),L/o_{L})[-2]\] induced by the upper line of (201) induces an isomorphism \[o_{L}\otimes^{\mathbb{L}}_{\Lambda(U)}R\Gamma_{Iw}(L_{\infty}/L^{\prime},T)\cong R\operatorname{Hom}_{o_{L}}(R\operatorname{Hom}_{\Lambda}(o_{L},R\Gamma(L^{\prime},\mathbb{T}^{\vee}(1))),L/o_{L})[-2], \tag{204}\] which is compatible with the lower cup product pairing in (201) via the canonical identifications (202) and (203). **Lemma B.0.1**.: _There is a canonical isomorphism \(R\Gamma(L^{\prime},\mathbb{T}^{\vee}(1))\cong\mathcal{T}_{\varphi}(D(T^{\vee}(1)))\) in the derived category._ Proof.: See [KV, Thm. 5.1.11]. For the rest of this section we assume that \(U\subseteq\Gamma_{L}\) is an open _torsionfree_ subgroup. **Lemma B.0.2**.: _Let \(T\) be in \(\operatorname{Rep}_{o_{L}}(G_{L})\) of finite length. Set \(\Lambda:=\Lambda(U)\) and let \(\gamma_{1},\dots,\gamma_{d}\) be topological generators of \(U\). Then we have an (up to signs canonical) isomorphism of complexes_ \[\operatorname{Hom}^{\bullet}_{\Lambda}(K_{\bullet}(\gamma),\mathcal{T}_{\varphi}(D(T^{\vee}(1))))^{\vee}[-2]\cong\operatorname{tot}\big{(}\mathcal{T}_{\psi}(D(T(\tau^{-1})))[-1]\otimes_{\Lambda}K_{\bullet}(\gamma^{-1})(\Lambda)^{\bullet}\big{)}\] _where \(-^{\vee}\) denotes forming the Pontrjagin dual._ Proof.: Upon noting that \(\mathcal{T}_{\varphi}(D(T^{\vee}(1)))^{\vee}[-2]\cong\mathcal{T}_{\psi}(D(T(\tau^{-1})))[-1]\) (canonically up to a sign!) this is easily reduced to the following statement \[\operatorname{Hom}^{\bullet}_{\Lambda}(K_{\bullet}(\gamma),M)^{\vee}\cong M^{\vee}\otimes_{\Lambda}K_{\bullet}(\gamma^{-1})(\Lambda)^{\bullet},\] which can be proved in the same formal way as (155), and a consideration of signs.
**Remark B.0.3**.: _For every \(M\in\mathfrak{M}(\mathbf{A}_{L})\) we have a canonical isomorphism_ \[\operatorname{Hom}^{\bullet}_{\Lambda}(K^{U}_{\bullet},\mathcal{T}_{\varphi}(M))\cong K_{\varphi,U}(M)\] _up to the sign \((-1)^{n}\) in degree \(n\), and a non-canonical isomorphism_ \[\operatorname{tot}\big{(}\mathcal{T}_{\psi}(M)[-1]\otimes_{\Lambda}K^{U}_{\bullet}(\Lambda)^{\bullet}\big{)}\cong K_{\psi,U}(M)[d-1]\] _(involving the self-duality of the Koszul complex). Here, the right hand sides are formed with respect to the same sequence of topological generators as the left hand sides._ Proof.: By our conventions in section 5.2.1, \(K_{\varphi,U}(M)\) is the total complex of the double complex \(\operatorname{Hom}^{\bullet}(K_{\bullet}(\Lambda)^{\bullet},M)\xrightarrow{1-\varphi_{\bullet}}\operatorname{Hom}^{\bullet}(K_{\bullet}(\Lambda)^{\bullet},M).\) A comparison with the total Hom-complex (with the same sign rules as in section 5.2.1) shows the first claim. For the second statement we have \[\operatorname{tot}\left(\mathcal{T}_{\psi}(M)[-1]\otimes_{\Lambda}K_{\bullet}(\Lambda)^{\bullet}\right) \cong\operatorname{tot}\left(\mathcal{T}_{\psi}(M)\otimes_{\Lambda}K_{\bullet}(\Lambda)^{\bullet}\right)[-1]\] \[=\operatorname{tot}\left(\mathcal{T}_{\psi}\left(M\otimes_{\Lambda}K_{\bullet}(\Lambda)^{\bullet}\right)\right)[-1]\] \[\cong\operatorname{tot}\left(\mathcal{T}_{\psi}\left(M\otimes_{\Lambda}K^{\bullet}(\Lambda)[d]\right)\right)[-1]\] \[=\operatorname{tot}\left(\mathcal{T}_{\psi}\left(K^{\bullet}(M)[d]\right)\right)[-1]\] \[=\operatorname{cone}\left(K_{U}^{\bullet}(M)[d]\xrightarrow{1-\psi}K_{U}^{\bullet}(M)[d]\right)[-2]\] \[\cong K_{\psi,U}(M)[d-1].\] The first isomorphism involves a sign on \(\mathcal{T}_{\psi}^{1}(M).\) The third isomorphism stems from (154) while the last isomorphism again involves signs. **Theorem B.0.4**.: _There are canonical isomorphisms_ \[R\Gamma_{Iw}(L_{\infty}/L,T) \cong\mathcal{T}_{\psi}\left(D(T(\tau^{-1}))\right)[-1] \tag{205}\] \[K_{\psi,U}(D(T(\tau^{-1})))[d-1] \xrightarrow{\simeq}R\Gamma(L^{\prime},T) \tag{206}\] _in the derived category \(D_{perf}(\Lambda_{o_{L}}(\Gamma_{L}))\) of perfect complexes and in the derived category \(D^{+}(o_{L})\) of bounded below cochain complexes of \(o_{L}\)-modules, respectively._ Proof.: The first isomorphism is [KV, Thm. 5.2.54] while the second one follows from this and (202) as \[R\Gamma_{Iw}(L_{\infty}/L,T)\otimes_{\Lambda_{o_{L}}(U)}^{\mathbb{L}}o_{L} \cong\mathcal{T}_{\psi}\left(D(T(\tau^{-1}))\right)[-1]\otimes_{\Lambda}^{\mathbb{L}}K_{\bullet}(\Lambda)^{\bullet}\] \[=\operatorname{tot}\left(\mathcal{T}_{\psi}\left(D(T(\tau^{-1}))\right)[-1]\otimes_{\Lambda}K_{\bullet}(\Lambda)^{\bullet}\right)\] \[=K_{\psi,U}(D(T(\tau^{-1})))[d-1].\] by Remark B.0.3. By Lemma B.0.2 and Remark B.0.3 we see that, for \(T\) in \(\operatorname{Rep}_{o_{L}}(G_{L})\) of finite length, \[K_{\varphi,U}(D(T^{\vee}(1)))=R\operatorname{Hom}_{\Lambda}(o_{L},\mathcal{T}_{\varphi}(D(T^{\vee}(1))))[2] \tag{207}\] is dual to \[K_{\psi,U}(D(T(\tau^{-1})))=o_{L}\otimes_{\Lambda(U)}^{\mathbb{L}}\mathcal{T}_{\psi}(D(T(\tau^{-1})))[-1], \tag{208}\] such that the upper rectangle in the diagram (199) commutes by (204), taking inverse limits and inverting \(\pi_{L}\).
**Lemma B.0.5**.: _Let \(T\) be in \(\operatorname{Rep}_{o_{L}}(G_{L}).\) Then the left rectangle in (193) is commutative._ Proof.: (Sketch) By an obvious analogue of Remark 5.2.17 it suffices to show the statement for \(U=\Gamma_{n}\cong\mathbb{Z}_{p}^{d}.\) In this situation we have a homological spectral sequence \[H_{i,cts}(U,H_{Iw}^{-j}(L_{\infty}/L,T))\Longrightarrow H_{cts}^{-i-j}(L^{\prime},T)\] which is induced by (202); see [Ne, (8.4.8.1)] for the statement and missing notation. We may and do assume that \(T\) is of finite length. Then, on the one hand, the map \(H^{1}_{Iw}(L_{\infty}/L,T)\stackrel{cor}{\longrightarrow}H^{1}(L^{\prime},T)\) is dual to \(H^{1}(L^{\prime},T^{\vee}(1))\stackrel{res}{\longrightarrow}H^{1}(L_{\infty},T^{\vee}(1))\), which sits in the five term exact sequence of lower degrees associated with the Hochschild-Serre spectral sequence. As explained just before this lemma, the above homological spectral sequence arises from the latter by dualizing. Hence \(cor\) shows up in the five term exact sequence of lower degrees associated with this homological spectral sequence. On the other hand, via the isomorphisms (202) and (206) the latter spectral sequence is isomorphic to \[H_{i,cts}(U,h^{-j}(\mathcal{T}_{\psi}\left(D(T(\tau^{-1}))\right)[-1]))\Longrightarrow h^{-i-j}(K_{\psi,U}(D(T(\tau^{-1})))[d-1])\] and one checks by inspection that \(cor\) corresponds to \(pr_{U}\).
2305.12820
MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering
Recent advances in tabular question answering (QA) with large language models are constrained in their coverage and only answer questions over a single table. However, real-world queries are complex in nature, often over multiple tables in a relational database or web page. Single table questions do not involve common table operations such as set operations, Cartesian products (joins), or nested queries. Furthermore, multi-table operations often result in a tabular output, which necessitates table generation capabilities of tabular QA models. To fill this gap, we propose a new task of answering questions over multiple tables. Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers. To enable effective training, we build a pre-training dataset comprising of 132,645 SQL queries and tabular answers. Further, we evaluate the generated tables by introducing table-specific metrics of varying strictness assessing various levels of granularity of the table structure. MultiTabQA outperforms state-of-the-art single table QA models adapted to a multi-table QA setting by finetuning on three datasets: Spider, Atis and GeoQuery.
Vaishali Pal, Andrew Yates, Evangelos Kanoulas, Maarten de Rijke
2023-05-22T08:25:15Z
http://arxiv.org/abs/2305.12820v2
# MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering

###### Abstract

Recent advances in tabular question answering (QA) with large language models are constrained in their coverage and only answer questions over a single table. However, real-world queries are complex in nature, often over multiple tables in a relational database or web page. Single table questions do not involve common table operations such as set operations, Cartesian products (joins), or nested queries. Furthermore, multi-table operations often result in a tabular output, which necessitates table generation capabilities of tabular QA models. To fill this gap, we propose a new task of answering questions over multiple tables. Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers. To enable effective training, we build a pre-training dataset comprising 132,645 SQL queries and tabular answers. Further, we evaluate the generated tables by introducing table-specific metrics of varying strictness assessing various levels of granularity of the table structure. MultiTabQA outperforms state-of-the-art single table QA models adapted to a multi-table QA setting by finetuning on three datasets: Spider, Atis and GeoQuery.

## 1 Introduction

Question answering (QA) over multiple tables aims to provide exact answers to natural language questions with evidence from one or more tables (Jin et al., 2022). This is in contrast to single-table QA, which has been the focus of tabular QA research to date (Liu et al., 2021; Nan et al., 2021; Zhu et al., 2021; Herzig et al., 2020). Even though groups of related tables are ubiquitous in real-world corpora, such as relational databases or tables in a web page, multi-table QA remains a largely unexplored area. To address this gap, we propose a new task of answering questions over multiple tables. Our multi-table QA model, MultiTabQA,1 addresses novel challenges introduced by multi-table context. These include complex queries involving chains of reasoning, disambiguation of relevant table names at each reasoning step, and generating a final table as the answer. It also leads to novel question types that are unnatural in a single-table setting. For instance, questions involving operations specific to multiple tables, such as Cartesian products (_outer joins_, _inner joins_) and set operations (such as _intersect_, _union_, _in_), are unique to and common in a multi-table scenario. Furthermore, such multi-table operations often result in a tabular answer and they necessitate table generation capabilities of the QA model. Footnote 1: Code and data are at: [https://github.com/kolk/MultiTabQA](https://github.com/kolk/MultiTabQA)

Figure 1: Multi-table QA. The QA model generates a tabular answer from either a natural language question or an SQL query and one or more tables as input context.

Prior work on answering such complex questions falls into two broad directions. (i) Semantic parsing approaches (Zhong et al., 2017; Cai et al., 2022), which have been the dominant approach to answering multi-table complex questions. Such methods transform a natural question to a logical form, which is used to query a relational database to extract the answer. However, these techniques are restricted to relational databases and cannot be applied to tables from other sources, such as web tables, tables in text documents, and non-normalized tables. Additionally, they require expensive and expert human annotations (Yu et al., 2018; Lee et al., 2021) to formulate SQL queries from natural questions.
(ii) Modeling the problem as a sequence generation/classification task (Yin et al., 2020; Zhang et al., 2020; Herzig et al., 2020; Zhu et al., 2021; Liu et al., 2021; Cheng et al., 2021; Nan et al., 2021; Ma et al., 2022; Pal et al., 2022; Jin et al., 2022), where an end-to-end trained neural model is responsible not only for question/query understanding but also for table reasoning. Existing end-to-end neural models are either classification-based (Herzig et al., 2020; Zhu et al., 2021), where the model detects the answer span and classifies one table operator associated with the span, or they are sequence generation-based (Nan et al., 2021; Zhang et al., 2020; Liu et al., 2021), where the model generates the answer as a span of text in an auto-regressive manner. Our work focuses on the latter direction of research. We train a neural model to mimic a semantic parser and generate the answer. A clear distinction of our work compared to existing end-to-end models is that our proposed model, MultiTabQA, does not operate in the constrained setting of a single input table, but can accommodate one or more tables in the input and the associated multi-table operators. Additionally, MultiTabQA performs the task of structured table generation, which imposes structural requirements on the generated output, such as table schemas, alignment of rows and columns, and relationships between column headers and column values. Generating structured tables as output requires table-specific evaluation metrics, which we define and use to evaluate the generated tables. To effectively train the model, we generate a pre-training dataset with multi-table SQL queries and tabular answers built over an existing semantic parsing dataset, Spider (Yu et al., 2018). Our dataset consists of \(132,645\) samples comprising SQL queries, associated natural language questions, input tables, and tabular answers. To the best of our knowledge, this is the first work to address the task of multi-table QA and generate tabular output. Our main contributions can be summarized as follows: 1. We fill in the gap left by existing tabular QA methods, which operate only on single tables, by proposing a new task of answering questions over multiple tables. Our work increases the breadth of question types that can be handled compared to single-table QA methods. 2. Our proposed multi-table QA model generates structured tables imposed by multi-table operations. Table generation introduces generation challenges such as maintaining row-column alignment, table-header generation, etc. 3. We release a multi-table pre-training dataset comprising \(132,645\) samples of SQL queries and tabular answers. 4. We introduce table generation metrics that capture different levels of granularity and strictness to evaluate our proposed model. ## 2 Methodology We frame multi-table question answering as a sequence-to-sequence task and train an auto-regressive transformer encoder-decoder model to generate the tabular answer. Given a question \(Q\) consisting of a sequence of \(k\) tokens \(q_{1},q_{2},\ldots,q_{k}\) and a set of \(N\) tables, \(T_{N}=\{t_{1},t_{2},\ldots,t_{N}\}\), the goal of the multi-table QA model is to perform chains of _operations_ over \(T_{N}\), constrained by \(Q\), and generate a table \(T_{out}\). The model always generates a table, \(T_{out}\), which can be single-celled for scalar answers, single-rowed or single-columned for list-based answers, and have multiple rows and columns for tabular answers.
In all cases, the model also generates column headers revealing important semantics associated with the generated values. Training approach. We follow a curriculum learning approach (Bengio et al., 2009) by sequentially increasing the complexity of tasks to train MultiTabQA. The first stage of training is a pre-training step where the training objective is two-fold: (i) learn to generate correct tabular answers from SQL, and (ii) understand the associations between related input tables. The final training stage is fine-tuning, where the model learns to understand natural language questions with their inherent ambiguity, in addition to retaining its ability to reason over tables and generate a tabular answer. We discuss the training process in detail in Section 4. Model input/output. The input to the model is a sequence comprised of the query or the natural language question, followed by a sequence of input tables, represented by the table name and the corresponding flattened table. Table names are important for disambiguating tables in a multi-table QA setting. Specifically, the input sequence is represented as \(question\left[table_{1}\,rep\right]\left[table_{2}\,rep\right]\ldots\left[table_{n}\,rep\right]\) where \(\left[table_{i}\,rep\right]\) is the representation of the \(i\)-th table. As depicted in Figure 2, the \(i\)-th table is flattened in row-major format and represented as **<table_name>:** \(n_{1}\,n_{2}\,\mid\) **col:** \(h_{1}\mid h_{2}\mid\ldots\mid h_{k}\) **row 1:** \(r_{1}^{1}\mid\ldots\mid r_{1}^{m}\)\(\ldots\) **row k:** \(r_{k}^{1}\mid\ldots\mid r_{k}^{m}\), where \(n_{1},n_{2},\ldots\) is the sequence of table name tokens, \(h_{j}\) is the \(j\)-th column header, and \(r_{i}^{m}\) is the cell in the \(i\)-th row and \(m\)-th column. The boldface words are keywords specifying the semantics of the next tokens. The output of the model is also a flattened table in row-major format, i.e., \(\left[table_{ans}\,rep\right]\), but without a table name. As depicted in Figure 2, the generated table, \(\left[table_{ans}\,rep\right]\), is of the form: \(h_{1}\mid h_{2}\mid\ldots\mid h_{k}\) **row 1:** \(r_{1}^{1}\mid\ldots\mid r_{1}^{m}\) **row 2:** \(r_{2}^{1}\mid\ldots\mid r_{2}^{m}\)\(\ldots\) **row k:** \(r_{k}^{1}\mid\ldots\mid r_{k}^{m}\). ## 3 Dataset To effectively train a multi-table QA model, the dataset needs to cover three aspects: (i) multi-table context, (ii) tabular answers, and (iii) natural questions. Given the absence of large-scale datasets covering all three aspects, we transform existing semantic parsing and single-table QA datasets to focus on a single aspect before training with samples covering all three aspects. ### Single table pre-training dataset One of the sub-tasks of pre-training is to generate tabular answers. We hypothesize that tuning the model to generate tables may lead to a warm start and better convergence in a multi-table QA setting. To enable such experiments, we modify the large-scale single-table QA Tapex pre-training dataset (Liu et al., 2021) to accommodate tabular answers. The dataset contains \(1,834,419\) samples of query, input table and factoid answer. The tables in the dataset are not named, as there is no need for table disambiguation in a single table setting. The SQL queries are semi-formal (they do not contain the FROM clause with a table name) and cannot be used to query a real SQL database. We insert a placeholder table name in the queries and the corresponding input tables to extract the tabular answer from the database.
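To make the serialization of Section 2 concrete, here is a minimal sketch of the flattening scheme described above (our own illustration, not the authors' released code; function and variable names are ours, and the exact whitespace and delimiter choices are assumptions based on the paper's description):

```python
import pandas as pd

def flatten_table(name: str, df: pd.DataFrame) -> str:
    """Flatten one table in row-major order using the keywords of Section 2."""
    parts = [f"<table_name>: {name}",
             "col: " + " | ".join(str(c) for c in df.columns)]
    for i, row in enumerate(df.itertuples(index=False), start=1):
        parts.append(f"row {i}: " + " | ".join(str(v) for v in row))
    return " ".join(parts)

def build_input(question: str, tables: dict) -> str:
    """Concatenate the question with every flattened input table."""
    return question + " " + " ".join(flatten_table(n, t) for n, t in tables.items())

highschooler = pd.DataFrame(
    {"id": [1510, 1934], "name": ["jordan", "kyle"], "grade": [9, 12]})
print(build_input("how many likes does kyle have?", {"highschooler": highschooler}))
```

The same routine, applied without a table name and with the answer table, yields the target sequence for training.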
Transforming the factoid answers to tables leads to single-celled or single-rowed tables. The modified dataset helps the model to understand simple tables and reason over semi-formal queries to generate simple tables. ### Multi-table pre-training dataset We develop a multi-table pre-training dataset over the database of Spider (Yu et al., 2018). Spider is a cross-domain complex semantic parsing dataset for text-to-SQL translation. It consists of \(10,181\) questions and \(5,693\) SQL queries. The questions are over \(200\) databases of multiple tables covering \(138\) different domains. The training, development and test splits do not contain overlapping databases, to test a model's generalizability to new databases. We first adapt the existing samples of Spider for our task. We use the ground-truth SQL queries of Spider as input queries for pre-training over multiple tables. We automatically extract all input table names from the SQL query and retrieve the input tables2 from the relational database. The query, extracted table names, and retrieved tables are inputs to our multi-table QA model. We extract the answer table with the SQL query by querying the relational database. Answer table headers reveal important semantics of the associated column values, such as the numeric operation (_average_, _sum_, etc.), numeric scales (million, thousand, kms, meters, etc.), or entity facets (name, date, etc.). This process generates \(3816\) samples comprising _query_, _question_, _table_names_, _tables_ and _answer_. Footnote 2: We use SQLite3 and pandas for extracting tables.

Figure 2: Architecture of the MultiTabQA model. Given a natural language question/SQL query and the associated tables as an input sequence, the seq2seq model performs tabular reasoning and generates a tabular answer. The start of an input table is identified with the keyword **<table_name>**, which also indicates that the next tokens comprise the table name. **col:** indicates that the next tokens are table headers. Rows in a table are identified with the keyword **row i:**; columns are separated by |.

We further augment the modified Spider dataset with \(132,645\) samples of synthetic queries. This leads to an augmented multi-table pre-training dataset of \(136,461\) unique training samples comprising \(3816\) Spider samples and \(132,645\) synthetic samples. The validation set comprises \(536\) samples from the Spider validation set pre-processed as described above to adapt to our task. Existing work on semantic parsing (Shi et al., 2020; Yu et al., 2021) has utilized hand-crafted templates to generate large-scale corpora of synthetic queries, but is constrained in its coverage, with no multi-table operations (Shi et al., 2020) or limited coverage lacking table _joins_ and diversity in _set_ operations (Yu et al., 2021). This motivates us to generate our augmented pre-training dataset for multi-table QA using multi-table SQL templates. Our synthetic queries are generated from \(45\) manually crafted templates over the Spider database and hand-crafted rules for operation types. The query templates have placeholders for aggregations, relational operations, table names and headers, which are randomly assigned during the query generation process. For example, to generate multi-table _join_ queries, we instantiate the templates by randomly choosing tables from a database with at least one common header. For _set_ operations, all tables participating in a multi-table query must have matching table headers.
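A sketch of how such a join template can be instantiated against a SQLite database, together with the quality-control filter described under "Quality control" below (our own reconstruction of the described procedure; the template string and helper names are ours, not the released code):

```python
import random
import sqlite3
from typing import Optional

import pandas as pd

JOIN_TEMPLATE = ("SELECT T1.*, T2.* FROM {t1} AS T1 "
                 "JOIN {t2} AS T2 ON T1.{col} = T2.{col}")

def instantiate_join(conn: sqlite3.Connection) -> Optional[str]:
    """Randomly pick two tables sharing at least one column header."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    random.shuffle(tables)
    for i, t1 in enumerate(tables):
        cols1 = {r[1] for r in conn.execute(f"PRAGMA table_info({t1})")}
        for t2 in tables[i + 1:]:
            cols2 = {r[1] for r in conn.execute(f"PRAGMA table_info({t2})")}
            common = cols1 & cols2
            if common:
                return JOIN_TEMPLATE.format(
                    t1=t1, t2=t2, col=random.choice(sorted(common)))
    return None

def keep_sample(conn: sqlite3.Connection, sql: str) -> bool:
    """Quality control: drop queries that error out or return an empty table."""
    try:
        answer = pd.read_sql_query(sql, conn)
    except Exception:
        return False
    return not answer.empty
```

Surviving (query, input tables, answer table) triples then form the synthetic pre-training samples.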
We design SQL templates in increasing order of complexity, starting with simple SQL templates and progressively adding components that increase their complexity. For example, for single-table queries, we use the simplest template _"SELECT * FROM [table_name]"_, whereas for multi-table templates such as _joins_, the simplest template is _"SELECT T1.[table1_cols], T2.[table2_cols] FROM [table_name] as T1 JOIN [table_name2] as T2 ON T1.[common_col] = T2.[common_col]"_. We progressively add SQL components such as aggregations, _where_ conditions, _group by_ and _having_ clauses to generate templates of increasing complexity. This process results in \(14\) templates for _joins_ and \(4\) templates for each set operation: _intersect_, _union_ and _except_. To avoid catastrophic forgetting for single table queries, we also instantiate \(14\) single-table query templates of increasing complexity. Quality control. We ensure correctness of the synthetic samples by discarding SQL queries that execute to an error or an empty table. We also apply this process to the modified Spider, Atis and GeoQuery data, discarding the SQL query and the corresponding natural language question, to ensure that all questions are answerable. ### Multi-table QA dataset We fine-tune and evaluate our model on the natural language questions of semantic parsing datasets: Spider, GeoQuery (Zelle and Mooney, 1996), and Atis (Price, 1990; Dahl et al., 1994). GeoQuery is a semantic parsing dataset for querying a database of United States geography.3 Atis is a semantic parsing dataset4 with a collection of \(4,379\) questions, corresponding SQL queries, and a relational database for a flight booking system (Iyer et al., 2017). Similar to the Spider dataset processing described in Section 3.2, we first extract the input table names from the available SQL queries and query the relational database for the input tables.5 We also extract the tabular answers using the SQL queries. We discard any samples that execute to an error or an empty table. We use the corresponding natural language question for each SQL query as the user utterance for fine-tuning. This results in \(6,715\) training samples and \(985\) validation samples for Spider. We also process the \(600\) GeoQuery samples provided in Iyer et al. (2017) to create a subset of \(530\) training samples, \(49\) validation samples and \(253\) test samples. We process and generate an Atis subset of \(384\) training samples, \(45\) evaluation samples and \(86\) test samples. We discard Atis queries with very large input tables (with > \(10,000\) rows). This restriction enables us to correctly evaluate the question answering capabilities of a model by ignoring samples with truncated input sequences, which can drop entire input tables from the second table onward. Truncation of tables leads to incorrect answers for any numeric operation such as _average_ or _intersect_, and the evaluation scores would no longer reflect the reasoning capabilities of the model. ## 4 Training We follow a curriculum learning approach by sequentially training the model on sub-tasks of increasing complexity, as depicted in Figure 3. Broadly, we first pre-train the seq2seq model to mimic an SQL parser and further fine-tune it on the downstream multi-table QA task. Pre-training the model on unambiguous SQL queries leads to better convergence and a warm start for the closely related downstream multi-table QA task. We further segregate the pre-training by first addressing the simpler sub-task of generating tables from single table queries.
This is immediately followed by pre-training on multi-table query answering, where complex SQL queries are utilized to train the model to learn multi-table associations from unambiguous complex queries, reason over the tables, and generate a tabular answer. The final stage of training is the downstream multi-table QA from natural language questions. Natural language introduces ambiguity, ellipses and co-references, which increase complexity; it is thus the final stage of training. For each stage, we choose the model with the best table exact match accuracy (defined in Section 5) on the corresponding validation set as the initialization for training the next stage. ### Pre-training Pre-training of MultiTabQA is conducted in two stages in a curriculum learning fashion: Stage 1 is single table QA, where the model learns to generate tabular answers from relatively simple SQL queries. Stage 2 is multi-table QA, where the model trained in Stage 1 is further tuned for multi-table SQL QA. Stage 1. We first train MultiTabQA on the task of generating tables from SQL queries over single tables. The tabular answer to be generated is simple and single-columned. For this stage, we use the modified Tapex pre-training corpus described in Section 3.1. We train the model on \(1,834,419\) samples for two epochs. This stage provides a good initialization for multi-table QA in the next stages. Stage 2 + Stage 3. We further pre-train the model on multi-table QA. For this, we tune our model on SQL queries from the modified Spider and synthetic dataset. We call tuning with only the modified Spider SQL samples _Stage 2_, and tuning with only the synthetic dataset _Stage 3_. We utilize the larger augmented dataset comprising the modified Spider SQL (Stage 2) and our synthetic samples (Stage 3), as described in Section 3.2, to train the final pre-trained model for \(30\) epochs. We call this setting _Stage 2+3_. We compare these three multi-table pre-training settings in Section 6. ### Fine-tuning The final stage of training is fine-tuning the pre-trained model on natural language questions. Natural questions are ambiguous compared to formal SQL and are therefore used at the last stage of training. We fine-tune the pre-trained model on the \(6,715\) natural questions and the extracted input and output tables for Spider, as described in Section 3, and evaluate on the \(985\) samples of the validation set. To observe the performance of the pre-trained model on out-of-domain database tables, we also fine-tune the pre-trained model on the Atis and GeoQuery datasets. For all the fine-tuning datasets, we train for \(60\) epochs.

Figure 3: Four-stage training procedure. The first three stages are pre-training, followed by fine-tuning.

## 5 Evaluation metrics While denotation accuracy has been widely used in semantic parsing [20, 19, 21, 18], it is not directly applicable to our task, where tabular input encoding, reasoning, and generation are performed by the same model. Evaluating the answer table not only requires matching the generated values but also the table structure. Moreover, tables store factual information such as named entities, dates, and numbers in an ordered manner. This makes lexical metrics measuring surface-form overlap more suitable than semantic metrics measuring the underlying meaning of paraphrased sequences. Moreover, table components such as rows, columns and cells are standalone units which capture different levels of semantics and relationships with the surrounding table components.
For example, rows capture data records while columns capture the features of each record. Cells capture the lowest level of self-contained facts and require a complete match with the target. For example, a cell with the entity "United Kingdom" should not be partially matched with the predictions "United Nation", "United" or "Kingdom". Similarly, a numeric value such as "123.45" should not be partially matched with "12.45", "23.45" or "12". Numeracy poses a challenge to seq2seq models (Nogueira et al., 2021; Pal and Baral, 2021), especially in the extrapolation setting, where semantic matching of unseen numbers may not be ideal. Considering all these factors, we focus on lexical match to measure model effectiveness. Table exact match. We define _table exact match accuracy_ (Table EM) as the percentage of predicted tables which exactly match the target tables. Table exact match evaluates the ordering of rows, columns and table headers and exact lexical matching of table values. It is a strict binary measure which treats partial matches as incorrect. However, many queries do not impose an ordering among columns or rows, and strict table exact match may not be the ideal indication of model efficacy. To measure partial correctness, we treat rows, columns and cells as units at varying levels of granularity which have ordered associations among the values within the unit. We evaluate partial correctness with exact match of rows, columns and cells. Row exact match. To relax the strict criterion of table exact match, we first measure correctness on table rows. Row exact match does not consider the ordering of rows in the generated table but requires ordering of values within a row. We define a correctly generated row to be a predicted row that exactly matches any target row in the target table. _Row exact match precision_ is the percentage of correctly generated rows among all the predicted rows in the evaluation dataset. _Row exact match recall_ is the percentage of correctly generated rows among all the target rows in the evaluation dataset. Column exact match. Unlike rows, which represent records in relational databases, columns represent attributes, where the column header provides semantic meaning to the values. Hence, a correct column is defined as a generated column that exactly matches a target column header and, further, the column values. Column exact match measures the ordering of values within a column. _Column exact match precision_ is the percentage of correctly generated columns among all generated columns in the evaluation set. _Column exact match recall_ is the percentage of correctly generated columns among all target columns in the evaluation set. Cell exact match. _Cell exact match_ is the most relaxed measure of model efficacy at the lowest level of granularity (cells), where the table structure is not measured. A cell is correct if it matches any cell in the corresponding target table. _Cell exact match precision_ is the percentage of correctly predicted cells among all predicted cells in the dataset. _Cell exact match recall_ is the percentage of correctly predicted cells among all target cells in the dataset.
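These definitions are straightforward to implement. Below is a minimal sketch of table exact match and row exact match (our own code, not the authors' implementation; tables are represented as lists of rows with the header as the first row, and handling duplicate rows via multisets is our own choice; column and cell exact match follow the same pattern):

```python
from collections import Counter

Table = list  # a table is a list of rows; the first row holds the column headers

def table_em(pred: Table, target: Table) -> bool:
    """Strict exact match: headers, values, and row/column order must all agree."""
    return pred == target

def row_em_prf(preds: list, targets: list) -> tuple:
    """Row EM: a predicted row is correct if it matches any target row.
    Row order is ignored, so rows are compared as multisets; precision and
    recall are aggregated over the whole evaluation dataset."""
    correct = n_pred = n_target = 0
    for pred, target in zip(preds, targets):
        pred_rows = Counter(tuple(r) for r in pred[1:])    # skip header row
        target_rows = Counter(tuple(r) for r in target[1:])
        correct += sum((pred_rows & target_rows).values())
        n_pred += sum(pred_rows.values())
        n_target += sum(target_rows.values())
    p = correct / n_pred if n_pred else 0.0
    r = correct / n_target if n_target else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```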
## 6 Experimental setup and results

We use tapex-base (Liu et al., 2021) as the base model for all our experiments. tapex-base is a single table question answering model (140M parameters) trained to approximate table reasoning by pre-training to mimic an SQL parser. For both the pre-training and fine-tuning process, we use a batch size of \(8\) and gradient accumulation of \(32\) to emulate an effective batch size of \(256\); the learning rate is \(1e^{-9}\). The maximum sequence length of both encoder and decoder is set to \(1024\). We run all our pre-training experiments on four A6000 48GB GPUs and fine-tuning on one A6000 GPU.

\begin{table} \begin{tabular}{l l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Table**} & \multicolumn{2}{c}{**Row EM (\%)**} & \multicolumn{2}{c}{**Column EM (\%)**} & \multicolumn{2}{c}{**Cell EM (\%)**} \\ \cline{3-11} & & **EM (\%)** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline \multirow{2}{*}{Spider} & tapex-base & 18.99 & 17.28 & 19.83 & 18.27 & 19.75 & 19.39 & 19.57 & 23.15 & 27.71 & 25.03 \\ & MultiTabQA & **25.19*** & **22.88\(\dagger\)** & **24.64*** & **23.70*** & **26.86*** & **26.76*** & **26.81*** & **28.07\(\dagger\)** & **31.23*** & **29.55*** \\ \hline \multirow{2}{*}{GeoQ} & tapex-base & 39.84 & 22.43 & 30.74 & 24.89 & 39.48 & 39.76 & 39.62 & 21.98 & 30.88 & 24.67 \\ & MultiTabQA & **52.22*** & **72.39*** & **46.90*** & **41.38*** & **52.10*** & **52.22*** & **52.16*** & **37.16*** & **46.92*** & **41.33*** \\ \hline \multirow{2}{*}{Atis} & tapex-base & 72.20 & **57.07\(\dagger\)** & 57.69 & **55.08** & **72.20\(\dagger\)** & 72.20 & 72.20 & **57.07\(\dagger\)** & 57.69 & **54.48** \\ & MultiTabQA & **73.88\(\dagger\)** & 38.29 & **92.19*** & 54.36 & 69.55 & **75.24\(\dagger\)** & **72.29** & 38.16 & **92.56*** & 54.16 \\ \hline \hline \end{tabular} \end{table} Table 1: Average scores of models fine-tuned on 5 different seeds with Multitable-Natural Questions (NQ) datasets. tapex-base is used as baseline while MultiTabQA is our fine-tuned model. Table EM indicates table exact match accuracy. For all other table units (row, column, and cell), P is Precision, R is Recall, and F1 is F1 score for exact match metric. An (*) denotes significance at p < 0.005 and a (\(\dagger\)) denotes significance at p < 0.05 for the t-test.

We observe from Figure 4 that the three-stage pre-training leads to a warm start for fine-tuning and better convergence compared to the baseline tapex-base. For our experiments, we compare the effectiveness of the MultiTabQA model with fine-tuned tapex-base on the \(6,715\) natural questions from Spider. The fine-tuned tapex-base acts as a baseline for studying the adaptability of a state-of-the-art single table model to a multi-table setting. We report the mean scores of 5 training runs initialized with different seeds in Table 1. We conduct a statistical significance test (t-test) on the scores of the 5 runs and report the significance at \(p<0.05\) and \(p<0.005\). We observe that our multi-stage training process leads to improved table exact match accuracy across all datasets compared to fine-tuned tapex-base. The difference in table exact match is highest for GeoQuery, where MultiTabQA outperforms tapex-base by \(12.38\%\), followed by Spider (\(6.20\%\)) and Atis (\(1.68\%\)). For F1 and Recall scores on row, column and cell exact match, a similar pattern is observed, where MultiTabQA outperforms tapex-base on all datasets. MultiTabQA outperforms tapex-base by \(5.43\%\) on row F1, \(7.24\%\) on column F1, and \(4.52\%\) on cell F1 for Spider. On GeoQuery, MultiTabQA outperforms by \(16.49\%\) on row F1, \(12.54\%\) on column F1 and \(16.66\%\) on cell F1 scores.
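The significance protocol can be reproduced along the following lines (a sketch; the per-seed arrays below are placeholders, not the paper's published numbers, and the choice of an independent two-sample t-test is our assumption):

```python
from scipy import stats

# Table EM across 5 seeds (placeholder values, not the published per-seed scores)
multitabqa = [25.0, 25.3, 24.9, 25.4, 25.1]
tapex_base = [19.1, 18.8, 19.2, 18.9, 19.0]

t_stat, p_value = stats.ttest_ind(multitabqa, tapex_base)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # compare p against 0.05 / 0.005
```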
All results on Spider and GeoQuery are significant with a p-value less than the critical value of \(0.05\), indicating strong evidence that MultiTabQA is the superior model. On Atis, we observe that MultiTabQA underperforms on precision but outperforms on recall by a large margin. The difference in recall is larger than in precision, indicating that MultiTabQA generates more of the target rows, columns and cells of Atis correctly (higher recall) while hallucinating some spurious rows and cells (lower precision). However, the F1 scores are better for MultiTabQA. tapex-base is unable to correctly generate target rows, cells and columns (lower recall), but the few generated ones are correct (higher precision). The low number of test samples (85) of Atis and variations in the hallucinations across different runs make the precision scores statistically non-significant. However, the recall scores provide very strong evidence (\(p<0.005\)) of the superiority of MultiTabQA in generating correct table units compared to tapex-base. Qualitative analysis. Multi-table QA models must perform numeric reasoning, understand multi-table schemas, and comprehend natural language. A success case illustrates this. For the question _how many likes does kyle have?_, the input consists of two tables: _highschooler_, with header (id, name, grade) and rows including (1510, jordan, 9), ..., (**1934**, **kyle**, 12), (1661, logan, 12); and _likes_, with header (student_id, like_id) and rows including (1689, 1709), ..., (1501, **1934**), (**1934**, 1501). The target is the single-celled table with value \(1\) (a _count_), which MultiTabQA generates correctly: it identifies the inter-table association of column _id_ of table _highschooler_ with column _student_id_ of table _likes_, correctly disambiguates the lexical occurrences of \(1934\) in columns _like_id_ and _student_id_, and correctly performs the _count_. In a failure case, on a query whose target pairs pet types with their average weights, the table-unit metrics capture the correctness of individual table units without measuring the ordering: the column metrics measure the predicted column _PetType_ as correct and _avg(weight)_ as incorrect, without measuring the ordering of the 2 columns; the row _cat_ | _12.0_ is measured as correct, while _dog_ | _13.4_ is measured as incorrect, without measuring the ordering among them; and out of the \(4\) target cells, _cat_, _dog_ and _12.0_ are measured as correct.

Figure 4: Validation table exact match scores of MultiTabQA vs. tapex-base on Spider evaluation set natural language questions during fine-tuning. The points are the highest validation scores for each model.

Impact of the number of input tables. The number of input tables increases the complexity of the questions and directly impacts the effectiveness of the models. We segregate the evaluation on the Spider validation set on the basis of the number of input tables and compare the results to study the impact of the number of input tables. We observe from Figure 5 that effectiveness reduces as the number of tables increases, for both MultiTabQA and tapex-base. However, MultiTabQA fares better than tapex-base when the number of input tables increases.
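Operationally, this segregation amounts to bucketing the evaluation examples by their number of input tables before scoring; a minimal sketch (our own; `table_em` stands for the exact-match check sketched in Section 5 above):

```python
from collections import defaultdict

def em_by_table_count(examples, table_em):
    """examples: iterable of (input_tables, predicted_table, target_table)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for tables, pred, target in examples:
        n = len(tables)
        hits[n] += int(table_em(pred, target))
        totals[n] += 1
    return {n: hits[n] / totals[n] for n in sorted(totals)}
```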
MultiTabQA generates whole tables, rows, columns and cells better than tapex-base, as observed in Figures 5(a), 5(b), 5(c) and 5(d). The gain of MultiTabQA in table exact match for the one-table context is around \(8.81\%\), for the two-table context around \(4.37\%\), and it performs similarly to tapex-base for the three-table context. It also has significantly higher scores on rows, columns and cells, in both single- and multi-table contexts. We also observe that while the column and table EM decrease dramatically when using several tables (Figures 5(a) and 5(c)), the row and cell EM do not (Figures 5(b) and 5(d)). This indicates that MultiTabQA can generate rows and cells as effectively in single and multiple input table settings, but fails to do so for columns and consequently for the whole table. This is due to the fact that certain columns in the answer, particularly ones with numbers such as floats, are challenging to generate. The errors from the incorrect columns propagate and accumulate in the table EM, leading to a significant drop in performance for multi-table queries.

Figure 5: Evaluation results on Spider evaluation samples segregated by number of input tables.

Ablation on training stages. We perform an ablation on the pre-training stages to analyse the contribution of each dataset. The simplest setting is to pre-train with Spider SQL queries, i.e., Stage 2. To evaluate the effectiveness of the single table Tapex pre-training samples, the next setting comprises stages 1 and 2, i.e., pre-training with the Tapex pre-training and Spider SQL datasets. The final comparison is with the three-stage pre-training as described in Section 4.1. The results for one run of the experiments are displayed in Table 2. We observe that table exact match is highest for both pre-training and fine-tuning for the three-stage training. Stage 2 fares better than Stage 1+2 on table exact match, and generally has better precision and F1 scores but lower recall. The three-stage pre-training with our synthetic data augmented with Spider outperforms the other settings and confirms the effectiveness of our synthetic data samples in boosting model efficacy.

\begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{\begin{tabular}{c} **Pre-training** \\ **stages** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Query** \\ **type** \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} **Table** \\ **EM(\%)** \\ \end{tabular} } & \multicolumn{3}{c}{**Row (\%)**} & \multicolumn{3}{c}{**Column (\%)**} & \multicolumn{3}{c}{**Cell (\%)**} \\ \cline{3-11} & & & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline 2 & & \(21.46\) & \(18.60\) & \(18.88\) & \(18.74\) & \(21.98\) & \(21.90\) & \(21.94\) & \(24.19\) & \(25.89\) & \(25.01\) \\ 1+2 & SQL & \(20.52\) & \(14.13\) & \(20.06\) & \(16.58\) & \(18.87\) & \(20.87\) & \(19.82\) & \(19.24\) & \(25.83\) & \(22.05\) \\ 1+2+3 & & **29.10** & **23.15** & **25.62** & **24.32** & **31.66** & **31.50** & **31.58** & **29.95** & **32.92** & **31.36** \\ \hline 2 & & \(19.41\) & \(16.51\) & \(19.48\) & \(17.87\) & \(20.13\) & \(20.11\) & \(20.12\) & \(21.12\) & \(26.55\) & \(23.52\) \\ 1+2 & NL & \(20.12\) & \(11.67\) & \(21.09\) & \(15.03\) & \(19.54\) & \(19.97\) & \(19.76\) & \(16.26\) & \(29.22\) & \(20.90\) \\ 1+2+3 & & **24.49** & **24.95** & **24.87** & **24.91** & **26.80** & **26.91** & **26.86** & **28.44** & **31.06** & **29.69** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation on datasets in our multi-stage pre-training processes for 1 run of experiments. The two sections show scores for different question types: SQL queries (top) and natural language (NL) questions (bottom). In a section, each row shows a training process with different stages: pre-training on Stage 2, pre-training on Stages 1+2, and all pre-training Stages 1+2+3. Table EM is table exact match accuracy; P is Precision; R is Recall; and F1 is the F1 score for exact match of row, column, and cell.

## 7 Related work Tabular QA is a research direction within the broader topic of table understanding (Jena et al., 2022; Shigarov, 2022) in natural language processing. Recent advances in table representation (Eisenschlos et al., 2021) and pre-training (Cheng et al., 2021; Liu et al., 2022), table fact verification (Gu et al., 2022; Zhou et al., 2022), table numeric reasoning (Shankarampeta et al., 2022; Zhou et al., 2022), table-to-text generation (Andrejczuk et al., 2022), text-to-table generation (Wu et al., 2022), table summarization (Jain et al., 2018; Chen et al., 2013; Zhang et al., 2020), and table question answering (Yin et al., 2020; Zhang et al., 2020; Herzig et al., 2020; Zhu et al., 2021; Liu et al., 2021; Cheng et al., 2021; Nan et al., 2021; Ma et al., 2022; Pal et al., 2022; Jin et al., 2022; Zhou et al., 2022) have shown the adaptability of language models to table processing. ## 8 Conclusion In this work, we propose a new task of multi-table question answering without intermediate logical forms, to fill the gap in existing end-to-end table QA research, which has focused only on single-table QA. We release a pre-training dataset of \(132,645\) samples to effectively train a seq2seq model. We fine-tune and evaluate our model, MultiTabQA, on natural language questions of three datasets: Spider, GeoQuery and Atis, to test its efficacy in a multi-table setting. As many multi-table questions result in tables, we train the model to generate tables. This necessitates table-specific metrics at various levels of granularity, which we design to evaluate the effectiveness of our model. We demonstrate that such metrics are insightful in understanding model behavior. MultiTabQA outperforms an existing state-of-the-art single table QA model fine-tuned to adapt to a multi-table QA setting. ## 9 Limitations Our synthetic pre-training dataset was automatically generated from manual templates, which, in spite of the scalability and low cost of dataset creation, may limit the diversity of the generated SQL queries. Our model, MultiTabQA, requires improvement in numeracy understanding and numeric operations. Real numbers are especially challenging, and the model may not generate all the digits of a number correctly, rendering the generated cell incorrect. Furthermore, large input tables pose a challenge, as the input sequence may get truncated beyond the model's maximum sequence length. This places a practical limitation on the size and number of input tables which the model can accommodate before truncation, which leads to incorrect answers. ## 10 Ethical Considerations The task and model proposed in this paper are aimed at broadening the scope of tabular QA research. All the datasets used in this research, apart from our synthetic data, are publicly available in peer-reviewed articles and referenced in this paper. The synthetic SQL dataset we release was generated over a standard benchmark database which has been annotated by 11 Yale students, as mentioned in the original paper.
Our synthetic samples use templates annotated by the authors of this work and do not use any user-specific data or information. We will provide open access to our datasets for use in future research under the MIT License. All datasets, including the synthetic pre-training dataset and all datasets adapted for multi-table QA, will be released. Our model is built on tapex-base, which in turn has been trained on bart-base. Our work does not explicitly handle any biases which may exist in the aforementioned pre-trained models.

## 11 Acknowledgements

We thank Elsevier's Discovery Lab for their support throughout this project and for funding this work. This work was also supported by the Dutch Research Council (NWO) under project numbers 016.Vidi.189.039 and 314-99-301, by H2020-EU.3.4. Societal Challenges, Smart, Green and Integrated Transport (814961), and by the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through NWO, [https://hybrid-intelligence-centre.nl](https://hybrid-intelligence-centre.nl). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
2310.09708
Flow Oriented Perturbation Theory
Flow Oriented Perturbation Theory (FOPT) is a novel approach to Feynman diagrams based on the coordinate (position) space description of Quantum Field Theories (QFT). FOPT offers interesting features regarding the computation of higher-loop Feynman amplitudes such as combinatorial and canonical Feynman rules, explicit infrared singularity factorization on a per-diagram level and the potential to have manifest cancellation of real and virtual singularities. In these proceedings we briefly summarize the derivation of FOPT and present its Feynman rules for covariant diagrams, S-matrix elements and cut diagrams in massless scalar QFT, supported by examples. We then discuss the extension of FOPT to massless fermion fields and indicate steps towards the treatment of massive lines in arbitrary dimensions.
Alexandre Salas-Bernárdez, Michael Borinsky, Zeno Capatti, Eric Laenen
2023-10-15T02:28:08Z
http://arxiv.org/abs/2310.09708v1
# Flow Oriented Perturbation Theory ###### Abstract: Flow Oriented Perturbation Theory (FOPT) is a novel approach to Feynman diagrams based on the coordinate (position) space description of Quantum Field Theories (QFT). FOPT offers interesting features regarding the computation of higher-loop Feynman amplitudes such as combinatorial and canonical Feynman rules, explicit infrared singularity factorization on a per-diagram level and the potential to have manifest cancellation of real and virtual singularities. In these proceedings we briefly summarize the derivation of FOPT and present its Feynman rules for covariant diagrams, S-matrix elements and cut diagrams in massless scalar QFT, supported by examples. We then discuss the extension of FOPT to massless fermion fields and indicate steps towards the treatment of massive lines in arbitrary dimensions.

## 1 FOPT's Feynman Rules for massless scalar QFT

### Feynman rules for coordinate space amplitudes

Flow Oriented Perturbation Theory [1] provides an alternative perturbative decomposition of correlation functions as \[\Gamma(x_{1},...,x_{|V^{\mathrm{ext}}|})=\left\langle 0|T(\varphi(x_{1})\cdots\varphi(x_{|V^{\mathrm{ext}}|}))|0\right\rangle=\sum_{(G,\boldsymbol{\sigma})}\frac{1}{\mathrm{Sym}(G,\boldsymbol{\sigma})}A_{G,\boldsymbol{\sigma}}(x_{1},\ldots,x_{|V^{\mathrm{ext}}|})\, \tag{1}\] where the sum runs over all topologically different directed graphs (digraphs), \((G,\boldsymbol{\sigma})\), _i.e._ graphs \(G\) with a specified energy flow on each propagator (an orientation \(\boldsymbol{\sigma}\)). This representation is obtained by performing all time integrations over internal vertices, \(\int\mathrm{d}y_{v}^{0}\), of a given Feynman integral corresponding to a graph \(G\) with internal (external) vertices \(V^{\mathrm{int}}\) (\(V^{\mathrm{ext}}\)) and edges \(E\), \[A_{G}(x_{1},\ldots,x_{|V^{\mathrm{ext}}|})=\frac{(-ig)^{|V^{\mathrm{int}}|}}{(2\pi)^{2|E|}}\left[\prod_{v\in V^{\mathrm{int}}}\int\mathrm{d}^{4}y_{v}\right]\prod_{e\in E}\frac{1}{-z_{e}^{2}+i\eta}. \tag{2}\] In performing the time integrations, an individual covariant Feynman integral is decomposed into its different energy-flow-oriented components: \[\frac{1}{\mathrm{Sym}\,G}A_{G}(x_{1},\ldots,x_{|V^{\mathrm{ext}}|})=\sum_{\boldsymbol{\sigma}}\frac{1}{\mathrm{Sym}(G,\boldsymbol{\sigma})}A_{G,\boldsymbol{\sigma}}(x_{1},\ldots,x_{|V^{\mathrm{ext}}|}). \tag{3}\] The integral expression for \(A_{G,\boldsymbol{\sigma}}(x_{1},\ldots,x_{|V^{\mathrm{ext}}|})\) can be found using the following Feynman rules [1]:

1. \(A_{G,\boldsymbol{\sigma}}=0\) if the digraph \((G,\boldsymbol{\sigma})\) is not energy-conserving, _i.e._ the completed digraph \((G,\boldsymbol{\sigma})^{\circ}\) (found by joining all external vertices in the special vertex \(\circ\)) is not strongly connected.
2. Multiply by a factor of \(-ig\) for each interaction vertex.
3. For each edge \(e\) of \(G\) multiply by a factor \(\frac{-i}{(8\pi^{2})|\vec{z}_{e}|}\), where \(\vec{z}_{e}=\vec{y}_{v}-\vec{y}_{u}\) and \(\vec{y}_{v},\vec{y}_{u}\) are the coordinates of the internal or external vertices to which the edge \(e\) is incident.
4. For each admissible energy-flow path \(\mathrm{p}\) of \((G,\boldsymbol{\sigma})\) (i.e.
for each energy cycle in the canonical cycle basis of \((G,\boldsymbol{\sigma})^{\circ}\)) multiply by a factor of \(i/\left(\gamma_{\mathrm{p}}+\tau_{\mathrm{p}}+i\eta\right)\), where \[\gamma_{\mathrm{p}}=\sum_{e\in\mathrm{p}}|\vec{z}_{e}| \tag{4}\] is the sum over all edge lengths that are in the path \(\mathrm{p}\), and \(\tau_{\mathrm{p}}\) is either the time passed between the starting and ending external vertices of the path, or vanishes if the cycle does not go through the \(\circ\) vertex.
5. For each internal vertex \(v\) of the graph \(G\) integrate over three-dimensional space \(\int\mathrm{d}^{3}\vec{y}_{v}\) and multiply by \(2\pi\).

We can summarize these Feynman rules as follows. For a given digraph \((G,\boldsymbol{\sigma})\) with cycle basis \(\Gamma\), where all interaction vertices in \(G\) are internal vertices and vice-versa, we have \[A_{G,\boldsymbol{\sigma}}(x_{1},\ldots,x_{|V^{\mathrm{ext}}|})=\frac{(2\pi g)^{|V^{\mathrm{int}}|}}{(-4\pi^{2})^{2|E|}}\left(\prod_{v\in V^{\mathrm{int}}}\int\mathrm{d}^{3}\vec{y}_{v}\right)\left(\prod_{e\in E}\frac{1}{2|\vec{z}_{e}|}\right)\prod_{\mathrm{p}\in\Gamma}\frac{1}{\gamma_{\mathrm{p}}+\tau_{\mathrm{p}}+i\eta}. \tag{5}\] We next illustrate the application of these rules in a specific example.

Triangle example. To illustrate the FOPT Feynman rules for covariant diagrams, we consider a covariant triangle graph (figure not recoverable from the source). For the case where energy flows into the diagram through edge \(e_{1}\), this graph has 12 energy-conserving orientations and 6 distinct configurations under the simultaneous replacement of \(x_{2}\leftrightarrow x_{3}\) and \(y_{2}\leftrightarrow y_{3}\) (figure not recoverable from the source). The first orientation (\(a\)) is decomposed into its canonical cycle basis \(\{\mathrm{p}_{1},\mathrm{p}_{2},\mathrm{p}_{3}\}\) (figure not recoverable from the source).
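Rule 1 above is a purely combinatorial criterion, so counts of energy-conserving orientations such as the one quoted in this example can be reproduced mechanically. The following sketch is our own minimal illustration (not the FOPT authors' code; the toy graph, vertex labels and helper names are assumptions): it merges all external vertices into the special vertex \(\circ\), tests strong connectivity, and counts admissible orientations by brute force.

```python
from itertools import product

def strongly_connected(n, arcs):
    """A digraph on vertices 0..n-1 is strongly connected iff every vertex is
    reachable from vertex 0 and vertex 0 is reachable from every vertex."""
    def reach(adj):
        seen, stack = {0}, [0]
        while stack:
            for w in adj.get(stack.pop(), []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    fwd, bwd = {}, {}
    for u, v in arcs:
        fwd.setdefault(u, []).append(v)
        bwd.setdefault(v, []).append(u)
    return len(reach(fwd)) == n and len(reach(bwd)) == n

def count_energy_conserving(n_internal, externals, edges):
    """FOPT rule 1: count orientations whose completed digraph (all external
    vertices merged into the special vertex 'o') is strongly connected."""
    o = n_internal                                    # index reserved for 'o'
    relabel = lambda u: o if u in externals else u    # merge externals into 'o'
    count = 0
    for signs in product((+1, -1), repeat=len(edges)):
        arcs = [(relabel(u), relabel(v)) if s > 0 else (relabel(v), relabel(u))
                for (u, v), s in zip(edges, signs)]
        count += strongly_connected(n_internal + 1, arcs)
    return count

# toy graph (our own): internal vertices 0, 1; external vertices 10, 11;
# one external leg each plus a doubled internal line between 0 and 1
print(count_energy_conserving(2, {10, 11}, [(10, 0), (1, 11), (0, 1), (0, 1)]))  # 6
```

For the triangle graph of this example one would encode its vertices and edges analogously and additionally restrict the count to orientations with energy flowing in through \(e_{1}\).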
### Feynman rules for S-matrix elements

(The beginning of this subsection, including the expression of the full S-matrix element in terms of the reduced matrix elements below, is not recoverable from the source.) In that expression, \(Z\) are renormalization constants, and \(s_{G,\sigma}(\{p_{i}\}_{i\in V^{\rm ext}_{\rm in}},\{p_{f}\}_{f\in V^{\rm ext}_{\rm out}})\) is the _reduced_ S-matrix element without trivial prefactors, \[s_{G,\sigma}=\int\left[\frac{\prod_{v\in V^{\rm int}\setminus\{w\}}{\rm d}^{3}\vec{y}_{v}}{\prod_{e\in E^{\rm int}}2|\vec{z}_{e}|}\right]\left[\frac{\prod_{a\in V^{\rm ext}}e^{-i\vec{y}_{v_{a}}\cdot\vec{p}_{a}}}{\prod_{c\in\Gamma^{\rm int}}\gamma_{c}}\right]\widehat{\mathcal{F}}^{\{p_{a}^{0}\}}_{G,\sigma}(\boldsymbol{\gamma}^{\dagger}+i\varepsilon\mathbf{1})\,\] where the exclusion of one internal vertex \(w\) reflects the translation invariance exploited below, and \(\widehat{\mathcal{F}}^{\{p_{a}^{0}\}}_{G,\sigma}\) is the Fourier transform of the flow polytope of the digraph (its definition is likewise not recoverable from the source).
We can parameterize the polytope by setting \(\mathbf{E}=(E_{1},E_{2},E_{3})=(E,-p_{3}^{0},-p_{2}^{0}-E)\) and letting \(E\) vary between \(0\) and \(-p_{2}^{0}\). The polytope \(\mathcal{F}^{\{p_{a}^{0}\}}_{G,\boldsymbol{\sigma}}\) is therefore a line segment. Using this parameterization, we can explicitly evaluate the Fourier transformation of the flow polytope associated to the digraph above, \[\widehat{\mathcal{F}}^{\{p_{a}^{0}\}}_{G,\boldsymbol{\sigma}}(\boldsymbol{\gamma}^{\dagger}+i\varepsilon\mathbf{1})=\int_{\mathcal{F}^{\{p_{a}^{0}\}}_{G,\boldsymbol{\sigma}}}\mathrm{d}\mathbf{E}\ e^{i\mathbf{E}\cdot(\boldsymbol{\gamma}^{\dagger}+i\varepsilon\mathbf{1})}=\int_{0}^{-p_{2}^{0}}\mathrm{d}E\ e^{iE(\gamma_{1}^{\dagger}+i\varepsilon)-ip_{3}^{0}(\gamma_{2}^{\dagger}+i\varepsilon)-i(p_{2}^{0}+E)(\gamma_{3}^{\dagger}+i\varepsilon)}=-p_{2}^{0}\,e^{-ip_{3}^{0}(\gamma_{2}^{\dagger}+i\varepsilon)-ip_{2}^{0}(\gamma_{3}^{\dagger}+\frac{1}{2}\gamma_{1}^{\dagger}-\frac{1}{2}\gamma_{3}^{\dagger}+i\varepsilon)}\ \mathrm{sinc}\left(\frac{p_{2}^{0}(\gamma_{1}^{\dagger}-\gamma_{3}^{\dagger})}{2}\right)\, \tag{13}\] where \(\mathrm{sinc}(x)=\frac{\sin(x)}{x}\). This expression is manifestly bounded since \(|\mathrm{sinc}(x)|\leq 1\). Finally, the reduced S-matrix contribution of the digraph above is \[s_{G,\boldsymbol{\sigma}}(\{p_{1}\},\{p_{2},p_{3}\})=\int\frac{\left[\prod_{v\in\{2,3\}}\mathrm{d}^{3}\vec{y}_{v}\right]\left[e^{-i\vec{y}_{2}\cdot\vec{p}_{2}-i\vec{y}_{3}\cdot\vec{p}_{3}}\right]}{8|\vec{z}_{4}||\vec{z}_{5}||\vec{z}_{6}|}\widehat{\mathcal{F}}^{\{p_{a}^{0}\}}_{G,\boldsymbol{\sigma}}(\boldsymbol{\gamma}^{\dagger}+i\varepsilon\mathbf{1})\Big{|}_{\vec{y}_{1}=0}\, \tag{14}\] where we used the freedom guaranteed by translation invariance to fix one vertex position at the origin, in this case \(\vec{y}_{1}=0\).

### Feynman rules for cut diagrams

The FOPT Feynman rules for a digraph \((G,\boldsymbol{\sigma})\) with a cut \(\mathfrak{C}\) are:

1. The integral is \(0\) if the closed directed graph \((G,\boldsymbol{\sigma})^{\circ}\) is not strongly connected, or if the admissible paths on the cut do not go from the \(\ominus\)-side to the \(\oplus\)-side of the graph.
2. Multiply by a factor of \(-ig\) (\(ig\)) for each \(\ominus\)-side (\(\oplus\)-side) interaction vertex.
3. For each internal vertex \(v\in V^{\mathrm{int}}\) of the digraph \((G,\boldsymbol{\sigma})\) integrate over 3-dimensional space with the measure \(2\pi\int\mathrm{d}^{3}\vec{y}_{v}\).
4. For each edge \(e\) of the graph multiply by a factor of \(\frac{\pm i}{8\pi^{2}|\vec{z}_{e}|}\), with a \(-\) sign for a \(\ominus\)-side or a cut edge, and a \(+\) sign for a \(\oplus\)-side edge.
5. For each entirely uncut directed admissible path \(\mathrm{p}_{\ell}\) of \((G,\boldsymbol{\sigma})^{\circ}\), multiply by a factor of \[\frac{i}{\sum_{e\in\mathrm{p}_{\ell}}|\vec{z}_{e}|+\tau_{\mathrm{p}_{\ell}}+i\eta}\quad\text{if }\mathrm{p}_{\ell}\text{ consists entirely of }\ominus\text{-side edges},\] \[\frac{i}{-\sum_{e\in\mathrm{p}_{\ell}}|\vec{z}_{e}|+\tau_{\mathrm{p}_{\ell}}+i\eta}\quad\text{if }\mathrm{p}_{\ell}\text{ consists entirely of }\oplus\text{-side edges},\] where the sum in the denominator goes over all edges that are in the admissible path \(\mathrm{p}_{\ell}\), and \(\tau_{\mathrm{p}_{\ell}}\) is the time difference that has passed while going through the \(\circ\) vertex, or \(0\) if the admissible path does not go through the \(\circ\) vertex, i.e. is a cycle.
6. For each directed admissible path \(\mathrm{p}_{\ell}\) of \((G,\boldsymbol{\sigma})^{\circ}\) that passes the cut \(\mathfrak{C}\), multiply by a factor of \[\frac{-2i|\vec{z}_{e_{\mathfrak{C}}}|}{\left(\sum_{e\in\mathrm{p}_{\ell}^{\ominus}}|\vec{z}_{e}|-\sum_{e\in\mathrm{p}_{\ell}^{\oplus}}|\vec{z}_{e}|+\tau_{\mathrm{p}_{\ell}}+i\eta\right)^{2}-\vec{z}_{e_{\mathfrak{C}}}^{\,2}}\,\] where we sum over the uncut \(\ominus\)-side and \(\oplus\)-side edges in \(\mathrm{p}_{\ell}\), denoted \(\mathrm{p}_{\ell}^{\ominus}\) and \(\mathrm{p}_{\ell}^{\oplus}\), and where \(e_{\mathfrak{C}}\) denotes the unique edge of the admissible path that is on the cut. The edge is unique because, once the path passes over the cut edge, the energy cannot flow back through the cut.
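For intuition, rules 5 and 6 are simple rational factors of the edge lengths and external time differences, and can be evaluated numerically once a digraph and a cut are fixed. Below is a minimal sketch (our own toy code; the function names and the sample edge lengths are assumptions, not part of any FOPT reference implementation):

```python
def uncut_path_factor(edge_lengths, tau, side, eta=1e-6):
    """Rule 5: i / (side * sum|z_e| + tau + i*eta); side=+1 for a purely
    minus-side path, side=-1 for a purely plus-side path."""
    return 1j / (side * sum(edge_lengths) + tau + 1j * eta)

def cut_path_factor(minus_side, plus_side, z_cut, tau, eta=1e-6):
    """Rule 6: -2i|z_cut| / ((sum_minus - sum_plus + tau + i*eta)^2 - z_cut^2)."""
    a = sum(minus_side) - sum(plus_side) + tau + 1j * eta
    return -2j * abs(z_cut) / (a * a - z_cut ** 2)

# arbitrary sample numbers: two minus-side edges, one plus-side edge, one cut edge
print(uncut_path_factor([0.8, 1.3], tau=0.5, side=+1))
print(cut_path_factor([0.8, 1.3], [0.4], z_cut=0.9, tau=0.5))
```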
Example. As an example we consider the cut integrals associated to the graph of diagram (15) (figure not recoverable from the source). We have three different admissible cuts, as permutations of the internal vertices result in topologically indistinguishable graphs (diagram (16), figure not recoverable from the source). Recall that in addition to the positivity requirements, only energy flows from the \(\ominus\)-side to the \(\oplus\)-side are allowed on cut edges. Therefore only the energy flows shown in diagrams (17) and (18) (figures not recoverable from the source) are compatible with the cuts and the positive-energy requirement. In those diagrams, each row features only one orientation of the graph and each column a possible cut. In this example, there are only two admissible paths compatible with a given cut.
The cut diagram \((1a)\) has three admissible routes through the cut, namely the paths 23, 153 and 14 (the corresponding diagrams are not recoverable from the source).
Hence, applying the FOPT-cut Feynman rules from above to the cut diagram \((1a)\) results in the following expression \[A_{(\boldsymbol{\sigma},\mathfrak{C})_{(1a)}}=-8\frac{(2\pi)^{2}g^{4}}{(8\pi^{2})^{5}}\int\frac{\mathrm{d}^{3}\vec{y}_{1}\,\mathrm{d}^{3}\vec{y}_{2}}{|\vec{z}_{1}||\vec{z}_{2}||\vec{z}_{3}||\vec{z}_{4}||\vec{z}_{5}|}\times\frac{|\vec{z}_{2}|}{(-|\vec{z}_{3}|+\tau+i\eta)^{2}-\vec{z}_{2}^{\,2}}\,\frac{|\vec{z}_{5}|}{(|\vec{z}_{1}|-|\vec{z}_{3}|+\tau+i\eta)^{2}-\vec{z}_{5}^{\,2}}\,\frac{|\vec{z}_{4}|}{(|\vec{z}_{1}|+\tau+i\eta)^{2}-\vec{z}_{4}^{\,2}}\, \tag{19}\]
where we accounted for the admissible paths through the cut, 23, 153 and 14, via the appropriate denominators, and \(\tau=x_{2}^{0}-x_{1}^{0}\). One can check that the remaining cut diagrams, which have a different-sized cut than \((1a)\), will have the same integral measure as \((1a)\) [1]. This implies that virtual and real IR divergences could cancel locally in FOPT.

## 2 Massless fermion lines in FOPT

In this section we extend the FOPT framework to massless fermion lines. To do so we use the fact that the fermion propagator in coordinate space, \(S(x)\), is related to the scalar propagator \(\Delta(x)\) by \(S(x)=\gamma_{\mu}\partial^{\mu}\Delta(x)\). This leads one to modify the intermediate steps of the derivation of FOPT by considering the integral (to be contracted with \(\gamma_{\mu}\)) \[I_{e}^{\mu}=\int\mathrm{d}z_{e}^{0}\,\frac{z_{e}^{\mu}\,\delta\left(z_{e}^{0}-x_{e}^{0}\right)}{\left(-(z_{e}^{0})^{2}+\vec{z}_{e}^{\,2}+i\eta\right)^{2}}=\int_{-\infty}^{+\infty}\frac{\mathrm{d}E_{e}}{2\pi}\int\mathrm{d}z_{e}^{0}\,\frac{z_{e}^{\mu}\,e^{iE_{e}(z_{e}^{0}-x_{e}^{0})}}{(-z_{e}^{0}+|\vec{z}_{e}|+i\eta)^{2}(z_{e}^{0}+|\vec{z}_{e}|+i\eta)^{2}}\, \tag{21}\] for each fermionic edge \(e\) of a given diagram, where \(x_{e}^{0}\) is the time component of the propagator's argument. We see that the integration in \(z_{e}^{0}\), after proper closing of the contour of integration, will pick up the residues of the two double poles at \(z_{e}^{0}=\pm(|\vec{z}_{e}|+i\eta)\). These are:

* For the spatial components of the numerator (dropping the \(i\eta\)), \[\mathrm{Res}(f,\pm(|\vec{z}_{e}|+i\eta))=\theta(\pm E_{e})\,z_{e}^{i}\Big{(}\frac{2i|\vec{z}_{e}|E_{e}\mp 2}{(2|\vec{z}_{e}|)^{3}}\Big{)}e^{\pm iE_{e}(|\vec{z}_{e}|\mp x_{e}^{0})}\.\] (22)
* For the time component of the numerator, \[\mathrm{Res}(f,\pm(|\vec{z}_{e}|+i\eta))=\theta(\pm E_{e})\Big{(}\frac{\pm iE_{e}}{4|\vec{z}_{e}|}\Big{)}e^{\pm iE_{e}(|\vec{z}_{e}|\mp x_{e}^{0})}\.\] (23)

Hence, this integration produces, after defining the lightlike vector \(\hat{z}_{e,\sigma_{e}}^{\mu}=(\sigma_{e}|\vec{z}_{e}|,\vec{z}_{e})\) with \(\sigma_{e}=\pm 1\), \[I_{e}^{\mu}=\frac{i}{(2|\vec{z}_{e}|)^{3}}\sum_{\sigma_{e}=\pm 1}\hat{z}_{e,\sigma_{e}}^{\mu}\Big{(}2|\vec{z}_{e}|\frac{\partial}{\partial|\vec{z}_{e}|}-2\sum_{i=1}^{3}\delta^{\mu i}\Big{)}\int_{-\infty}^{+\infty}\mathrm{d}E_{e}\ \theta(\sigma_{e}E_{e})\,e^{iE_{e}(\sigma_{e}|\vec{z}_{e}|-x_{e}^{0}+i\eta)}\, \tag{24}\] where \(\sigma_{e}\) assigns \(\pm 1\) to an edge \(e\) for a positive or negative energy flow. This expression can be treated similarly to the scalar FOPT case. Thus, the resulting Feynman rule is that each fermion line \(e\) contributes an extra factor \[\boxed{\gamma_{\mu}\frac{\hat{z}_{e,\sigma_{e}}^{\mu}}{(2|\vec{z}_{e}|)^{2}}\Big{(}2\sum_{i=1}^{3}\delta^{\mu i}-2|\vec{z}_{e}|\frac{\partial}{\partial|\vec{z}_{e}|}\Big{)}}\,\] where we point out that two upper indices are repeated (and summed over).
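Since this rule involves the contraction \(\gamma_{\mu}\hat{z}^{\mu}_{e,\sigma_{e}}\) with a lightlike vector, a useful sanity check is that the corresponding slashed matrix squares to zero, \((\gamma_{\mu}\hat{z}^{\mu})(\gamma_{\nu}\hat{z}^{\nu})=\hat{z}^{2}\,\mathbb{1}=0\). The short numerical sketch below is our own illustration (the Dirac representation and the sample spatial vector are assumptions):

```python
import numpy as np

# Pauli matrices and Dirac-representation gamma matrices, metric (+,-,-,-)
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), complex)
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, si], [-si, Z2]]) for si in s]

def slashed(z):
    """gamma_mu z^mu = z^0 gamma^0 - z_vec . gamma_vec (lowered-index contraction)."""
    return z[0] * gamma[0] - sum(z[k] * gamma[k] for k in (1, 2, 3))

# lightlike FOPT vector z_hat = (sigma_e * |z_vec|, z_vec) for an arbitrary z_vec
z_vec = np.array([0.3, -1.2, 0.7])
z_hat = np.concatenate(([np.linalg.norm(z_vec)], z_vec))  # sigma_e = +1
print(np.allclose(slashed(z_hat) @ slashed(z_hat), np.zeros((4, 4))))  # True
```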
## 3 Steps towards FOPT for massive scalar lines in arbitrary dimensions

Just like loop-tree duality [2, 3, 4, 5, 6, 7, 8, 9, 10], FOPT has similarities to Light-Cone Ordered Perturbation Theory (LCOPT) [11], and most treatments in LCOPT can be extended to FOPT. In [11], the inclusion of massive lines and the extension of LCOPT to arbitrary dimensions is performed by using the dispersive representation of a scalar propagator of mass \(m\) in \(D=4-2\varepsilon\) dimensions, \[\Delta(z^{2},m)\ =\ \int_{0}^{\infty}\frac{\mathrm{d}z^{\prime\,2}}{\pi}\ \frac{\text{Im}\,\Delta\left(z^{\prime\,2}+i\eta,m\right)}{-z^{2}+z^{\prime\,2}+i\eta}. \tag{25}\] The imaginary parts for the massless and massive scalar propagators in \(D=4-2\varepsilon\) dimensions are given in [11, 12]. Following this, one must modify eq. (2) as \[A_{G}(x_{1},\ldots,x_{|V^{\text{ext}}|})=\frac{(-ig)^{|V^{\text{int}}|}}{(2\pi)^{2|E|}}\left[\prod_{v\in V^{\text{int}}}\int\mathrm{d}^{4}y_{v}\right]\left[\prod_{e\in E}\int_{0}^{\infty}\frac{\mathrm{d}z^{\prime\,2}_{e}}{\pi}\frac{\text{Im}\,\Delta\left(z^{\prime\,2}_{e}+i\eta,m\right)}{-z^{2}_{e}+z^{\prime\,2}_{e}+i\eta}\right]. \tag{26}\] With this representation, it is possible to perform the full treatment of FOPT to obtain that an orientation \(\sigma\) contributing to a graph \(G\) in a massive scalar \(D\)-dimensional QFT equals \[A_{G,\sigma}(x_{1},\ldots,x_{|V^{\text{ext}}|})=\frac{(2\pi g)^{|V^{\text{int}}|}}{(-4\pi^{2})^{|E|}}\left(\prod_{v\in V^{\text{int}}}\int\mathrm{d}^{3}\vec{y}_{v}\right)\left(\prod_{e\in E}\int_{0}^{\infty}\frac{\mathrm{d}z^{\prime\,2}_{e}}{\pi}\frac{\text{Im}\,\Delta\left(z^{\prime\,2}_{e}+i\eta,m\right)}{2\sqrt{|\vec{z}_{e}|^{2}+z^{\prime\,2}_{e}}}\right)\prod_{\text{p}\in\Gamma}\frac{1}{\gamma_{\text{p}}+\tau_{\text{p}}+i\eta}\, \tag{27}\] where now each path length \(\gamma_{\text{p}}\) is modified as \[\gamma_{\text{p}}=\sum_{e\in\text{p}}\sqrt{|\vec{z}_{e}|^{2}+z^{\prime\,2}_{e}}. \tag{28}\] Thus, FOPT can be extended to massive lines and arbitrary dimensions by the inclusion of dispersive integrals and by substituting \(|\vec{z}_{e}|\to\sqrt{|\vec{z}_{e}|^{2}+z^{\prime\,2}_{e}}\) for each edge of a given diagram. These dispersive integrals disappear when the massless and four-dimensional limits are taken, since the discontinuity of \(\Delta\) vanishes away from the lightcone and approaches a delta function, reproducing the known results of [1]. The extensions of FOPT presented in these proceedings are part of ongoing research [13].
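To see explicitly how the massless, four-dimensional limit emerges, suppose, as a sketch with the overall normalization left symbolic as \(c\) (the precise discontinuities are given in [11, 12]), that \(\text{Im}\,\Delta(z^{\prime\,2}+i\eta,0)\to-\pi c\,\delta(z^{\prime\,2})\) as \(D\to 4\). Then each dispersive integral collapses, \[\int_{0}^{\infty}\frac{\mathrm{d}z^{\prime\,2}_{e}}{\pi}\,\frac{\text{Im}\,\Delta(z^{\prime\,2}_{e}+i\eta,0)}{-z_{e}^{2}+z^{\prime\,2}_{e}+i\eta}\;\to\;\frac{-c}{-z_{e}^{2}+i\eta}\,\] and each modified path length reduces to \(\gamma_{\text{p}}=\sum_{e\in\text{p}}\sqrt{|\vec{z}_{e}|^{2}+z^{\prime\,2}_{e}}\to\sum_{e\in\text{p}}|\vec{z}_{e}|\), so that eq. (27) collapses back, up to normalization, to the massless rule of eq. (5).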
2310.11147
Uncovering wall-shear stress dynamics from neural-network enhanced fluid flow measurements
Friction drag from a turbulent fluid moving past or inside an object plays a crucial role in domains as diverse as transportation, public utility infrastructure, energy technology, and human health. As a direct measure of the shear-induced friction forces, an accurate prediction of the wall-shear stress can contribute to sustainability, conservation of resources, and carbon neutrality in civil aviation as well as enhanced medical treatment of vascular diseases and cancer. Despite such importance for our modern society, we still lack adequate experimental methods to capture the instantaneous wall-shear stress dynamics. In this contribution, we present a holistic approach that derives velocity and wall-shear stress fields with impressive spatial and temporal resolution from flow measurements using a deep optical flow estimator with physical knowledge. The validity and physical correctness of the derived flow quantities are demonstrated with synthetic and real-world experimental data covering a range of relevant fluid flows.
Esther Lagemann, Steven L. Brunton, Christian Lagemann
2023-10-17T10:56:26Z
http://arxiv.org/abs/2310.11147v2
# Uncovering wall-shear stress dynamics from neural-network enhanced fluid flow measurements ###### Abstract Friction drag from a turbulent fluid moving past or inside an object plays a crucial role in domains as diverse as transportation, public utility infrastructure, energy technology, and human health. As a direct measure of the shear-induced friction forces, an accurate prediction of the wall-shear stress can contribute to sustainability, conservation of resources, and carbon neutrality in civil aviation as well as enhanced medical treatment of vascular diseases and cancer. Despite such importance for our modern society, we still lack adequate experimental methods to capture the instantaneous wall-shear stress dynamics. In this contribution, we present a holistic approach that derives velocity and wall-shear stress fields with impressive spatial and temporal resolution from flow measurements using a deep optical flow estimator with physical knowledge. The validity and physical correctness of the derived flow quantities are demonstrated with synthetic and real-world experimental data covering a range of relevant fluid flows. ## 1 Introduction Whenever a fluid flow passes a surface, the induced velocity gradient generates tangential stresses, known as wall-shear stress, at the fluid-structure interface. The precise knowledge of the wall-shear stress and the associated friction forces is highly relevant in various domains ranging from the transportation sector [19, 52, 74, 84] and the public utility infrastructure [16, 22, 28] to energy conversion [15, 20, 58] and human-health-related areas [2, 10, 24, 78, 85]. For instance, accurate wall-shear stress predictions are essential for the development of friction drag reduction techniques in civil aviation [3, 51, 63] with a direct impact on fuel efficiency and emissions. Fuel consumption scales almost linearly with the aerodynamic drag at cruise condition [66], so drag reduction of only a few percent can substantially reduce the required fossil fuels and the aircraft emissions. Access to instantaneous, highly resolved wall-shear stress information is also medically relevant, since shear-related friction forces can promote vascular dysfunction and atherosclerosis of human arteries [12, 30, 73]. Recent research has also suggested that wall-shear stress in the lymphatic vasculature plays a crucial role in cancer progression and metastasis [42]. However, these efforts are currently challenged by a critical lack of accurate spatial and temporal resolution of the wall-shear stress dynamics for real-world applications. Numerical simulations can provide high spatial and temporal resolutions, but the computational costs associated with high-fidelity simulations currently limit these computations to rather simple flows with limited physical complexity, e.g., low flow speeds and basic geometries [80]. Experimental measurements provide access to much more realistic flow conditions, but are handicapped by a notorious lack of methods that simultaneously capture the temporal and the spatial evolution of the wall-shear stress [59]. For instance, surface hot films [1, 4, 18] and wall-mounted hot-wire probes [29, 56, 75] can only measure temporal fluctuations at a single location, while oil-film interferometry [48, 71, 76] only visualizes the spatial wall-shear stress without temporal dynamics.
Alternatively, emerging research focuses on developing empirical models to infer the wall-shear stress from velocity data further away from the wall [9, 52, 54, 55] or from limited sensor measurements [5, 6]. However, such models are not yet used as a replacement for actual wall-shear stress data because they possess limited validity and generalizability, they typically rely on empirical coefficients, and there is no general agreement on a single model representative of all physical processes. To overcome the existing constraints, we propose a novel workflow that combines experimental measurements and data-driven modeling (i.e., machine learning) to provide accurate wall-shear stress dynamics with an exceptional spatial and temporal resolution. We rely on a well-established optical velocity measurement technique called particle-image velocimetry (PIV) [64, 69] since it is widely used in fluid dynamics laboratories. PIV derives the flow field from images of tracer particles, which are added to the flow for visualization purposes. The particle displacement between two consecutive images and their interframing time is used to calculate the velocity distribution. In traditional PIV processing, the original image pairs are divided into sub-windows, which are cross-correlated to obtain an averaged velocity estimate per window [64]. This statistical approach inherently coarsens the spatial resolution of the resulting velocity field compared to the original image size. For instance, a generic PIV evaluation of particle image pairs containing \(1024\times 1024\) px\({}^{2}\) results in just \(124\times 124\) velocity vectors. Although the wall-shear stress can theoretically be derived from the fluid velocity close to the wall by physical laws [62], the typically low data resolution and the largely underpredicted velocity gradients close to the wall [33] usually inhibit such a derivation from PIV data. Existing approaches specifically tailored to the wall-shear stress estimation like the Clauser chart method [17, 23] and the single-pixel ensemble correlation [34, 82] are restricted to time-averaged quantities, i.e., wall-shear stress dynamics cannot be uncovered. An emerging body of research has successfully demonstrated how fluid dynamics research can benefit from incorporating machine learning techniques [8, 13, 36, 45, 81]. In this respect, deep optical flow networks [31, 77] were particularly tailored to enhance PIV processing. By maintaining the original image resolution in the resulting velocity field, the recently proposed deep learning framework RAFT-PIV [37, 38] avoids the spatial resolution reduction and the gradient smoothing of established PIV routines without compromising on physical accuracy. For the example given above, RAFT-PIV yields 64 times more velocity vectors and an eightfold resolution refinement in each spatial direction. Consequently, the combination of a deep optical flow network with physical laws constitutes a promising approach to derive accurate wall-shear stress measures with outstanding spatial and temporal resolution from PIV velocity measurements. Our proposed workflow, incorporating the novel framework _WSSflow_, is visualized in figure 1 based on experimental measurements of a blood vessel model. WSSflow ties in with the success of RAFT-PIV to derive the wall-shear stress dynamics from optical-flow based high-resolution velocity distributions close to the wall. 
More precisely, we use the fact that the flow in the viscous sublayer, which is a very thin wall-parallel layer adjacent to the boundary, is dominated by viscous forces. This allows a direct calculation of the wall-shear stress \(\tau_{w}(x,t)\) as a function of time \(t\) and streamwise location \(x\) from the streamwise velocity \(u(x,y,t)\), the dynamic viscosity of the fluid \(\eta\), and the wall-normal location \(y\) at which \(u\) is measured. The only prerequisite for this approach is that the measurement setup captures the viscous sublayer by at least one pixel, which is no limiting factor in the majority of applications. We verify the successful estimation of time-averaged and instantaneous fluid flow quantities with fluid dynamical test cases for which either a ground truth distribution or an analytic solution exists. Apart from synthetically generated particle images, we also demonstrate a convincing quality of the wall-shear stress estimation in real experimental settings that are subject to measurement uncertainties. Besides important academic configurations like turbulent channel flows, we further demonstrate the generalization ability of our framework on challenging experimental measurements of an elastic blood vessel flow. Overall, the success of WSSflow on this broad range of fluid flows provides compelling evidence of a new capability to obtain quantities of interest, such as wall-shear stress, that were previously inaccessible with existing experimental and computational approaches.

Figure 1: **Workflow using the novel deep learning framework _WSSflow_ to estimate velocity and wall-shear stress fields exemplified by the elastic blood vessel model.** I.) Experimental particle-image velocimetry (PIV) measurements with the fluid dynamical model are performed. The fluid flow is observed via tracer particles, which are illuminated by a pulsed laser light sheet and captured with a high-quality camera. II.) Gray-scale sequential particle image pairs are obtained from the PIV measurement with the bounding vessel wall at the top and bottom of each image. III.) Each particle image pair is evaluated by the deep learning framework WSSflow, which outputs an optical flow at the input image resolution and a high-resolution wall-shear stress distribution. In detail, both images are processed by a shared feature encoder, a subsequent all-pairs correlation, and a 4-level correlation pyramid. The output is fed into a global motion aggregation module (GMA) together with the context-encoded first particle image. The GMA addresses image occlusions by transferring flow field information from non-occluded regions to image parts with low texture. The GMA output, the context-encoded first particle image, and the correlation pyramid output are processed by a Convolutional Gated Recurrent Unit (ConvGRU), which iteratively updates the flow prediction until the final optical flow is obtained. The flow field close to the wall is extracted and combined with physical laws to derive the wall-shear stress \(\tau_{w}(x)=\eta u(x)/y\), where \(u(x)\) is the streamwise velocity at streamwise location \(x\), \(\eta\) the dynamic viscosity of the fluid, and \(y\) the wall-normal location at which \(u\) is measured. IV.) WSSflow provides (a) a time series of two-dimensional high-resolution velocity fields and (b) the temporal evolution of the spatially developing wall-shear stress. V.) The obtained flow field information can be used prospectively for medical diagnostics and therapy.
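The viscous-sublayer relation \(\tau_{w}(x)=\eta\,u(x)/y\) quoted in the workflow is straightforward to apply once a densely resolved near-wall velocity field is available. The sketch below is a minimal illustration of this post-processing step (our own code, not the WSSflow implementation; array shapes, variable names and sample values are assumptions):

```python
import numpy as np

def wall_shear_stress(u, dy, eta, wall_row=0, probe_offset=1):
    """Estimate tau_w(x) = eta * u(x, y1) / y1 from a 2D velocity field.

    u           : streamwise velocity, shape (ny, nx), u[wall_row] at the wall
    dy          : wall-normal grid spacing in metres (per-pixel resolution)
    eta         : dynamic viscosity in Pa*s
    probe_offset: wall-normal index of the probe point (must lie inside the
                  viscous sublayer, y+ <= 5, for the relation to hold)
    """
    y1 = probe_offset * dy                   # wall-normal probe location
    u1 = u[wall_row + probe_offset, :]       # near-wall velocity row
    return eta * u1 / y1                     # tau_w(x), shape (nx,)

# toy example: a linear, sublayer-like profile u = y * du/dy
ny, nx, dy, eta = 64, 128, 1e-5, 1.8e-5     # air-like viscosity, 10 um grid
dudy = 2.0e3                                 # assumed wall-normal gradient, 1/s
u = np.tile((np.arange(ny) * dy * dudy)[:, None], (1, nx))
print(wall_shear_stress(u, dy, eta)[:3])     # ~ eta * dudy = 0.036 Pa per location
```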
## 2 Results As an effective workflow to extract temporal and spatial velocity and wall-shear stress distributions from PIV measurements, our neural processing tool WSSflow presents a significant practical advance for many real-world applications. To demonstrate the generalization ability of our method, we carry out systematic experiments in three representative measurement campaigns. The capability of WSSflow to extract true physical quantities with higher resolution is verified by comparisons with a state-of-the-art (SOTA) PIV routine and with complementary high-fidelity numerical simulations and analytic solutions, which are referred to as _baseline_ data. Details of the algorithms and the baseline data are provided in the Supplementary Information. Datasets. We consider two academic configurations frequently used for fundamental turbulence research, namely flat-plate and wavy turbulent channel flows, and a real-world inspired oscillating flow through a flexible blood vessel model. Illustrations of the experimental setups and the data processing workflow are provided in figure 2. The turbulent channel flow refers to the turbulent flow through a rectangular duct, where the length and the width of the duct are much larger than the height. An important measure to quantify the respective flow conditions is the friction Reynolds number \(Re_{\tau}=u_{\tau}h/\nu\), where \(u_{\tau}\) is the friction velocity and \(\nu=\eta/\rho\) the kinematic viscosity with \(\eta\) being the dynamic viscosity and \(\rho\) the fluid density. Since the friction velocity is directly related to the wall-shear stress \(\tau_{w}=\rho u_{\tau}^{2}\), a precise knowledge of the wall-shear stress is of high significance for the flow state quantification and for fundamental turbulence research. The wavy turbulent channel flow is a turbulent channel flow with a sinusoidal side wall on one side of the rectangular duct. Thus, this flow exhibits changing pressure gradients along the streamwise direction, creating challenging flow topologies. Characteristics like an unsteady shear layer, recirculation, and local detachment make it a representative configuration for other flows that experience similar features in a variety of real-world applications. For both channel flows, synthetic and real-world experimental data are analyzed.
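The friction quantities introduced above are related by simple algebra; a small helper like the following (our own sketch, with assumed sample values) converts between the friction Reynolds number, the friction velocity, and the mean wall-shear stress:

```python
def friction_quantities(re_tau, h, nu, rho):
    """Convert Re_tau = u_tau * h / nu into u_tau and tau_w = rho * u_tau**2.

    re_tau : friction Reynolds number (-)
    h      : channel height scale used in the Re_tau definition, in m
    nu     : kinematic viscosity in m^2/s
    rho    : fluid density in kg/m^3
    """
    u_tau = re_tau * nu / h          # friction velocity in m/s
    tau_w = rho * u_tau ** 2         # mean wall-shear stress in Pa
    return u_tau, tau_w

# assumed example: an air flow at Re_tau = 1000 with h = 0.05 m
print(friction_quantities(1000.0, 0.05, 1.5e-5, 1.2))  # (0.3 m/s, 0.108 Pa)
```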
Figure 2: **Outline of the experimental measurements and data processing steps.** Three test cases are investigated in this study: I. a turbulent channel flow (top), II. a wavy turbulent channel flow (center), and III. an elastic blood vessel flow (bottom). At first, PIV measurements are conducted for each configuration. A pulsed laser light sheet illuminates tracer particles, which are added to visualize the flow, and particle image pairs are recorded with a high-speed camera. From each image pair of the time series, the novel framework WSSflow extracts the instantaneous velocity field \(u=f(x,y)\) and the instantaneous spatial wall-shear stress distribution \(\tau_{w}=f(x)\). Both turbulent channel flows are investigated in an Eiffel-type wind tunnel with an exchangeable side wall that allows switching between a flat (case I) and a wavy (case II) lower channel wall. The sketches represent an extract of the measurement section equipped with two glass windows for optical access. The measurement plane is oriented in the streamwise (\(x\)) and wall-normal (\(y\)) direction with the fluid flow from left to right. The physical blood vessel model (case III) is made from transparent silicone to allow optical access. It is installed in a water tunnel facility specifically designed to study a variety of human blood vessel models. The oscillatory pipe flow is generated with a piston pump, while two reservoirs generate the desired transmural pressure. A heating circuit tempers the water-glycerin mixture to 25.6\({}^{\circ}\)C such that the rheological properties of the fluid replicate those of human blood at 37\({}^{\circ}\)C. The measurement plane is oriented in the streamwise (\(x\)) and radial (\(r\)) direction. Please note that the sketches of the experimental setups are not to scale. Technical drawings and detailed descriptions are provided in the Supplementary Information.

Synthetic particle images are necessary to compare the WSSflow output against a known ground truth. Since the true underlying flow field of experimental PIV data is unknown on an instantaneous level, these datasets can only be compared with the baseline for time-averaged (statistically converged) quantities. However, since we are specifically interested in the wall-shear stress distribution at each individual time instant, we have to rely on synthetic data to demonstrate the accuracy of the WSSflow output. To generate realistic and physically significant synthetic particle images, we use the flow fields obtained from high-fidelity numerical simulations as the underlying flow topology. The rendering pipeline is described in the Supplementary Information. Bridging the academic and the real world, an experimental model of an elastic blood vessel is investigated as a third test case. Since cardiovascular diseases are the leading cause of death worldwide, a detailed understanding of how such diseases emerge and develop is of utmost importance. One key aspect is the intricate interaction of the blood flow and the elastic vessel wall, since it influences the endothelial cell function and phenotype. A direct measure of this interaction is the wall-shear stress. Thus, a precise knowledge of its instantaneous and spatially developing behavior is an inevitable requirement for human health research. Here, we study the oscillating flow through a transparent elastic vessel model at flow conditions similar to the human abdominal aorta. ### Turbulent channel flow The streamwise velocity and the wall-shear stress distributions estimated by the proposed deep learning framework WSSflow are given in figure 3. The upper row (a-c) depicts quantities from the synthetic dataset and the lower row (d-f) is related to real-world experimental data. Where available, ground-truth information is provided via the baseline data, and quantities estimated with the state-of-the-art PIV processing tool SOTA are given for comparison. Overall, the figure demonstrates an exceptional accuracy of WSSflow in providing physically correct flow quantities, especially with respect to the instantaneous wall-shear stress. Excellent agreement is observed between the streamwise velocity profiles obtained via WSSflow and the baseline profiles. This holds for the profiles averaged in time and streamwise direction (a,d) as well as for an instantaneous profile at an arbitrary time step (b). Comparing the spatial resolution of the velocity profiles given by WSSflow with the output from the state-of-the-art PIV processing SOTA, the tremendous gain in spatial resolution achieved by WSSflow is striking.
Furthermore, the capabilities of WSSflow allow resolving the near-wall region (small \(y^{+}\)), which is of utmost importance for, e.g., understanding drag generation and turbulence production. On the contrary, SOTA is not able to represent the viscous sublayer (\(y^{+}\leq 5\)), nor can it accurately capture the velocity gradients in the buffer layer (\(5\lesssim y^{+}\lesssim 30\)) and the lower logarithmic layer (\(y^{+}\gtrsim 30\)). This issue is widely known in the experimental fluid dynamics community [64, 69] since it is rooted in the statistical approach of traditional PIV processing. Because SOTA outputs a velocity estimate for each sub-window instead of for each pixel like WSSflow, the original data resolution is massively coarsened. The inherent averaging across each sub-window further constrains the ability to resolve velocity gradients, which are particularly strong close to the wall. These limitations inhibit a derivation of the wall-shear stress from such data due to missing and/or unreliable velocity information in the viscous sublayer. Therefore, figure 3 (c,f) only depict the instantaneous wall-shear stress distributions of WSSflow. The synthetic configuration (c) follows the baseline remarkably well. Since no baseline information exists for the real-world data, the experimental results (f) cannot be compared quantitatively. However, the magnitude and the frequency of the fluctuations are comparable to the synthetic data derived from high-fidelity numerical simulations, which indicates a physical significance in a qualitative sense. Overall, the results related to the turbulent channel flow data build a valuable basis for future experimental analyses targeting, for instance, how different scales from the outer layer (large \(y^{+}\)) interact within a turbulent wall-bounded flow to modulate the wall-shear stress dynamics. Such analyses are currently limited to numerical investigations since the near-wall flow field cannot be sufficiently well resolved in space and time in experimental settings. On the contrary, high-fidelity numerical simulations are restricted to rather low Reynolds numbers due to the immense increase of the computational cost at high Reynolds numbers [80]. Hence, being able to resolve the instantaneous and spatially developing wall-shear stress experimentally is of utmost importance to understand the fundamental behavior of turbulent wall-bounded flows [54; 56] and how it can be actively manipulated to the benefit of society in the context of, e.g., friction drag reduction [51; 53]. For instance, in civil aviation, drag reduction of only a few percent can substantially reduce the required fossil fuels and the environmental burden [66]. Consequently, our proposed framework could pave the way for a more sustainable and environmentally friendly transportation sector.

Figure 3: **Velocity and wall-shear stress distributions of the turbulent channel flow data.** Figures in the upper row (a-c) are related to the synthetic particle images and the lower row (d-f) provides findings based on the experimental data. (a,d) depict the averaged streamwise velocity profile \(\bar{u}^{+}\) as a function of the wall-normal distance \(y^{+}\), where the \(+\) sign denotes scaling in inner units, i.e., a normalization by the friction velocity \(u_{\tau}\) and the kinematic viscosity \(\nu\). (b,e) show the instantaneous inner-scaled velocity profiles \(u^{+}\) of an arbitrary time step and streamwise position as a function of \(y^{+}\). (c,f) present the instantaneous wall-shear stress distributions \(\tau_{w}\) of an arbitrary time step along the inner-scaled streamwise direction \(x^{+}\). The velocity profiles and the instantaneous wall-shear stress distribution obtained from the synthetic data (a-c) via WSSflow follow the baseline remarkably well. The velocity profiles of the traditional PIV processing tool SOTA have a much coarser spatial resolution and do not provide velocity information in the buffer layer and below (\(y^{+}\lesssim 30\)). Moreover, the instantaneous velocity profile deviates severely from the true underlying distribution in the lower logarithmic layer and below (\(y^{+}\lesssim 100\)). Since the experimental data do not possess a known ground truth on an instantaneous level, only the time-averaged velocity profile can be compared to the baseline data, which match extremely well. The frequency and magnitude of the experimental wall-shear stress fluctuations (f) are comparable to the synthetic data obtained from a high-fidelity numerical simulation, which demonstrates a physical significance in a qualitative sense although a quantitative comparison is not possible.
### Wavy turbulent channel flow

Similar to the flat turbulent channel flow, synthetic and experimental particle images of the wavy turbulent channel flow are analyzed. Although the flow exhibits more complex dynamics than the flat turbulent channel flow, the flow quantities extracted by WSSflow are similarly accurate. Instantaneous and time-averaged streamwise velocity (figure 4) and wall-shear stress (figure 5) distributions match the baseline data remarkably well. As in the case of the turbulent channel flow, the flow quantities given by SOTA have a coarser spatial resolution than the neural counterpart. Moreover, when approaching the wall (\(y^{+}=0\)), the deviation of the velocity profiles from the baseline increases progressively, as observed in figure 4. Even though SOTA does not capture the velocity values closest to the wall, the measurement setup is specifically designed to allow a detection of the viscous sublayer and the buffer layer even with classical PIV processing tools. Therefore, in contrast to the flat turbulent channel flow, a wall-shear stress estimation based on SOTA is possible. This enables a direct comparison between traditional and novel processing of the experimental dataset, as provided in figure 5 (d,e).

Figure 4: **Velocity distributions of the wavy turbulent channel flow data.** (a) shows an instantaneous flow field with a white dotted line that originates at the wave trough and indicates the position of the velocity profiles in (b-e). Velocity profiles in the upper row (b,c) are related to the synthetic particle images and the lower row (d,e) provides findings based on the experimental data. Note that these two datasets capture slightly different flows because the numerical data underlying the synthetic particle images has an adiabatic wall condition, while the experimental setup employs an isothermal boundary condition. (b,d) depict the time-averaged streamwise velocity profiles \(\bar{u}^{+}\) as a function of the wall-normal distance \(y^{+}\), where the \(+\) sign denotes scaling in inner units. (c,e) show instantaneous velocity profiles \(u^{+}\) of an arbitrary time step. Where baseline data are available for comparison (b-d), the velocity distributions of WSSflow follow the baseline exceptionally well. The velocity profiles of the traditional PIV processing tool SOTA have a much coarser spatial resolution, do not provide velocity information directly at the wall, and deviate severely from the true instantaneous distribution (c) in the buffer layer and below (\(y^{+}\lesssim 30\)). Since the experimental data do not possess a known ground truth on an instantaneous level, the instantaneous velocity profiles (e) can only be validated qualitatively. However, the data provided by WSSflow are reasonably smooth, whereas the SOTA output contains non-physical jumps in the buffer layer and below.
The velocity profiles of the traditional PIV processing tool SOTA have a much coarser spatial resolution, do not provide velocity information directly at the wall, and deviate severely from the true instantaneous distribution (c) in the buffer layer and below (\(y^{+}\lesssim 30\)). Since the experimental data do not possess a known ground truth on an instantaneous level, the instantaneous velocity profiles (e) can only be validated qualitatively. However, the data provided by WSSflow are reasonably smooth, whereas the SOTA output contains non-physical jumps in the buffer layer and below.

provided in figure 5 (d,e). The deviations between the streamwise evolution of the time-averaged and the instantaneous wall-shear stress determined by WSSflow and by SOTA are immense. For the time-averaged profile (d), WSSflow represents the baseline extraordinarily well except for the region near the wave crest (\(x^{+}\approx 350\)). Laser light reflections at the wall cover the viscous sublayer in this area such that an accurate wall-shear stress determination fails. However, the satisfactory consistency with the baseline at the remaining locations demonstrates the success of the novel framework as long as measurement errors are minimized. On the contrary, the capability of SOTA to capture the true physics is severely limited. Neither the large-scale trend of the mean wall-shear stress (d) nor the instantaneous distribution (e) is represented precisely by this state-of-the-art PIV processing. Consequently, even if the viscous sublayer is captured by a traditional PIV evaluation routine as in the present setup, the severe velocity gradient smoothing inherent to this approach [64, 69] inhibits an accurate estimation of the wall-shear stress.

Figure 5: **Wall-shear stress distributions of the wavy turbulent channel flow data.** (a) shows an instantaneous flow field with a black dashed line that follows the wall contour and indicates the spatial extent of the wall-shear stress data given in (b-e). Wall-shear stress distributions in the upper row (b,c) are related to the synthetic particle images and the lower row (d,e) provides findings based on the experimental data. Note that these two datasets capture slightly different flows because the numerical data underlying the synthetic particle images has an adiabatic wall condition, while the experimental setup employs an isothermal boundary condition. (b,d) show the time-averaged wall-shear stress distributions \(\tau_{w}\) along the inner-scaled streamwise direction \(x^{+}\). Only every 4th datapoint is shown for clarity. (c,e) present the instantaneous wall-shear stress distributions \(\tau_{w}\) of an arbitrary time step. Both distributions obtained from the synthetic data (b,c) via WSSflow follow the baseline exceptionally well. Likewise, the time-averaged wall-shear stress estimated by WSSflow from the experimental data (d) accurately matches the baseline. Only one severe deviation occurs near the wave crest on the right side (\(x^{+}\approx 350\)), which arises from laser light reflections at the wall that could not be avoided in the experimental setup. Since the experimental data do not possess a known ground truth on an instantaneous level, the instantaneous wall-shear stress distribution (e) can only be validated qualitatively. Nevertheless, the wall-shear stress fluctuation intensity is comparable to the synthetic data, indicating a successful estimation.
Note that although the spatial resolution of the experimental data is sufficiently high to provide wall-shear stress estimates using the traditional evaluation tool (SOTA), the obtained distributions are not physically meaningful for both time-averaged (d) and instantaneous (e) quantities. This issue highlights the necessity of a novel framework like WSSflow that provides accurate physical flow quantities for detailed flow analyses.

The high-resolution experimental velocity and wall-shear stress distributions obtained via WSSflow are extremely important to understand how pressure gradients of varying strength impact the interaction of different flow scales. Even more relevant is the related analysis of how the pressure-induced modification of the interaction phenomena affects the intensity and the dynamics of the wall-shear stress fluctuations, which can now conveniently be studied using the newly proposed framework WSSflow. Although such analyses are on a fundamental academic level, their findings impact a variety of real-world applications that rely on the efficiency of, e.g., transportation systems.

### Elastic blood vessel flow

We consider the oscillatory flow through a transparent elastic blood vessel model at flow conditions similar to the human abdominal aorta. The streamwise velocity field and the wall-shear stress distributions obtained by evaluating the experimental particle images with WSSflow are shown in figure 6. The phase-averaged velocity field is shown in the upper part of figure 6 (a) as a function of the radial position and the phase angle, whereas the baseline data of a rigid vessel is provided in the lower part. For this representation, instantaneous velocity fields are averaged in the streamwise direction and over six oscillation cycles. Both distributions appear very similar because of the small elastic vessel dilatation of 0.4 % with respect to the rigid vessel diameter. Relative to the vessel wall thickness, the wall movement amounts to 8.3 %. An instantaneous wall-shear stress distribution along the streamwise direction and as a function of the phase angle (b) shows that the wall-shear stress possesses a similar phase dependence to the velocity field but with a phase shift relative to the velocity. Minor variations in the streamwise direction are observed, which highlight the importance of instantaneous spatially developing wall-shear stress distributions for detailed flow analysis, e.g., with respect to arterial disease development. Applying a spatial and phase average to all wall-shear stress fields (c) allows a comparison to the baseline wall-shear stress of a rigid

Figure 6: **Velocity and wall-shear stress distributions of the elastic blood vessel data.** (a) shows color contours of the phase-averaged streamwise velocity as a function of the phase angle \(\phi\) and the radial position \(r/R_{\nu}\) with \(R_{\nu}\) being the vessel radius. The upper part (\(r/R_{\nu}>0\)) reflects the experimental data evaluated with WSSflow and the lower part (\(r/R_{\nu}<0\)) is the baseline data. Since the phase-averaged flow is rotationally symmetric, positive and negative radial directions are equivalent. (b) provides color contours of the instantaneous wall-shear stress distribution of the experimental data obtained via WSSflow as a function of \(\phi\) and the streamwise position \(x^{+}\). (c) presents the phase-averaged experimental and baseline wall-shear stress distributions as a function of the phase angle \(\phi\).
Only every 4th datapoint of the WSSflow prediction is shown for clarity. The experimentally obtained phase-averaged velocity distribution matches the baseline very well (a). Similar consistency occurs for the phase-averaged wall-shear stress distribution (c), with minor deviations at the peak values, which demonstrate the fluid-structure interaction. The observed similarities indicate that also the instantaneous wall-shear stress distribution (b), which is an interim result for obtaining the phase-averaged data, must be of physical significance.

vessel flow, which shows that minor deviations primarily occur at absolute peak values. These emanate from the fluid-structure interaction between the blood flow and the elastic vessel. The non-smoothness of the experimental data originates from the fact that only six oscillation cycles are phase-averaged. Overall, these findings show that a wall-shear stress estimation from particle images of a real-world relevant experimental setup is remarkably accurate when using WSSflow. This bears enormous potential for experimental studies targeting the impact of the wall-shear stress dynamics on health-related conditions such as atherosclerosis, where spatially resolved wall-shear stress data on an instantaneous level are key to understand the time and space dependent phenomena and develop sophisticated prevention and treatment strategies.

## 3 Discussion and conclusion

In this paper, we demonstrate that the combination of experimental PIV measurements with cutting-edge deep learning methods allows for practical and accurate approaches to derive fluid flow quantities that are typically difficult, in most cases even impossible, to measure. Precisely, we propose a novel end-to-end workflow based on _WSSflow_, which determines highly resolved velocity and wall-shear stress information from widely used planar PIV measurements. In contrast to traditional PIV evaluation, a velocity field is provided at the original image resolution. Leveraging physical knowledge, our framework WSSflow then calculates the wall-shear stress from the near-wall velocity distribution in an instantaneous and spatially resolved fashion. That is, no further assumptions or modeling hypotheses, which typically underlie complementary approaches, are deployed. The success of the proposed framework is demonstrated using synthetic and experimental datasets of three test cases. In particular, the synthetic particle images of a flat and a wavy turbulent channel flow are used to verify that the estimated temporally and spatially resolved wall-shear stress distributions accurately follow the underlying baseline data. The application to experimental measurement data, additionally including a realistic oscillating blood flow through an elastic vessel, clearly demonstrates the physical significance of the estimated flow quantities even for particle images containing typical measurement uncertainties. Comparisons to a state-of-the-art PIV evaluation show the distinct superiority of the novel framework WSSflow with respect to spatial resolution, the ability to resolve local velocity gradients, the ability to capture near-wall physics, and the overall accuracy of the estimated flow quantities. Thus, it can be concluded that WSSflow constitutes a potential game changer for experimental fluid dynamics.
It allows the derivation of an extremely important flow quantity, i.e., the wall-shear stress, with an outstanding spatial and temporal resolution, as required for detailed flow analysis in both academia and real-world related disciplines. For instance, our method can be used to experimentally study novel friction drag reduction techniques for the transportation sector or the mechanisms by which cardiovascular diseases develop and how these phenomena can be attenuated to prevent vascular dysfunction. Since the flow data obtained via WSSflow possess a very high spatial resolution, they can match or even surpass the spatial resolution of high-fidelity large-scale simulations. Consequently, a key direction for future work will be to validate and benchmark these exceptionally costly simulations using the proposed workflow. In follow-up work, it would be interesting to exploit the full potential of our approach via a direct coupling with numerical simulations using data assimilation techniques. That is, future models leveraging our workflow pipeline might be used to derive reduced-order or parameterized model formulations that can be directly queried from numerical solvers. This would amount to an immense reduction of the computational cost with the tremendous benefit of experimentally validated and/or derived models. Another promising future direction is the introduction of physically motivated inductive biases. Extending the physical knowledge provided to the learner has the potential to enhance the measurement data in different aspects, as demonstrated on the basis of Physics-Informed Neural Networks (PINNs) [65], which are based on a loss regularized by the Navier-Stokes equations. Especially in regions of marginal texture, immense occlusion, or overexposure due to reflections, physically motivated regularizers and latent dynamical models [39] might be able to provide otherwise unavailable flow field information and could also infer additional flow quantities like temperature and pressure. Moreover, a direct coupling with causal risk frameworks [40] would be very interesting for additionally extracting causal relations in the underlying flow.

## 4 Methods

In this section, we provide information about the proposed approach, the evaluation scheme, and a detailed presentation of the applied architecture and associated implementation.

**Notation.** Let \(X\) be the input of size \((T\times M\times N\times C)\). Entries are denoted \(X_{t,m,n,c}\) with \(t\) being the snapshot index in the temporal domain, \(m,n\) the spatial indices in the horizontal and vertical image direction, and \(c\) the differentiation between the first and second particle image. Occasionally, we overload this notation by a simplified indexing when the entire set of first or second images is used, hence \(X_{1}\) denotes all \(T\) first images in \(X\) or formally \(X_{1}=X_{:,:,:,1}\). Since training and inference datasets correspond to different image feature distributions (which in general do not match), training data is indicated via \(X^{*}\) while inference data used for post-processing is denoted by \(X\) (further details appear below). During inference, our neural optical flow framework WSSflow is employed to output optical flows for all \(T\) image pairs in \(X\). Optical flow is denoted by \(V\in\mathbb{R}^{M\times N\times 2}=(V_{1},V_{2})\) with \(V_{1},V_{2}\) being the horizontal and vertical optical flow component, respectively.
Thus, the general optical flow problem is formalized as \(V=F_{\theta}(X)\), with \(F\) being any arbitrary non-linear but learnable mapping and \(\theta\) covering the entire set of trainable parameters in WSSflow. Specific network weights are given by \(w\), whose dimensions become clear from the context. Non-trainable network entities such as the correlation volume \(\mathcal{C}\) are introduced when necessary. \(\mathcal{L}\) refers to the loss objective minimized during training. Relevant fluid mechanical quantities are introduced when necessary and become clear from the context.

**Problem statement.** We focus on the setting in which the available inputs are (I1) particle image pairs as a \((T\times M\times N\times C)\) array whose first dimension corresponds to measurement snapshots, whose second and third dimensions are spatial dimensions in streamwise and wall-normal direction, and whose last dimension represents two individual particle images separated by a small interframing time \(\Delta t\); and (I2) length and time scale knowledge of the measurement setup providing information to calibrate pixel-based image features, yielding real-world velocity data. For (I1), we assume that the particle images cover a broad range of flow scales, simultaneously comprising the large scales in the outer flow and the near-wall small-scale flow structures. Precisely, we require the measurement setup to resolve the viscous sublayer by at least one pixel in camera coordinates. In addition, a sufficient number of particles is required to predict the displacement distribution reliably. In fact, this holds similarly for wall-distant regions such as the buffer layer and beyond. For (I2), we assume precise calibration information to convert pixel-based image features to real-world flow information. Hence, the interframing time \(\Delta t\) and a pixelwise calibration model are expected to be known. Note that a pixelwise wall location needs to be accessible, too. Moreover, fluid properties such as temperature, density, and viscosity are required. Supposing all input requirements are fulfilled, the underlying learning task of our approach can be formulated as follows: Given the inputs (I1) and (I2), our goal is to predict a pixelwise optical flow field for each individual image pair in \(X\) and derive highly relevant but typically difficult-to-measure near-wall flow quantities such as the wall-shear stress \(\tau_{w}\) over the entire temporal and spatial domain captured in \(X\).

**Learning scheme.** With the notation given above, our goal is to learn an optical flow estimator predicting high-resolution displacement distributions of recorded particle image pairs. To this end, we train a parameterized network \(F_{\theta}\), i.e. a non-linear function \(F\) with a set of unknown trainable parameters \(\theta\). This is possible since training is performed in a supervised fashion assuming a carefully designed synthetic dataset with known ground-truth labels. Note that training is performed prior to and independently of inference on \(X\). Hence, we require our network \(F_{\theta}\) to generalize well to _unknown_ out-of-distribution real-world image features in an out-of-the-box fashion, making training particularly important for the overall velocity and wall-shear stress derivation. The architecture we use for \(F_{\theta}\) is detailed below, but for now assume that this has been specified.
Then, given the training data \(X^{*}\) (which is independent in feature and flow distribution from our inference data \(X\)) and the training labels \(Y^{*}\) for all image pairs \(X^{*}\), we train the set of parameters \(\hat{\theta}(X^{*})\) under a loss that is supervised by the labels \(Y^{*}\). In our setting, we consider an \(L_{1}\)-loss between ground truth and optical flow estimate for all image pairs \(X^{*}\), where the ground truth provides the real optical flow field (see details below). Assuming a suitable training dataset \(X^{*}\) with corresponding labels \(Y^{*}\), minimizing this \(L_{1}\)-loss during training prevents exploding weights by default and enables sufficient generalization capabilities as demonstrated empirically.

**Architectural details of WSSflow.** The proposed framework WSSflow is an extended version of RAFT-PIV [37], which is a neural optical flow estimator based on the successful RAFT architecture [77] and specifically designed for the use case of PIV images. It is unique in the sense that it operates entirely on a specific input resolution and iteratively updates its flow predictions. WSSflow consists of the following stages: a feature and a context extracting block, the computation of a pixelwise correlation volume using an all-pairs correlation, a global motion aggregation (GMA) module, iterative updates based on a ConvGRU, and a physics-informed transformation as shown in Figure 1. In a first step, the shared feature encoder derives latent embeddings \(E_{1}\), \(E_{2}\) of size (\(M\times N\times D\)) for each input image individually using three convolutional neural network modules. These encoding steps map the input particle images into a dense feature representation with \(D\) dimensions. On an abstract level, these feature maps represent an extract of the most relevant image patterns like edges or simple textures, while deeper convolutional filters capture more complex relations or even entire objects. To this end, the neural optical flow estimator is trained to predict the local displacement between two images based on the detection of various patterns. The second stage of WSSflow is designed to compute the similarity of both image features \(E_{1}\) and \(E_{2}\) using a full correlation volume between all spatial indices of both feature maps. The similarity between two pixel embeddings \(((E_{1})_{i,j},(E_{2})_{k,l})\) is measured by the dot product between the two individual feature vectors, yielding the so-called correlation volume \[\mathcal{C}_{ijkl}=\sum_{d=1}^{D}(E_{1})_{ijd}\cdot(E_{2})_{kld} \tag{1}\] with \(i,j\) denoting pixel coordinates in the image embedding \(E_{1}\) and \(k,l\) in \(E_{2}\). Here, all-pairs correlation means that every pixel is correlated with every other pixel. Hence, \(\mathcal{C}(i,j,:,:)\) represents a correlation map of pixel \((i,j)\) in \(E_{1}\) with all pixels of the second image \(E_{2}\). Based on this key idea, a 4-layer correlation pyramid is formed (see Figure 1). Precisely, the last two dimensions of \(\mathcal{C}\) are pooled sequentially from level to level using kernel sizes and strides of \(1,2,4,8\). Thus, WSSflow maintains high resolution information of the first image while effectively addressing large object displacements by applying pooling operations along the features of the second image. Note that this correlation volume is only computed once in advance and only small neighborhoods of a specific target location are extracted during the update process.
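To illustrate this construction, the following minimal sketch computes the all-pairs correlation volume of equation (1) and the sequentially pooled pyramid using plain PyTorch operations; tensor names and shapes are illustrative assumptions, not the actual WSSflow implementation.

```python
import torch
import torch.nn.functional as F

def correlation_pyramid(E1, E2, num_levels=4):
    """All-pairs correlation volume, cf. equation (1), plus a pyramid that
    pools only the dimensions belonging to the second image, so the full
    resolution of the first image is preserved on every level.

    E1, E2: feature maps of shape (D, M, N), one per particle image.
    Returns a list of tensors of shape (M, N, M / 2**l, N / 2**l).
    """
    D, M, N = E1.shape
    # C[i, j, k, l] = sum_d (E1)[d, i, j] * (E2)[d, k, l]
    C = torch.einsum('dij,dkl->ijkl', E1, E2)
    pyramid = [C]
    for _ in range(num_levels - 1):
        # Pool the last two (E2-side) dimensions by a factor of 2 per level,
        # yielding cumulative pooling factors of 1, 2, 4, 8.
        K, L = C.shape[2:]
        C = F.avg_pool2d(C.reshape(M * N, 1, K, L), kernel_size=2, stride=2)
        C = C.reshape(M, N, K // 2, L // 2)
        pyramid.append(C)
    return pyramid

# During the iterative updates, only a small neighborhood around the
# currently estimated target location is read from each pyramid level.
```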
For clarification, let us assume that the current optical flow of the \(i\)-th update step is available. Now, each pixel in \(E_{1}\) is mapped to its estimated location in \(E_{2}\). To obtain the valid index of the correlation volume, the index operation considers a specific neighbourhood around the estimated location in \(E_{2}\) and is applied to all levels of the correlation pyramid using a constant radius of 4px on the neighbourhood grid. Subsequently, the resulting values of each level are concatenated into a single feature map. The idea behind this complex index operation can be formulated as follows: for a specific target location \((i,j)\) in encoding \(E_{1}\), the corresponding value in the correlation volume expresses the similarity of a pixel embedding at position \((k,l)\) in encoding \(E_{2}\) and its level-specific neighbourhood. Hence, the higher the pyramid level of the extracted patch, the more spatial information of the original image features is covered, but the coarser the resolution of the correlation information becomes. This multi-scale approach allows WSSflow to resolve large displacements of small objects quite accurately. In this work, we augment WSSflow by a global motion aggregation (GMA) module similar to the one proposed in [32]. It operates on the combined input of the correlation pyramid output and the context encoded first particle image. The idea behind this component can be formulated as follows: the knowledge of linked pixels with high similarity and the corresponding feature embeddings in non-occluded regions can be transferred to occluded image parts. To do so, the GMA considers long-range feature connections extracted by a self-attention mechanism and estimates motion features which are propagated to occluded regions. Its main design is strongly related to the well-known self-attention mechanism used in transformer networks [21]. From a high-level perspective, the GMA utilizes attention in a specific way to introduce the notion of memory over time. Precisely, the attention weights indicate the relevance in time and pixel space. Unlike convolutions, whose receptive field is limited to a fraction of the input sequence, the self-aware attention formulation in principle grants access to all parts of the entire input sequence. As a result, all pixel embeddings can be considered simultaneously and the network learns to pay attention to the most relevant features depending on the task at hand. From a technical point of view, self-attention can be computed as follows: Let \(E_{q}\in\mathbb{R}^{(M\cdot N)\times D_{q}}\) be the context feature map, let \(E_{q,i}\in\mathbb{R}^{D_{q}}\) be its \(i\)-th feature vector, and let \(E_{r}\in\mathbb{R}^{(M\cdot N)\times D_{r}}\) denote the motion features captured in the correlation volume. Then, the aggregated motion features can be expressed by the self-aware linear recombination of the query, key and value vectors as follows [32]: \[\hat{E}_{r,i}=E_{r,i}+\alpha\sum_{j=1}^{(M\cdot N)}\mathcal{F}\left(\Theta(E_{q,i}),\Phi(E_{q,j})\right)\zeta(E_{r,j}), \tag{2}\] where \(\alpha\) is a learned parameter and \(\Theta\), \(\Phi\), and \(\zeta\) denote the projections of the query, key and value vectors, which are given for the \(i\)-th and \(j\)-th feature vector by \[\Theta(E_{q,i})=w_{Q}E_{q,i}\,, \tag{3}\] \[\Phi(E_{q,j})=w_{K}E_{q,j}\,, \tag{4}\] \[\zeta(E_{r,j})=w_{V}E_{r,j}. \tag{5}\]
The function \(\mathcal{F}\) represents the similarity attention function [79] and reads \[\mathcal{F}\left(\mathbf{a}_{i},\mathbf{b}_{j}\right)=\frac{\exp\left(\mathbf{a}_{i}^{T}\mathbf{b}_{j}/\sqrt{D_{q}}\right)}{\sum_{j=1}^{(M\cdot N)}\exp\left(\mathbf{a}_{i}^{T}\mathbf{b}_{j}/\sqrt{D_{q}}\right)}. \tag{6}\] The final output is then a concatenation of the previous hidden state, the general motion features, and the aggregated motion features. It is subsequently decoded in the ConvGRU [72] to obtain a new optical flow update, which allows a combination of motion vectors and self-aware context features. A ConvGRU is a type of recurrent neural network that stores hidden state information of previous steps to modulate a limited content memory. Thus, observations of previous steps are taken into account when estimating future predictions. The main differences to a standard recurrent neural network are the so-called update and reset gates. The applied ConvGRU takes flow, correlation, and the hidden state of the context network as input and yields a new hidden state \(\mathbf{h}_{t}\). Subsequently, \(\mathbf{h}_{t}\) is passed through two convolutional layers and finally outputs the flow update \(\Delta V\). Thus, the final flow prediction is a combination of the sequence of residual flow updates. The benefit of using a ConvGRU for the iterative refinement lies in the reduction of the search space due to its recurrent nature. This ConvGRU allows the network to balance the prediction of earlier optical flows and its current hidden state to compute a new optical flow update. The final output of the ConvGRU, i.e., the optical flow, is a high-resolution velocity field at the original image resolution. Precisely, WSSflow estimates one velocity vector for each pixel of the particle image pair. Moreover, the velocity information is combined with physical knowledge to calculate the space-dependent wall-shear stress. We use the fact that the flow in the viscous sublayer, which is a very thin wall-parallel layer adjacent to the boundary with \(y^{+}\leq 5\), is dominated by viscous forces. Therefore, the inner-scaled streamwise velocity \(u^{+}=u/u_{\tau}\) equals the inner-scaled wall-normal position \(y^{+}=\rho u_{\tau}y/\eta\), where \(u_{\tau}\) is the friction velocity, \(\rho\) the fluid density, and \(\eta\) the dynamic viscosity of the fluid. Since the wall-shear stress \(\tau_{w}\) is directly related to the friction velocity via \(\tau_{w}=\rho u_{\tau}^{2}\), it follows from \(u^{+}=y^{+}\) that \(\tau_{w}=\eta u/y\). In other words, we exploit the linearity of the velocity profile in the viscous sublayer. This allows us to estimate the wall-shear stress dynamics as a function of time and space, i.e., one wall-shear stress value is obtained for each streamwise location and each time instant. The only prerequisite for this approach is that the measurement setup captures the viscous sublayer by at least one pixel in camera coordinates. If more pixels cover the viscous sublayer, all velocity information in the viscous sublayer is used to calculate a wall-shear stress value as an average across the wall-normal direction. Our studies showed that this typically results in more accurate wall-shear stress estimates. Thus, we recommend a minimum of three pixels in the viscous sublayer, which has proven to compensate for measurement uncertainties arising directly at the wall, e.g., laser light reflections and a misalignment of the wall boundary on a sub-pixel level.
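To make this physics-informed step concrete, the following minimal sketch (function and variable names are our own, not taken from the WSSflow code) turns one wall-normal velocity profile into a local wall-shear stress estimate via \(\tau_{w}=\eta u/y\), averaging over all pixels inside the viscous sublayer as recommended above.

```python
import numpy as np

def wall_shear_stress(u, y, eta, sublayer_mask):
    """Estimate the local wall-shear stress from one wall-normal velocity
    profile using the linear viscous-sublayer relation tau_w = eta * u / y.

    u:             streamwise velocities at wall-normal positions y (physical
                   units, y measured from the wall), for one streamwise
                   location and one time instant.
    eta:           dynamic viscosity of the fluid.
    sublayer_mask: boolean array marking pixels with y+ <= 5; identifying
                   them requires an a priori friction-velocity estimate,
                   e.g. from the time-averaged velocity profile.
    """
    u_s, y_s = u[sublayer_mask], y[sublayer_mask]
    # Averaging over several sublayer pixels (ideally three or more) damps
    # wall-adjacent measurement uncertainties such as laser reflections.
    return eta * np.mean(u_s / y_s)

# Applied per streamwise pixel column and per image pair, this yields the
# instantaneous, spatially resolved distribution tau_w(x, t).
```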
**Training details of WSSflow.** All network architectures are implemented in the open source framework PyTorch [60]. All modules are initialized from scratch using random weights. During training, an AdamW optimizer [47] is applied starting at an initial learning rate \(\epsilon_{0}=0.0001\). Gradients are clipped to the range \([-1,1]\). Furthermore, the learning rate is reduced by a factor of five once the evaluation metrics stop improving for 15 consecutive epochs. The minimum learning rate is set to \(\epsilon_{min}=10^{-8}\). This learning rate scheduler shows the best results in the current study. In total, 200 epochs are used for training. All computations are run on a single GPU node equipped with four NVIDIA A100 GPUs, resulting in an overall training time of approximately \(22\,\mathrm{h}\) for a global batch size of 8. With WSSflow operating exclusively on the initial input resolution, a training image size of \(128\times 128\,\mathrm{px}^{2}\) is chosen to bypass extreme memory consumption when computing the all-pairs correlation volume. To overcome this issue during inference, evaluation is performed on subpatches of each image pair, which are stitched together in a final step.

**Principles of PIV and traditional PIV evaluation routine SOTA.** For visualization purposes, PIV requires the addition of tracer particles to the flow. The particles' properties are matched to the flow characteristics such that the particle motion closely follows the flow dynamics. Using a pulsed laser light sheet that is synchronized with a high-quality camera, particle images are acquired. Based on the displacement of the tracer particles between two consecutive images and their interframing time, the velocity distribution can be derived. In traditional PIV processing, this is performed on a statistical basis [64]. Therefore, the original particle image is divided into finite-size interrogation windows, which are cross-correlated. The position of the correlation peak indicates the averaged displacement within the respective image section. This approach substantially lowers the spatial resolution of the resulting velocity field compared to the original image size. For instance, for a particle image pair of size \(1024\times 1024\,\mathrm{px}^{2}\), which might typically be evaluated with an interrogation window size of \(32\times 32\,\mathrm{px}^{2}\) and a maximum window overlap of 75 %, only \(124\times 124\) velocity vectors are obtained. Furthermore, the spatial averaging involved in such statistical approaches smooths velocity gradients within each interrogation window. The proposed framework WSSflow is benchmarked against a high-performance in-house code [49, 50] which represents the current gold standard of PIV evaluation algorithms and is referred to as SOTA. First, a 4-level multi-grid evaluation with an initial window size of \(256\times 256\,\mathrm{px}^{2}\) and a final window size of \(32\times 32\,\mathrm{px}^{2}\) is performed. This enables the detection of displacements larger than half of the interrogation window size with successive refinement. Subsequently, a five-step predictor-corrector is used in an iterative fashion. Here, subpixel accuracy is achieved by applying the approach of [7]. A second-order accurate displacement estimate is derived by shifting and deforming the interrogation windows by half of the initially computed displacement, featuring a B-Spline-3 interpolation to obtain a per-pixel accurate displacement field during window deformation.
Image interpolation is based on a 4th-order Lanczos interpolation [26]. Following [70], an integral displacement predictor is used to ensure convergence during iterations, yielding a weighted average of the per-pixel displacement field over the entire subwindow. The complementary corrector is designed according to [64]. That is, the cross-correlation between corresponding interrogation windows of size \(32\times 32\,\mathrm{px}^{2}\) (with 75 % overlap) is computed and its peak is located with a 3-point Gaussian estimator. The windows are weighted by a 2D Gaussian with a standard deviation of \(\sigma=0.4\). Outlier detection is performed using a normalized median test, and the respective outliers are replaced by an interpolation of the neighboring \(3\times 3\) vectors.
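For readers unfamiliar with this statistical processing chain, the following sketch shows the core of a single interrogation-window evaluation: an FFT-based cross-correlation followed by a 3-point Gaussian subpixel peak fit. It is a simplified illustration under our own assumptions, not the in-house SOTA code, which additionally performs multi-grid refinement, window deformation, and outlier replacement.

```python
import numpy as np

def window_displacement(w1, w2):
    """One interrogation-window PIV evaluation: FFT cross-correlation of
    two windows (e.g. 32 x 32 px) plus a 3-point Gaussian subpixel peak
    fit. Assumes the correlation peak lies away from the window border."""
    w1 = w1 - w1.mean()
    w2 = w2 - w2.mean()
    # Circular cross-correlation; fftshift moves zero displacement to the
    # array center.
    corr = np.fft.fftshift(np.real(np.fft.ifft2(
        np.conj(np.fft.fft2(w1)) * np.fft.fft2(w2))))
    corr = corr - corr.min() + 1e-12  # keep values positive for the log fit
    i, j = np.unravel_index(np.argmax(corr), corr.shape)

    def gauss(rm, r0, rp):
        # 3-point Gaussian peak estimator: subpixel offset of the maximum.
        lm, l0, lp = np.log(rm), np.log(r0), np.log(rp)
        return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))

    di = gauss(corr[i - 1, j], corr[i, j], corr[i + 1, j])
    dj = gauss(corr[i, j - 1], corr[i, j], corr[i, j + 1])
    center = np.array(corr.shape) // 2
    # Displacement of w2 relative to w1 in pixels (row, column).
    return np.array([i + di, j + dj]) - center
```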
2308.04066
Smooth Fields of Hilbert Spaces, Hermitian bundles and Riemannian Direct Images
Given a field of Hilbert spaces there are two ways to endow it with a smooth structure: the standard and geometrical notion of Hilbert (or Hermitian) bundle and the analytical notion of smooth field of Hilbert spaces. We study the relationship between these concepts in a general framework. We apply our results in the following interesting example called Riemannian direct images: Let $M,N$ be Riemannian oriented manifolds, $\rho:M\to N$ be a submersion and $\pi:E\to M$ a finite dimensional vector bundle. Also, let $M_\lambda=\rho^{-1}(\lambda)$ and fix a suitable measure $\mu_\lambda$ in $M_\lambda$. Does the field of Hilbert spaces $\mathcal{H}(\lambda)=L^2(M_\lambda,E)$ admit a smooth field of Hilbert spaces structure, or a Hilbert bundle structure? In order to provide conditions that guarantee a positive answer to these questions, we develop an interesting formula to differentiate functions on $N$ that are defined as integrals over $M_\lambda$.
Fabian Belmonte, Harold Bustos
2023-08-08T06:02:53Z
http://arxiv.org/abs/2308.04066v1
# Smooth fields of Hilbert spaces, Hermitian bundles and Riemannian Direct Images

###### Abstract

Given a field of Hilbert spaces there are two ways to endow it with a smooth structure: the standard and geometrical notion of Hilbert (or Hermitian) bundle and the analytical notion of smooth field of Hilbert spaces. We study the relationship between these concepts in a general framework. We apply our results in the following interesting example called Riemannian direct images: Let \(M,N\) be Riemannian oriented manifolds, \(\rho:M\to N\) be a submersion and \(\pi:E\to M\) a finite dimensional vector bundle. Also, let \(M_{\lambda}=\rho^{-1}(\lambda)\) and fix a suitable measure \(\mu_{\lambda}\) in \(M_{\lambda}\). Does the field of Hilbert spaces \(\mathcal{H}(\lambda)=L^{2}(M_{\lambda},E)\) admit a smooth field of Hilbert spaces structure, or a Hilbert bundle structure? In order to provide conditions that guarantee a positive answer to these questions, we develop an interesting formula to differentiate functions on \(N\) that are defined as integrals over \(M_{\lambda}\).

## 1 Introduction.

Let \(p:H\to N\) be a field of Hilbert spaces, i.e. \(p\) is a surjective map such that \(\mathcal{H}(\lambda):=p^{-1}(\lambda)\) is a Hilbert space, and denote by \(\langle\cdot,\cdot\rangle_{\lambda}\) the corresponding inner product. We denote by \(\Gamma\) the set of all sections of such a field. For any pair of sections \(\varphi,\psi\in\Gamma\), we set the function \[h(\varphi,\psi)(\lambda)=\langle\varphi(\lambda),\psi(\lambda)\rangle_{\lambda}.\] In order to obtain an interesting mathematical object, we should add further assumptions on a given field of Hilbert spaces. Let us approach this issue first from a geometrical framework, recalling the canonical notion of a smooth Hilbert bundle. In what follows we will also require the notion of Banach manifold, which is defined as the usual notion of manifold but allowing the charts to take values in an open subset of a Banach space (instead of some Euclidean space). For details see subsection I.2.1 in [8].

**Definition 1.1**.: _Let \(p:H\to N\) be a field of Hilbert spaces. Assume that \(H,N\) are Banach manifolds and \(p\) is a smooth map. A smooth Hilbert bundle structure on \(p:H\to N\) is given by specifying an open cover \(\{U_{i}\}_{i\in I}\) of \(N\), a family of Hilbert spaces \(\{\mathcal{H}_{i}\}_{i\in I}\) and, for each \(\lambda\in U_{i}\), a unitary operator \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\) such that the maps \(\mathcal{T}_{i}:p^{-1}(U_{i})\to U_{i}\times\mathcal{H}_{i}\) given by \(\mathcal{T}_{i}(x)=(p(x),T_{i}(p(x))(x))\) are diffeomorphisms, for every \(i\in I\). We denote by \(\Gamma^{\infty}(N,H)\) the space of smooth sections \(\varphi:N\to H\)._

Rather than specifying the local trivialization maps \(\mathcal{T}_{i}\), as is customary in the literature, we specify the field of unitary operators \(T_{i}(\lambda)\), because this will become useful later. We obtain a more interesting and subtle concept when we also assume the existence of a connection over a Hilbert bundle.

**Definition 1.2**.: _Let \(H,N\) be Banach manifolds and \(\mathrm{Vect}(N)\) the space of smooth vector fields on \(N\).
A Hermitian bundle is a Hilbert bundle \(p:H\to N\) together with a map \(\nabla:\mathrm{Vect}(N)\times\Gamma^{\infty}(N,H)\to\Gamma^{\infty}(N,H)\) such that, for \(X,Y\in\mathrm{Vect}(N)\), \(a\in C^{\infty}(N)\) and \(\varphi,\psi\in\Gamma^{\infty}(N,H)\):_

* \(\nabla_{X+Y}=\nabla_{X}+\nabla_{Y}\), \(\nabla_{aX}=a\nabla_{X}\), \(\nabla_{X}(a\varphi)=X(a)\varphi+a\nabla_{X}(\varphi)\)
* \(h(\varphi,\psi)\in C^{\infty}(N)\) and \(Xh(\varphi,\psi)=h(\nabla_{X}\varphi,\psi)+h(\varphi,\nabla_{\overline{X}}\psi)\)

_The map \(\nabla\) is called a connection of the Hilbert bundle \(p:H\to N\)._

Let \(p:H\to N\) be a field of Hilbert spaces. If all the Hilbert spaces \(\mathcal{H}(\lambda)\) admit an orthonormal basis of the same cardinality \(J\), then one can endow \(p:H\to N\) with an artificial Hilbert bundle structure. Indeed, fix an orthonormal basis \(\{\varphi_{j}(\lambda)\}_{j\in J}\) of \(\mathcal{H}(\lambda)\), for each \(\lambda\in N\). Then, computing the corresponding Fourier coefficients on each \(\mathcal{H}(\lambda)\) defines a unitary operator \(T(\lambda):\mathcal{H}(\lambda)\to l^{2}(J)\). We can endow \(H\) with a Banach manifold structure taking as a global chart the map \(x\to(p(x),T(p(x))x)\); then, by definition, \(p:H\to N\) becomes a Hilbert bundle. We would like to construct Hilbert bundle structures in a more intrinsic manner, but first let us approach the problem of introducing a notion of smoothness for a field of Hilbert spaces in a more analytical way. Let us consider the trivial case: \(H=N\times\mathcal{H}\). Then, a section is a function \(\varphi:N\to\mathcal{H}\) and the space of smooth sections is \(C^{\infty}(N,\mathcal{H})\). Heuristically, in this case we know how to differentiate certain sections before endowing the space \(H\) with a smooth structure. In general, given a field of Hilbert spaces, we require the existence of a space of sections on which we have an a priori way to compute a sort of derivative. The approach of endowing the field of Hilbert spaces with a suitable space of sections was applied by Dixmier and Douady long ago [3] to solve the analogous problem in the topological framework, using the notion of continuous field of Hilbert spaces defined by Godement [5]. Surprisingly, the problem in the smooth framework was not considered until a few years ago, by L. Lempert and R. Szoke in [8], where they introduced the following notion of smooth field of Hilbert spaces.

**Definition 1.3**.: _Let \(N\) be a finite dimensional smooth manifold. A smooth structure on a field of Hilbert spaces \(p:H\to N\) is given by specifying a set of sections \(\Gamma^{\infty}\), closed under addition and under multiplication by elements of \(C^{\infty}(N)\), and a map \(\nabla:\operatorname{Vect}(N)\times\Gamma^{\infty}\to\Gamma^{\infty}\) such that, for \(X,Y\in\operatorname{Vect}(N)\), \(a\in C^{\infty}(N)\) and \(\varphi,\psi\in\Gamma^{\infty}\):_

1. \(\nabla_{X+Y}=\nabla_{X}+\nabla_{Y}\), \(\nabla_{aX}=a\nabla_{X}\), \(\nabla_{X}(a\varphi)=X(a)\varphi+a\nabla_{X}(\varphi)\)
2. \(h(\varphi,\psi)\in C^{\infty}(N)\) _and_ \(Xh(\varphi,\psi)=h(\nabla_{X}\varphi,\psi)+h(\varphi,\nabla_{\overline{X}}\psi)\)
3.
\(\mathcal{H}^{\infty}(\lambda):=\{\varphi(\lambda)\mid\varphi\in\Gamma^{\infty}\}\) _is dense in_ \(\mathcal{H}(\lambda)\)_, for all_ \(\lambda\in N\)_._

_We will call such a triple \((H\to N,\Gamma^{\infty},\nabla)\) a smooth field of Hilbert spaces with connection \(\nabla\)._

Our first goal is the following: given a smooth field of Hilbert spaces \((H\to N,\Gamma^{\infty},\nabla)\), to find conditions as sharp as we can to guarantee the existence of a Hermitian structure on \(p:H\to N\) with connection \(\tilde{\nabla}\) such that \(\Gamma^{\infty}\subseteq\Gamma^{\infty}(N,H)\) and \(\tilde{\nabla}\mid_{\Gamma^{\infty}}=\nabla\). L. Lempert and R. Szoke approached this problem, but assuming that the required trivialization is globally defined (i.e. \(U_{i}=N\), see definition 1.1) and that the connection is flat (i.e. \(\nabla_{X}\nabla_{Y}-\nabla_{Y}\nabla_{X}=\nabla_{[X,Y]}\), for every \(X,Y\in\operatorname{Vect}\left(N\right)\)). We will approach that problem in section 2, without assuming those conditions. From a functional analytic perspective, one of the advantages of the notion of smooth field of Hilbert spaces is that it allows one to introduce the family of Frechet spaces \(\{\Gamma^{n}(N)\}_{n\in\mathbb{N}_{0}\cup\{\infty\}}\), which is obtained by completing \(\Gamma^{\infty}\) with respect to a naturally defined family of seminorms (for details, see subsection 1.1). Eventually, we will prove the following result.

**Proposition 1.4**.: _Let \((H\to N,\Gamma^{\infty},\nabla)\) be a smooth field of Hilbert spaces. Also let \(\{U_{i}\}_{i\in I}\) be an open cover of \(N\) and \(\{\mathcal{H}_{i}\}_{i\in I}\) be a family of Hilbert spaces, and assume that for each \(\lambda\in U_{i}\), there is a unitary operator \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\). If \(T_{i}(\Gamma^{\infty}(U_{i}))=C^{\infty}(U_{i},\mathcal{H}_{i})\) for every \(i\in I\), then \(H\to N\) admits a Hermitian bundle structure with connection \(\tilde{\nabla}\) such that \(\Gamma^{\infty}(N,H)=\Gamma^{\infty}(N)\) and \(\tilde{\nabla}\mid_{\Gamma^{\infty}}=\nabla\)._

Notice that if \(T_{i}:\Gamma^{\infty}\mid_{U_{i}}\to C^{\infty}(U_{i},\mathcal{H}_{i})\) is continuous (with respect to the topology of \(\Gamma^{\infty}(U_{i})\)), then \(T_{i}(\Gamma^{\infty}(U_{i}))=C^{\infty}(U_{i},\mathcal{H}_{i})\). Most of the techniques that we develop to approach our problems, including the continuity of each \(T_{i}\), will require extending the notion of smooth field of operators introduced in [2], which we briefly recall here. Let \((H_{1},N,\Gamma^{\infty}_{1},\nabla^{1})\) and \((H_{2},N,\Gamma^{\infty}_{2},\nabla^{2})\) be smooth fields of Hilbert spaces, and also let \(A=\{A(\lambda)\}_{\lambda\in N}\) be a field of operators such that \(\mathcal{H}^{\infty}_{1}(\lambda)\subseteq D[A(\lambda)]\) and \(\mathcal{H}^{\infty}_{2}(\lambda)\subseteq D[A^{*}(\lambda)]\), where \(D[A(\lambda)]\) and \(D[A^{*}(\lambda)]\) are the domains of the operators \(A(\lambda)\) and \(A^{*}(\lambda)\) respectively, for each \(\lambda\in N\). We will say that \(A\) is a smooth field of operators if \(A(\Gamma^{\infty}_{1})\subseteq\Gamma^{\infty}_{2}(N)\) and \(A^{*}(\Gamma^{\infty}_{2})\subseteq\Gamma^{\infty}_{1}(N)\). We summarize details and some of the main general features of smooth fields of operators in subsection 1.1.
For instance (proposition 1.8), it turns out that the connections on the underlying fields of Hilbert spaces induce a well-defined connection \(\hat{\nabla}\) on the space of smooth fields of operators, defined by \[\hat{\nabla}_{X}(A)=\nabla^{2}_{X}A-A\nabla^{1}_{X}\,,\qquad X\in\operatorname{Vect}(N).\] In section 2 we approach our problem in two stages: first, looking for conditions to guarantee the existence of a suitable Hilbert bundle structure, and second, looking for conditions to guarantee the existence of a suitable Hermitian bundle structure. Let \(p:H\to N\) be a field of Hilbert spaces, with \(N\) a Banach manifold. Assume that there are given an open cover \(\{U_{i}\}\) of \(N\), a family of Hilbert spaces \(\{\mathcal{H}_{i}\}\), and for each \(\lambda\in U_{i}\), a unitary operator \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\). The existence of a suitable Hilbert bundle structure is usually reformulated in terms of the so-called transition maps \(\tau_{i,j}(\lambda)=T_{j}(\lambda)T_{i}^{*}(\lambda)\), \(\lambda\in U_{i}\cap U_{j}\). Indeed, in the literature it is sometimes required that each \(\tau_{i,j}:U_{i}\cap U_{j}\to\mathcal{B}(\mathcal{H}_{i},\mathcal{H}_{j})\) is norm smooth. However, we will show in proposition 2.1 that it is enough to require strong smoothness, i.e. the map \(\lambda\to\tau_{ij}(\lambda)x\) belongs to \(C^{\infty}(U_{i}\cap U_{j},\mathcal{H}_{j})\), for every \(x\in\mathcal{H}_{i}\). We will find a condition equivalent to strong smoothness assuming that each \(\tau_{ij}\) defines a smooth field of operators on a dense smooth subspace of \(C^{\infty}(N,\mathcal{H}_{i})\). More precisely, using the connection \(\hat{\nabla}\) when the fiber Hilbert spaces are constant (i.e. \(\mathcal{H}_{1}(\lambda)=\mathcal{H}_{1}\) and \(\mathcal{H}_{2}(\lambda)=\mathcal{H}_{2}\)) leads us to a characterization of strong smoothness in terms of the consecutive derivatives of the field of operators under consideration (see proposition 2.3), which we apply in corollary 2.4 to the transition fields of operators \(\tau_{ij}\). Instead of dealing with the fields of operators \(\tau_{ij}\), it seems more practical to analyze directly the fields of operators \(T_{i}(\lambda)\). In proposition 2.5, applying our notion of smooth field of operators and \(\hat{\nabla}\), we will provide natural conditions on each \(T_{i}\) to ensure the existence of a Hilbert bundle structure on the underlying field of Hilbert spaces. In order to provide conditions to guarantee the existence of a Hermitian structure, under suitable assumptions, it is useful to introduce the fields of operators \(\tilde{A}_{i}(X)=-\hat{\nabla}_{X}(T_{i})T_{i}^{*}\), for \(X\in\operatorname{Vect}\left(U_{i}\right)\). Essentially, we will prove that the family \((U_{i},\mathcal{H}_{i},T_{i})\) provides a Hermitian bundle structure if and only if each \(\tilde{A}_{i}(X)\) maps \(C^{\infty}(U_{i},\mathcal{H}_{i})\) into itself. As a consequence, we will obtain proposition 1.4 and corollary 2.8, which will be applied later to show that our examples define Hermitian bundles. One of the main outcomes of this article is the introduction of a proper definition of a smooth trivialization for a smooth field of Hilbert spaces. Actually, we will provide two different notions; the first one is meant to imply the existence of a Hilbert bundle structure (see definition 2.9 a)), which comes from proposition 2.5.
The second one is meant to imply the existence of a Hermitian bundle structure (see definition 2.9 b)), which comes from proposition 1.4. A notion of trivialization was also introduced in [8], but only considering the global and flat case. At the end of section 2, we will provide an abstract result concerning the construction of a smooth trivialization (see theorem 2.10). Once again, this problem was approached in [8], but in the global and flat case. Nevertheless, from there we borrow the idea of considering the space of horizontal sections, but in a general setting. However, we have not yet been able to establish conditions to guarantee the existence of a horizontal section passing through every point in this generality (flatness is fundamental in the proof of that result in [8]).

In section 3 we develop an interesting example, which we call Riemannian direct images, borrowing part of the name from [8], where the analogous example was developed in the holomorphic framework. Let \(M\), \(N\) be oriented Riemannian manifolds and \(\rho:M\to N\) be a surjective smooth submersion. Assume there is a finite dimensional Hermitian vector bundle \(\pi:E\to M\) with a given Hermitian connection \(\nabla^{E}\). We set \(M_{\lambda}=\rho^{-1}(\lambda)\) and fix a suitable volume form \(\mu_{\lambda}\) on \(M_{\lambda}\). For each \(\lambda\in N\), one can define the Hilbert space \(\mathcal{H}(\lambda)=L^{2}(M_{\lambda},E)\) consisting of the square-integrable sections on \(M_{\lambda}\). In order to construct a smooth structure on the field of Hilbert spaces \(\mathcal{H}(\lambda)\), it is necessary to set a space of sections \(\Gamma^{\infty}\) and to compute the derivative \(Xh(\varphi,\psi)\), for \(X\in\operatorname{Vect}(N)\) and \(\varphi,\psi\in\Gamma^{\infty}\). Since the inner product on each \(\mathcal{H}(\lambda)\) is an integral over \(M_{\lambda}\) and we are looking for a connection satisfying condition _ii)_ in definition 1.3, we would like to compute derivatives of functions of the form \[F(\lambda)=\int_{M_{\lambda}}f\,d\mu_{\lambda}\,,\quad f\in C^{\infty}_{c}(M)\,,\;\lambda\in N.\] Since \(\rho\) is a submersion, \(D\rho(x)\) is an isomorphism between the spaces \([T_{x}M_{\lambda}]^{\perp}\) and \(T_{\lambda}N\), where \(\lambda=\rho(x)\) and the orthogonal complement is computed using the Riemannian structure on \(M\). This implies that we can pull back a vector field \(X\) on \(N\) to a vector field \(\tilde{X}\) on \(M\), pointing in the normal direction at each \(M_{\lambda}\). Let \(J(x)\) denote the determinant of the restriction of \(D\rho(x)\) to the space \([T_{x}M_{\rho(x)}]^{\perp}\). Using the divergence theorem and the coarea formula (see appendix B), we obtain in theorem 3.5 that \[XF(\lambda)=\int_{M_{\lambda}}\big{[}\tilde{X}(f)+(\operatorname{div}(\tilde{X})-\operatorname{div}X(\lambda))f\big{]}\,d\mu_{\lambda}\,,\qquad\text{ for }X\in\operatorname{Vect}(N)\,,f\in C^{\infty}_{c}(M),\] where \(\mu_{\lambda}=J^{-1}\eta_{\lambda}\) and \(\eta_{\lambda}\) is the volume form on \(M_{\lambda}\) induced by its Riemannian structure. In subsection 3.2, we apply the latter formula to prove that we can define a smooth structure on the field of Hilbert spaces \(\mathcal{H}(\lambda)=L^{2}(M_{\lambda},E)\) by taking \(\Gamma^{\infty}=\Gamma^{\infty}_{c}(M,E)\), i.e.
the space of smooth compactly supported sections (we identify \(\varphi\in C^{\infty}_{c}(M,E)\) with the section given by \(\varphi(\lambda)=\varphi|_{M_{\lambda}}\)), and defining the connection by \[\nabla_{X}(\varphi)=\nabla^{E}_{\tilde{X}}(\varphi)+\frac{1}{2}\big{(}\operatorname{div}(\tilde{X})-\operatorname{div}(X)\circ\rho\big{)}\varphi\,,\quad\text{ for }\varphi\in\Gamma^{\infty}.\] After this, one would like to construct a Hermitian bundle structure on the field of Hilbert spaces \(\mathcal{H}(\lambda)=L^{2}(M_{\lambda},E)\). Notice that, if \(\rho\) defines a fiber bundle, then locally the fibers \(M_{\lambda}\) are diffeomorphic to a single manifold \(K\). Each of those diffeomorphisms induces a unitary operator between \(L^{2}(K)\) and the corresponding \(L^{2}(M_{\lambda})\). We extend this fact to our framework and apply our results in section 2 to find conditions to construct a smooth local trivialization (see theorem 3.10 and remark 3.11). In particular, under the required conditions, there is a Hermitian bundle structure on the given field of Hilbert spaces. We finish subsection 3.2 by discussing two particular cases, the trivial line bundle \(E=M\times\mathbb{C}\) and the tangent bundle \(E=TM\). In the first case, since sections of \(E\to M\) are functions \(\varphi:M\to\mathbb{C}\), we have that \(\mathcal{H}(\lambda)=L^{2}(M_{\lambda})\) and \(\Gamma^{\infty}=C^{\infty}_{c}(M)\). Since \(\nabla^{E}\) is flat, the same happens with our \(\nabla\); thus, if \(\rho\) defines a fiber bundle, the condition in theorem 3.10 to ensure the existence of a trivialization is trivially satisfied (see corollary 3.12). In the second particular case, we use the Levi-Civita connection \(\nabla^{L}\) on the tangent bundle, and we rewrite the conditions in theorem 3.10 in terms of the Christoffel symbols of \(\nabla^{L}\) (see corollary 3.13).

### Basic properties of smooth fields of Hilbert spaces and smooth fields of operators

Let \((H\to N,\Gamma^{\infty},\nabla)\) be a smooth field of Hilbert spaces and \(U\subseteq N\) open. Let us recall the definition of the space \(\Gamma^{n}(U)\) given in subsection 3.1 in [8], \(n\in\mathbb{N}\cup\{\infty\}\). The space \(\Gamma^{0}(U)\) is the \(C(U)\)-module of those sections of \(H\) that are locally uniform limits of a sequence in \(\Gamma^{\infty}\). The space \(\Gamma^{1}(U)\) is the \(C^{1}(U)\)-module of those \(\varphi\in\Gamma^{0}(U)\) for which there is a sequence \(\varphi_{j}\in\Gamma^{\infty}\) such that \(\varphi_{j}\to\varphi\) locally uniformly, and for every \(X\in\operatorname{Vect}(U)\), the sequence \(\nabla_{X}\varphi_{j}\) converges locally uniformly. For such \(\varphi\), we can define \(\nabla_{X}\varphi=\lim\nabla_{X}\varphi_{j}\) (lemma 3.1.2 in [8]). The space \(\Gamma^{n}(U)\) is defined inductively: \(\varphi\in\Gamma^{n}(U)\) if \(\varphi\) and \(\nabla_{X}\varphi\) belong to \(\Gamma^{n-1}(U)\), for all \(X\in\operatorname{Vect}(U)\). Finally, \(\Gamma^{\infty}(U)=\bigcap\Gamma^{n}(U)\). The spaces \(\Gamma^{n}(U)\) and \(\Gamma^{\infty}(U)\) are Frechet spaces with the seminorms defined by \[||\varphi||_{C,X_{1},\cdots X_{m}}=\sup\{||\nabla_{X_{1}}\cdots\nabla_{X_{m}}\varphi(\lambda)||:\lambda\in C\}, \tag{1}\] where \(C\subseteq U\) is compact, \(X_{1},\cdots,X_{m}\in\operatorname{Vect}(U)\) and \(m\leq n\) (we can take \(X_{1},\cdots,X_{m}\in\Xi\), with \(\Xi\subset\operatorname{Vect}(U)\) finite and generating the tangent plane at each \(\lambda\in U\)).
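For orientation, we add the following consistency check: in the trivial case \(H=N\times\mathcal{H}\) with \(\nabla_{X}=X\) (discussed in remark 1.9 below), the seminorms (1) reduce to \[||\varphi||_{C,X_{1},\cdots X_{m}}=\sup_{\lambda\in C}||X_{1}\cdots X_{m}\varphi(\lambda)||_{\mathcal{H}},\] so the topology of \(\Gamma^{\infty}(U)\) is the usual Frechet topology of \(C^{\infty}(U,\mathcal{H})\), i.e. locally uniform convergence of all derivatives.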
**Remark 1.5**.: Lemma 2.2.3 in [8] implies that \(\Gamma^{\infty}|_{U}\) defines a smooth structure on \(H|_{U}\to U\) with connection given by \(\nabla^{U}_{X}(\varphi|_{U})=\nabla_{X}(\varphi)|_{U}\).

It will become useful to extend the notion of smooth field of operators given in [2].

**Definition 1.6**.: _Let \((H^{1}\to N,\Gamma^{\infty}_{1},\nabla^{1})\) and \((H^{2}\to N,\Gamma^{\infty}_{2},\nabla^{2})\) be smooth fields of Hilbert spaces. Also let \(A=\{A(\lambda)\}\) be a field of operators with domains \(D[A(\lambda)]\subseteq\mathcal{H}_{1}(\lambda)\) and \(A(\lambda):D[A(\lambda)]\to\mathcal{H}_{2}(\lambda)\), for each \(\lambda\in N\). We say that \(A\) is \(n\)-times smooth if_

* \(D[A(\lambda)]\) _contains_ \(\mathcal{H}^{\infty}_{1}(\lambda):=\{\varphi(\lambda)\mid\varphi\in\Gamma^{\infty}_{1}\}\) _and_ \(D[A^{*}(\lambda)]\) _contains_ \(\mathcal{H}^{\infty}_{2}(\lambda)\)_._
* \(A(\Gamma^{\infty}_{1})\subseteq\Gamma^{n}_{2}(N)\) _and_ \(A^{*}(\Gamma^{\infty}_{2})\subseteq\Gamma^{n}_{1}(N)\)_._

_We denote by \(\mathfrak{A}^{n}(\Gamma^{\infty}_{1},\Gamma^{\infty}_{2})\) the space of \(n\)-times smooth fields of operators. We say that a field of operators \(A\) is smooth if \(A\) belongs to \(\mathfrak{A}^{n}(\Gamma^{\infty}_{1},\Gamma^{\infty}_{2})\), for every \(n\in\mathbb{N}\). We denote by \(\mathfrak{A}^{\infty}(\Gamma^{\infty}_{1},\Gamma^{\infty}_{2})\) the space of smooth fields of operators. We also define \(\mathfrak{A}^{n}(\Gamma^{\infty}_{1})=\mathfrak{A}^{n}(\Gamma^{\infty}_{1},\Gamma^{\infty}_{1})\) and \(\mathfrak{A}^{\infty}(\Gamma^{\infty}_{1})=\mathfrak{A}^{\infty}(\Gamma^{\infty}_{1},\Gamma^{\infty}_{1})\)._

**Remark 1.7**.: Let \(A\) be an \(n\)-times smooth field of operators. For a given function \(a\in C^{\infty}(N)\), using the rule \(\nabla_{X}(aA\varphi)=X(a)A\varphi+a\nabla_{X}(A\varphi)\) one can show that the operator \(aA\) is also an \(n\)-times smooth field of operators. Moreover, the set of \(n\)-times smooth fields of operators forms a \(C^{\infty}(N)\)-module; see [2] for more properties.

Let \(A\) be a smooth field of operators. Then, for each \(X\in\operatorname{Vect}N\), the operator \(\hat{\nabla}_{X}(A)=\nabla^{2}_{X}A-A\nabla^{1}_{X}\) also maps \(\Gamma^{\infty}_{1}\) into \(\Gamma^{\infty}_{2}(N)\), and by definition we have the Leibniz rule \[\nabla^{2}_{X}(A\varphi)=\hat{\nabla}_{X}(A)\varphi+A\nabla^{1}_{X}(\varphi), \tag{2}\] for every \(\varphi\in\Gamma_{1}^{\infty}\). The following result summarizes the main properties of \(\hat{\nabla}\); it is a straightforward generalization of theorem 2.6 in [2].

**Proposition 1.8**.: _Let \((H^{1}\to N,\Gamma_{1}^{\infty},\nabla^{1})\) and \((H^{2}\to N,\Gamma_{2}^{\infty},\nabla^{2})\) be smooth fields of Hilbert spaces and \(A\in\mathfrak{A}^{\infty}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\). Then \(\hat{\nabla}_{X}(A)=[\nabla_{X},A]\) is also a smooth field of operators and satisfies the following properties for all \(X,Y\in\operatorname{Vect}(N)\), \(f\in C^{\infty}(N)\)._

1. \(\hat{\nabla}_{X+Y}(A)=\hat{\nabla}_{X}(A)+\hat{\nabla}_{Y}(A)\,,\quad\hat{\nabla}_{fX}(A)=f\hat{\nabla}_{X}(A)\)
2. \(\hat{\nabla}_{X}(fA)=X(f)A+f\hat{\nabla}_{X}(A),\)
3. \(h(\hat{\nabla}_{X}(A)\varphi,\psi)=h(\varphi,\hat{\nabla}_{\overline{X}}(A^{*})\psi)\)_, for every_ \(\varphi\in\Gamma_{1}^{\infty}\) _and_ \(\psi\in\Gamma_{2}^{\infty}\)_._
4.
\(\hat{\nabla}_{\overline{X}}(A^{*})(\lambda)\subseteq[\hat{\nabla}_{X}(A)(\lambda)]^{*}\)_, for each_ \(\lambda\in N\)_._

The proof of the previous result is practically the same as the proof of theorem 2.6 in [2]. Nevertheless, for the sake of self-containment, let us point out that _i)_ and _ii)_ follow from a direct computation and _iii)_ follows from condition _ii)_ in definition 1.3. In order to show that \(\hat{\nabla}_{X}(A)\) is a field of operators, we claim that, for each \(\lambda\in N\) and \(u\in\mathcal{H}_{1}^{\infty}(\lambda)\), we can define \[\hat{\nabla}_{X}\left(A\right)(\lambda)u=(\hat{\nabla}_{X}(A)\varphi)(\lambda),\] where \(\varphi\in\Gamma^{\infty}\) is such that \(\varphi(\lambda)=u\). Notice that property _iii)_ and the density of \(\mathcal{H}_{2}^{\infty}(\lambda)\) in \(\mathcal{H}_{2}(\lambda)\) imply that if \(\varphi(\lambda)=0\), then \(\hat{\nabla}_{X}(A)\varphi(\lambda)=0\). The latter implies that \(\hat{\nabla}_{X}(A)(\lambda)\) is well defined (independent of the choice of \(\varphi\)).

**Remark 1.9**.: Let us explain the concepts we introduced so far by considering the trivial case, i.e. \(H=N\times\mathcal{H}\), where \(\mathcal{H}\) is a Hilbert space. Also, let \(\Gamma^{\infty}\) be a smooth \(C^{\infty}(N)\)-submodule of \(C^{\infty}(N,\mathcal{H})\) and take as connection \(\nabla_{X}=X\).

1. If \(\Gamma^{\infty}\) is dense in \(C^{\infty}(N,\mathcal{H})\), then \(\Gamma^{\infty}\) defines a smooth field of Hilbert spaces structure on \(H=N\times\mathcal{H}\). Indeed, condition _iii)_ in definition 1.3 follows from considering vectors in \(\mathcal{H}\) as constant sections. Moreover, by definition \(\Gamma^{\infty}(N)=C^{\infty}(N,\mathcal{H})\). We expect that the converse claim holds true as well, but we will not need such a result, at least in this article.
2. Let \(D\subseteq\mathcal{H}\) be a dense subspace. An important example of a dense smooth \(C^{\infty}(N)\)-submodule is the space \(P^{\infty}(N,D)\) of all the sections \(f\in C^{\infty}(N,\mathcal{H})\) of the form \(f(\lambda)=\sum_{k=1}^{n}a_{k}(\lambda)x_{k}\), with \(a_{k}\in C^{\infty}(N)\) and \(x_{k}\in D\). Indeed, the case \(D=\mathcal{H}\) follows from [9, Proposition 44.2] and the general case follows from an \(\epsilon/2\)-argument.
3. If \(\Gamma_{1}^{\infty}\) is a dense smooth \(C^{\infty}(N)\)-submodule of \(C^{\infty}(N,\mathcal{H}_{1})\), \(\Gamma_{2}^{\infty}\) is a dense smooth \(C^{\infty}(N)\)-submodule of \(C^{\infty}(N,\mathcal{H}_{2})\) and \(\Gamma_{1}^{\infty}\) contains the constant sections, then \(\hat{\nabla}\) coincides with the canonical strong derivative. Indeed, if \(A\in\mathfrak{A}^{1}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\) then equation (2) implies that \[X(Au)=\hat{\nabla}_{X}(A)u,\] for any \(u\in\mathcal{H}_{1}\). In what follows, in the trivial case we will denote by \(\hat{X}(A)\) the field of operators \(\hat{\nabla}_{X}(A)\).

The following trivial result asserts that the Leibniz rule also holds for \(\hat{\nabla}\) whenever it makes sense.

**Proposition 1.10**.: _Let \((H^{1}\to N,\Gamma_{1}^{\infty},\nabla^{1})\), \((H^{2}\to N,\Gamma_{2}^{\infty},\nabla^{2})\) and \((H^{3}\to N,\Gamma_{3}^{\infty},\nabla^{3})\) be smooth fields of Hilbert spaces. Also let \(B\in\mathfrak{A}^{\infty}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\). If \(A:\Gamma_{2}^{\infty}(N)\to\Gamma_{3}^{\infty}(N)\), then_ \[\hat{\nabla}_{X}(AB)=\hat{\nabla}_{X}(A)B+A\hat{\nabla}_{X}(B),\] _for every \(X\in\operatorname{Vect}\left(N\right)\).
_The same identity holds if \(B:\Gamma_{1}^{\infty}\to\Gamma_{2}^{\infty}\) and \(A\in\mathfrak{A}^{\infty}(\Gamma_{2}^{\infty},\Gamma_{3}^{\infty})\)._

The following result provides some general conditions on a smooth field of operators, in terms of its derivatives, to guarantee that it can be extended to the completed space \(\Gamma_{1}^{\infty}(N)\). In the next subsection we will find more subtle conditions when \(H^{2}\to N\) is trivial.

**Proposition 1.11**.: _Let \((H^{1}\to N,\Gamma_{1}^{\infty},\nabla^{1})\) and \((H^{2}\to N,\Gamma_{2}^{\infty},\nabla^{2})\) be smooth fields of Hilbert spaces and fix \(n\in\mathbb{N}\cup\{0\}\). If \(A\in\mathfrak{A}^{n}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\) and \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(A)\) is a locally uniformly bounded field of operators, for every \(k\leq n\) and \(X_{1},\cdots,X_{k}\in\operatorname{Vect}N\), then \(A\left(\Gamma_{1}^{n}(N)\right)\subseteq\Gamma_{2}^{n}(N)\) and \(A:\Gamma_{1}^{n}(N)\to\Gamma_{2}^{n}(N)\) is continuous._

Proof.: We will prove our claim by induction. If \(A\) is locally uniformly bounded, then it defines a continuous operator on \(\Gamma_{1}^{\infty}\) with respect to the locally uniform convergence topology, thus \(A(\Gamma_{1}^{0}(N))\subseteq\Gamma_{2}^{0}(N)\) and the case \(n=0\) follows. For the case \(n=1\), let \(\varphi\in\Gamma_{1}^{1}(N)\). Then there is a sequence \((\varphi_{m})\) in \(\Gamma_{1}^{\infty}(N)\) such that \(\varphi_{m}\to\varphi\) and \(\nabla_{X}^{1}(\varphi_{m})\to\nabla_{X}^{1}(\varphi)\), where the limits are taken in the locally uniform convergence topology and \(X\in\operatorname{Vect}\left(N\right)\). Then
\[\nabla_{X}^{2}(A\varphi_{m})=\hat{\nabla}_{X}(A)\varphi_{m}+A\nabla_{X}^{1}(\varphi_{m}).\]
The case \(n=0\) implies that the right hand side converges to \(\hat{\nabla}_{X}(A)\varphi+A\nabla_{X}^{1}(\varphi)\). Thus, by definition \(A\varphi\in\Gamma_{2}^{1}(N)\) and
\[\nabla_{X}^{2}(A\varphi)=\hat{\nabla}_{X}(A)\varphi+A\nabla_{X}^{1}(\varphi). \tag{3}\]
Assume that our claim holds for \(n-1\), let \(A\in\mathfrak{A}^{n}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\) and assume that \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(A)\) is a locally uniformly bounded field of operators, for every \(k\leq n\) and \(X_{1},\cdots,X_{k}\in\operatorname{Vect}N\). Then \(A\left(\Gamma_{1}^{n-1}(N)\right)\subseteq\Gamma_{2}^{n-1}(N)\) and \(\hat{\nabla}_{X}(A)\left(\Gamma_{1}^{n-1}(N)\right)\subseteq\Gamma_{2}^{n-1}(N)\), for every \(X\in\operatorname{Vect}\left(N\right)\). Moreover, if \(\varphi\in\Gamma_{1}^{n}(N)\), then equation (3) implies that \(\nabla_{X}^{2}(A\varphi)\in\Gamma_{2}^{n-1}(N)\). Therefore, \(A\varphi\in\Gamma_{2}^{n}(N)\). The continuity of \(A\) follows from noticing that \(\nabla_{X_{1}}^{2}\cdots\nabla_{X_{k}}^{2}(A\varphi)\) is a sum of terms of the form \(\hat{\nabla}_{X_{i_{1}}}\cdots\hat{\nabla}_{X_{i_{n}}}(A)(\nabla_{X_{j_{1}}}^{1}\cdots\nabla_{X_{j_{m}}}^{1}\varphi)\), where \(\{i_{1},\cdots,i_{n}\}\cup\{j_{1},\cdots,j_{m}\}=\{1,\cdots,k\}\) and \(\{i_{1},\cdots,i_{n}\}\cap\{j_{1},\cdots,j_{m}\}=\emptyset\).
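To make the last step concrete, note that applying equation (3) twice gives, for \(\varphi\in\Gamma_{1}^{2}(N)\) and \(X_{1},X_{2}\in\operatorname{Vect}N\),
\[\nabla_{X_{1}}^{2}\nabla_{X_{2}}^{2}(A\varphi)=\hat{\nabla}_{X_{1}}\hat{\nabla}_{X_{2}}(A)\varphi+\hat{\nabla}_{X_{2}}(A)\nabla_{X_{1}}^{1}(\varphi)+\hat{\nabla}_{X_{1}}(A)\nabla_{X_{2}}^{1}(\varphi)+A\nabla_{X_{1}}^{1}\nabla_{X_{2}}^{1}(\varphi),\]
and each of the four summands is controlled, locally uniformly, by the hypothesis on the derivatives of \(A\); the general case is the obvious iteration of this computation.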
**Definition 1.12**.: _For each \(n\in\mathbb{N}\), we denote by \(\mathfrak{A}^{n}_{lb}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\) the space formed by all the fields of operators \(A\in\mathfrak{A}^{n}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\) such that \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(A)\) is a locally uniformly bounded field of operators, for every \(0\leq k\leq n\) and \(X_{1},\cdots,X_{k}\in\operatorname{Vect}N\). Similarly, we denote by \(\mathfrak{A}^{\infty}_{lb}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\) the space formed by the fields of operators \(A\) such that \(A\in\mathfrak{A}^{n}_{lb}(\Gamma_{1}^{\infty},\Gamma_{2}^{\infty})\), for every \(n\in\mathbb{N}\). We define \(\mathfrak{A}^{n}_{lb}(\Gamma_{1}^{\infty})=\mathfrak{A}^{n}_{lb}(\Gamma_{1}^{\infty},\Gamma_{1}^{\infty})\) and \(\mathfrak{A}^{\infty}_{lb}(\Gamma_{1}^{\infty})=\mathfrak{A}^{\infty}_{lb}(\Gamma_{1}^{\infty},\Gamma_{1}^{\infty})\)._

## 2 Local trivializations and Hilbert bundles

The following result is well known for finite dimensional vector bundles, but we could not find it correctly stated in the literature for the infinite dimensional case (see Remark 2.2 below).

**Proposition 2.1**.: _Let \(\pi:H\to N\) be a field of Hilbert spaces, with \(N\) a Banach manifold. Let \(\{U_{i}\}_{i\in I}\) be an open cover of \(N\) and \(\{\mathcal{H}_{i}\}_{i\in I}\) a family of Hilbert spaces, and assume that for each \(\lambda\in U_{i}\), there is a unitary operator \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\). Also let \(\tau_{ij}:U_{i}\cap U_{j}\to\mathcal{B}(\mathcal{H}_{i},\mathcal{H}_{j})\) be the map given by \(\tau_{ij}(\lambda)=T_{j}(\lambda)T_{i}^{*}(\lambda)\). The following statements are equivalent._
* _There exists a Banach manifold structure on_ \(H\) _such that the open cover_ \(\{U_{i}\}_{i\in I}\) _together with the family of Hilbert spaces_ \(\{\mathcal{H}_{i}\}_{i\in I}\) _and the unitary operators_ \(T_{i}(\lambda)\) _defines a smooth Hilbert bundle structure on_ \(\pi:H\to N\)_._
* _Each_ \(\tau_{ij}\) _is strongly smooth, i.e._ \(\tau_{ij}x\in C^{\infty}(U_{i}\cap U_{j},\mathcal{H}_{j})\)_, for each_ \(x\in\mathcal{H}_{i}\)_._
* _Each_ \(\tau_{ij}\) _belongs to_ \(\mathfrak{A}^{\infty}(P^{\infty}(U_{i}\cap U_{j},\mathcal{H}_{i}),P^{\infty}(U_{i}\cap U_{j},\mathcal{H}_{j}))\)_._

The maps \(\tau_{ij}\) are called the transition maps.

Proof.: By definition a) implies b). Conversely, we claim that the maps \(T_{i}:\pi^{-1}(U_{i})\to U_{i}\times\mathcal{H}_{i}\) define charts on \(H\). Indeed, \(T_{j}T_{i}^{*}(\lambda,x)=(\lambda,T_{j}(\lambda)T_{i}^{*}(\lambda)x)=(\lambda,\tau_{ij}(\lambda)x)\). Since the map \(\mathcal{B}(\mathcal{H}_{i},\mathcal{H}_{j})\times\mathcal{H}_{i}\ni(A,x)\to Ax\in\mathcal{H}_{j}\) is bilinear, each \(T_{j}T_{i}^{*}\) is smooth if and only if the map \(U_{i}\ni\lambda\to\tau_{ij}(\lambda)x\in\mathcal{H}_{j}\) is smooth for each \(x\in\mathcal{H}_{i}\), i.e. the maps \(T_{i}\) define charts if and only if each \(\tau_{ij}\) is strongly smooth. The equivalence between b) and c) follows from recalling that \(P^{\infty}(U_{i}\cap U_{j},\mathcal{H})\) is the space of sections generated by the constant sections (see remark 1.9 b)) and noticing that \(\tau_{ij}^{*}=\tau_{ji}\).

**Remark 2.2**.: Instead of b), in the literature the definition of Hilbert bundle sometimes requires that each \(\tau_{ij}\) is norm smooth. For instance, see [6]. In particular, with such a requirement there are fewer examples of Hilbert bundles than with definition 1.1.
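To get a feeling for the difference between the two requirements, consider the following hypothetical family of unitary operators on a fixed Hilbert space \(\mathcal{H}\) over \(N=\mathbb{R}\) (this example is not taken from [6]): \(\tau(\lambda)=e^{i\lambda B}\) with \(B\) self-adjoint. If \(B\) is bounded, the power series of the exponential shows that \(\tau\) is norm smooth. If \(B\) is unbounded, \(\tau\) is still strongly continuous and \(\lambda\mapsto\tau(\lambda)x\) is smooth for every \(x\) in the dense subspace \(\bigcap_{k}D(B^{k})\); however, by the spectral theorem
\[\|\tau(\lambda)-\tau(\mu)\|=\sup_{s\in\sigma(B)}|e^{i(\lambda-\mu)s}-1|,\]
and choosing \(\lambda_{n}-\mu=1/s_{n}\) with \(s_{n}\in\sigma(B)\), \(s_{n}\to\infty\), shows that \(\tau\) is not even norm continuous, let alone norm smooth. Families of this kind are exactly what the norm smoothness requirement excludes.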
Notice that the proof of b) implies a) above was essentially borrowed from proposition III.1.2 in [6]. We will provide a fourth statement equivalent to a), b) or c) in proposition 2.1. We will replace \(P^{\infty}(U_{i}\cap U_{j},\mathcal{H}_{i})\) in condition c) by any dense smooth submodule of \(C^{\infty}(U_{i}\cap U_{j},\mathcal{H}_{i})\) and we will require another technical condition in terms of the derivatives of each \(\tau_{ij}\) (defined by proposition 1.8; see also remark 1.9).

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be Hilbert spaces, and \(A(\lambda):\mathcal{H}_{1}\to\mathcal{H}_{2}\) be a bounded operator, for each \(\lambda\in N\). Clearly, \(A\) and \(A^{*}\) are strongly smooth fields of operators if and only if \(A\in\mathfrak{A}^{\infty}(P^{\infty}(N,\mathcal{H}_{1}),P^{\infty}(N,\mathcal{H}_{2}))\). In such a case we also have that
\[X_{1}\cdots X_{k}(Ax)=\hat{X}_{1}\cdots\hat{X}_{k}(A)x,\]
for every \(x\in\mathcal{H}_{1}\) and \(X_{1},\cdots,X_{k}\in\operatorname{Vect}\left(N\right)\). Therefore, the domain of each \(\hat{X}_{1}\cdots\hat{X}_{k}(A)\) is \(\mathcal{H}_{1}\), and since it also has a densely defined adjoint, each \(\hat{X}_{1}\cdots\hat{X}_{k}(A)\) is bounded. Moreover, the previous identity also shows that \(\hat{X}_{1}\cdots\hat{X}_{k}(A)\) is strongly continuous. The same conclusion follows for \(A^{*}\). In other words, each \(\hat{X}_{1}\cdots\hat{X}_{k}(A)\) belongs to \(C(N,B(\mathcal{H}_{1},\mathcal{H}_{2})_{*-st})\), where \(B(\mathcal{H}_{1},\mathcal{H}_{2})_{*-st}\) is the space of bounded operators endowed with the \(*\)-strong topology. The following result asserts that the latter property characterizes strongly smooth fields of operators, even if we replace \(P^{\infty}(N,\mathcal{H}_{i})\) by any dense smooth \(C^{\infty}(N)\)-submodule of \(C^{\infty}(N,\mathcal{H}_{i})\).

**Proposition 2.3**.: _Let \(\tilde{\Gamma}_{1}^{\infty}\) be a dense smooth submodule of \(C^{\infty}(N,\mathcal{H}_{1})\), \(\tilde{\Gamma}_{2}^{\infty}\) be a dense smooth submodule of \(C^{\infty}(N,\mathcal{H}_{2})\) and \(A\) belong to \(\mathfrak{A}^{\infty}(\tilde{\Gamma}_{1}^{\infty},\tilde{\Gamma}_{2}^{\infty})\). Then \(A\) is strongly smooth if and only if \(\hat{X}_{1}\cdots\hat{X}_{k}(A)\) belongs to \(C(N,B(\mathcal{H}_{1},\mathcal{H}_{2})_{*-st})\), for every \(X_{1},\cdots,X_{k}\in\operatorname{Vect}\left(N\right)\)._

Proof.: We have already shown that if \(A\) is strongly smooth then each \(\hat{X}_{1}\cdots\hat{X}_{k}(A)\in C(N,B(\mathcal{H}_{1},\mathcal{H}_{2})_{*-st})\), for every \(X_{1},\cdots,X_{k}\in\operatorname{Vect}\left(N\right)\). Conversely, let \(x\in\mathcal{H}_{1}\). Let us show that \(Ax\) belongs to \(C^{1}(N,\mathcal{H}_{2})\). Since \(\hat{X}(A)x\) belongs to \(C^{0}(N,\mathcal{H}_{2})\), according to lemma 5.1.1 in [8] (or appendix A), it is enough to show that the map \(\lambda\to\langle A(\lambda)x,y\rangle\) is smooth and
\[X(\langle Ax,y\rangle)(\lambda)=\langle\hat{X}(A)(\lambda)x,y\rangle,\]
for every \(y\in\mathcal{H}_{2}\). Let \(f_{n}\in\tilde{\Gamma}_{2}^{\infty}\) be such that \(f_{n}\to y\) (in the canonical topology of \(C^{\infty}(N,\mathcal{H}_{2})\)). In particular, \(f_{n}\) converges locally uniformly to \(y\) and \(X(f_{n})\) converges locally uniformly to \(0\), for every \(X\in\operatorname{Vect}N\). Since \(Ax\) is locally uniformly bounded, \(\langle Ax,f_{n}\rangle\) converges locally uniformly to \(\langle Ax,y\rangle\).
Thus, we only need to prove that \(X(\langle Ax,f_{n}\rangle)\) converges locally uniformly to \(\langle\hat{X}(A)x,y\rangle\). Since \(A\) is a smooth field of operators, \(\langle Ax,f_{n}\rangle\) is smooth and
\[X(\langle Ax,f_{n}\rangle)=X(\langle x,A^{*}f_{n}\rangle)=\langle x,\hat{\overline{X}}(A^{*})f_{n}\rangle+\langle x,A^{*}\overline{X}(f_{n})\rangle=\langle\hat{X}(A)x,f_{n}\rangle+\langle Ax,\overline{X}(f_{n})\rangle.\]
Therefore, since \(Ax\) and \(\hat{X}(A)x\) are locally uniformly bounded, \(X(\langle Ax,f_{n}\rangle)\) converges locally uniformly to \(\langle\hat{X}(A)x,y\rangle\). Following the same argument, we can show by induction that \(Ax\) belongs to \(C^{n}(N,\mathcal{H}_{2})\), for every \(n\in\mathbb{N}\), and this finishes the proof.

**Corollary 2.4**.: _For each \(i,j\in I\), let \(U_{i}\), \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\) and \(\tau_{ij}\) be as in proposition 2.1. Also, let \(\Gamma^{\infty}\) be a \(C^{\infty}(N)\)-module of sections of \(H\to N\) and define \(\tilde{\Gamma}_{ij}^{\infty}:=T_{i}(\Gamma^{\infty}|_{U_{i}\cap U_{j}})\). Assume that each \(\tilde{\Gamma}_{ij}^{\infty}\) is a dense smooth subspace of \(C^{\infty}(U_{i}\cap U_{j},\mathcal{H}_{i})\). The field of operators \(\tau_{ij}\) is strongly smooth if and only if_
* \(\hat{X}_{1}\cdots\hat{X}_{k}(\tau_{ij})\) _belongs to_ \(C(U_{i}\cap U_{j},B(\mathcal{H}_{i},\mathcal{H}_{j})_{*-st})\)_, for every_ \(X_{1},\cdots,X_{k}\in\operatorname{Vect}\left(U_{i}\cap U_{j}\right)\)_._

Let \((H\to N,\Gamma^{\infty},\nabla)\) be a smooth field of Hilbert spaces. The notion of smoothness of fields of operators given in definition 1.6 and the extension of the connection to such fields of operators allow us to consider fields of operators of the form \(\hat{X}(A)\) in proposition 2.3 and its corollary 2.4. However, in order to find further conditions guaranteeing that a given family of triples \((U_{i},\mathcal{H}_{i},T_{i})_{i\in I}\) as above defines a Hilbert bundle structure on \(H\to N\) (or equivalently, that each \(\tau_{ij}\) is strongly smooth), a subtler approach is to require that each \(T_{i}\) defines a smooth field of operators. In other words, we will assume that \(T_{i}(\Gamma^{\infty}|_{U_{i}})\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) and that there is a dense smooth subspace \(\tilde{\Gamma}_{i}^{\infty}\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) such that \(T_{i}^{*}(\tilde{\Gamma}_{i}^{\infty})\subseteq\Gamma^{\infty}(U_{i})\). For instance, this happens when \(\tilde{\Gamma}_{i}^{\infty}:=T_{i}(\Gamma^{\infty}|_{U_{i}})\) is a dense smooth subspace of \(C^{\infty}(U_{i},\mathcal{H}_{i})\) (as in the previous corollary). Notice that, if \(\tilde{\Gamma}_{i}^{\infty}\) is a dense smooth subspace of \(C^{\infty}(U_{i},\mathcal{H}_{i})\) and each \(T_{i}\) belongs to \(\mathfrak{A}_{lb}^{\infty}(\Gamma^{\infty}|_{U_{i}},\tilde{\Gamma}_{i}^{\infty})\), then each \(\tau_{ij}\) is strongly smooth. Indeed, proposition 1.11 implies that \(T_{i}^{*}x\) belongs to \(\Gamma^{\infty}(U_{i})\), for every \(x\in\mathcal{H}_{i}\), and that \(T_{j}\) maps \(\Gamma^{\infty}(U_{j})\) into \(C^{\infty}(U_{j},\mathcal{H}_{j})\). Thus, \(\tau_{ij}x=T_{j}T_{i}^{*}x\) belongs to \(C^{\infty}(U_{i}\cap U_{j},\mathcal{H}_{j})\), for every \(x\in\mathcal{H}_{i}\). Actually, under those assumptions we will prove that \(H\to N\) admits a Hermitian bundle structure (see corollary 2.8).

The following result asserts that, when \(\tilde{\Gamma}_{i}^{\infty}=P^{\infty}(U_{i},\mathcal{H}_{i})\), we can weaken the condition \(T_{i}\in\mathfrak{A}_{lb}^{\infty}(\Gamma^{\infty}|_{U_{i}},\tilde{\Gamma}_{i}^{\infty})\) and still obtain a Hilbert bundle structure.
**Proposition 2.5**.: _Let \((H\to N,\Gamma^{\infty},\nabla)\) be a smooth field of Hilbert spaces. Also, let \(\{U_{i}\}_{i\in I}\) be an open cover of \(N\) and \(\{\mathcal{H}_{i}\}_{i\in I}\) a family of Hilbert spaces, and assume that for each \(\lambda\in U_{i}\), there is a unitary operator \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\). Also assume that \(T_{i}(\Gamma^{\infty}|_{U_{i}})\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) and \(T_{i}^{*}x\) belongs to \(\Gamma^{\infty}(U_{i})\), for every \(x\in\mathcal{H}_{i}\)._
* \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\) _is a field of bounded operators, for every_ \(k\in\mathbb{N}\) _and_ \(X_{1},\cdots,X_{k}\in\operatorname{Vect}\left(U_{i}\right)\)_._
* \(T_{i}(\Gamma^{\infty}(U_{i}))\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) _if and only if_ \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\) _maps_ \(\Gamma^{\infty}(U_{i})\) _into_ \(C(U_{i},\mathcal{H}_{i})\)_, for every_ \(k\in\mathbb{N}\) _and_ \(X_{1},\cdots,X_{k}\in\operatorname{Vect}\left(U_{i}\right)\)_._
* _Let_ \(A_{i}(X):=T_{i}^{*}\hat{\nabla}_{X}(T_{i})\)_. If_ \(A_{i}(X)\left(\Gamma^{\infty}(U_{i})\right)\subseteq\Gamma^{\infty}(U_{i})\)_, then_ \(T_{i}(\Gamma^{\infty}(U_{i}))\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\)_._

Proof.: By definition, we have that
\[\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i}^{*})x=\nabla_{X_{1}}\cdots\nabla_{X_{k}}(T_{i}^{*}x),\]
for every \(k\in\mathbb{N}\) and \(X_{1},\cdots X_{k}\in\operatorname{Vect}\left(U_{i}\right)\). In particular, the domain of \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i}^{*})(\lambda)\) is \(\mathcal{H}_{i}\), for every \(\lambda\in U_{i}\). Proposition 1.8 iii) and condition iii) in definition 1.3 imply that \([\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i}^{*})]^{*}\) is densely defined. The closed graph theorem and proposition 1.8 iv) imply that \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i}^{*})\) is bounded and
\[\left[\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i}^{*})\right]^{*}=\hat{\nabla}_{\overline{X_{1}}}\cdots\hat{\nabla}_{\overline{X_{k}}}(T_{i}),\]
for every \(k\in\mathbb{N}\) and \(X_{1},\cdots X_{k}\in\operatorname{Vect}\left(U_{i}\right)\). In particular, a) follows. In order to show b), assume that \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\) maps \(\Gamma^{\infty}(U_{i})\) into \(C(U_{i},\mathcal{H}_{i})\), for every \(k\in\mathbb{N}\) and \(X_{1},\cdots,X_{k}\in\operatorname{Vect}\left(U_{i}\right)\). It is enough to prove by induction on \(n\) that \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\left(\Gamma^{\infty}(U_{i})\right)\subseteq C^{n}(U_{i},\mathcal{H}_{i})\), for every \(k\in\mathbb{N}\cup\{0\}\), \(X_{1},\cdots X_{k}\in\operatorname{Vect}\left(U_{i}\right)\) and \(n\in\mathbb{N}\). The case \(n=0\) and \(k\in\mathbb{N}\) is precisely our initial assumption. Since \(T_{i}\) is unitary, \(T_{i}\left(\Gamma^{0}(U_{i})\right)\subseteq C(U_{i},\mathcal{H}_{i})\) and the case \(k=0\) and \(n=0\) follows. Assume that \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\left(\Gamma^{\infty}(U_{i})\right)\subseteq C^{n-1}(U_{i},\mathcal{H}_{i})\), for every \(k\in\mathbb{N}\cup\{0\}\), \(X_{1},\cdots X_{k}\in\operatorname{Vect}\left(U_{i}\right)\).
If \(\varphi\in\Gamma^{\infty}(U_{i})\), then the map \(\langle\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\varphi,x\rangle=\langle\varphi,\hat{\nabla}_{\overline{X_{1}}}\cdots\hat{\nabla}_{\overline{X_{k}}}(T_{i}^{*})x\rangle\) belongs to \(C^{\infty}(U_{i})\), for every \(x\in\mathcal{H}_{i}\), and we have that
\[X\left(\langle\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\varphi,x\rangle\right)=\langle\nabla_{X}(\varphi),\hat{\nabla}_{\overline{X_{1}}}\cdots\hat{\nabla}_{\overline{X_{k}}}(T_{i}^{*})x\rangle+\langle\varphi,\nabla_{\overline{X}}(\hat{\nabla}_{\overline{X_{1}}}\cdots\hat{\nabla}_{\overline{X_{k}}}(T_{i}^{*})x)\rangle\]
\[=\langle\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\nabla_{X}(\varphi)+\hat{\nabla}_{X}\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\varphi,x\rangle.\]
Since \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\nabla_{X}(\varphi)+\hat{\nabla}_{X}\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\varphi\in C^{n-1}(U_{i},\mathcal{H}_{i})\), [8, Lemma 5.1.1] (or appendix A) implies that, for each \(k\in\mathbb{N}\cup\{0\}\) and \(X_{1},\cdots X_{k}\in\operatorname{Vect}\left(U_{i}\right)\), \(\hat{\nabla}_{X_{1}}\cdots\hat{\nabla}_{X_{k}}(T_{i})\left(\Gamma^{\infty}(U_{i})\right)\subseteq C^{n}(U_{i},\mathcal{H}_{i})\). Therefore, \(T_{i}(\Gamma^{\infty}(U_{i}))\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\). The converse statement in b) is trivial.

We follow a similar but simpler argument to show c). Assume that \(A_{i}(X)(\Gamma^{\infty}(U_{i}))\subseteq\Gamma^{\infty}(U_{i})\). It is enough to prove by induction that \(T_{i}\left(\Gamma^{\infty}(U_{i})\right)\subseteq C^{n}(U_{i},\mathcal{H}_{i})\), for every \(n\in\mathbb{N}\). We showed the case \(n=0\) during the proof of b). Assume that \(T_{i}\left(\Gamma^{\infty}(U_{i})\right)\subseteq C^{n-1}(U_{i},\mathcal{H}_{i})\). If \(\varphi\in\Gamma^{\infty}(U_{i})\), then the map \(\langle T_{i}\varphi,x\rangle=\langle\varphi,T_{i}^{*}x\rangle\) belongs to \(C^{\infty}(U_{i})\), for every \(x\in\mathcal{H}_{i}\), and we have that
\[X(\langle T_{i}\varphi,x\rangle)=\langle\nabla_{X}(\varphi),T_{i}^{*}x\rangle+\langle\varphi,\nabla_{\overline{X}}(T_{i}^{*}x)\rangle=\langle T_{i}\nabla_{X}(\varphi)+\hat{\nabla}_{X}(T_{i})\varphi,x\rangle=\langle T_{i}(\nabla_{X}(\varphi)+A_{i}(X)(\varphi)),x\rangle\]
Since \(T_{i}(\nabla_{X}(\varphi)+A_{i}(X)(\varphi))\in C^{n-1}(U_{i},\mathcal{H}_{i})\), [8, Lemma 5.1.1] (or appendix A) implies that \(T_{i}\varphi\in C^{n}(U_{i},\mathcal{H}_{i})\). Therefore, \(T_{i}(\Gamma^{\infty}(U_{i}))\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) and c) follows.

Let \((U_{i},\mathcal{H}_{i},T_{i})_{i\in I}\) be as above and assume that each \(\tilde{\Gamma}_{i}^{\infty}:=T_{i}(\Gamma^{\infty}|_{U_{i}})\) is a dense smooth subspace of \(C^{\infty}(U_{i},\mathcal{H}_{i})\). For each \(X\in\operatorname{Vect}\left(U_{i}\right)\), define the field of operators \(\tilde{A}_{i}(X):=-\hat{\nabla}_{X}(T_{i})T_{i}^{*}\) on \(\tilde{\Gamma}_{i}^{\infty}\). A direct computation shows that
\[\hat{X}(\tau_{ij})f=(\tau_{ij}\tilde{A}_{i}(X)-\tilde{A}_{j}(X)\tau_{ij})f \tag{4}\]
\[\tilde{A}_{i}(X+Y)=\tilde{A}_{i}(X)+\tilde{A}_{i}(Y),\qquad\tilde{A}_{i}(aX)=a\tilde{A}_{i}(X). \tag{5}\]
\[\langle\tilde{A}_{i}(X)f,g\rangle=\langle f,-\tilde{A}_{i}(\overline{X})g\rangle, \tag{6}\]
for every \(f,g\in\tilde{\Gamma}_{i}^{\infty}\), \(a\in C^{\infty}(U_{i})\) and \(X,Y\in\operatorname{Vect}\left(U_{i}\right)\). Notice that if we define \(A_{i}(X)=T_{i}^{*}\hat{\nabla}_{X}(T_{i})\) (as in proposition 2.5), then \(A_{i}(X)=-T_{i}^{*}\tilde{A}_{i}(X)T_{i}\) on \(\Gamma^{\infty}|_{U_{i}}\). Moreover, the family of fields of operators \(A_{i}(X)\) also satisfies equations (5) and (6), replacing \(f,g\) by \(\varphi,\psi\in\Gamma^{\infty}|_{U_{i}}\).
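To see where the skew-symmetry (6) comes from, write \(f=T_{i}\varphi\) and \(g=T_{i}\psi\) with \(\varphi,\psi\in\Gamma^{\infty}|_{U_{i}}\), so that \(X(f)=T_{i}\nabla_{X}(\varphi)-\tilde{A}_{i}(X)f\). Since each \(T_{i}(\lambda)\) is unitary, comparing the ordinary Leibniz rule \(X\langle f,g\rangle=\langle X(f),g\rangle+\langle f,\overline{X}(g)\rangle\) with condition ii) in definition 1.3 yields
\[0=\langle\tilde{A}_{i}(X)f,g\rangle+\langle f,\tilde{A}_{i}(\overline{X})g\rangle,\]
which is precisely equation (6).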
If we do not assume that \(T_{i}(\Gamma^{\infty}|_{U_{i}})\) is a dense smooth subspace of \(C^{\infty}(U_{i},\mathcal{H}_{i})\), and instead we assume that there is a dense smooth subspace \(\tilde{\Gamma}_{i}^{\infty}\) of \(C^{\infty}(U_{i},\mathcal{H}_{i})\) such that \(T_{i}\in\mathfrak{A}^{\infty}(\Gamma^{\infty}|_{U_{i}},\tilde{\Gamma}_{i}^{\infty})\), then to define \(\tilde{A}_{i}(X)\) we would also need to assume that \(T_{i}(\Gamma^{\infty}(U_{i}))\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\).

**Proposition 2.6**.: _Let \((H\to N,\Gamma^{\infty},\nabla)\) be a smooth field of Hilbert spaces. Also, let \(\{U_{i}\}_{i\in I}\), \(\{\mathcal{H}_{i}\}_{i\in I}\) and \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\) define a Hilbert bundle structure on \(H\to N\) such that \(\Gamma^{\infty}(N)\subseteq\Gamma^{\infty}(N,H)\). Assume that there is a dense smooth subspace \(\tilde{\Gamma}_{i}^{\infty}\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) such that \(T_{i}\in\mathfrak{A}^{\infty}(\Gamma^{\infty}|_{U_{i}},\tilde{\Gamma}_{i}^{\infty})\), for each \(i\in I\). The following statements are equivalent:_
* \(H\to N\) _admits a Hermitian bundle structure with connection_ \(\tilde{\nabla}\) _such that_ \(\tilde{\nabla}|_{\Gamma^{\infty}}=\nabla\)_._
* \(\tilde{A}_{i}(X)=-\tilde{\nabla}_{X}(T_{i})T_{i}^{*}:\tilde{\Gamma}_{i}^{\infty}\to C^{\infty}(U_{i},\mathcal{H}_{i})\) _is a field of bounded operators such that_ \(\tilde{A}_{i}(X)\left[C^{\infty}(U_{i},\mathcal{H}_{i})\right]\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) _and equations (4), (5) and (6) hold true, for every_ \(i\in I\)_,_ \(a\in C^{\infty}(U_{i})\)_,_ \(f,g\in C^{\infty}(U_{i},\mathcal{H}_{i})\) _and_ \(X,Y\in\operatorname{Vect}\left(U_{i}\right)\)_._

Proof.: If we assume a), then b) follows from taking \(\Gamma^{\infty}=\Gamma^{\infty}(N,H)\) in the definition of \(\tilde{A}_{i}(X)\) (so \(\tilde{\Gamma}_{i}^{\infty}=C^{\infty}(U_{i},\mathcal{H}_{i})\)). Conversely, if we assume b), we can define for \(\varphi\in\Gamma^{\infty}(N,H)\),
\[\tilde{\nabla}_{X}(\varphi)|_{U_{i}}:=T_{i}^{*}(X+\tilde{A}_{i}(X))T_{i}(\varphi|_{U_{i}}).\]
Equation (4) implies that \(\tilde{\nabla}_{X}(\varphi)\) is well defined on \(N\). Equations (5) and (6) imply conditions i) and ii) in the definition of a Hermitian bundle. Thus, \(\tilde{\nabla}\) defines the required connection and this finishes the proof.

**Remark 2.7**.: Notice that proposition 1.4 is an immediate consequence of the latter proposition.

**Corollary 2.8**.: _Let \((H\to N,\Gamma^{\infty},\nabla)\) be a smooth field of Hilbert spaces. Also let \(\{U_{i}\}_{i\in I}\) be an open cover of \(N\) and \(\{\mathcal{H}_{i}\}_{i\in I}\) a family of Hilbert spaces, and assume that for each \(\lambda\in U_{i}\), there is a unitary operator \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\). Assume that there is a dense smooth subspace \(\tilde{\Gamma}_{i}^{\infty}\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) such that \(T_{i}\in\mathfrak{A}^{\infty}(\Gamma^{\infty}|_{U_{i}},\tilde{\Gamma}_{i}^{\infty})\), for each \(i\in I\). If \(T_{i}\in\mathfrak{A}^{\infty}_{lb}(\Gamma^{\infty}|_{U_{i}},\tilde{\Gamma}_{i}^{\infty})\), then \(T_{i}\) defines an isomorphism of Fréchet spaces between \(\Gamma^{\infty}(U_{i})\) and \(C^{\infty}(U_{i},\mathcal{H}_{i})\). In particular, in such a case \(H\to N\) admits a Hermitian bundle structure and \(\Gamma^{\infty}(N)=\Gamma^{\infty}(N,H)\).
_If \(\tilde{\Gamma}_{i}^{\infty}=T_{i}(\Gamma^{\infty}|_{U_{i}})\) is a dense smooth subspace of \(C^{\infty}(U_{i},\mathcal{H}_{i})\) and \(\tilde{A}_{i}(X)\in\mathfrak{A}^{\infty}_{lb}(\tilde{\Gamma}_{i}^{\infty})\) for every \(X\in\operatorname{Vect}\left(U_{i}\right)\), then \(T_{i}\in\mathfrak{A}^{\infty}_{lb}(\Gamma^{\infty}|_{U_{i}},\tilde{\Gamma}_{i}^{\infty})\)._

Proof.: By definition, the density of \(\tilde{\Gamma}_{i}^{\infty}\) implies that \(\tilde{\Gamma}_{i}^{\infty}(U_{i})=C^{\infty}(U_{i},\mathcal{H}_{i})\). Therefore, proposition 1.11 implies that \(T_{i}\left(\Gamma^{\infty}(U_{i})\right)\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\) and \(T_{i}^{*}\left(C^{\infty}(U_{i},\mathcal{H}_{i})\right)\subseteq\Gamma^{\infty}(U_{i})\) (continuously). Since \(T_{i}T_{i}^{*}=I\), it follows that \(T_{i}\left(\Gamma^{\infty}(U_{i})\right)=C^{\infty}(U_{i},\mathcal{H}_{i})\) (and \(T_{i}^{*}\left(C^{\infty}(U_{i},\mathcal{H}_{i})\right)=\Gamma^{\infty}(U_{i})\)). Notice that \(\hat{\nabla}_{X}(T_{i})=-\tilde{A}_{i}(X)T_{i}\). Thus, induction and proposition 1.10 finish our proof.

**Definition 2.9**.: _Let \((H\to N,\Gamma^{\infty},\nabla)\) be a smooth field of Hilbert spaces. Also let \(\{U_{i}\}_{i\in I}\) be an open cover of \(N\) and \(\{\mathcal{H}_{i}\}_{i\in I}\) a family of Hilbert spaces, and assume that for each \(\lambda\in U_{i}\), there is a unitary operator \(T_{i}(\lambda):\mathcal{H}(\lambda)\to\mathcal{H}_{i}\)._
* _We say that_ \((U_{i},\mathcal{H}_{i},T_{i})_{i\in I}\) _is a weak smooth local trivialization of_ \((H\to N,\Gamma^{\infty},\nabla)\) _if for each_ \(i\in I\) _and_ \(x\in\mathcal{H}_{i}\)_,_ \(T_{i}^{*}x\) _belongs to_ \(\Gamma^{\infty}(U_{i})\) _and_ \(T_{i}\left(\Gamma^{\infty}(U_{i})\right)\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\)_._
* _We say that_ \((U_{i},\mathcal{H}_{i},T_{i})_{i\in I}\) _is a smooth local trivialization of_ \((H\to N,\Gamma^{\infty},\nabla)\) _if for each_ \(i\in I\)_,_ \(T_{i}(\Gamma^{\infty}(U_{i}))=C^{\infty}(U_{i},\mathcal{H}_{i})\)_._

Given a smooth field of Hilbert spaces \((H\to N,\Gamma^{\infty},\nabla)\), we would like to find conditions under which it is possible to construct a local trivialization. This problem was considered only globally in [8]. Indeed, one of their main results asserts that if \((H\to N,\Gamma^{\infty},\nabla)\) is analytic and flat, then there is a (global) trivialization \(T:H\to V\) such that \(T(\Gamma^{\infty})\subseteq C^{\infty}(N,V)\) and \(XT=T\nabla_{X}\), where \(N\) is required to be a connected and simply connected analytic manifold, and \(V\) is a suitable Hilbert space (see theorem 5.1.2 in [8]). The most difficult part of the proof is to show that analyticity implies that for each \(\lambda\in N\), there exists an open neighborhood \(U\) such that through every point in \(H|_{U}\) there passes a horizontal section \(\varphi\in\Gamma^{\infty}(U)\) (lemma 4.2.1 in [8]). Since \(N\) was assumed to be simply connected, the latter property holds globally (lemma 4.1.3 in [8]). In our case, in order to obtain a Hermitian bundle, we will need the existence of horizontal sections, but we do not need to assume that \(N\) is simply connected. Essentially, we will extend the proof of theorem 5.1.2 in [8] taking into account the additional difficulties coming from local trivializations and non-flatness.

**Theorem 2.10**.: _Let \((H\to N,\Gamma^{\infty},\nabla)\) be a smooth field of Hilbert spaces, where \(N\) is a connected finite dimensional manifold._
_Assume that for every \(\lambda\in N\) there is an open neighborhood \(U\) and a family of smooth fields of operators \(\{A(X)\}_{X\in\operatorname{Vect}{(U)}}\) on \(\Gamma^{\infty}(U)\) such that:_
* _Equations (5) and (6) hold true._
* \(A(X):\Gamma^{\infty}(U)\to\Gamma^{\infty}(U)\) _is continuous with respect to the topology induced from_ \(\Gamma^{n}(U)\)_, for every_ \(X\in\operatorname{Vect}{(U)}\) _and_ \(n\in\mathbb{N}\)_._
* _Through every point in_ \(H|_{U}\) _there passes a section_ \(\varphi\in\Gamma^{\infty}(U)\) _such that_ \(\nabla_{X}\varphi=A(X)\varphi\)_, for every_ \(X\in\operatorname{Vect}{(U)}\)_._

_Then \((H\to N,\Gamma^{\infty},\nabla)\) admits a smooth local trivialization._

We say that a section \(\varphi\) is horizontal with respect to the family of fields of operators \(\{A(X)\}\) if the identity \(\nabla_{X}\varphi=A(X)\varphi\) holds for every \(X\in\operatorname{Vect}{(U)}\). When \(A(X)=0\) for every \(X\in\operatorname{Vect}{(U)}\), we recover the notion of horizontal section given in [8]; thus our result generalizes the one in [8] by considering local trivializations and also the case \(A(X)\neq 0\).

Proof.: Let \(\{U_{i}\}_{i\in I}\) be an open cover of \(N\) by connected open sets and \(\{A_{i}(X)\}_{X\in\operatorname{Vect}{(U_{i})}}\) be a family of smooth fields of operators satisfying a), b) and c) for each \(i\in I\). Also let
\[\mathcal{H}_{i}=\{\varphi\in\Gamma^{\infty}(U_{i})\mid\text{$\varphi$ is horizontal with respect to}\ \{A_{i}(X)\}_{X\in\operatorname{Vect}{(U_{i})}}\}.\]
Following the proof of lemma 4.1.1 in [8], notice that, if \(\varphi,\psi\in\mathcal{H}_{i}\),
\[X\left(\langle\varphi,\psi\rangle\right)=\langle\nabla_{X}\varphi,\psi\rangle+\langle\varphi,\nabla_{\overline{X}}\psi\rangle=\langle A_{i}(X)\varphi,\psi\rangle+\langle\varphi,A_{i}(\overline{X})\psi\rangle=0.\]
Since each \(U_{i}\) is connected, the map \(\langle\varphi,\psi\rangle\) is constant on \(U_{i}\) for every \(\varphi,\psi\in\mathcal{H}_{i}\), thus it defines an inner product on \(\mathcal{H}_{i}\). Moreover, since through every point in \(H|_{U_{i}}\) there passes a horizontal section, the map \(\tilde{T}_{i}(\lambda):\mathcal{H}_{i}\to\mathcal{H}(\lambda)\) given by \(\tilde{T}_{i}(\lambda)\varphi=\varphi(\lambda)\) is onto. In particular, \(\mathcal{H}_{i}\) is a Hilbert space and \(\tilde{T}_{i}(\lambda)\) is unitary. Let us show that if we take \(T_{i}(\lambda):=\tilde{T}_{i}^{*}(\lambda)\), then \((U_{i},\mathcal{H}_{i},T_{i})_{i\in I}\) is a smooth local trivialization. For simplicity, whenever we interpret a horizontal section \(\varphi\) as a point in the Hilbert space \(\mathcal{H}_{i}\) we will denote it by \(\hat{\varphi}\). By definition, if \(\varphi\) is a horizontal section in \(\Gamma^{\infty}(U_{i})\), then \(T_{i}\varphi=\hat{\varphi}\). Let us prove that \(T_{i}(\Gamma^{\infty}(U_{i}))\subseteq C^{\infty}(U_{i},\mathcal{H}_{i})\). Let \(\psi\in\Gamma^{\infty}(U_{i})\). Since each \(T_{i}(\lambda)\) is unitary, clearly \(T_{i}\psi\in C(U_{i},\mathcal{H}_{i})\) (in fact, \(T_{i}(\Gamma^{0}(U_{i}))\subseteq C(U_{i},\mathcal{H}_{i})\)). Assume that \(T_{i}\psi\in C^{n}(U_{i},\mathcal{H}_{i})\).
For any \(\hat{\varphi}\in\mathcal{H}_{i}\) and \(X\in\operatorname{Vect}(U_{i})\), we have that \(\langle T_{i}\psi,\hat{\varphi}\rangle=\langle\psi,\varphi\rangle\) belongs to \(C^{n+1}(U_{i})\) and
\[X\langle T_{i}\psi,\hat{\varphi}\rangle=X\langle\psi,\varphi\rangle=\langle\nabla_{X}(\psi),\varphi\rangle+\langle\psi,\nabla_{\overline{X}}(\varphi)\rangle=\langle T_{i}\nabla_{X}(\psi),\hat{\varphi}\rangle+\langle\psi,A(\overline{X})\varphi\rangle\]
\[=\langle T_{i}(\nabla_{X}-A(X))\psi,\hat{\varphi}\rangle\]
Since \(T_{i}(\nabla_{X}-A(X))\psi\) belongs to \(C^{n}(U_{i},\mathcal{H}_{i})\), lemma 5.1.1 in [8] (or appendix A) implies that \(T_{i}\psi\in C^{n+1}(U_{i},\mathcal{H}_{i})\) and \(X(T_{i}\psi)=T_{i}(\nabla_{X}-A(X))\psi\).

Let \(\tilde{\Gamma}_{i}\) be the space of sections of the form \(\sum_{j=1}^{m}a_{j}\hat{\varphi}_{j}\), with \(a_{j}\in C^{\infty}(U_{i})\) and \(\hat{\varphi}_{j}\in\mathcal{H}_{i}\). Clearly, \(T_{i}^{*}(\tilde{\Gamma}_{i})\subseteq\Gamma^{\infty}(U_{i})\). Let us show that each \(T_{i}^{*}\) is continuous. We will prove by induction on \(n\) that if \((f_{k})\) is a sequence such that \(f_{k}\to f\) in \(\tilde{\Gamma}_{i}\) with the \(C^{n}\)-topology then \(T_{i}^{*}(f_{k})\to T_{i}^{*}(f)\) with the \(\Gamma^{n}(U_{i})\)-topology. Since each \(T_{i}^{*}(\lambda)\) is unitary, the claim follows trivially for \(n=0\). For \(n\geq 1\), assume the claim holds true for \(n-1\). It is enough to show that \(T_{i}^{*}(f_{k})\to T_{i}^{*}(f)\) in \(\Gamma^{n-1}(U_{i})\) and \(\nabla_{X}[T_{i}^{*}(f_{k})]\to\nabla_{X}[T_{i}^{*}(f)]\) in \(\Gamma^{n-1}(U_{i})\), for every \(X\in\operatorname{Vect}{(U_{i})}\). Since \(f_{k}\to f\) in \(C^{n-1}(U_{i},\mathcal{H}_{i})\), the first part follows from the inductive hypothesis. Moreover, b) implies that \(A(X)T_{i}^{*}(f_{k})\to A(X)T_{i}^{*}(f)\) in \(\Gamma^{n-1}(U_{i})\). Therefore, since \(X(f_{k})\to X(f)\) in \(C^{n-1}(U_{i},\mathcal{H}_{i})\), we have that
\[\nabla_{X}(T_{i}^{*}f_{k})=T_{i}^{*}X(f_{k})+A(X)T_{i}^{*}f_{k}\;\longrightarrow\;T_{i}^{*}X(f)+A(X)T_{i}^{*}f=\nabla_{X}(T_{i}^{*}f).\]
Since \(\tilde{\Gamma}_{i}\) is dense in \(C^{\infty}(U_{i},\mathcal{H}_{i})\), we have that \(T_{i}^{*}\left[C^{\infty}(U_{i},\mathcal{H}_{i})\right]\subseteq\Gamma^{\infty}(U_{i})\). Moreover, if \(f\in C^{\infty}(U_{i},\mathcal{H}_{i})\), then \(f=T_{i}(T_{i}^{*}f)\in T_{i}(\Gamma^{\infty}(U_{i}))\). Therefore \(T_{i}(\Gamma^{\infty}(U_{i}))=C^{\infty}(U_{i},\mathcal{H}_{i})\) and this finishes the proof.

**Corollary 2.11**.: _If \((H\to N,\Gamma^{\infty},\nabla)\) is a flat analytic field of Hilbert spaces, then it admits a smooth local trivialization \((U_{i},\mathcal{H}_{i},T_{i})_{i\in I}\) such that \(A_{i}=0\)._

So far, we have provided conditions to induce Hilbert bundle or Hermitian bundle structures from smooth field of Hilbert spaces structures. It is also important to be able to determine when two different smooth fields of Hilbert spaces induce the same Hilbert bundle structure. The proof of the following result is straightforward.

**Proposition 2.12**.: _Let \((\Gamma_{1}^{\infty},\nabla^{1})\) and \((\Gamma_{2}^{\infty},\nabla^{2})\) be two smooth field of Hilbert spaces structures on \(H\to N\) admitting weak smooth local trivializations._
_If \(\Gamma_{1}^{\infty}(N)=\Gamma_{2}^{\infty}(N)\), then the corresponding Hilbert bundle structures are equivalent._

Notice that, if \((\Gamma_{1}^{\infty},\nabla^{1})\) and \((\Gamma_{2}^{\infty},\nabla^{2})\) admit smooth local trivializations and the corresponding Hilbert bundle structures are equivalent, then \(\Gamma_{1}^{\infty}(N)=\Gamma_{2}^{\infty}(N)\).

Let us end this section by providing a way to construct trivializations. Let \((H_{1}\to N,\Gamma_{1}^{\infty},\nabla^{1})\) and \((H_{2}\to N,\Gamma_{2}^{\infty},\nabla^{2})\) be smooth fields of Hilbert spaces and \(S:\Gamma_{1}^{\infty}\to\Gamma_{2}^{\infty}(N)\) be a smooth field of unitary operators. Assume that \((H_{2}\to N,\Gamma_{2}^{\infty},\nabla^{2})\) admits a local trivialization \((U_{i},\mathcal{H}_{i},T_{i}^{2})\). Then \(T_{i}^{1}=T_{i}^{2}S\) defines a local trivialization of \((H_{1}\to N,\Gamma_{1}^{\infty},\nabla^{1})\) and
\[\tilde{A}_{i}^{1}(X)=\tilde{A}_{i}^{2}(X)+T_{i}^{2}S\hat{\nabla}_{X}(S^{*})(T_{i}^{2})^{*}.\]
For instance, if \((H_{1}\to N,\Gamma_{1}^{\infty},\nabla^{1})\) is a projectively flat smooth field of Hilbert spaces and the curvature is exact, then one can construct a flat smooth field of Hilbert spaces \((H_{2}\to N,\Gamma_{2}^{\infty},\nabla^{2})\) and a smooth field of operators \(S:\Gamma_{1}^{\infty}\to\Gamma_{2}^{\infty}\), as explained in subsection I.2.4 in [8]. In particular, \((H_{1}\to N,\Gamma_{1}^{\infty},\nabla^{1})\) admits a local trivialization if it is also analytic (see theorem 2.4.2 in [8]).

## 3 Riemannian direct images

Let \(M\) and \(N\) be oriented Riemannian manifolds of dimensions \(m\) and \(k\) respectively, with \(k<m\). Let \(\rho:M\to N\) be a smooth submersion. Let \(M_{\lambda}:=\rho^{-1}(\lambda)\); the implicit function theorem guarantees that \(M_{\lambda}\) is an \((m-k)\)-dimensional submanifold of \(M\), for each \(\lambda\in N\). Recall that by definition, \(D\rho(x):T_{x}M\to T_{\rho(x)}N\) is an epimorphism, for each \(x\in M\). Moreover, since \(\mathrm{Ker}D\rho(x)=T_{x}M_{\rho(x)}\), the restriction of \(D\rho(x)\) to \(T_{x}^{\perp}M_{\rho(x)}\) defines an isomorphism. In view of this fact, the following notation will be useful.

**Definition 3.1**.: _Let \(\rho:M\to N\) be a submersion and \(\eta_{\lambda}\) the Riemannian volume form on \(M_{\lambda}\)._
* _For_ \(X\in\mathrm{Vect}(N)\)_, we denote by_ \(\check{X}\) _the only vector field on_ \(M\) _normal to each_ \(M_{\lambda}\) _such that_ \(D\rho(\check{X})=X\)_, in other words_
\[\check{X}(q)=(D\rho(q)|_{T_{q}^{\perp}M_{\rho(q)}})^{-1}(X(\rho(q)))\,,\quad q\in M.\]
* _We denote_ \(J(q):=J_{\rho}(q):=\det[D\rho|_{T_{q}^{\perp}M_{\rho(q)}}]\)_._
* _We define the volume form_ \(\mu_{\lambda}=J^{-1}\eta_{\lambda}\)_._

The implicit function theorem guarantees that \(\check{X}\) is a smooth vector field. We interpret \(\check{X}\) as the natural lift of \(X\) from \(N\) to \(M\). Notice that if \(N=\mathbb{R}\), then \(J_{\rho}(x)=||\nabla\rho(x)||\).

Let \(\pi:E\to M\) be a finite dimensional Hermitian vector bundle, with fiber \(F\) and a Hermitian connection \(\nabla^{E}\). Denote by \(\Gamma(M,E)\) the corresponding space of sections.
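As a simple illustration of definition 3.1 (it will reappear below), take \(M=\mathbb{R}^{m}\setminus\{0\}\) with the Euclidean metric, \(N=(0,\infty)\) and \(\rho(x)=\|x\|\). Then \(M_{\lambda}\) is the sphere of radius \(\lambda\), \(\nabla\rho(x)=x/\|x\|\) and
\[\frac{\check{\partial}}{\partial\lambda}(x)=\frac{x}{\|x\|}\,,\qquad J_{\rho}(x)=\|\nabla\rho(x)\|=1\,,\qquad\mu_{\lambda}=\eta_{\lambda},\]
that is, the lift of \(\frac{\partial}{\partial\lambda}\) is the unit radial field and \(\mu_{\lambda}\) is just the Riemannian volume form of the sphere.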
We will endow the field of Hilbert spaces
\[\mathcal{H}(\lambda)=L^{2}(M_{\lambda},E):=\left\{\varphi\in\Gamma(M_{\lambda},E)\,\big{|}\,\varphi\text{ is measurable},\int_{M_{\lambda}}\|\varphi(x)\|_{x}^{2}d\mu_{\lambda}(x)<\infty\right\} \tag{7}\]
with an explicit smooth structure, where \(\|\cdot\|_{x}\) is the norm on \(\pi^{-1}(x)\). We will show that such a field admits a trivialization if \(\rho\) defines a fiber bundle and an additional condition on \(\nabla^{E}\) holds, a condition which becomes trivial when \(\rho\) is a proper map. Within the holomorphic framework (i.e. \(M,N\) are complex manifolds and \(\rho\) is holomorphic), this problem was considered in chapter II of [8]. However, the construction there cannot be adapted to the smooth framework, as explained in subsection 6.5 of [8]. In fact, the authors of [8], motivated by some geometric quantization problems, suggested that in order to overcome the latter issue we might also require an Ehresmann connection on \(M\). In our case, the canonical Ehresmann connection \(M\ni x\to T_{x}^{\perp}M_{\rho(x)}\) appears naturally.

### Differentiating integrals over \(M_{\lambda}\) with respect to \(\lambda\)

Since the inner product on each \(\mathcal{H}(\lambda)\) is an integral over \(M_{\lambda}\) and we are looking for a connection satisfying condition _ii)_ in definition 1.3, we would like to compute derivatives of functions of the form
\[F(\lambda)=\int_{M_{\lambda}}f\,d\mu_{\lambda}\,,\quad f\in C^{\infty}(M)\,,\;\lambda\in N. \tag{8}\]
Let us consider the case \(N=\mathbb{R}^{k}\). Thus, \(\rho=\rho_{1}\times\cdots\times\rho_{k}\) and we have that \(T_{x}^{\perp}M_{\rho(x)}=\mathrm{span}\langle\nabla\rho_{i}(x)\mid i=1,\cdots,k\rangle\), where \(\rho_{i}\in C^{\infty}(M)\). If \(\lambda=(\lambda_{1},\cdots,\lambda_{k})\in\mathbb{R}^{k}\), it will become useful to consider the submanifolds
\[M_{\lambda}^{i}:=\{x\in M\mid\rho_{n}(x)=\lambda_{n},\forall n\neq i\}\,,\quad i=1,\ldots k. \tag{9}\]
Clearly \(M_{\lambda}\) is a submanifold of \(M_{\lambda}^{i}\) of co-dimension \(1\). Let also \(J_{i}(x)=J_{\tilde{\rho}_{i}}(x)\), where \(\tilde{\rho}_{i}:=\rho_{1}\times\cdots\times\widehat{\rho}_{i}\times\cdots\times\rho_{k}\), i.e. \(\tilde{\rho}_{i}\) is obtained by removing \(\rho_{i}\) from \(\rho\). In particular, \(M_{\lambda}^{i}=\widetilde{\rho}_{i}^{-1}(\tilde{\lambda}_{i})\), where \(\tilde{\lambda}_{i}\) is obtained by removing \(\lambda_{i}\) from \(\lambda\).

**Lemma 3.2**.: _Let \(M\) be a Riemannian manifold and \(\rho_{i}\in C^{\infty}(M)\), with \(i=1,\cdots,k\) and \(k<m\). Assume that \(\rho=\rho_{1}\times\cdots\times\rho_{k}\) is a submersion and let \(\pi_{x}^{i}\) be the orthogonal projection of \(T_{x}M\) onto the subspace \(\langle\nabla\rho_{n}(x)\mid i\neq n\rangle^{\perp}\)._
* _For each_ \(i=1,\ldots,k\)_, the lifted vector field_ \(\frac{\check{\partial}}{\partial\lambda_{i}}\) _is given by_
\[\frac{\check{\partial}}{\partial\lambda_{i}}(x)=\frac{1}{||\pi_{x}^{i}(\nabla\rho_{i}(x))||^{2}}\pi_{x}^{i}(\nabla\rho_{i}(x)).\] (10)
* _The gradient of the restriction of_ \(\rho_{i}\) _to_ \(M_{\lambda}^{i}\) _is_ \(\pi^{i}(\nabla\rho_{i})\)_. In particular, for each_ \(x\in M\)_, we have that_ \(J(x)=J_{i}(x)||\pi_{x}^{i}(\nabla\rho_{i}(x))||\)_._

Proof.: Let \(v\) be the only vector orthogonal to each \(M_{\lambda}\) such that \(D\rho(v)=\frac{\partial}{\partial\lambda_{i}}\). Thus, \(v\) is the only such vector satisfying \(D\rho(v)(\lambda_{n})=\delta_{in}\), for each \(n=1,\cdots,k\). By definition, \(D\rho(v)(\lambda_{n})=\langle\nabla\rho_{n},v\rangle\).
It is straightforward to check that the right hand side of (10) satisfies the required conditions. For the second part of our result, if \(x\in M_{\lambda}\), then \(\nabla(\rho_{i}|_{M_{\lambda}^{i}})(x)\in T_{x}M_{\lambda}^{\perp}\cap\langle\nabla\rho_{n}(x)\mid i\neq n\rangle^{\perp}\). Thus, \(\nabla(\rho_{i}|_{M_{\lambda}^{i}})(x)=C\pi_{x}^{i}(\nabla\rho_{i}(x))\) for some real constant \(C\). Moreover,
\[\langle\nabla(\rho_{i}|_{M_{\lambda}^{i}})(x),\pi_{x}^{i}(\nabla\rho_{i}(x))\rangle=D\rho(x)[\pi_{x}^{i}(\nabla\rho_{i}(x))](\lambda_{i})=||\pi_{x}^{i}(\nabla\rho_{i}(x))||^{2}.\]
Therefore \(C=1\). The last claim follows after recalling that \(J(x)=|\det(D\rho(x)|_{T_{x}^{\perp}M_{\lambda}})|\) and using the orthogonal decomposition \(T_{x}^{\perp}M_{\lambda}=\mathrm{span}\{||\pi_{x}^{i}(\nabla\rho_{i}(x))||\frac{\check{\partial}}{\partial\lambda_{i}}\}\oplus T_{x}^{\perp}M_{\lambda}^{i}\).

In order to compute derivatives of \(F\) defined in (8), we will need to consider the notion of divergence of a vector field. Let \(Y\) be a vector field on \(M\) and \(\eta\) a volume form on \(M\). By definition, the divergence of \(Y\) with respect to \(\eta\) is the unique smooth function \(\mathrm{div}_{\eta}(Y)\in C^{\infty}(M)\) such that
\[\mathcal{L}_{Y}(\eta)=\mathrm{div}_{\eta}(Y)\eta,\]
where \(\mathcal{L}\) is the Lie derivative on \(M\). If \(\eta\) is the canonical volume form on \(M\) (coming from its Riemannian structure), then we omit \(\eta\) in the notation, i.e. we write \(\mathrm{div}(Y)\). In [8], an extension of the previous definition of the divergence, suitable for our framework, was considered. Let \(\nu\) be an \((m-k)\)-form such that its restriction to each \(M_{\lambda}\) is a volume form. The divergence of \(Y\) with respect to \(\nu\) is the unique smooth function \(\mathrm{div}_{\nu}(Y)\) such that
\[\mathcal{L}_{Y}(\nu)|_{M_{\lambda}}=\mathrm{div}_{\nu}(Y)\nu|_{M_{\lambda}},\quad\forall\lambda\in N.\]
The following lemma provides some identities concerning the computation of the divergence of vector fields with respect to different forms.

**Lemma 3.3**.: _Let \(M\) and \(N\) be smooth manifolds with volume forms \(\eta\) and \(\zeta\) respectively. Fix a smooth submersion \(\rho:M\to N\) and let \(\nu\) be an \((m-k)\)-form on \(M\) such that \(\eta=\nu\wedge\rho^{*}(\zeta)\)._
* _If_ \(Y\) _is a vector field on_ \(M\)_, then_
\[\operatorname{div}_{\nu}(Y)=\operatorname{div}_{\eta}(Y)-\operatorname{div}_{\zeta}(D\rho(Y))\circ\rho.\]
_Moreover, if_ \(Y\) _is tangent to each_ \(M_{\lambda}\) _then_ \(\operatorname{div}_{\eta}Y(x)=\operatorname{div}_{\mu_{\rho(x)}}Y_{\rho(x)}(x)\)_, where_ \(Y_{\lambda}\) _denotes the restriction of_ \(Y\) _to_ \(M_{\lambda}\)_._
* _Let_ \(J\) _be a non-vanishing smooth function on_ \(M\) _and let_ \(\omega=J^{-1}\eta\)_._
_Then, for each vector field_ \(Y\) _on_ \(M\)_, we have that_
\[\operatorname{div}_{\omega}(Y)=J\operatorname{div}_{\eta}(J^{-1}Y)=\operatorname{div}_{\eta}(Y)-J^{-1}Y(J).\]
_Similarly, if_ \(L\) _is a smooth manifold endowed with a volume form_ \(\omega\)_,_ \(Y\) _is a vector field on_ \(M\)_, and_ \(\Psi:M\to L\) _is a diffeomorphism, then_
\[\operatorname{div}_{\eta}(Y)=J_{\Psi}\operatorname{div}_{\omega}(D\Psi(J_{\Psi}^{-1}Y))=\operatorname{div}_{\omega}(D\Psi Y)\circ\Psi-J_{\Psi}^{-1}Y(J_{\Psi}),\]
_where_ \(J_{\Psi}\) _is the Jacobian of_ \(\Psi\)_._
* _For each_ \(X,Y\in\operatorname{Vect}(M)\)_, one has_ \(\operatorname{div}([X,Y])=X(\operatorname{div}(Y))-Y(\operatorname{div}(X))\)_._

Proof.: Since \(\eta=\nu\wedge\rho^{*}(\zeta)\), we have that
\[\operatorname{div}_{\eta}(Y)\nu\wedge\rho^{*}(\zeta)=\mathcal{L}_{Y}(\nu)\wedge\rho^{*}(\zeta)+\nu\wedge\rho^{*}(\mathcal{L}_{D\rho(Y)}\zeta)=\mathcal{L}_{Y}(\nu)\wedge\rho^{*}(\zeta)+(\operatorname{div}_{\zeta}(D\rho(Y))\circ\rho)\,\nu\wedge\rho^{*}(\zeta).\]
Since \(\rho^{*}(\zeta)(x)\) vanishes on \(TM_{\rho(x)}\), the latter identity implies that
\[(\operatorname{div}_{\eta}(Y)-\operatorname{div}_{\zeta}(D\rho(Y))\circ\rho)\,\nu|_{M_{\lambda}}=\mathcal{L}_{Y}(\nu)|_{M_{\lambda}}=\operatorname{div}_{\nu}(Y)\nu|_{M_{\lambda}}\]
and this shows the first claim of part a). The second claim follows from the previous identity and from noticing that the coarea formula (see Appendix A or [4]) implies that the restriction of \(\nu\) to any \(M_{\lambda}\) coincides with \(\mu_{\lambda}:=J^{-1}\eta_{\lambda}\). The first claim in part b) follows from noticing that \(\mathcal{L}_{Y}\omega=J^{-1}\mathcal{L}_{Y}\eta+Y(J^{-1})\eta\) and that \(JY(J^{-1})=-J^{-1}Y(J)\). The second claim in part b) follows from the same argument, but using that \(J_{\Psi}\eta=\Psi^{*}(\omega)\) and that the pullback of \(\Psi\) exchanges the Lie derivatives. To prove c), we compute
\[\operatorname{div}([X,Y])\eta =\mathcal{L}_{[X,Y]}\eta=\mathcal{L}_{X}\mathcal{L}_{Y}\eta-\mathcal{L}_{Y}\mathcal{L}_{X}\eta=\mathcal{L}_{X}\operatorname{div}(Y)\eta-\mathcal{L}_{Y}\operatorname{div}(X)\eta\]
\[=\big{(}X(\operatorname{div}(Y))+\operatorname{div}(Y)\operatorname{div}(X)-Y(\operatorname{div}(X))-\operatorname{div}(X)\operatorname{div}(Y)\big{)}\eta\]
\[=\big{(}X(\operatorname{div}(Y))-Y(\operatorname{div}(X))\big{)}\eta.\]

**Remark 3.4**.: It is not difficult to show the existence of \(\nu\) such that \(\eta=\nu\wedge\rho^{*}(\zeta)\), but clearly it is not unique. For instance, we can define \(\nu\) locally applying the implicit function theorem. For details, see lemma 2.1 in [1].

Now we can prove the following derivation formula.

**Theorem 3.5**.: _Let \(\rho\in C^{m}(M,N)\) and \(f\in C^{r}_{c}(M)\), with \(r\leq m\). The map \(F:N\to\mathbb{R}\) given by_
\[F(\lambda)=\int_{M_{\lambda}}f\mu_{\lambda}\]
_is \(r\) times differentiable, where \(\mu_{\lambda}\) is defined in definition 3.1. Moreover, if \(X\) is a vector field on \(N\), then_
\[XF(\lambda)=\int_{M_{\lambda}}\operatorname{div}_{\nu}(f\check{X})\,\mu_{\lambda}=\int_{M_{\lambda}}\check{X}(f)+(\operatorname{div}(\check{X})-\operatorname{div}X(\lambda))f\,\mu_{\lambda},\]
_where \(\nu\) is any \((m-k)\)-form on \(M\) such that \(\eta=\nu\wedge\rho^{*}\zeta\)._

Proof.: We shall separate the proof into three cases: \(N=\mathbb{R}\) and \(X=\frac{\partial}{\partial\lambda}\), \(N=\mathbb{R}^{k}\) and \(X\) arbitrary, and finally the general case.
**Case 1, \(N=\mathbb{R}\) and \(X=\frac{\partial}{\partial\lambda}\).** Fix \(\lambda^{0}\in\mathbb{R}\) and let \(h>0\) be small enough. Consider the submanifold with boundary
\[M_{\lambda^{0},\lambda^{0}+h}=\{x\in M\mid\lambda^{0}\leq\rho(x)\leq\lambda^{0}+h\}.\]
Clearly \(\partial M_{\lambda^{0},\lambda^{0}+h}=M_{\lambda^{0}}\cup M_{\lambda^{0}+h}\); the outgoing normal vector along \(M_{\lambda^{0}+h}\) is \(v=\frac{1}{||\nabla\rho||}\nabla\rho\), while along \(M_{\lambda^{0}}\) it is \(-v\). Therefore, the divergence theorem implies that
\[\int_{M_{\lambda^{0}+h}}f\mu_{\lambda^{0}+h}-\int_{M_{\lambda^{0}}}f\mu_{\lambda^{0}}=\int_{M_{\lambda^{0},\lambda^{0}+h}}\operatorname{div}(f||\nabla\rho||^{-1}v)\eta.\]
Using lemma 3.2 and the coarea formula (see Appendix A or [4]) on the right hand side, we obtain
\[F(\lambda^{0}+h)-F(\lambda^{0})=\int_{\lambda^{0}}^{\lambda^{0}+h}\left(\int_{M_{\lambda}}||\nabla\rho||^{-1}\operatorname{div}\left(f\frac{\check{\partial}}{\partial\lambda}\right)\eta_{\lambda}\right)\mathrm{d}\lambda.\]
Therefore, the fundamental theorem of calculus implies that
\[\frac{\partial F}{\partial\lambda}(\lambda)=\int_{M_{\lambda}}\operatorname{div}\left(f\frac{\check{\partial}}{\partial\lambda}\right)\mu_{\lambda}.\]

**Case 2, \(N=\mathbb{R}^{k}\) and \(X\) arbitrary.** Let \(\rho=\rho_{1}\times\cdots\times\rho_{k}\). Notice that
\[F(\lambda)=\int_{M_{\lambda}}(||\pi^{i}(\nabla\rho_{i})||J^{-1}f)||\pi^{i}(\nabla\rho_{i})||^{-1}\eta_{\lambda}.\]
Applying the previous case with \(M=M_{\lambda}^{i}\) defined in (9) and using lemma 3.2, we obtain
\[\frac{\partial F}{\partial\lambda_{i}}(\lambda) =\int_{M_{\lambda}}\operatorname{div}\left(||\pi^{i}(\nabla\rho_{i})||J^{-1}f\frac{\check{\partial}}{\partial\lambda_{i}}\right)||\pi^{i}(\nabla\rho_{i})||^{-1}\eta_{\lambda}\]
\[=\int_{M_{\lambda}}J_{i}\operatorname{div}\left(J_{i}^{-1}f\frac{\check{\partial}}{\partial\lambda_{i}}\right)\mu_{\lambda}.\]
Since \(\frac{\check{\partial}}{\partial\lambda_{i}}\in TM_{\lambda}^{i}\) and the divergence in the previous identity is computed on \(M_{\lambda}^{i}\) with respect to \(\eta_{\lambda}^{i}\), lemma 3.3 implies our result for \(f\in C_{c}^{r}(M)\) and \(X=\frac{\partial}{\partial\lambda_{i}}\). If \(X=\sum_{i}a_{i}\frac{\partial}{\partial\lambda_{i}}\), with \(a_{i}\in C^{\infty}(\mathbb{R}^{k})\), then
\[\int_{M_{\lambda}}\operatorname{div}(f\check{X})\,\mu_{\lambda} =\sum_{i}a_{i}(\lambda)\frac{\partial F}{\partial\lambda_{i}}(\lambda)+\sum_{i}\int_{M_{\lambda}}f\frac{\check{\partial}}{\partial\lambda_{i}}(a_{i}\circ\rho)\,\mu_{\lambda}\]
\[=XF(\lambda)+\Big{(}\sum_{i}\frac{\partial a_{i}}{\partial\lambda_{i}}\Big{)}(\lambda)F(\lambda).\]

**Case 3, arbitrary \(N\).** For the general case, take \(\lambda\in V\subseteq N\) and \(\Psi:V\to W\subseteq\mathbb{R}^{k}\) a local coordinate chart. Notice that
\[F(\lambda)=\int_{M_{\lambda}}(J_{\Psi}\circ\rho)fJ_{\rho}^{-1}(J_{\Psi}\circ\rho)^{-1}\eta_{\lambda},\]
where \(J_{\Psi}\) is the Jacobian of \(\Psi\). Since \(J_{\Psi\circ\rho}=(J_{\Psi}\circ\rho)J_{\rho}\), applying the previous case to \(\Psi\circ\rho\), we obtain
\[XF(\lambda) =\int_{M_{\lambda}}[\operatorname{div}((J_{\Psi}\circ\rho)f\check{X})-\operatorname{div}D\Psi X(\Psi(\lambda))J_{\Psi}(\lambda)f]J_{\Psi}^{-1}(\lambda)J_{\rho}^{-1}\eta_{\lambda}\]
\[=\int_{M_{\lambda}}\operatorname{div}(f\check{X})+[J_{\Psi}^{-1}X(J_{\Psi})-\operatorname{div}(D\Psi X)\circ\Psi](\lambda)f\mu_{\lambda}.\]
Part b) of lemma 3.3 finishes the proof.
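As a simple sanity check of theorem 3.5 (not needed later), take \(M=\mathbb{R}^{2}\) with the Euclidean metric, \(N=\mathbb{R}\) and \(\rho(x,y)=x\). Then \(M_{\lambda}=\{\lambda\}\times\mathbb{R}\), \(J_{\rho}=\|\nabla\rho\|=1\), \(\mu_{\lambda}=\mathrm{d}y\), and for \(X=\frac{\partial}{\partial\lambda}\) we have \(\check{X}=\partial_{x}\) and \(\operatorname{div}(\check{X})=\operatorname{div}X=0\), so the formula reduces to the classical differentiation under the integral sign:
\[\frac{\mathrm{d}}{\mathrm{d}\lambda}\int_{\mathbb{R}}f(\lambda,y)\,\mathrm{d}y=\int_{\mathbb{R}}\partial_{x}f(\lambda,y)\,\mathrm{d}y,\qquad f\in C_{c}^{1}(\mathbb{R}^{2}).\]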
**Remark 3.6**.: The only step where we used that \(f\) has compact support was when we applied the divergence theorem over the space \(M_{\lambda^{0},\lambda^{0}+h}\), but that identity would also hold if \(M_{\lambda^{0},\lambda^{0}+h}\) is compact. Therefore, if \(\rho\) is proper the previous formula holds for any \(f\in C^{r}(M)\).

**Remark 3.7**.: Clearly the map
\[\tilde{F}(\lambda)=\int_{M_{\lambda}}f\eta_{\lambda}\,,\]
obtained by replacing \(\mu_{\lambda}\) by \(\eta_{\lambda}\), is also smooth and we have that
\[X\tilde{F}(\lambda)=\int_{M_{\lambda}}J^{-1}\operatorname{div}(Jf\check{X})-\operatorname{div}X(\lambda)f\,\eta_{\lambda}=\int_{M_{\lambda}}\operatorname{div}(f\check{X})-\operatorname{div}X(\lambda)f+J^{-1}\check{X}(J)f\,\eta_{\lambda}.\]

### Smooth structure and trivialization

In this subsection we denote by \(H\to N\) the field of Hilbert spaces \(\mathcal{H}(\lambda)\) defined in (7). We will use theorem 3.5 to endow \(H\to N\) with a smooth structure. However, it is not clear whether this smooth field of Hilbert spaces admits a trivialization, unless further geometrical conditions are assumed. For instance, we will apply corollary 2.8 to show that if \(\rho\) is also a proper map then \(H\to N\) admits a trivialization. Indeed, under that assumption \(\rho\) defines a fiber bundle (Ehresmann's theorem). In particular, there is a smooth manifold \(K\) such that locally each \(M_{\lambda}\) is diffeomorphic to \(K\). Using a trivialization of the vector bundle \(E\to M\) we can define the required unitary operators \(T(\lambda)\) (see theorem 3.10). Since \(F\cong\pi^{-1}(x)\) is a finite dimensional Hilbert space, it follows that \(E\to M\) can be considered as a field of Hilbert spaces. Let \(\Gamma_{c}^{\infty}(M,E)\) be the space of compactly supported sections. It is clear that \(\Gamma_{c}^{\infty}(M,E)\) with the Hermitian connection \(\nabla^{E}\) is a smooth structure on the field \(E\to M\). Notice that, for a given \(\varphi\in\Gamma_{c}^{\infty}(M,E)\), the restriction \(\varphi|_{M_{\lambda}}\) lies in \(\mathcal{H}(\lambda)\) for all \(\lambda\in N\). Moreover, we can extend sections defined on \(M_{\lambda}\) to sections on \(M\) by using the same argument applied to extend functions in [7, Lemma 5.34].

**Lemma 3.8**.: _Let \(\lambda\in N\) and let \(f\in\Gamma_{c}^{\infty}(M_{\lambda},E)\) be a smooth section with compact support. There exists a compactly supported section \(\varphi\in\Gamma_{c}^{\infty}(M,E)\) such that \(\varphi|_{M_{\lambda}}=f\)._

Since \(\rho\equiv\lambda\) on \(M_{\lambda}\), \(\Gamma_{c}^{\infty}(M,E)\) is a \(C^{\infty}(N)\)-module, where the multiplication between \(a\in C^{\infty}(N)\) and a section \(\varphi\) is defined by \((a\circ\rho)\varphi\).

**Theorem 3.9**.: _Let \(\pi:E\to M\) be a finite dimensional Hermitian bundle with a Hermitian connection \(\nabla^{E}\) and let \(\rho:M\to N\) be a submersion. Let \(\Gamma^{\infty}=\Gamma_{c}^{\infty}(M,E)\) and \(\mathcal{H}(\lambda)=L^{2}(M_{\lambda},E)\), where \(M_{\lambda}\) is endowed with the volume form \(\mu_{\lambda}\) defined in definition 3.1._
_The map \(\nabla:\operatorname{Vect}(N)\times\Gamma^{\infty}\to\Gamma^{\infty}\) given by_
\[\nabla_{X}(\varphi)=\nabla^{E}_{\check{X}}(\varphi)+\frac{1}{2}(\operatorname{div}(\check{X})-\operatorname{div}(X)\circ\rho)\varphi,\]
_defines a smooth structure on \(H\to N\) with curvature \(R[X,Y]=R^{E}[\check{X},\check{Y}]\), where \(R^{E}\) denotes the curvature of \(\nabla^{E}\)._

Proof.: The first two conditions of i) in definition 1.3 are straightforward. To show the third one, we compute
\[\nabla_{X}(a\varphi) =\nabla^{E}_{\check{X}}(a\varphi)+\frac{1}{2}(\operatorname{div}(a\check{X})-\operatorname{div}(aX)\circ\rho)\varphi\]
\[=\check{X}(a\circ\rho)\varphi+a\nabla^{E}_{\check{X}}(\varphi)+\frac{1}{2}\big{(}\check{X}(a\circ\rho)+a\operatorname{div}(\check{X})-X(a)-a\operatorname{div}(X)\big{)}\varphi\]
\[=X(a)\varphi+a\nabla^{E}_{\check{X}}(\varphi)+\frac{1}{2}\big{(}X(a)+a\operatorname{div}(\check{X})-X(a)-a\operatorname{div}(X)\big{)}\varphi\]
\[=X(a)\varphi+a\nabla^{E}_{\check{X}}(\varphi)+a\frac{1}{2}(\operatorname{div}(\check{X})-\operatorname{div}(X)\circ\rho)\varphi\]
\[=X(a)\varphi+a\nabla_{X}(\varphi)\,,\]
where we have used the identities \(\operatorname{div}(aX)=X(a)+a\operatorname{div}(X)\) and \(\check{X}(a\circ\rho)=X(a)\circ\rho\). Condition _ii)_ follows from Theorem 3.5. Indeed, we compute for \(X\in\operatorname{Vect}(N)\),
\[Xh(\varphi,\psi)(\lambda) =\int_{M_{\lambda}}\check{X}(h^{E}(\varphi,\psi))+\big{(}\operatorname{div}(\check{X})-\operatorname{div}(X)\big{)}h^{E}(\varphi,\psi)\,\mu_{\lambda}\]
\[=\int_{M_{\lambda}}h^{E}(\nabla^{E}_{\check{X}}\varphi,\psi)+\frac{1}{2}\big{(}\operatorname{div}(\check{X})-\operatorname{div}(X)\big{)}h^{E}(\varphi,\psi)\,\mu_{\lambda}\]
\[\qquad+\int_{M_{\lambda}}h^{E}(\varphi,\nabla^{E}_{\check{\overline{X}}}\psi)+\frac{1}{2}\big{(}\operatorname{div}(\check{\overline{X}})-\operatorname{div}(\overline{X})\big{)}h^{E}(\varphi,\psi)\,\mu_{\lambda}\]
\[=\int_{M_{\lambda}}h^{E}\big{(}\nabla^{E}_{\check{X}}\varphi+\frac{1}{2}\big{(}\operatorname{div}(\check{X})-\operatorname{div}(X)\big{)}\varphi,\psi\big{)}\,\mu_{\lambda}\]
\[\qquad+\int_{M_{\lambda}}h^{E}\big{(}\varphi,\nabla^{E}_{\check{\overline{X}}}\psi+\frac{1}{2}\big{(}\operatorname{div}(\check{\overline{X}})-\operatorname{div}(\overline{X})\big{)}\psi\big{)}\,\mu_{\lambda}\]
\[=h(\nabla_{X}\varphi,\psi)(\lambda)+h(\varphi,\nabla_{\overline{X}}\psi)(\lambda)\,,\]
where \(h^{E}\) denotes the Hermitian form on the field \(E\to M\). Condition iii) follows by noticing that the set of compactly supported smooth sections in \(\Gamma^{\infty}(M_{\lambda},E)\) is dense in \(\mathcal{H}(\lambda)\) and, by Lemma 3.8, every compactly supported section has a smooth compactly supported extension to \(M\). We now compute the curvature of \(\nabla\):
\[\nabla_{X}\nabla_{Y}-\nabla_{Y}\nabla_{X}-\nabla_{[X,Y]} =R^{E}(\check{X},\check{Y})+\frac{1}{2}(\check{X}(\operatorname{div}\check{Y})-\check{Y}(\operatorname{div}\check{X})-\operatorname{div}[\check{X},\check{Y}])\]
\[\qquad+\frac{1}{2}(X(\operatorname{div}Y)-Y(\operatorname{div}X)-\operatorname{div}[X,Y])\,.\]
By part c) of lemma 3.3, it follows that \(R(X,Y)=R^{E}(\check{X},\check{Y})\).

We will show that, if \(\rho\) defines a fiber bundle, then it induces a trivialization of the field of Hilbert spaces \(H\to N\) in a natural way. Let \(K\) be the fiber of \(\rho\). Thus, there is an open covering \(\{U_{i}\}\) of \(N\) and a family of diffeomorphisms \(\Phi_{i}:\rho^{-1}(U_{i})\to U_{i}\times K\) such that \(\rho=\operatorname{proj}_{1}\circ\Phi_{i}\), where \(\operatorname{proj}_{1}\) is the projection onto the first coordinate.
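To illustrate theorem 3.9 in the simplest situation, take the radial example above: \(M=\mathbb{R}^{m}\setminus\{0\}\), \(\rho(x)=\|x\|\) and \(E\) the trivial line bundle with the flat connection. Then \(\operatorname{div}\big{(}\frac{\check{\partial}}{\partial\lambda}\big{)}=\operatorname{div}(x/\|x\|)=(m-1)/\|x\|\) and \(\operatorname{div}(\frac{\partial}{\partial\lambda})=0\), so the connection reads
\[\nabla_{\partial_{\lambda}}(f)=\frac{x}{\|x\|}\cdot\nabla f+\frac{m-1}{2\|x\|}\,f.\]
The zeroth order term is a half-density type correction compensating the growth of the volume of the spheres \(M_{\lambda}\), which is exactly what makes condition ii) in definition 1.3 hold.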
Let \((V_{i},F,T^{E}_{i})_{i\in I}\) be a local trivialization of the Hermitian finite dimensional vector bundle \(E\to M\). Refining the covering of \(N\), if necessary, we can assume that
\[\rho^{-1}(U_{i})=V_{i}.\]
Recall that, for each \(Y\in\operatorname{Vect}(V_{i})\), there is a smooth field of operators \(\tilde{A}^{E}_{i}(Y):C^{\infty}(V_{i},F)\to C^{\infty}(V_{i},F)\) such that
\[T^{E}_{i}\nabla^{E}_{Y}f=Y(T^{E}_{i}f)+\tilde{A}^{E}_{i}(Y)T^{E}_{i}f\,. \tag{11}\]
For \(\lambda\in N\), define the map
\[T_{i}(\lambda):L^{2}(M_{\lambda},E)\to L^{2}(K,F) \tag{12}\]
\[\big{[}T_{i}(\lambda)u\big{]}(k)=T^{E}_{i}(\Phi^{-1}_{i}(\lambda,k))\big{[}\big{(}J^{-1/2}_{\Phi_{i}}\cdot u\big{)}\big{(}\Phi^{-1}_{i}(\lambda,k)\big{)}\big{]}\,,\]
where \(J_{\Phi_{i}}\) denotes the Jacobian of \(\Phi_{i}\). Since each \(T^{E}_{i}(x)\) is unitary, the change of variables formula implies that each \(T_{i}(\lambda)\) is unitary. Let us prove that \(T_{i}(\lambda)\) defines a local trivialization of the smooth field of Hilbert spaces \((H\to N,\Gamma^{\infty}_{c}(M,E),\nabla)\), where \(\nabla\) is defined in theorem 3.9.

**Theorem 3.10**.: _Let \(\rho:M\to N\) be a fiber bundle with fiber \(K\) and let \(\pi:E\to M\) be a finite dimensional Hermitian vector bundle with fiber \(F\). Let \(\{U_{i}\}_{i\in I}\) be a covering of \(N\) and \(\Phi_{i}:\rho^{-1}(U_{i})\to U_{i}\times K\) a local trivialization for \(\rho:M\to N\), such that \(V_{i}=\rho^{-1}(U_{i})\) is an open covering of \(M\) over which we can define a local trivialization \((V_{i},F,T^{E}_{i})_{i\in I}\) for \(\pi:E\to M\). Let \(\tilde{A}^{E}_{i}\) and \(T_{i}(\lambda)\) be defined by equations (11) and (12), respectively. Fix a volume form \(\eta_{0}\) on \(K\). If \(\tilde{A}^{E}_{i}(\check{X})\in C^{\infty}_{b}(\rho^{-1}(C),\mathcal{B}(F))\) for every compact \(C\subset U_{i}\) and every \(X\in\operatorname{Vect}(U_{i})\), then the family \((U_{i},L^{2}(K,F),T_{i})_{i\in I}\) is a smooth local trivialization of the field \(H\to N\) and the operators \(\tilde{A}_{i}\) are given by_
\[(\tilde{A}_{i}(X)f)(\lambda,k)=\tilde{A}^{E}_{i}(\check{X})(\Phi^{-1}_{i}(\lambda,k))f(\lambda,k)\,,\]
_for each \(f\in C^{\infty}(U_{i},L^{2}(K,F))\)._

**Remark 3.11**.: Since \(F\) is finite dimensional, once one fixes a basis, \(\tilde{A}_{i}^{E}(\check{X})\in C_{b}^{\infty}(\rho^{-1}(C),\mathcal{B}(F))\) if and only if the entries of the corresponding matrix belong to \(C_{b}^{\infty}(\rho^{-1}(C))\). In particular, if \(\rho\) is a proper map, then \(\tilde{A}_{i}^{E}(\check{X})\in C_{b}^{\infty}(\rho^{-1}(C),\mathcal{B}(F))\).

Proof.: We will apply corollary 2.8. By definition, \(\tilde{\Gamma}_{i}^{\infty}=T_{i}(\Gamma^{\infty}|_{U_{i}})=C_{c}^{\infty}(U_{i}\times K,F)\), which is a dense smooth subspace of \(C^{\infty}(U_{i},L^{2}(K,F))\). It remains to show that the field of operators \(\tilde{A}_{i}(X)\) and all its derivatives are locally uniformly bounded, for every \(X\in\operatorname{Vect}(U_{i})\). Let us compute \(\tilde{A}_{i}(X)\). Let \(\varphi\in\Gamma^{\infty}|_{U_{i}}=\Gamma_{c}^{\infty}(V_{i},E)\), and let us use the identification \((T_{i}\varphi)(\lambda,k)=[T_{i}(\lambda)\varphi(\lambda)](k)=[T_{i}(\lambda)\varphi|_{M_{\lambda}}](k)\,\).
Using this, one can rewrite \[\big(T_{i}\varphi\big)=\big(T_{i}^{E}[(J_{\Phi_{i}}^{-1/2}\cdot\varphi)]\big)\circ\Phi_{i}^{-1}\,.\] Hence, considering \(X\) as a field on \(U_{i}\times K\) acting trivially on \(K\), we have that \[\big(XT_{i}\varphi\big)(\lambda,k)=\big(\tilde{X}T_{i}^{E}[J_{\Phi_{i}}^{-1/2}\cdot\varphi]\big)(\Phi_{i}^{-1}(\lambda,k))\,.\] Then, the relation (11) implies that \[\big(XT_{i}\varphi\big)(\lambda,k) =\big(\tilde{X}T_{i}^{E}[J_{\Phi_{i}}^{-1/2}\cdot\varphi]\big)(\Phi_{i}^{-1}(\lambda,k))\] \[=\Big(T_{i}^{E}\nabla_{\tilde{X}}^{E}\big[J_{\Phi_{i}}^{-1/2}\cdot\varphi\big]+\tilde{A}_{i}^{E}(\tilde{X})T_{i}^{E}\big[J_{\Phi_{i}}^{-1/2}\cdot\varphi\big]\Big)(\Phi_{i}^{-1}(\lambda,k))\,.\] Using part b) of Lemma 3.3 we have that \[\nabla_{\tilde{X}}^{E}\big[J_{\Phi_{i}}^{-1/2}\cdot\varphi\big] =\tilde{X}(J_{\Phi_{i}}^{-1/2})\varphi+J_{\Phi_{i}}^{-1/2}\nabla_{\tilde{X}}^{E}\varphi\] \[=J_{\Phi_{i}}^{-1/2}\Big(-\frac{1}{2}J_{\Phi_{i}}^{-1}\tilde{X}(J_{\Phi_{i}})\varphi+\nabla_{\tilde{X}}^{E}\varphi\Big)\] \[=J_{\Phi_{i}}^{-1/2}\Big(-\frac{1}{2}(\text{div}(\tilde{X})-\text{div}(X)\circ\rho)\varphi+\nabla_{\tilde{X}}^{E}\varphi\Big)\] \[=J_{\Phi_{i}}^{-1/2}\nabla_{\tilde{X}}\varphi\,.\] Hence, we obtain \[(XT_{i}\varphi)(\lambda,k)=\Big(T_{i}^{E}\big[J_{\Phi_{i}}^{-1/2}\nabla_{\tilde{X}}\varphi\big]+\tilde{A}_{i}^{E}(\tilde{X})T_{i}^{E}[J_{\Phi_{i}}^{-1/2}\varphi]\Big)(\Phi_{i}^{-1}(\lambda,k)).\] Then, \(XT_{i}\varphi=T_{i}\nabla_{\tilde{X}}\varphi+(\tilde{A}_{i}^{E}(\tilde{X})\circ\Phi_{i}^{-1})T_{i}\varphi\). Therefore, we get that \[(\tilde{A}_{i}(X)f)(\lambda,k)=\tilde{A}_{i}^{E}(\tilde{X})(\Phi_{i}^{-1}(\lambda,k))f(\lambda,k)\] for \(f\in C_{c}^{\infty}(U_{i}\times K,F)\,\). If \(g\in C_{c}^{\infty}(K,F)\), the latter identity implies that \[(\tilde{A}_{i}(X)g)(\lambda,k)=\tilde{A}_{i}^{E}(\tilde{X})(\Phi_{i}^{-1}(\lambda,k))g(k)\] Thus, \[[X_{1}\cdots X_{n}(\tilde{A}_{i}(X))g](\lambda,k)=\tilde{X}_{1}\cdots\tilde{X}_{n}[\tilde{A}_{i}^{E}(\tilde{X})](\Phi_{i}^{-1}(\lambda,k))g(k),\] for every \(g\in C_{c}^{\infty}(K,F)\) and \(X_{1},\ldots,X_{n},X\in\text{Vect}\,(U_{i})\). Finally, \[\sup_{\lambda\in C}\|X_{1}\cdots X_{n}[\tilde{A}_{i}(X)](\lambda)g\|\leq\sup_{x\in\rho^{-1}(C)}\Big\{\|\tilde{X}_{1}\cdots\tilde{X}_{n}[\tilde{A}_{i}^{E}(\tilde{X})](x)\|\Big\}\,\|g\|.\] Since each \(\tilde{A}_{i}^{E}(\tilde{X})\in C_{b}^{\infty}(\rho^{-1}(C),\mathcal{B}(F))\), this finishes our proof. There are two interesting particular cases that we would like to consider: the trivial line bundle \(E=M\times\mathbb{C}\) and the tangent bundle \(E=TM\) endowed with the Levi-Civita connection. **Corollary 3.12**.: _Let \(\rho:M\to N\) be a smooth submersion. Consider the field of Hilbert spaces \(H\to N\) with \(\mathcal{H}(\lambda)=L^{2}(M_{\lambda},\mu_{\lambda})\)._ 1. _The map_ \(\nabla_{X}:C_{c}^{\infty}(M)\to C_{c}^{\infty}(M)\) _given by_ \[\nabla_{X}(f)=\tilde{X}(f)+\frac{1}{2}(\operatorname{div}(\tilde{X})-\operatorname{div}(X)\circ\rho)f\] _defines a flat smooth structure on_ \(H\to N\)_._ 2. _Assume that_ \(\rho\) _is a smooth fiber bundle and let_ \((U_{i},K,\Phi_{i})_{i\in I}\) _be a smooth trivialization for_ \(\rho\)_. Fix a volume form on_ \(K\)_.
The family_ \((U_{i},L^{2}(K),T_{i})_{i\in I}\) _defines a smooth local trivialization of_ \(H\to N\)_, where_ \(T_{i}(\lambda):L^{2}(M_{\lambda},\mu_{\lambda})\to L^{2}(K)\) _is given by_ \[T_{i}(\lambda)u(k)=\big(J_{\Phi_{i}}^{-1/2}u\big)(\Phi_{i}^{-1}(\lambda,k))\,.\] Recall that the tangent bundle \(TM\) over a Riemannian manifold \(M\) admits a canonical Hermitian structure. The corresponding connection is called the Levi-Civita connection, which we will denote by \(\nabla^{L}\). By definition, the sections are vector fields over \(M\). In a local trivialization \((V_{i},F)\), \(\nabla^{L}\) is determined by the so-called Christoffel symbols \(\Gamma^{l}_{kj}\), which are defined by the identity \[\nabla^{L}_{\partial_{k}}\partial_{j}=\sum_{l}\Gamma^{l}_{kj}\partial_{l}\,.\] Thus, in our notation, \(A_{i}(\partial_{j})\) in the canonical basis induced by the coordinate system is the matrix with entries \(\Gamma^{l}_{kj}\). Then, we have the following result. **Corollary 3.13**.: _Let \(\rho:M\to N\) be a smooth submersion. Consider the field of Hilbert spaces \(H\to N\) with \(\mathcal{H}(\lambda)=L^{2}(M_{\lambda},TM)\). Let \(\operatorname{Vect}_{c}(M)\) be the space of smooth vector fields with compact support over \(M\) and let \(\nabla^{L}\) denote the Levi-Civita connection._ 1. _The map_ \(\nabla:\operatorname{Vect}(N)\times\operatorname{Vect}_{c}(M)\to\operatorname{Vect}_{c}(M)\) _given by_ \[\nabla_{X}Y=\nabla^{L}_{\tilde{X}}Y+\frac{1}{2}\big(\operatorname{div}(\tilde{X})-\operatorname{div}(X)\circ\rho\big)Y\] _defines a smooth structure with curvature_ \(R(X,Y)=R^{g}(\tilde{X},\tilde{Y})\)_, where_ \(R^{g}\) _is the Riemannian curvature of_ \(M\)_. Moreover, the following formula holds_ \[\nabla_{X}\tilde{Y}-\nabla_{Y}\tilde{X}=[\tilde{X},\tilde{Y}]+\frac{1}{2}\big(\operatorname{div}(\tilde{X})-\operatorname{div}(X)\circ\rho\big)\tilde{Y}-\frac{1}{2}\big(\operatorname{div}(\tilde{Y})-\operatorname{div}(Y)\circ\rho\big)\tilde{X}. \tag{13}\] 2. _Assume that_ \(\rho\) _is a smooth fiber bundle and let_ \((U_{i},K,\Phi_{i})_{i\in I}\) _be a smooth trivialization for_ \(\rho\)_. Fix a volume form on_ \(K\)_. If the Christoffel symbols_ \(\Gamma^{l}_{kj}\) _of the Levi-Civita connection belong to_ \(C_{b}^{\infty}(\rho^{-1}(C))\) _for each compact set_ \(C\subset U_{i}\) _and_ \(i\in I\)_, then the family_ \((U_{i},L^{2}(K,F),T_{i})_{i\in I}\) _defines a smooth local trivialization of_ \(H\to N\)_, where_ \(T_{i}(\lambda):L^{2}(M_{\lambda},\mu_{\lambda})\to L^{2}(K,F)\) _is given by equation (12)._ Proof.: The only claim that does not follow from Theorems 3.9 and 3.10 is equation (13), but this is a direct consequence of the fact that the Levi-Civita connection is torsion-free. ## Appendix A Weak smoothness The following result from [8] is applied many times in Section 2. We decided to include it for a self-contained presentation. We also use this opportunity to fix a typo in the original article. Essentially, this result asserts that weak smoothness implies smoothness in norm. **Lemma A.1**.: _[8, Lemma 5.1.1] Let \(N\) be a finite-dimensional manifold and \(\mathcal{H}\) be a Hilbert space with inner product \(\langle\cdot,\cdot\rangle\). Also, let \(f\in C^{n-1}(N;\mathcal{H})\), \(n\in\mathbb{N}\).
If for every \(X\in\operatorname{Vect}(N)\) there is \(f_{X}\in C^{n-1}(N;\mathcal{H})\) such that_ \[\langle f,\theta\rangle\in C^{n}(N)\quad\text{ and }\quad X\langle f,\theta\rangle=\langle f_{X},\theta\rangle\,,\quad\theta\in\mathcal{H}\] _then \(f\in C^{n}(N;\mathcal{H})\) and \(Xf=f_{X}\)._

## Appendix B Coarea Formula

For the sake of a self-contained presentation of our results, in this appendix we recall the so-called coarea formula. Let \(M\) and \(N\) be oriented Riemannian manifolds of dimension \(m\) and \(k\) respectively, with \(k<m\), and let \(\rho:M\to N\) be a submersion. Also let \(\eta\) and \(\zeta\) be the corresponding Riemannian volume forms and \(J_{\rho}\) be as in Definition 3.1. Then, for any measurable function \(f\) on \(M\), the coarea formula asserts that \[\int_{M}f(x)J_{\rho}(x)\eta(x)=\int_{N}\left(\int_{M_{\lambda}}f(z)\eta_{\lambda}(z)\right)\zeta(\lambda), \tag{14}\] whenever \(f\) is nonnegative or \(fJ_{\rho}\in L^{1}(M)\). Since \(J_{\rho}(x)=[\det(D\rho(x)D\rho(x)^{*})]^{1/2}\), the function \(J_{\rho}\) is well-defined even if \(\rho\) is merely a smooth map. Moreover, the Morse-Sard theorem implies that the coarea formula holds even if \(\rho\) is not a submersion. Notice that, in such a case, the set of regular points forms an open set in \(M\). **Remark B.1**.: For \(M=\mathbb{R}^{m}\), the coarea formula can be found in [4, Theorem 3.2.12]. It is stated there using the \((m-k)\)-Hausdorff measure restricted to \(M_{\lambda}\), but it is well known that this coincides with \(\eta_{\lambda}\) in our case. The result for general Riemannian manifolds follows from the case \(M=\mathbb{R}^{m}\). Recall that we introduced in Definition 3.1 the volume form \(\mu_{\lambda}=J_{\rho}^{-1}\eta_{\lambda}\). Then, the coarea formula becomes the identity \[\int_{M}f(x)\eta(x)=\int_{N}\left(\int_{M_{\lambda}}f(z)\mu_{\lambda}(z)\right)\zeta(\lambda).\] In particular, it follows that the map \(T:L^{2}(M)\to\int_{N}^{\oplus}L^{2}(M_{\lambda},\mu_{\lambda})\zeta(\lambda)\) given by \(Tf(\lambda)=f|_{M_{\lambda}}\) is unitary.
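As a concrete instance of the formula (our example, assuming the conventions above), take \(\rho(x)=|x|\) on \(\mathbb{R}^{m}\setminus\{0\}\) with \(N=(0,\infty)\):

```latex
% Example (ours): rho(x) = |x| on R^m \ {0}. Here D\rho(x) = x^T / |x|, so
% D\rho D\rho^* = 1 and J_\rho \equiv 1; hence \mu_\lambda = \eta_\lambda is
% the surface measure on the sphere M_\lambda = {|z| = \lambda}, and (14)
% reduces to integration over spherical shells:
\[
  \int_{\mathbb{R}^{m}} f(x)\,dx
  \;=\;
  \int_{0}^{\infty}\left(\int_{\{|z|=\lambda\}} f(z)\,\eta_{\lambda}(z)\right)d\lambda .
\]
```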
2301.10518
A Deep Gaussian Process Model for Seismicity Background Rates
The spatio-temporal properties of seismicity give us incisive insight into the stress state evolution and fault structures of the crust. Empirical models based on self-exciting point-processes continue to provide an important tool for analyzing seismicity, given the epistemic uncertainty associated with physical models. In particular, the epidemic-type aftershock sequence (ETAS) model acts as a reference model for studying seismicity catalogs. The traditional ETAS model uses simple parametric definitions for the background rate of triggering-independent seismicity. This reduces the effectiveness of the basic ETAS model in modelling the temporally complex seismicity patterns seen in seismic swarms that are dominated by aseismic tectonic processes such as fluid injection rather than aftershock triggering. In order to robustly capture time-varying seismicity rates, we introduce a deep Gaussian process formulation for the background rate as an extension to ETAS. Gaussian processes (GPs) are a robust non-parametric model for function spaces with covariance structure. By conditioning the lengthscale structure of a GP with another GP, we have a deep-GP: a probabilistic, hierarchical model that automatically tunes its structure to match data constraints. We show how the deep-GP-ETAS model can be efficiently sampled by making use of a Metropolis-within-Gibbs scheme, taking advantage of the branching process formulation of ETAS and a stochastic partial differential equation (SPDE) approximation for Mat\'ern GPs. We illustrate our method using synthetic examples, and show that the deep-GP-ETAS model successfully captures multiscale temporal behavior in the background forcing rate of seismicity. We then apply the results to two real-data catalogues: the Ridgecrest, CA July 5 2019 Mw 7.1 event catalogue and the 2016--2019 Cahuilla, CA earthquake swarm.
Jack B. Muir, Zachary E. Ross
2023-01-25T10:58:52Z
http://arxiv.org/abs/2301.10518v2
# A Deep Gaussian Process Model for Seismicity Background Rates

###### Abstract

The spatio-temporal properties of seismicity give us incisive insight into the stress state evolution and fault structures of the crust. Empirical models based on self-exciting point-processes continue to provide an important tool for analyzing seismicity, given the epistemic uncertainty associated with physical models. In particular, the epidemic-type aftershock sequence (ETAS) model acts as a reference model for studying seismicity catalogs. The traditional ETAS model uses simple parametric definitions for the background rate of triggering-independent seismicity. This reduces the effectiveness of the basic ETAS model in modelling the temporally complex seismicity patterns seen in seismic swarms that are dominated by aseismic tectonic processes such as fluid injection rather than aftershock triggering. In order to robustly capture time-varying seismicity rates, we introduce a deep Gaussian process formulation for the background rate as an extension to ETAS. Gaussian processes (GPs) are a robust non-parametric model for function spaces with covariance structure. By conditioning the lengthscale structure of a GP with another GP, we have a deep-GP: a probabilistic, hierarchical model that automatically tunes its structure to match data constraints. We show how the deep-GP-ETAS model can be efficiently sampled by making use of a Metropolis-within-Gibbs scheme, taking advantage of the branching process formulation of ETAS and a stochastic partial differential equation (SPDE) approximation for Matern GPs. We illustrate our method using synthetic examples, and show that the deep-GP-ETAS model successfully captures multiscale temporal behavior in the background forcing rate of seismicity. We then apply the results to two real-data catalogues: the Ridgecrest, CA July 5 2019 Mw 7.1 event catalogue, showing that deep-GP-ETAS can successfully characterize a classical aftershock sequence; and the 2016-2019 Cahuilla, CA earthquake swarm, which shows two distinct phases of aseismic forcing concordant with a fluid injection driven initial sequence, arrest of the fluid along a physical barrier, and release following the largest Mw 4.4 event of the sequence.

Keywords: Seismicity, Statistical methods, Statistical seismology, Inverse theory

## Introduction

The study of seismicity patterns is a core concern of observational seismology. By documenting seismicity in the form of catalogs, we aim to better understand the mechanics of the crust as it evolves through time. Increasing progress in observational seismological instrumentation and methodology has led to increasingly complete and accurate catalogs, which have revealed fascinating fine-scale structure in seismicity (e.g. Ross et al. (2019, 2020); Shelly et al. (2016); Wei et al. (2011)). However, as a comprehensive physical theory of seismicity remains elusive, we still rely heavily on empirical statistical models to understand catalogs of seismic events. The de facto standard statistical model for the analysis of seismicity at short time-scales is the epidemic-type aftershock sequence (ETAS) model, introduced by Ogata (1988) for temporal catalogs, and extended to spatio-temporal catalogs by Ogata (1998).
The ETAS model explicitly combines a background rate of earthquake activity, independent of past events, with self-excitation of aftershocks following the combined Omori-Utsu (Utsu (1961)) aftershock decay law and an earthquake productivity law developed by Ogata (1998) in reference to the earthquake catalog classification schema of Utsu (1971). The ETAS model allows us to decouple the background forcing of seismicity from self-excitation (i.e. it declusters earthquake catalogs), and allows for statistical forecasting of earthquake rates into the future once calibrated. The spatio-temporal variations in the ETAS parameters have given useful insight into the evolution of seismicity. Whilst classical aftershock sequences are normally well modelled by the self-triggering component of ETAS, swarm-type sequences are insufficiently clustered to conform to ETAS with a stationary forcing rate, and instead reflect aseismic changes in forcing (e.g. Kumazawa & Ogata (2014)). In the zero-clustering limit, the behaviour of swarms may be modelled as an inhomogeneous Poisson process; however, in reality some degree of clustering must be accounted for, which poses a difficult challenge as it is impossible to directly observe both the self-excitation rate and aseismic forcing rate independently. Many methodologies have been proposed to overcome this challenge. For instance, Marsan et al. (2013) proposed deriving a time-dependent aseismic forcing rate by iteratively obtaining the maximum-likelihood estimate (MLE) parameters of an ETAS model with per-earthquake forcing rate, updating the forcing given the MLE parameters, and then temporally smoothing the resulting rate to avoid overfitting -- this method was applied to the analysis of the foreshock sequence of the \(M_{w}\) 9.0 2011 Tohoku earthquake in Marsan & Enescu (2012). Hainzl et al. (2013) then used the method proposed in Marsan et al. (2013) to investigate the impact of transients in the aseismic forcing rate on the prediction of aftershock productivity, an important consideration for accurate short-term forecasting of seismic activity. Kumazawa et al. (2016) investigated the prolific Izu Peninsula swarms, and found a close fit between gradient-penalized piecewise-linear functions and a simple model based on exponential decay of strains recorded at Higashi-Izu which allowed them to infer synchronization between earthquake swarm activity and volumetric strain changes induced by magma intrusions. These successes in analysis motivate us to investigate improvements in methodology that can robustly quantify the uncertainty in estimates of aseismic forcing rates. In general, as we have no knowledge of the functional form of background variability, it is preferable to use a non-parametric formulation for aseismic forcing; however, solving for a non-parametric background rate model has traditionally been difficult and does not permit a Hessian based estimate of uncertainty at the MLE. Recent studies (Molkenthin et al. (2022); Ross (2021)) have advocated for a fully Bayesian approach, in which the background rate and ETAS parameter priors are specified, combined with the ETAS likelihood to produce a posterior probability distribution, and sampled using Markov-Chain Monte Carlo methods (MCMC). While MCMC is in general an expensive technique, it is an unfortunate necessity for the ETAS problem due to the lack of a general closed-form conjugate model that would allow direct draws from the posterior distribution. Molkenthin et al.
(2022), in particular, show how the background rate may be successfully modeled in non-parametric fashion by making use of Gaussian Processes (GP) to specify the background prior and using a branching-structure formulation of the ETAS model to decouple the background events from aftershocks, which are handled by self-excitation. Beyond the confines of the ETAS model, GPs have also proven useful in the spatial analysis of seismic sequences. Bayliss et al. (2020) investigated numerous geophysical covariates for seismicity in a log-Gaussian Cox process model of spatial seismicity (e.g. distance to mapped faults, strain rates, etc.), and used a GP to handle additional unspecified seismic intensity rate variability. Ross et al. (2022) investigated the hyperparameters of fully GP specified log-Gaussian Cox process models for seismic sequences as a quantitative measure of geometrical properties such as the anisotropy of seismicity. GPs are a flexible and robust framework for non-parametric modelling; however, in the context of geophysical inversion problems they suffer from two major deficiencies. The first is that, without approximations, they scale poorly to large problem sizes (e.g. large seismic catalogs), as specifying the covariance structure requires inverting a matrix at \(O(n^{3})\) expense, where \(n\) is the catalog size. The second is that basic GP analysis assumes homogeneity of the underlying statistical distribution of the prior -- this assumption poorly models abrupt spatio-temporal changes in behavior that are found in the real Earth. In this study, our contributions are as follows. We show how the aforementioned issues may be resolved by making use of a deep GP model (DGP) approximated by the solution to a stochastic partial differential equation (SPDE, Lindgren et al. (2011)). We also improve the rate of MCMC convergence by making use of appropriate modern samplers that exploit the structure of the SPDE approximation and the gradient of the combined semiparametric model posterior. Finally, we illustrate the DGP-ETAS model using a combination of synthetic catalogs, the Ridgecrest, CA earthquake sequence, and the Cahuilla, CA earthquake swarm.

## Sampling the ETAS Model

The temporal ETAS model is a self-exciting marked point process defined on the space \(\mathcal{T}\times\mathcal{M}\), where \(\mathcal{T}=(0,T)\) is the time domain of interest and \(\mathcal{M}\) is the mark space of the process, which for ETAS is the event magnitude. Realizations of the process are earthquake catalogs \(\mathcal{C}\) where the \(i^{th}\) event is defined by the tuple \((t_{i},m_{i})\). \(\mathcal{C}\) is endowed with a cutoff magnitude \(m_{0}\), which is the lower limit of observability. The temporal ETAS model may be defined by its conditional intensity function, which gives the instantaneous rate of earthquakes given the background forcing and past events. Assuming that earthquake triggering is time-independent, the conditional intensity is the sum of the background rate and the triggering: \[\lambda(t|\mathcal{C},\mu,\theta)=\mu(t)+\sum_{i\in\mathcal{C}|t_{i}<t}\phi(t-t_{i}|m_{i},\theta). \tag{1}\] The most common parameterization of \(\phi\), and the one we use in this work, is \[\phi(t-t_{i}|m_{i},\theta) =\kappa(m_{i}|\theta)g(t-t_{i}|\theta) \tag{2}\] \[\kappa(m_{i}|\theta) =K\exp(\alpha(m_{i}-m_{0}))\] (3) \[g(t|\theta) =(t+c)^{-p}. \tag{4}\] With this definition, the ETAS model triggering parameters are \(\theta=(K,\alpha,c,p)\).
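To make the notation concrete, here is a minimal sketch of the conditional intensity of Eqs. (1)-(4) for a constant background rate. The function and argument names are ours for illustration; the paper's own implementation is the Julia package AseismicGP.jl referenced at the end.

```python
import numpy as np

def etas_intensity(t, t_events, m_events, mu, K, alpha, c, p, m0=0.0):
    """Conditional intensity lambda(t | C, mu, theta) of Eq. (1), with the
    Utsu productivity kernel of Eq. (3) and Omori-Utsu kernel of Eq. (4)."""
    past = t_events < t                      # only events before time t contribute
    dt = t - t_events[past]
    kappa = K * np.exp(alpha * (m_events[past] - m0))
    g = (dt + c) ** (-p)
    return mu + np.sum(kappa * g)

# Two past events at t = 1.0 and 2.5 days, with magnitudes 4.0 and 3.2:
lam = etas_intensity(3.0, np.array([1.0, 2.5]), np.array([4.0, 3.2]),
                     mu=0.2, K=0.2, alpha=1.25, c=0.05, p=1.2)
```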
The magnitude term \(\kappa\) is proportional to the Utsu aftershock-productivity law, while the temporal term \(g\) is proportional to the modified Omori-Utsu aftershock decay law. The log-likelihood of observing catalog \(\mathcal{C}\) is given by \[\log p(\mathcal{C}|\mu,\theta)=\sum_{i\in\mathcal{C}}\log(\lambda(t_{i}|\mathcal{C},\mu,\theta))-\int_{0}^{T}\lambda(t|\mathcal{C},\mu,\theta). \tag{5}\] Given that the conditional intensity itself contains a sum over the catalog, the log-likelihood of the standard ETAS model has a double loop, and the effort required to compute it scales with the number of events squared. When considering heterogeneous background rates, the problem is even worse, as the background parameters influence the ETAS likelihood, dramatically increasing the complexity and dimensionality of the likelihood surface. Following Ross (2021), a useful reformulation that improves computational tractability is to make use of the splitting property of Poisson processes (Veen and Schoenberg, 2008). This formulation of ETAS introduces the auxiliary branching variables \(\{B_{i}\}_{i=1}^{N}\), where \(B_{i}=j\) tells us that the \(i^{th}\) event in \(\mathcal{C}\) had "parent" \(j\) and \(B_{i}=0\) tells us that the \(i^{th}\) earthquake was a "background" event. This factorization splits the estimation of the background rate and the ETAS parameters into two sub-problems, where the background catalog is described by an inhomogeneous Poisson process and the aftershocks are handled by ETAS without background forcing -- this greatly simplifies inference of all parameters, reduces the expense of computing the ETAS likelihood and serendipitously declusters the catalog as a byproduct of sampling. We introduce sub-catalogs \(\mathcal{C}_{j}=\{i|B_{i}=j\}\) so that \(|\mathcal{C}_{j}|\) is the number of earthquakes triggered by event \(j\) (or the background for \(\mathcal{C}_{0}\)). The conditional intensity for the \(i^{th}\) event then becomes \[\lambda(t_{i}|\mathcal{C},\mu,\theta,B_{i})=\begin{cases}\mu(t_{i})&B_{i}=0\\ \phi(t_{i}-t_{B_{i}}|m_{B_{i}},\theta)&1\leq B_{i}<i,\end{cases} \tag{6}\] and the log-likelihood function becomes \[\log p(\mathcal{C}|\mu,\theta,B) =\sum_{i\in\mathcal{C}_{0}}\log(\mu(t_{i}))-\int_{0}^{T}\mu(t)dt+\sum_{j\in\mathcal{C}}\left[|\mathcal{C}_{j}|\log\kappa(m_{j}|K,\alpha)-\kappa(m_{j}|K,\alpha)G(T-t_{j}|c,p)+\sum_{i\in\mathcal{C}_{j}}\log g(t_{i}-t_{j}|c,p)\right], \tag{7}\] where \[G(T-t_{j}|c,p)=\int_{0}^{T-t_{j}}g(t|c,p)dt. \tag{8}\] Note that the double sum in this form does not suffer quadratic scaling, because each earthquake has a unique parent and so is only ever visited once in the interior sum. Marginalizing over the branching variables gives us the probability distribution of \(\mu\) and \(\theta\) conditioned on all possible combinations of parent/child earthquake pairs (i.e. with the likelihood defined by Equation 5). The conditional probability for the branching variables is analytic, and is given by \[p(B_{i}=j|\mu,\theta)=\begin{cases}\frac{\mu(t_{i})}{Z_{i}}&j=0\\ \frac{\kappa(m_{j}|K,\alpha)g(t_{i}-t_{j}|c,p)}{Z_{i}}&1\leq j<i,\end{cases} \tag{9}\] where the normalization factor is \[Z_{i}=\mu(t_{i})+\sum_{j<i}\kappa(m_{j}|K,\alpha)g(t_{i}-t_{j}|c,p). \tag{10}\]

## Deep Gaussian Processes

With the general form of the latent branching-variable ETAS model specified, it comes time to specify the model for the time-variable background rate \(\mu(t)\).
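Before turning to that background-rate model, note that the branching conditional of Eqs. (9)-(10) translates directly into an exact Gibbs update. A minimal sketch (our names), which also makes the declustering byproduct explicit:

```python
import numpy as np

def sample_branching(t_events, m_events, mu_of_t, K, alpha, c, p, m0=0.0, rng=None):
    """Exact draw of the branching variables from Eqs. (9)-(10).
    B[i] = 0 marks event i as a background event; B[i] = k >= 1 marks event
    k - 1 (0-based) as its parent. The first event is necessarily background."""
    rng = rng or np.random.default_rng()
    n = len(t_events)
    B = np.zeros(n, dtype=int)
    for i in range(1, n):
        dt = t_events[i] - t_events[:i]
        trig = K * np.exp(alpha * (m_events[:i] - m0)) * (dt + c) ** (-p)
        probs = np.concatenate(([mu_of_t(t_events[i])], trig))
        B[i] = rng.choice(i + 1, p=probs / probs.sum())   # normalization Z_i, Eq. (10)
    return B
```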
Given that the ETAS model is a general phenomenological description of seismicity, it is appropriate to model the background rate with a general function. We use the framework of Gaussian Processes (GPs) as a robust non-parametric description of the background seismicity rate (Molkenthin et al., 2022; Rasmussen & Williams, 2006). GPs act as priors over function space (in our case the space of continuous functions on the interval \([0,T]\)), and are defined by the property that the distribution \(\{f(x_{i})\}\) of a GP \(f\) evaluated at any finite collection of points \(\{x_{i}\}\) is given by a multivariate normal with mean function \(m(x)\) and covariance function \(C(x,x^{\prime})\), i.e. \[\begin{bmatrix}f(x_{1})\\ f(x_{2})\\ \vdots\\ f(x_{n})\end{bmatrix}\sim N\left(\begin{bmatrix}m(x_{1})\\ m(x_{2})\\ \vdots\\ m(x_{n})\end{bmatrix},\begin{bmatrix}C(x_{1},x_{1})&C(x_{1},x_{2})&\ldots&C(x_{1},x_{n})\\ C(x_{2},x_{1})&C(x_{2},x_{2})&\ldots&C(x_{2},x_{n})\\ \vdots&\vdots&\ddots&\vdots\\ C(x_{n},x_{1})&C(x_{n},x_{2})&\ldots&C(x_{n},x_{n})\end{bmatrix}\right). \tag{11}\] The mean and covariance functions completely specify the GP. The covariance function, which specifies the regularity and characteristic length-scales of the GP, is the central object in GP modelling. A prototypical example would be the squared exponential covariance \[C(x,x^{\prime})=\sigma\exp\left[-\frac{||x-x^{\prime}||^{2}}{2l^{2}}\right], \tag{12}\] which generates functions \(f\) that are infinitely differentiable and have characteristic length-scale \(l\) and amplitude \(\sigma\). Another commonly used function is the Matern covariance, which is ubiquitous in geospatial statistics. The Matern has the form \[C(x,x^{\prime})=\sigma^{2}\frac{2^{1-\nu}}{\Gamma(\nu)}\left(\sqrt{2\nu}\frac{||x-x^{\prime}||}{l}\right)^{\nu}K_{\nu}\left(\sqrt{2\nu}\frac{||x-x^{\prime}||}{l}\right), \tag{13}\] where \(\Gamma\) is the Gamma function (the generalized factorial), \(K_{\nu}\) is the modified Bessel function of the second kind and \(\nu\) is the regularity parameter. Functions that have Matern covariance are differentiable \([\nu]-1/2\) times. In particular, the squared exponential covariance is the limiting case of the Matern covariance as \(\nu\) goes to infinity. The added roughness of lower \(\nu\) Matern covariance typically better represents realistic phenomena. For data with Gaussian noise, GP regression models have a closed form solution once the mean and covariance are fully specified, and careful selection of the form of the covariance matrix can lead to rich and fully interpretable models with a high degree of predictive power (Rasmussen & Williams, 2006). However, while GP regression models have analytic solutions, inverse problems do not in general, as is the case with the estimation of the aseismic forcing rate \(\mu(t)\) considered here. Furthermore, the construction of suitable covariance kernels in a classical-GP framework requires a substantial degree of domain expertise, and consequently a large fraction of the GP modelling literature assumes covariance functions that are homogeneous and isotropic for simplicity (see e.g. Gelfand & Schliep (2016) for a brief review). The loss of model flexibility inherent in this assumption can result in substantial misfit and prediction error, either due to underfitting or overfitting.
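As a hedged illustration of the direct approach (our code, using Eq. 13 with scipy's modified Bessel function), one GP draw via dense Cholesky factorization looks as follows; the \(O(n^{3})\) cost of this factorization is precisely what motivates the SPDE approximation introduced below:

```python
import numpy as np
from scipy.special import gamma, kv

def matern_cov(x, l=1.0, sigma=1.0, nu=1.5):
    """Dense Matern covariance matrix (Eq. 13) for 1-D inputs x."""
    r = np.abs(x[:, None] - x[None, :])
    s = np.sqrt(2.0 * nu) * r / l
    with np.errstate(invalid="ignore"):               # kv is singular at s = 0
        C = sigma**2 * 2.0 ** (1.0 - nu) / gamma(nu) * s**nu * kv(nu, s)
    C[r == 0.0] = sigma**2                            # limiting value on the diagonal
    return C

x = np.linspace(0.0, 1000.0, 500)
C = matern_cov(x, l=100.0, sigma=1.0, nu=0.5)         # Matern-1/2, as in the SPDE below
L = np.linalg.cholesky(C + 1e-8 * np.eye(len(x)))     # O(n^3): the cost to beat
sample = L @ np.random.default_rng(0).standard_normal(len(x))
```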
Because of these limitations, learning methods for automatic discovery of appropriate covariance functions have become a major focus of GP research in order to improve computational efficiency and reduce domain expertise requirements, whilst maintaining the attractive features of GP robustness and interpretability. In this work, we investigate deep Gaussian Processes (DGPs) as a means of learning appropriate covariance functions. In the DGP framework, multiple GPs are chained together, either feeding the outputs of one GP to the inputs of another (Damianou & Lawrence, 2013), or alternatively by letting the outputs of one GP change the covariance function of the next (Roininen et al. (2016), see Fig. 1). The latter approach is particularly appealing, because the covariance function can be easily evaluated for each layer of the DGP, resulting in a fully interpretable model. Additionally, by making use of a stochastic partial differential equation approximation to GPs, DGP models can be very efficiently simulated, as we will describe below.

### Approximating DGPs with Stochastic Partial Differential Equations

GP models are expressive, robust, easily interpretable and easily implementable -- however, the use of pure GP models in large-scale inverse problem settings has been limited. This is due to their inherently poor scaling. The evaluation of a GP model requires inversion of an \(n\times n\) covariance matrix for a GP conditioned on \(n\) points. The construction of this matrix scales in memory like \(O(n^{2})\) and the computational complexity of inversion scales like \(O(n^{3})\). As a result, GP approximation methods are an active area of research. The key work of Lindgren et al. (2011) established a link between GP models using the Matern class of covariance functions and the solution to a stochastic partial differential equation (SPDE). The advantage of this approach is that the linear system required to solve the SPDE is extremely sparse, becoming tridiagonal in the case of an approximate 1D Matern. As a result, sampling from the approximate GP defined by the SPDE solution is vastly cheaper than the full GP. Both large-scale problems and nonlinear inverse problems (requiring many GP evaluations) become feasible when the full GP solution would not be. To draw samples for the 1D inhomogeneous Matern-1/2 process using the SPDE approximation, we solve the following for \(u\): \[(I-l^{2}\Delta)u=\sqrt{l\sigma^{2}}W, \tag{14}\] where \(W\) is a white noise process and \(\Delta\) is the Laplacian operator. The SPDE is solved by approximating it by the finite-difference method, which gives rise to a tridiagonal Laplacian. The boundary conditions (usually Neumann) bias the solution to the SPDE near the edges, so typically the domain is extended so that the boundaries are not near the time interval of interest.
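A minimal sketch of one layer's solve, \(u=S[w,l,\sigma]\) in the notation introduced just below, together with the two-layer chaining described in the next paragraphs (our names; we write the \(l_{i}^{2}\) off-diagonal factors of the stencil explicitly, and follow the text's \(W=w/h\) scaling):

```python
import numpy as np
from scipy.linalg import solve_banded

def spde_sample(w, l, sigma, h):
    """One layer u = S[w, l, sigma]: solve (I - l^2 Delta) u = sqrt(l sigma^2) W
    (Eq. 14) on a uniform grid with spacing h, Neumann boundaries, W = w / h."""
    n = len(w)
    l2 = np.broadcast_to(np.asarray(l, dtype=float) ** 2, (n,)).copy()
    ab = np.zeros((3, n))                      # banded storage: upper, diag, lower
    ab[1] = 1.0 + 2.0 * l2 / h**2
    ab[0, 1:] = -l2[:-1] / h**2                # superdiagonal entries A[i, i+1]
    ab[2, :-1] = -l2[1:] / h**2                # subdiagonal entries  A[i+1, i]
    ab[0, 1] *= 2.0                            # Neumann boundary rows pick up
    ab[2, -2] *= 2.0                           # a factor of 2
    rhs = np.sqrt(np.sqrt(l2) * sigma**2) * (w / h)
    return solve_banded((1, 1), ab, rhs)

# Two-layer chain: the first layer sets the second layer's lengthscale field,
# as described below (l_2 = mu_1 exp(u_1); background rate mu(t) = mu_2 exp(u_2)).
rng = np.random.default_rng(0)
h, n = 1.0, 1200
u1 = spde_sample(rng.standard_normal(n), l=150.0, sigma=1.0, h=h)
u2 = spde_sample(rng.standard_normal(n), l=50.0 * np.exp(u1), sigma=1.0, h=h)
mu_t = 0.2 * np.exp(u2)
```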
Explicitly, the system that is solved for \(u\) at each layer of the DGP is \[\begin{bmatrix}1+2\frac{l_{1}^{2}}{h^{2}}&-\frac{2l_{1}^{2}}{h^{2}}&\ldots&0&0\\ -\frac{l_{2}^{2}}{h^{2}}&1+2\frac{l_{2}^{2}}{h^{2}}&\ldots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\ldots&1+2\frac{l_{n-1}^{2}}{h^{2}}&-\frac{l_{n-1}^{2}}{h^{2}}\\ 0&0&\ldots&-\frac{2l_{n}^{2}}{h^{2}}&1+2\frac{l_{n}^{2}}{h^{2}}\end{bmatrix}\begin{bmatrix}u_{1}\\ u_{2}\\ \vdots\\ u_{n-1}\\ u_{n}\end{bmatrix}=\begin{bmatrix}\sqrt{l_{1}\sigma^{2}}&0&\ldots&0&0\\ 0&\sqrt{l_{2}\sigma^{2}}&\ldots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\ldots&\sqrt{l_{n-1}\sigma^{2}}&0\\ 0&0&\ldots&0&\sqrt{l_{n}\sigma^{2}}\end{bmatrix}\begin{bmatrix}W_{1}\\ W_{2}\\ \vdots\\ W_{n-1}\\ W_{n}\end{bmatrix} \tag{15}\] For a given fixed-length discretization with grid spacing \(h\), the white noise process \(W\) on the grid can be generated by sampling a vector of unit normals \(w\) and scaling it by \(W=w/h\), where the \(1/h\) factor ensures that the average integrated power of the process is 1 per unit length, which in turn means the characteristic amplitude of the SPDE solution is \(\sigma\). We designate the solution, given \(l\), \(\sigma\), and a particular realization of white noise forcing \(w\), as \(u=S[w,l,\sigma]\). For DGP models, the lengthscale term \(l\) is itself a function of time after the first layer: \(l=l(t)\). To approximate the multi-layer DGP model using an SPDE approach, we set the lengthscale for the first layer \(l_{1}\) to be uniform (where the index now refers to the DGP layer being considered), and then generate \(u_{1}=S[w_{1},l_{1},\sigma_{1}]\) by solving the SPDE with a sample of white noise forcing \(w_{1}\). We then set the lengthscale for the second layer \(l_{2}=\mu_{1}\exp(u_{1})\) for a scale constant \(\mu_{1}\), making \(l_{2}\) strictly positive, and obtain \(u_{2}=S[w_{2},l_{2},\sigma_{2}]\) using white noise \(w_{2}\), etc. The background rate of seismicity is also strictly positive. For the DGP-ETAS model, the background rate \(\mu(t)\) is therefore given by \(\mu(t)=\mu_{1}\exp(u_{1})\) for a one layer model, \(\mu(t)=\mu_{2}\exp(u_{2})\) for a two-layer model, and so on. For an \(n\) layer model, the scale constants \(\mu_{i}\) have units of length for \(i<n\), as they correspond to an average lengthscale of layer \(i+1\), and units of inverse time for \(i=n\), as they correspond to the average background rate. This iteration can be carried out to arbitrarily high order; however, as is carefully investigated in Dunlop et al. (2018), the additional expressivity of non-uniform lengthscales saturates for more than around 3 layers. For inverse problems where the posterior uncertainty is likely to be high, such as the DGP-ETAS model investigated here, a 2-layer DGP provides the best tradeoff between computational complexity and expressivity. Figure 1 shows the flow of information in an example 2-layer DGP.

## Model Priors

Parameter prior specification is a crucial part of Bayesian modelling. Given data \(y\) and parameters \(\theta\), Bayes' rule tells us that the posterior is proportional to the likelihood of observing the data given the parameters, multiplied by the prior distribution of those parameters: \(p(\theta|y)\propto p(y|\theta)p(\theta)\). When the data are very informative (i.e. \(p(y|\theta)\) is sharply peaked), then the prior typically has little impact on our final interpretation; however, if the data are uninformative, the prior can substantially affect the results.
It is therefore important that the prior is specified in a way that ensures the posterior is well behaved. For the DGP-ETAS model, we consider all of the model parameters to be uncorrelated, _a priori_, and discuss our choice of priors below.

### ETAS parameters

The four ETAS parameters, \(K\), \(\alpha\), \(c\), \(p\) are all strictly positive (in particular, \(p\geq 1\)). \(\alpha\), \(c\), \(p\) generally have "typical" ranges for a particular setting, whereas \(K\) instead depends on the lower magnitude cutoff of the catalog. We use Inverse-Gamma priors for \(\alpha\), \(c\) and the auxiliary variable \(\tilde{p}=p-1\), which we tune to place 98% of the probability mass within a designated typical range for the relevant problem. For \(K\), we use a normal distribution truncated between 0 and an upper bound \(K_{upper}\) in order to suppress an effect in the MCMC sampling that traps chains in an infeasible part of model space -- this occurs when \(K\) is large and all earthquakes after the first in the catalogue are ascribed to triggering rather than background effects. For the constant background rate reference ETAS model, we use a Gamma distribution prior for \(\mu\), as the conditional posterior for \(\mu\) is then given in closed form by \(p(\mu|K,\alpha,c,p,B,\mathcal{C})=Gamma(a+|\mathcal{C}_{0}|,1/(b+T))\) where \(a\) and \(b\) are the parameters of the Gamma prior; we always estimate \(a=|\mathcal{C}|/T\) and \(b=|\mathcal{C}|/2T\).

### DGP parameters

The white noise parameters \(w_{i}\) are given a standard multivariate Normal prior with identity covariance \(w_{i}\sim N(0,1)\). Lengthscale parameters (e.g. \(l_{1}\), and \(\mu_{1}\) for the two-layer model) have Uniform priors; for multilayer models we have found it is normally useful to have the ranges of the lengthscales be disjoint so that the different GP layers capture variability at different scales -- for instance \(l_{1}\sim U(100,300)\) and \(\mu_{1}\sim U(10,200)\) for the DGP-ETAS model in Figure 2. The rate parameter (\(\mu_{1}\) for the one layer model, \(\mu_{2}\) for the two layer model) is given a Gamma prior for consistency with the basic ETAS model. The GP variance parameters \(\sigma_{i}\) are given standard half-Normal priors \(N_{+}(0,1)\) (i.e. a zero-mean Gaussian with standard deviation 1, truncated to be strictly positive).

## Sampling Scheme

While the full DGP-ETAS model is quite complicated, the parameters of the model naturally partition into blocks that are conditionally simple to sample from, if the other blocks are held fixed. As such, we use a Metropolis-within-Gibbs scheme to draw samples from the posterior distribution for the DGP-ETAS model. In Metropolis-within-Gibbs, each block of parameters is updated sequentially (Gibbs sampling), with the update conditional on the other blocks. For each block update, a new set of parameter values is proposed using an appropriate sampling method for that block, and then the proposal is accepted or rejected according to a rule that ensures the resulting draws sample the posterior (a Metropolis update). The specific form of the rule depends on the way the samples are proposed. To sample both the ETAS scalar parameters \(K,\alpha,c,p\) and the GP hyperparameters \(l_{1},\mu_{i},\sigma_{i}\), we use Hamiltonian Monte Carlo (HMC, Betancourt (2017); Neal (2011)). HMC utilizes gradient information to accelerate the traversal of the posterior, resulting in samples with low autocorrelation, and is a good general-purpose choice for MCMC sampling non-linear problems.
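The white-noise blocks in the scheme below are updated with elliptical slice sampling; since that update is compact, here is a generic sketch (ours), assuming a standard normal prior on the block and writing `log_lik` for its conditional log-likelihood:

```python
import numpy as np

def elliptical_slice(w, log_lik, rng):
    """One elliptical slice sampling update (Murray et al., 2010) for a block w
    with a standard normal prior -- the situation of the white-noise vectors."""
    nu = rng.standard_normal(w.shape)            # auxiliary draw from the prior
    log_y = log_lik(w) + np.log(rng.uniform())   # log slice height
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        w_new = w * np.cos(theta) + nu * np.sin(theta)
        if log_lik(w_new) > log_y:
            return w_new                         # accepted point on the ellipse
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)              # shrink the bracket toward 0
```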
Figure 1: Schematic of a 2 layer DGP based on the covariance operator definition. Further layers may be similarly concatenated to produce deeper networks; however, the greater expressive capacity is unlikely to be required in an inverse problem context. Parameters without a representative trace are scalars; those with traces are functions defined on the interval of interest. The parameter values are those used for the synthetic square catalog described in the examples section. The schematic is arranged sequentially downward, so that each line feeds into the one below.

We use forward-mode automatic differentiation to obtain the required gradients with respect to the scalar parameters. For the white-noise vectors \(w_{i}\), we use Elliptical Slice Sampling (ESS; Murray et al., 2010), which is a particularly efficient way to sample the conditional posterior when the prior distribution is a multivariate normal, which is always the case for \(w_{i}\). The branching variables \(B\) can be exactly sampled given the conditional distribution defined in Equation 9. For the 2-layer DGP-ETAS model, we then have the following algorithm for drawing posterior samples, where for notational convenience we write \(\Theta_{\xi}\) as the collection of all variables not including \(\xi\):

```
Initialize all variables from priors
for \(i=1:N\) do
    \(w_{1}\leftarrow\) ESS(\(p(w_{1}|\mathcal{C},\Theta_{w_{1}})\))
    \((\mu_{1},l_{1},\sigma_{1})\leftarrow\) HMC(\(p(\mu_{1},l_{1},\sigma_{1}|\mathcal{C},\Theta_{\mu_{1},l_{1},\sigma_{1}}),l_{s},n_{s}\))
    \(w_{2}\leftarrow\) ESS(\(p(w_{2}|\mathcal{C},\Theta_{w_{2}})\))
    \((\mu_{2},\sigma_{2})\leftarrow\) HMC(\(p(\mu_{2},\sigma_{2}|\mathcal{C},\Theta_{\mu_{2},\sigma_{2}}),l_{s},n_{s}\))
    \(B\leftarrow p(B|\mathcal{C},\Theta_{B})\)
    \((K,\alpha,c,\tilde{p})\leftarrow\) HMC(\(p(K,\alpha,c,\tilde{p}|\mathcal{C},\Theta_{K,\alpha,c,\tilde{p}}),l_{s},n_{s}\))
end for
```
**Algorithm 1** DGP-ETAS model for two GP layers

\(l_{s}\) and \(n_{s}\) are the hyperparameters that control the HMC integrator step length and number of integrator steps per sample, which we set to 0.01 and 10 respectively -- empirically, these values result in good MCMC sampling efficiency for all examples.

## Examples

As a first illustrative example, we investigate the inversion of synthetic earthquake catalogs, generated using the ETAS model simulated with Ogata's thinning algorithm (Ogata (1981)). Three catalogs were generated, on a target interval of length \(T=1000\) days: a catalog with constant background rate \[\mu_{const}(t)=0.2\ \text{day}^{-1}; \tag{16}\] a catalog with a Gaussian background rate \[\mu_{gauss}(t)=\left(4.8\cdot\exp\left(-\frac{(t-T/2)^{2}}{2\left(\frac{T}{8}\right)^{2}}\right)+0.2\right)/5\ \text{day}^{-1}; \tag{17}\] and a catalog with a square wave background rate \[\mu_{square}(t)=\begin{cases}5\ \text{day}^{-1}&\frac{T}{4}<t<\frac{3T}{4}\\ 0.04\ \text{day}^{-1}&\text{otherwise}\end{cases}. \tag{18}\] All catalogs were generated with an implicit cutoff magnitude \(M_{0}=0\), \(K_{true}=0.2\), \(\alpha_{true}=1.25\), \(p_{true}=1.2\), \(c_{true}=0.05\). The catalog magnitudes follow the Gutenberg-Richter law with \(b=1.0\). In these synthetic experiments, as well as the real data experiments that follow, we generate \(N=100000\) samples for 6 independent MCMC chains, which we compare to ensure that the chains are individually well mixed. The time taken for the MCMC sampling to complete is detailed in Table 1.
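For concreteness, a minimal sketch of a temporal ETAS thinning simulator in the spirit of Ogata (1981) (our names and structure; the paper's generator is part of AseismicGP.jl):

```python
import numpy as np

def simulate_etas(mu_of_t, mu_max, K, alpha, c, p, T, m0=0.0, b=1.0, seed=0):
    """Simulate a temporal ETAS catalog on (0, T) by thinning. mu_max must
    bound mu_of_t from above; magnitudes follow a Gutenberg-Richter law."""
    rng = np.random.default_rng(seed)
    ts, ms = [], []

    def trig(s):
        # Triggering part of the conditional intensity (Eqs. 2-4).
        return sum(K * np.exp(alpha * (m - m0)) * (s - ti + c) ** (-p)
                   for ti, m in zip(ts, ms))

    t = 0.0
    while True:
        # Between events the triggering term decays, so mu_max + trig(t)
        # bounds the intensity until the next accepted event.
        lam_bar = mu_max + trig(t)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(ts), np.array(ms)
        if rng.uniform() * lam_bar <= mu_of_t(t) + trig(t):  # thinning accept
            ts.append(t)
            ms.append(m0 + rng.exponential(1.0 / (b * np.log(10.0))))
```

With \(\mu(t)\equiv 0.2\), \(K=0.2\), \(\alpha=1.25\), \(c=0.05\), \(p=1.2\) and \(T=1000\), this reproduces the setup of the constant-rate synthetic catalog above.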
Figure 2 shows the output of the synthetic inversions for these three test catalogs, using the basic ETAS model, the single layer GP model, and the two layer DGP model. In this test, we are primarily interested in showing that the GP and DGP models do not over-fit the constant model, and to observe their behavior for variable background rate models. In the case of the constant model, we can see that the GP and DGP models do not seem to overfit compared to the ground truth background model. For the Gaussian background model, both the GP and DGP capture well the true model, as is expected given that the Gaussian perturbation can be well described with a single characteristic timescale for the GP. For the square perturbation, the DGP captures the rapid onset of change in background behavior at the edges of the square perturbation, indicating that it is learning the heterogeneity in the timescale of variations. Compared to the single-layer GP model, the edge is significantly sharper, indicating the improved performance of the two layer DGP model in this case. Figure 3 zooms into the case of the square wave perturbation with the DGP model, and shows the output posterior timescale \(l_{2}\); we see that the characteristic timescale is shortened at the edges of the perturbation. The square example also best highlights the impact of model specification on the time taken for MCMC sampling to complete. If the model is mis-specified, then the posterior is typically difficult to traverse, leading to poor sampling performance; thus, the constant-rate model takes a similar time to sample as the one-layer model, despite the one-layer model having many more parameters, as it cannot adequately capture the true behavior of the catalog. The two-layer DGP model actually takes less time to compute than the single-layer GP model for this case, as it has an easier time capturing the rapidly changing behavior at the edges of the square perturbation. The evolution of the MCMC parameters with iteration number for all test cases (the MCMC chain plots) can be found in the Supplement. Having illustrated the properties of the DGP-ETAS model on synthetic data, we now turn to two examples for real data. First, we estimate a DGP-ETAS model for the background rate evolution of the Ridgecrest, CA Mw 7.0 July 5, 2019 foreshock / aftershock sequence. High resolution seismic catalogs show that this sequence qualitatively obeys a classical aftershock model (Shelly (2020)), with rates of seismicity by eye appearing to follow two superimposed Omori-Utsu decay curves corresponding to triggering by the largest (\(M_{w}\) 6.4) foreshock and \(M_{w}\) 7.1 mainshock. We therefore expect that the background rate estimate should be roughly constant, and that there be no external forcing to the Ridgecrest sequence. The catalog of 920 events consists of Southern California Seismic Network (SCSN) located events with \(M_{L}\geq 3\) with the time range of UTC 2019-07-01T00:00:00 to 2019-07-15T00:00:00 and between (35.4699\({}^{\circ}\), 36.0361\({}^{\circ}\)) latitude and (-117.9435\({}^{\circ}\), -117.2132\({}^{\circ}\)) longitude. To obtain an estimate for reference ETAS parameters, we used events in the same location between UTC 2000-01-01T00:00:00 and 2019-07-01T00:00:00 to run the classical MLE ETAS routine in SAPP (Statistical Analysis of Point Processes with R, Ogata (2006)).
Figure 2: Results of inversions for the background rate of synthetic catalogs; the output matrix contains the combinations for zero, one and two layer DGP models, for constant, Gaussian and square perturbations of the background rate. Note that the zero layer DGP corresponds to a normal ETAS model. Catalog events are shown as gray circles, the ground truth background rate as the red line, and the 90% credible interval, 50% credible interval and median of the posterior are shown in increasingly dark shades of blue.

Figure 3: Results of the inversion for background rate of a synthetic catalog using a square wave perturbation as the input. Catalog events are shown as gray circles, the ground truth background rate as the red line, and the 90% credible interval, 50% credible interval and median of the posterior are shown in increasingly dark shades of blue.

Figure 4: Results of the inversion for background rate of the Ridgecrest, CA Mw 7.0 July 5, 2019 earthquake catalog with a lower magnitude cutoff of 3.0. The results are shown for UTC 2019-07-01T00:00:00 to 2019-07-15T00:00:00. Catalog events are shown as gray circles, and the 90% credible interval, 50% credible interval and median of the posterior are shown in increasingly dark shades of blue. The total earthquake rate, smoothed over a 41 event window, is shown in black. Due to the large difference between the total and background rates, the rates are plotted in log-space. As expected for an aftershock-driven event sequence, there is no large change in the expectation values of the background rate coincident with the two large events. The small increase observed may account for the triggering effect of earthquakes below the magnitude cutoff.

Figure 4 shows the catalog and estimated background rate. We also show the smoothed total rate of earthquakes, defined similarly to Marsan et al. (2013) by looking at each earthquake, and finding the time elapsed for a window of 20 events either side of the central earthquake (i.e. 41 events total). As was hypothesized, we see no significant evidence of substantial background rate changes during this portion of the sequence, noting that the estimated daily background rate remains far below the actual rate of earthquake production even to the 90% credible interval. The smooth increase after the two largest events may be attributed to the triggering effect of events that fall below the cutoff magnitude of the catalog (and hence, implicitly, the minimum magnitude produced by the formulation of ETAS used in this study), but which are still large enough to trigger future earthquakes (Sornette and Werner, 2005a,b). We then investigate a sequence with a totally different character -- the Cahuilla, CA 2016-2019 earthquake swarm. This swarm qualitatively appears relatively unclustered prior to a \(M_{w}4.4\) earthquake late in the sequence, and instead has been hypothesized to be driven by the evolution of pore fluid pressure along the fault due to fluid injection (Ross et al. (2020)). The catalog of 461 events consists of SCSN events with \(M_{L}\geq 1.71\) with time range UTC 2016-01-01T00:00:00 to 2019-12-31T00:00:00 and between (33.42043\({}^{\circ}\), 33.58032\({}^{\circ}\)) latitude and (-116.84723\({}^{\circ}\), -116.68990\({}^{\circ}\)) longitude.
The reference background rate estimate was obtained in this case by assuming Poissonian earthquake statistics for the preceding time range of UTC 2000-01-01T00:00:00 to 2016-01-01T00:00:00. Figure 5 shows the output.

Figure 5: Results of the inversion for the background rate of the Cahuilla, CA earthquake swarm with a lower magnitude cutoff of 1.71. The results are shown for UTC 2016-01-01T00:00:00 to 2019-12-31T00:00:00. Catalog events are shown as gray circles, and the 90% credible interval, 50% credible interval and median of the posterior are shown in increasingly dark shades of blue. The total earthquake rate, smoothed over a 41 event window, is shown in black. There is a significant change in the background rate associated with this swarm, including two peaks of activity, one of which appears to be associated with a sub-cluster of activity just before sequence day 1000.

We note that the background rate estimate accelerates sharply around sequence day 500, and then slowly decays, before abruptly accelerating again after the largest \(M_{w}4.4\) earthquake and then dying quickly away. The secondary peak is significant, and indicates that the increased seismicity after the \(M_{w}4.4\) event is not purely aftershock driven, although we note that the impact of aftershocks on the total rate is significant, as indicated by the discrepancy between the smoothed total rate of earthquakes and the estimated posterior of \(\mu\) for this second peak. Note that the smoothed total rate and the posterior \(\mu\) are very similar for the first peak, indicating that the sequence is primarily driven aseismically prior to the \(M_{w}4.4\) event. This timeline matches well the sequence of events outlined by Ross et al. (2020), in which the rate of earthquakes increases geometrically with spreading fluid intrusion until reaching a physical barrier around day 600, at which point the swarm is geometrically constrained. Coincident with the \(M_{w}4.4\) event, the swarm circumvents this barrier, increasing the seismicity rate once more.

## Discussion and Conclusions

We have illustrated the proposed DGP-ETAS model using both synthetic and real datasets, showing that it can capture the heterogeneous timescale structure in the background rate of earthquake sequences. It is pertinent to discuss the DGP-ETAS model in the context of existing methodology for investigating time-variable background rates of seismicity. ETAS models have most commonly been solved using maximum likelihood estimates (MLE), in which the negative log-likelihood is minimized using numerical optimization. MLE based methods can incorporate time-varying background rates \(\mu(t)\) that are parameterized by functions such as piecewise constant, piecewise linear or splines. The complexity of these parametric forms can be controlled using information criteria methods and regularization (Kumazawa and Ogata, 2014; Kumazawa et al., 2016). Alternatively, Marsan et al. (2013) suggest a semi-parametric approach, wherein a catalog is alternately fit with MLE ETAS given a prescribed forcing rate, and then the forcing rate is updated and smoothed to best fit the catalog with those ETAS parameters. The result is a background-rate model without a specific functional form (although the level of smoothing strongly impacts the results). MLE methods have the advantage of fast computation, although the poor numerical scaling of traditional ETAS
formulations limits the catalog size, and it is challenging to incorporate the branching variable formulation inside a numerical optimization framework. The MLE is a single point estimate, and does not provide robust uncertainty estimates for the parameters, including the background rate. Semi-parametric methods such as that of Marsan et al. (2013) still require multiple iterations for convergence. Additionally, the MLE cannot explicitly include _a priori_ information about ETAS parameters, although a Bayesian _maximum a posteriori_ (MAP) estimate can be similarly formulated to the MLE and solved to give a point estimate with prior information. A Bayesian formulation of ETAS, in contrast, gives access to uncertainty estimates for parameters, a robust framework for incorporating _a priori_ knowledge, and can easily handle the branching variable formulation to accelerate computation of the ETAS log-likelihood. A GP based Bayesian formulation (e.g. as proposed here and in Molkenthin et al. (2022)) is semi-parametric and avoids making strong, explicit assumptions as to the shape of aseismic forcing, a feature that is useful for interpretation. It is interesting to observe the affinity of GP based methods to the smoothing approach of Marsan et al. (2013), in which the smoothing window can be likened to a top-hat shaped kernel. These benefits have to be weighed against the general cost of MCMC sampling, which is substantial but required to fully characterise the posterior. Additionally, from an end-use standpoint uncertainty bounds on the background rate are less critical than those on ETAS parameters in a forecasting setting, where accurate hazard probabilities are essential. The DGP-ETAS model proposed here combines some of the advantages of the semi-parametric formulations of Marsan et al. (2013) and Molkenthin et al. (2022), with the additional benefit of providing the characteristic rate-of-change of the aseismic forcing rate. The DGP framework amounts to a form of Bayesian kernel learning for GPs, and to some extent avoids the need for explicit construction of heterogeneity in parameterizing the GP kernel, which would otherwise rely on substantial expert knowledge. We have also shown how the use of the SPDE approximation for GPs reduces the computational complexity for the problem and naturally allows for the inclusion of layered GP structure. Thus, the DGP framework based on SPDE approximations answers two of the key questions in Molkenthin et al. (2022) that hindered the applicability of GPs as a basis for semiparametric modelling of earthquake sequences. The next immediate extension of the framework presented in this paper is to include spatial variability of the background rate -- the examples presented in this paper are purely temporally varying and for the real data examples consist of very small geographic regions. There is no inherent change in the theory of using DGPs in higher dimensions and the results presented here would transfer directly even in the \(3+1\)D case -- however, the efficiency of the SPDE approximation declines relative to classical GPs as dimensionality increases, as PDE solutions in higher dimensions necessarily must contend with geometrically increasing volumes whereas GPs scale purely with the data. A possible way to circumvent this issue could be to use a grid-refined finite-element approach for solving the SPDE (e.g. Rue et al. (2009)), which may scale better to high dimensions than our simple finite-difference approach. 
This is especially true if the seismicity is highly clustered so that empty parts of the space-time domain can be meshed with very large elements. A method for fitting ETAS models with background rates defined on finite-elements and a Laplace approximation to the posterior is investigated in Naylor et al. (2022). Alternatively, other methods of constructing DGPs which do not rely on the covariance operator approach advocated here may scale better (e.g. Paciorek & Schervish (2003)), although they lose the attractive interpretability of the intermediary timescales in the DGP presented here. For particularly large catalogs, it is likely that further approximation of the GP background rate and ETAS parameters via variational methods may be required, as ETAS remains in general an expensive model to compute -- a problem compounded by exact sampling methods such as MCMC. Ray & Myer (2019) and Blatter et al. (2021) have investigated an approximate GP-like hierarchical method for magnetotelluric tomography, which utilises node-points throughout the inversion domain that are added parsimoniously using trans-dimensional methods. Whilst their scheme does not account for the full hierarchical posterior of a deep-GP model, it does illustrate that the class of models discussed here may be useful for tomography problems as well as point-processes. In summary, the DGP-ETAS model allows detailed quantitative statistical analysis of earthquake sequences under very general assumptions for the background rate, and we have explored and validated its applicability for a range of interesting cases. The DGP framework is highly flexible, and may be applied to other geophysical problems in which we expect heterogeneity in the characteristic lengthscales and timescales for variation of a parameter of interest.

## Data and Code Availability

The AseismicGP.jl package, used to run all inversions in this paper, may be installed for Julia 1.6+ by first adding [https://github.com/jbmuir/JBMuirJuliaRegistry.git](https://github.com/jbmuir/JBMuirJuliaRegistry.git) to the list of registries and then adding AseismicGP using the package manager. All figures in the paper (other than the schematic Figure 1) may be reproduced by running the demo code at [https://github.com/jbmuir/AseismicGPDemo](https://github.com/jbmuir/AseismicGPDemo), which automatically installs the required registry.

## Acknowledgements

JBM acknowledges the support of the General Sir John Monash foundation during his graduate studies, which initiated this project. ZER was supported by the National Science Foundation under award EAR-2034167. The authors would like to thank the efforts of editor Dr Margarita Segou and two anonymous reviewers who significantly improved the manuscript.
2302.09519
Bell's Theorem Begs the Question
I demonstrate that Bell's theorem is based on circular reasoning and thus a fundamentally flawed argument. It unjustifiably assumes the additivity of expectation values for dispersion-free states of contextual hidden variable theories for non-commuting observables involved in Bell-test experiments, which is tautologous to assuming the bounds of $\pm2$ on the Bell-CHSH sum of expectation values. Its premises thus assume in a different guise the bounds of $\pm2\,$ it sets out to prove. Once this oversight is ameliorated from Bell's argument by identifying the impediment that leads to it and local realism is implemented correctly, the bounds on the Bell-CHSH sum of expectation values work out to be ${\pm2\sqrt{2}}$ instead of ${\pm2}$, thereby mitigating the conclusion of Bell's theorem. Consequently, what is ruled out by any of the Bell-test experiments is not local realism but the linear additivity of expectation values, which does not hold for non-commuting observables in any hidden variable theories to begin with. I also identify similar oversight in the GHZ variant of Bell's theorem, invalidating its claim of having found an inconsistency in the premisses of the argument by EPR for completing quantum mechanics. Conceptually, the oversight in both Bell's theorem and its GHZ variant traces back to the oversight in von Neumann's theorem against hidden variable theories identified by Grete Hermann in the 1930s.
Joy Christian
2023-02-19T09:33:12Z
http://arxiv.org/abs/2302.09519v15
# Bell's Theorem Begs the Question ###### Abstract I demonstrate that Bell's theorem is based on circular reasoning and thus a fundamentally flawed argument. It unjustifiably assumes the additivity of expectation values for dispersion-free states of contextual hidden variable theories for non-commuting observables involved in Bell-test experiments, which is tautologous to assuming the bounds of \(\pm 2\) on the Bell-CHSH sum of expectation values. Its premises thus assume in a different guise the bounds of \(\pm 2\) it sets out to prove. Once this oversight is ameliorated from Bell's argument, the bounds on the Bell-CHSH sum of expectation values work out to be \(\pm 2\sqrt{2}\) instead of \(\pm 2\), thereby mitigating the conclusion of Bell's theorem. Consequently, what is ruled out by the Bell-test experiments is not local realism but the additivity of expectation values, which does not hold for non-commuting observables in any hidden variable theories to begin with. KEYWORDS: Bell's theorem, local realism, Bell-CHSH inequalities, quantum correlations, Bell-test experiments ## I Introduction Bell's theorem [1] is an impossibility argument (or "proof") that claims that no locally causal and realistic hidden variable theory envisaged by Einstein that could "complete" quantum theory can reproduce all of the predictions of quantum theory. But some such claims of impossibility in physics are known to harbor unjustified assumptions. In this paper, I show that Bell's theorem against locally causal hidden variable theories is no exception. It is no different, in this respect, from von Neumann's theorem against all hidden variable theories [2], or the Coleman-Mandula theorem overlooking the possibilities of supersymmetry [3]. The implicit and unjustified assumptions underlying the latter two theorems seemed so innocuous to many that they escaped notice for decades. By contrast, Bell's theorem has faced skepticism and challenges by many from its very inception (cf. footnote 1 in [4]), including by me [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15], because it depends on a number of questionable implicit and explicit physical assumptions that are not difficult to recognize [9; 15]. In what follows, I bring out one such assumption and demonstrate that Bell's theorem is based on a circular argument [8]. It unjustifiably assumes the additivity of expectation values for dispersion-free states of hidden variable theories for non-commuting observables involved in the Bell-test experiments [16], which is tautologous to assuming the bounds of \(\pm 2\) on the Bell-CHSH sum of expectation values. It thus assumes in a different guise what it sets out to prove. As a result, what is ruled out by Bell-test experiments is not local realism but the additivity of expectation values, which does not hold for non-commuting observables in dispersion-free states of hidden variable theories to begin with. 
## II Heuristics for completing quantum mechanics The goal of any hidden variable theory [17; 2; 18] is to reproduce the statistical predictions encoded in the quantum states \(\left|\psi\right\rangle\in\mathscr{H}\) of physical systems using hypothetical dispersion-free states \(\left|\psi,\,\lambda\right\rangle:=\{\left|\psi\right\rangle,\,\lambda\}\in\mathscr{H}\otimes\mathscr{L}\) that have no inherent statistical character, where the Hilbert space \(\mathscr{H}\) is extended by the space \(\mathscr{L}\) of hidden variables \(\lambda\), which are hypothesized to "complete" the states of the physical systems as envisaged by Einstein [19]. If the values of \(\lambda\in\mathscr{L}\) can be specified in advance, then the results of any measurements on a given physical system are uniquely determined. To appreciate this, recall that the expectation value of the square of any self-adjoint operator \(\Omega\in\mathscr{H}\) in a normalized quantum mechanical state \(\left|\psi\right\rangle\) and the square of the expectation value of \(\Omega\) will not be equal to each other in general: \[\left\langle\psi\right|\Omega^{2}\left|\psi\right\rangle\neq\left\langle\psi\right|\Omega\left|\psi\right\rangle^{2}. \tag{1}\] This gives rise to inherent statistical uncertainty in the value of \(\Omega\), indicating that the state \(\left|\psi\right\rangle\) is not dispersion-free: \[\Delta\Omega=\sqrt{\left\langle\psi\right|\{\,\Omega-\left\langle\psi\right|\Omega\left|\psi\right\rangle\}^{2}\left|\psi\right\rangle}\neq 0. \tag{2}\] By contrast, in a normalized dispersion-free state \(\left|\psi,\,\lambda\right\rangle\) of hidden variable theories formalized by von Neumann [2], the expectation value of \(\Omega\), _by hypothesis_, is equal to one of its eigenvalues \(\omega(\lambda)\), determined by the hidden variables \(\lambda\), \[(\,\psi,\,\lambda\,|\,\Omega\,|\,\psi,\,\lambda\,)=\omega(\lambda)\iff\Omega\,|\,\psi,\,\lambda\,)=\omega(\lambda)\,|\,\psi,\,\lambda), \tag{3}\] so that a measurement of \(\Omega\) in the state \(|\,\psi,\,\lambda\,\rangle\) would yield the result \(\omega(\lambda)\) with certainty. How this can be accomplished in a dynamical theory of the measurement process remains an open question [17]. But accepting the hypothesis (3) implies \[(\psi,\,\lambda\,|\,\Omega^{2}\,|\,\psi,\,\lambda)=(\psi,\,\lambda\,|\,\Omega\,|\,\psi,\,\lambda)^{2}. \tag{4}\] Consequently, unlike in a quantum state \(|\psi\rangle\), in a dispersion-free state \(|\psi,\,\lambda\rangle\) observables \(\Omega\) have no inherent uncertainty: \[\Delta\Omega=\sqrt{(\,\psi,\,\lambda\,|\,\{\,\Omega-(\,\psi,\,\lambda\,|\,\Omega\,|\,\psi,\,\lambda\,)\}^{2}\,|\,\psi,\,\lambda)}=0. \tag{5}\] The expectation value of \(\Omega\) in the quantum state \(|\psi\rangle\) can then be recovered by integrating over the hidden variables \(\lambda\): \[\langle\,\psi\,|\,\Omega\,|\,\psi\,\rangle\,=\int_{\mathscr{L}}\,(\,\psi,\,\lambda\,|\,\Omega\,|\,\psi,\,\lambda\,)\,\,p(\lambda)\,d\lambda\,=\int_{\mathscr{L}}\omega(\lambda)\;p(\lambda)\,d\lambda\,, \tag{6}\] where \(p(\lambda)\) denotes the normalized probability distribution over the space \(\mathscr{L}\) of thus hypothesized hidden variables. As it stands, this prescription amounts to assignment of unique eigenvalues \(\omega(\lambda)\) to all observables \(\Omega\) _simultaneously_, regardless of whether they are actually measured. 
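As a minimal illustration of the prescription (6) (a standard textbook example, not part of the original text): for a single spin-\(\frac{1}{2}\) particle prepared in the eigenstate \(|+_{x}\rangle\) of \(\sigma_{x}\), quantum mechanics gives \(\langle\sigma_{z}\rangle=0\), and a dispersion-free description reproduces this average by assigning \(\omega(\lambda)=+1\) on one half of \(\mathscr{L}\) and \(\omega(\lambda)=-1\) on the other, each individual run yielding a definite result \(\pm 1\): \[\langle\,+_{x}|\,\sigma_{z}\,|+_{x}\rangle=0=\int_{\mathscr{L}}\omega(\lambda)\;p(\lambda)\,d\lambda\,,\qquad\omega(\lambda)=\pm 1\ \text{on subsets of equal weight}\ \tfrac{1}{2}\,.\]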
In other words, according to (6) every physical quantity of a given system represented by \(\Omega\) would possess a unique preexisting value, irrespective of any measurements being performed. In Section 2 of [17], Bell works out an instructive example to illustrate how this works for a system with a two-dimensional Hilbert space. The prescription (6) fails, however, for Hilbert spaces of dimensions greater than two, because in higher dimensions degeneracies prevent simultaneous assignments of unique eigenvalues to all observables in dispersion-free states \(|\,\psi,\,\lambda\,\rangle\) dictated by the ansatz (3), giving contradictory values for the same physical quantities. This was proved independently by Bell [17], Kochen and Specker [20], and Belinfante [21], as a corollary to Gleason's theorem [22; 23]. These proofs - known as the Kochen-Specker theorem - do not exclude contextual hidden variable theories in which the complete state \(|\,\psi,\,\lambda\rangle\) of a system assigns unique values to physical quantities only _relative_ to experimental contexts [18; 23]. If we denote the observables as \(\Omega(c)\) with \(c\) being the environmental contexts of their measurements, then the non-contextual prescription (6) can be easily modified to accommodate contextual hidden variable theories as follows: \[\langle\,\psi\,|\,\Omega(c)\,|\,\psi\,\rangle\,=\int_{\mathscr{L}}\,(\,\psi,\,\lambda\,|\,\Omega(c)\,|\,\psi,\,\lambda\,)\,\,p(\lambda)\,d\lambda\,=\int_{\mathscr{L}}\omega(c,\,\lambda)\;p(\lambda)\,d\lambda\,. \tag{7}\] Each observable \(\Omega(c)\) is still assigned a unique eigenvalue \(\omega(c,\,\lambda)\), but now determined cooperatively by the complete state \(|\,\psi,\,\lambda\rangle\) of the system and the state \(c\) of its environmental contexts. Consequently, even though some of its features are no longer intrinsic to the system, contextual hidden variable theories do not have the inherent statistical character of quantum mechanics, because the outcome of an experiment is a cooperative effect, just as it is in classical physics [23]. Therefore, such theories interpret quantum entanglement at the level of the complete state \(|\,\psi,\,\lambda\rangle\) only epistemically. For our purposes here, it is also important to recall that in the Hilbert space formulation of quantum mechanics [2] the correspondence between observables and Hermitian operators is one-to-one. Moreover, a sum \(\widetilde{\Omega}(\tilde{c})=\sum_{i=1}^{n}\Omega_{i}(c_{i})\) of several observables such as \(\Omega_{1}(c_{1}),\,\Omega_{2}(c_{2}),\,\Omega_{3}(c_{3}),\ldots,\,\Omega_{n}(c_{n})\) is also an observable representing a physical quantity, and consequently the sum of the expectation values of \(\Omega_{i}(c_{i})\) is the expectation value of the summed operator \(\widetilde{\Omega}(\tilde{c})\), \[\sum_{i=1}^{n}\,\langle\,\psi\,|\,\Omega_{i}(c_{i})\,|\,\psi\,\rangle=\langle\,\psi\,|\left[\sum_{i=1}^{n}\Omega_{i}(c_{i})\right]|\,\psi\,\rangle, \tag{8}\] regardless of whether the observables are simultaneously measurable or mutually commutative [17]. 
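It is worth spelling out why (8) holds in quantum mechanics irrespective of commutativity (a standard fact of linear algebra, added here only as a reasoning step): the expectation functional \(\Omega\mapsto\langle\,\psi\,|\,\Omega\,|\,\psi\,\rangle\) is linear, so for any two Hermitian operators \(\Omega_{1}(c_{1})\) and \(\Omega_{2}(c_{2})\), \[\langle\,\psi\,|\,\Omega_{1}(c_{1})\,|\,\psi\,\rangle+\langle\,\psi\,|\,\Omega_{2}(c_{2})\,|\,\psi\,\rangle=\langle\,\psi\,|\left[\Omega_{1}(c_{1})+\Omega_{2}(c_{2})\right]|\,\psi\,\rangle\,,\] whether or not \([\Omega_{1}(c_{1}),\,\Omega_{2}(c_{2})]=0\).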
The question then is, since within any contextual hidden variable theory characterized by (7) all of the observables \(\Omega_{i}(c_{i})\) and their sum \(\widetilde{\Omega}(\tilde{c})\) are assigned unique eigenvalues \(\omega_{i}(c_{i},\,\lambda)\) and \(\widetilde{\omega}(\tilde{c},\,\lambda)\), respectively, would these eigenvalues satisfy the equality \[\sum_{i=1}^{n}\left[\int_{\mathscr{L}}\omega_{i}(c_{i},\,\lambda)\;p(\lambda)\,d\lambda\right]\,\stackrel{?}{=}\int_{\mathscr{L}}\left[\sum_{i=1}^{n}\omega_{i}(c_{i},\,\lambda)\right]p(\lambda)\,d\lambda \tag{9}\] in dispersion-free states \(|\,\psi,\,\lambda\rangle\) of physical systems in analogy with the linear quantum mechanical relation (8) above? The answer is: Not in general, because the eigenvalue \(\widetilde{\omega}(\tilde{c},\,\lambda)\) of the summed operator \(\widetilde{\Omega}(\tilde{c})\) is not equal to the sum \(\sum_{i=1}^{n}\omega_{i}(c_{i},\,\lambda)\) of eigenvalues \(\omega_{i}(c_{i},\,\lambda)\) for given \(\lambda\), unless the constituent observables \(\Omega_{i}(c_{i})\) are mutually commutative. As Bell points out in Section 3 of [17], the linear relation (8) is an unusual property of quantum mechanical states \(|\,\psi\rangle.\) There is no reason to demand it _individually_ of the dispersion-free states \(|\,\psi,\,\lambda\rangle\), whose function is to reproduce the measurable features of quantum systems only when averaged over, as in (7). I will come back to this point in Section VI. ## III Special case of the singlet state and EPR-Bohm observables Now, the proof of Bell's famous theorem [1] is based on Bohm's spin version of the EPR thought experiment [24], which involves an entangled pair of spin-\(\frac{1}{2}\) particles emerging from a source and moving freely in opposite directions, with particles 1 and 2 subject, respectively, to spin measurements along independently chosen unit directions \(\mathbf{a}\) and \(\mathbf{b}\) by Alice and Bob, who are stationed at a spacelike separated distance from each other (see Fig. 1). If initially the pair has vanishing total spin, then the quantum mechanical state of the system is described by the entangled singlet state \[\left|\Psi\right\rangle=\frac{1}{\sqrt{2}}\Big{\{}|\mathbf{k},\,+\rangle_{1}\otimes|\mathbf{k},\,-\rangle_{2}\,-\,|\mathbf{k},\,-\rangle_{1}\otimes|\mathbf{k},\,+\rangle_{2}\Big{\}}, \tag{10}\] where \(\mathbf{k}\) is an arbitrary unit vector in \(\mathbbm{R}^{3}\) and \[\boldsymbol{\sigma}\cdot\mathbf{k}\,|\mathbf{k},\,\pm\rangle\,=\,\pm\,|\mathbf{k},\,\pm\rangle \tag{11}\] defines quantum mechanical eigenstates in which the two fermions have spins "up" or "down" in the units of \(\hbar=2\), with \(\boldsymbol{\sigma}\) being the Pauli spin "vector" (\(\sigma_{x}\), \(\sigma_{y}\), \(\sigma_{z}\)). Once the state (10) is prepared, the observable \(\Omega(c)\) of interest is \[\Omega(c)=\boldsymbol{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\boldsymbol{\sigma}_{2}\cdot\mathbf{b}\,, \tag{12}\] whose possible eigenvalues are \[\omega(c,\,\lambda)=\mathscr{A}\!\left(\mathbf{a},\,\mathbf{b},\lambda\right)=\pm 1, \tag{13}\] where \(\mathscr{A}=\pm 1\) and \(\mathscr{B}=\pm 1\) are the results of spin measurements made jointly by Alice and Bob along their randomly chosen detector directions \(\mathbf{a}\) and \(\mathbf{b}\). 
In the singlet state (10) the joint observable (12) predicts sinusoidal correlations \(\langle\Psi|\boldsymbol{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\boldsymbol{\sigma}_{2}\cdot\mathbf{b}|\Psi\rangle=-\mathbf{a}\cdot\mathbf{b}\) between the values of the spins observed about the freely chosen contexts \(\mathbf{a}\) and \(\mathbf{b}\)[5]. For _locally_ contextual hidden variable theories there is a further requirement that the results of local measurements must be describable by functions that respect local causality, as first envisaged by Einstein [19] and later formulated mathematically by Bell [1]. It can be satisfied by requiring that the eigenvalue \(\omega(c,\lambda)\) of the observable \(\Omega(c)\) in (12) representing the joint result \(\mathscr{A}\!\left(\mathbf{a},\,\mathbf{b},\lambda\right)=\pm 1\) is factorizable as \(\omega(c,\lambda)=\omega_{1}(c_{1},\lambda)\,\omega_{2}(c_{2},\lambda)\), or in Bell's notation as \[\mathscr{A}\!\left(\mathbf{a},\,\mathbf{b},\lambda\right)=\mathscr{A}\!\left(\mathbf{a},\lambda\right)\mathscr{B}\!\left(\mathbf{b},\lambda\right), \tag{14}\] with the factorized functions \(\mathscr{A}\!\left(\mathbf{a},\lambda\right)=\pm 1\) and \(\mathscr{B}\!\left(\mathbf{b},\lambda\right)=\pm 1\) satisfying the following condition of local causality: Apart from the hidden variables \(\lambda\), the result \(\mathscr{A}=\pm 1\) of Alice depends _only_ on the measurement context \(\mathbf{a}\), chosen freely by Alice, regardless of Bob's actions. And, likewise, apart from the hidden variables \(\lambda\), the result \(\mathscr{B}=\pm 1\) of Bob depends _only_ on the measurement context \(\mathbf{b}\), chosen freely by Bob, regardless of Alice's actions. In particular, the function \(\mathscr{A}\!\left(\mathbf{a},\,\lambda\right)\)_does not_ depend on \(\mathbf{b}\) or \(\mathscr{B}\) and the function \(\mathscr{B}\!\left(\mathbf{b},\,\lambda\right)\)_does not_ depend on \(\mathbf{a}\) or \(\mathscr{A}\). Moreover, the hidden variables \(\lambda\) do not depend on either \(\mathbf{a}\), \(\mathbf{b}\), \(\mathscr{A}\), or \(\mathscr{B}\)[10]. Figure 1: In an EPR-Bohm-type experiment, a spin-less particle – such as a neutral pion – is assumed to decay from a source into an electron-positron pair, as depicted. Then, measurements of the spin components of each separated fermion are performed at space-like separated observation stations \(\mathbf{1}\) and \(\mathbf{2}\), obtaining binary results \(\mathscr{A}=\pm 1\) and \(\mathscr{B}=\pm 1\) along directions \(\mathbf{a}\) and \(\mathbf{b}\). The conservation of spin angular momentum dictates that the total spin of the system remains zero during its free evolution. After Ref. [4]. The expectation value \(\mathcal{E}(\mathbf{a},\mathbf{b})\) of the joint results in the dispersion-free state \(|\,\psi,\,\lambda\rangle\) should then satisfy the condition \[\langle\Psi|\,\boldsymbol{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\boldsymbol{\sigma}_{2}\cdot\mathbf{b}\,|\Psi\rangle=\,\mathcal{E}(\mathbf{a},\mathbf{b}):=\!\int_{\mathscr{L}}\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)\;p(\lambda)\,d\lambda\,, \tag{15}\] where the hidden variables \(\lambda\) originate from a source located in the overlap of the backward light-cones of Alice and Bob, and the normalized probability distribution \(p(\lambda)\) is assumed to remain statistically independent of the contexts \(\mathbf{a}\) and \(\mathbf{b}\) so that \(p(\lambda\,|\,\mathbf{a},\mathbf{b})=p(\lambda)\), which is a reasonable assumption. 
In fact, relaxing this assumption to allow \(p(\lambda)\) to depend on \(\mathbf{a}\) and \(\mathbf{b}\) introduces a form of non-locality, as explained by Clauser and Horne in footnote 13 of [25]. Then, since \(\mathscr{A}(\mathbf{a},\lambda)=\pm 1\) and \(\mathscr{B}(\mathbf{b},\lambda)=\pm 1\), their product \(\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)=\pm 1\), setting the following bounds on \(\mathcal{E}(\mathbf{a},\mathbf{b})\): \[-1\leqslant\,\mathcal{E}(\mathbf{a},\mathbf{b})\leqslant+1. \tag{16}\] These bounds are respected not only by local hidden variable theories but also by quantum mechanics and experiments. ## IV Mathematical core of Bell's theorem By contrast, at the heart of Bell's theorem is a derivation of the bounds of \(\pm 2\) on a combination of the expectation values \(\mathcal{E}(\mathbf{a},\mathbf{b})\) of local results \(\mathscr{A}(\mathbf{a},\lambda)\) and \(\mathscr{B}(\mathbf{b},\lambda)\), recorded at remote observation stations by Alice and Bob, from four different sub-experiments involving measurements of non-commuting observables such as \(\boldsymbol{\sigma}_{1}\cdot\mathbf{a}\) and \(\boldsymbol{\sigma}_{1}\cdot\mathbf{a}^{\prime}\)[1; 16]: \[\mathcal{E}(\mathbf{a},\,\mathbf{b})+\mathcal{E}(\mathbf{a},\,\mathbf{b}^{ \prime})+\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b})-\mathcal{E}(\mathbf{a} ^{\prime},\,\mathbf{b}^{\prime})\,. \tag{17}\] Alice can freely choose a detector direction \(\mathbf{a}\) or \(\mathbf{a}^{\prime}\), and likewise Bob can freely choose a detector direction \(\mathbf{b}\) or \(\mathbf{b}^{\prime}\), to detect, at a space-like distance from each other, the spins of fermions they receive from the common source. Then, from (16), we can immediately read off the upper and lower bounds on the combination (17) of expectation values: \[-4\,\leqslant\,\mathcal{E}(\mathbf{a},\,\mathbf{b})\,+\,\mathcal{E}(\mathbf{a },\,\mathbf{b}^{\prime})\,+\,\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b})\,- \,\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b}^{\prime})\,\leqslant\,+4\,. \tag{18}\] The next step in Bell's derivation of the bounds \(\pm 2\) instead of \(\pm 4\) is the assumption of additivity of expectation values: \[\mathcal{E}(\mathbf{a},\,\mathbf{b})\,+\,\mathcal{E}(\mathbf{a}, \,\mathbf{b}^{\prime})\,+\,\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b})\,-\, \mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b}^{\prime})\] \[=\!\int_{\mathscr{L}}\mathscr{A}(\mathbf{a},\lambda)\mathscr{B}( \mathbf{b},\lambda)\,p(\lambda)\,d\lambda\,+\!\!\int_{\mathscr{L}}\!\mathscr{ A}(\mathbf{a},\lambda)\mathscr{B}(\mathbf{b}^{\prime},\lambda)\,p(\lambda)\,d \lambda\,+\!\!\int_{\mathscr{L}}\!\!\mathscr{A}(\mathbf{a}^{\prime},\lambda) \mathscr{B}(\mathbf{b},\lambda)\,p(\lambda)\,d\lambda\,-\!\!\int_{\mathscr{ L}}\!\!\mathscr{A}(\mathbf{a}^{\prime},\lambda)\mathscr{B}(\mathbf{b}^{\prime}, \lambda)\,p(\lambda)\,d\lambda\] \[=\!\int_{\mathscr{L}}\!\big{\{}\,\mathscr{A}(\mathbf{a},\lambda )\,\mathscr{B}(\mathbf{b},\lambda)+\mathscr{A}(\mathbf{a},\lambda)\,\mathscr{ B}(\mathbf{b}^{\prime},\lambda)+\mathscr{A}(\mathbf{a}^{\prime},\lambda)\, \mathscr{B}(\mathbf{b},\lambda)-\mathscr{A}(\mathbf{a}^{\prime},\lambda)\, \mathscr{B}(\mathbf{b}^{\prime},\lambda)\big{\}}\;p(\lambda)\,d\lambda\,. 
\tag{19}\] We will have much to discuss about this step, but if we accept the last equality, then the bounds of \(\pm 2\) on the Bell-CHSH combination (17) of expectation values are not difficult to work out by rewriting the integrand on its right-hand side as \[\mathscr{A}(\mathbf{a},\lambda)\,\big{\{}\,\mathscr{B}(\mathbf{b},\lambda)+\mathscr{B}(\mathbf{b}^{\prime},\lambda)\,\big{\}}\,+\,\mathscr{A}(\mathbf{a}^{\prime},\lambda)\,\big{\{}\,\mathscr{B}(\mathbf{b},\lambda)-\mathscr{B}(\mathbf{b}^{\prime},\lambda)\,\big{\}}. \tag{20}\] Since \(\mathscr{B}(\mathbf{b},\lambda)=\pm 1\), if \(|\mathscr{B}(\mathbf{b},\lambda)+\mathscr{B}(\mathbf{b}^{\prime},\lambda)|=2\), then \(|\mathscr{B}(\mathbf{b},\lambda)-\mathscr{B}(\mathbf{b}^{\prime},\lambda)|=0\), and vice versa. Consequently, since \(\mathscr{A}(\mathbf{a},\lambda)=\pm 1\), the integrand (20) is bounded by \(\pm 2\) and the absolute value of the last integral in (19) does not exceed \(2\): \[-2\,\leqslant\int_{\mathscr{L}}\!\big{\{}\,\mathscr{A}(\mathbf{a},\lambda)\,\mathscr{B}(\mathbf{b},\lambda)+\mathscr{A}(\mathbf{a},\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\lambda)+\mathscr{A}(\mathbf{a}^{\prime},\lambda)\,\mathscr{B}(\mathbf{b},\lambda)-\mathscr{A}(\mathbf{a}^{\prime},\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\lambda)\big{\}}\;p(\lambda)\,d\lambda\;\leqslant\,+2\,. \tag{21}\] Therefore, the equality (19) implies that the absolute value of the combination of expectation values is bounded by \(2\): \[-2\,\leqslant\,\mathcal{E}(\mathbf{a},\,\mathbf{b})\,+\,\mathcal{E}(\mathbf{a},\,\mathbf{b}^{\prime})\,+\,\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b})\,-\,\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b}^{\prime})\,\leqslant\,+2\,. \tag{22}\] But since the bounds on (17) predicted by quantum mechanics and observed in experiments are \(\pm 2\sqrt{2}\), Bell concludes that no local and realistic theory envisaged by Einstein can reproduce the statistical predictions of quantum mechanics. In particular, contextual hidden variable theories specified by (7) that respect the factorizability (14) are not viable. Now, it is not difficult to demonstrate the _converse_ of the above derivation in which the additivity of expectation values (19) is derived by assuming the stringent bounds of \(\pm 2\) on the sum (17). Employing (15), (17) can be written as \[\int_{\mathscr{L}}\mathscr{A}(\mathbf{a},\lambda)\mathscr{B}(\mathbf{b},\lambda)\,p(\lambda)\,d\lambda\,+\!\!\int_{\mathscr{L}}\mathscr{A}(\mathbf{a},\lambda)\mathscr{B}(\mathbf{b}^{\prime},\lambda)\,p(\lambda)\,d\lambda\,+\!\!\int_{\mathscr{L}}\!\!\mathscr{A}(\mathbf{a}^{\prime},\lambda)\mathscr{B}(\mathbf{b},\lambda)\,p(\lambda)\,d\lambda\,-\!\!\int_{\mathscr{L}}\!\!\mathscr{A}(\mathbf{a}^{\prime},\lambda)\mathscr{B}(\mathbf{b}^{\prime},\lambda)\,p(\lambda)\,d\lambda\,. \tag{23}\] Since each product \(\mathscr{A}(\mathbf{a},\lambda)\mathscr{B}(\mathbf{b},\lambda)\) in the above integrals is equal to \(\pm 1\), each of the four integrals is bounded by \(\pm 1\): \[-1\,\leqslant\int_{\mathscr{L}}\mathscr{A}(\mathbf{a},\lambda)\mathscr{B}(\mathbf{b},\lambda)\,p(\lambda)\,d\lambda\ \leqslant\,+1. \tag{24}\] Thus the sum of four integrals in (23) is bounded by \(\pm 4\), not \(\pm 2\). However, we started with (22), which contends that the sum of integrals in (23) is bounded by \(\pm 2\). 
But the only way to reduce the bounds on (23) from \(\pm 4\) to \(\pm 2\) without violating the rules of anti-derivatives is by equating the sum of integrals in (23) to the following integral of the sum, \[\int_{\mathscr{L}}\big{\{}\mathscr{A}(\mathbf{a},\lambda)\,\mathscr{B}(\mathbf{b},\lambda)+\mathscr{A}(\mathbf{a},\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\lambda)+\mathscr{A}(\mathbf{a}^{\prime},\lambda)\,\mathscr{B}(\mathbf{b},\lambda)-\mathscr{A}(\mathbf{a}^{\prime},\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\lambda)\big{\}}\;p(\lambda)\,d\lambda\,, \tag{25}\] which, as we saw above in (21), is bounded by \(\pm 2\). We have thus derived the additivity of expectation values (19) by imposing (22) as our starting assumption. Thus, given the previous derivation that led us to (22) by assuming (19) and the current derivation that led us to (19) by assuming (22), we have proved that the assumption (19) of the additivity of expectation values is tautologous to assuming the bounds of \(\pm 2\) on the Bell-CHSH combination (17) of expectation values. In many derivations of (22) in the literature, factorized probabilities of observing binary measurement results are employed rather than the measurement results themselves that I have used in (14), in my derivation following Bell [1; 16]. But employing probabilities would only manage to obfuscate the logical flaw in Bell's argument I intend to bring out here. ## V Additivity of expectation values is respected by quantum states The key step that led us to the bounds of \(\pm 2\) on (17) that are more restrictive than \(\pm 2\sqrt{2}\) is the assumption (19) of the additivity of expectation values, which (as noted after (9), and as will be further explained in Section VI) is _valid only for commuting observables_ [15]. This assumption, however, is usually not viewed as an assumption at all. It is usually viewed as a benign mathematical step, necessitated by Einstein's requirement of realism [19]. But as I will demonstrate in Section VI, far from being required by realism, the right-hand side of (19), in fact, _contradicts_ realism, which requires that _every_ observable of a physical system is assigned a unique eigenvalue, quantifying one of its preexisting properties. Moreover, realism has already been adequately accommodated by the very definition of the local functions \(\mathscr{A}(\mathbf{a},\lambda)\) and \(\mathscr{B}(\mathbf{b},\lambda)\) and their counterfactual juxtaposition on the left-hand side of (19), as contextually existing properties of the system. Evidently, while only one of the four results corresponding to the sub-experiments appearing on the left-hand side of (19) can be realized in a given run of a Bell-test experiment, the remaining three results appearing on that side are realizable at least counterfactually, thus fulfilling the requirement of realism [8]. Therefore, the requirement of realism does not necessitate the left-hand side of (19) to be equated with its right-hand side in the derivation of (22). Realism requires definite results \(\mathscr{A}(\mathbf{a},\lambda)\,\mathscr{B}(\mathbf{b},\lambda)\) to exist as eigenvalues only counterfactually, _not all four at once_, as they are written on the right-hand side of (19). What is more, as we will soon see, realism implicit in the prescription (7) requires the quantity (20) to be a _correct_ eigenvalue of the summed operator (33), but it is not. 
On the other hand, given the assumption \(p(\lambda\,|\,\mathbf{a},\mathbf{b})=p(\lambda)\) of statistical independence and the addition property of anti-derivatives, mathematically, the equality (19) follows at once. The binary properties of the functions \(\mathscr{A}(\mathbf{a},\lambda)\) and \(\mathscr{B}(\mathbf{b},\lambda)\) then immediately lead to the bounds of \(\pm 2\) on the Bell-CHSH sum (17). But, as we saw above, assuming the bounds of \(\pm 2\) on (17) leads, conversely, to the assumption (19) of the additivity of expectation values. Thus, assuming the additivity of expectation values (19) is mathematically equivalent to assuming the bounds of \(\pm 2\) on the sum (17). In other words, Bell's argument presented in Section IV _assumes_ its conclusion (22) in the guise of assumption (19). Sometimes assumption (19) is justified on statistical grounds. It is argued that the four sub-experiments appearing on the left-hand side of (19) with different experimental settings \(\{\mathbf{a},\,\mathbf{b}\}\), \(\{\mathbf{a},\,\mathbf{b}^{\prime}\}\), _etc._ can be performed independently of each other, on possibly different occasions, and then the resulting averages are added together at a later time for statistical analysis. If the number of experimental runs for each pair of settings is sufficiently large, then, theoretically, the sum of the four averages appearing on the left-hand side of (19) is found not to exceed the bounds of \(\pm 2\), thus justifying the equality (19). This can be easily verified in numerical simulations (see Ref. [27] cited in [12]). However, this heuristic argument is not an analytical proof of the bounds. What it implicitly neglects to take into account by explicitly assuming that the four sub-experiments can be performed independently is that the sub-experiments involve mutually exclusive pairs of settings such as \(\{\mathbf{a},\,\mathbf{b}\}\) and \(\{\mathbf{a},\,\mathbf{b}^{\prime}\}\) in physical space, and thus involve non-commuting observables that cannot be measured simultaneously [8]. Unless the statistical analysis takes this physical fact into account, it cannot be claimed to have any relevance for the Bell-test experiments. For ignoring this physical fact amounts to incorrectly assuming that the spin observables \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b},\)_etc._ are mutually commuting, and thus simultaneously measurable, for which assumption (19) is indeed valid, as demonstrated below in Section VI (see the discussion around (39)). On the other hand, when the non-commutativity of the observables involved in the sub-experiments is taken into account in numerical simulations, the bounds on (17) turn out to be \(\pm 2\sqrt{2}\), as shown in [9; 10] and Ref. [27] cited in [12]. In other words, such a statistical argument is simply assumption (19) in disguise. Another important point to recognize here is that the above derivation of the stringent bounds of \(\pm 2\) on (17) for a locally causal dispersion-free counterpart \(|\,\psi,\,\lambda\rangle\) of the quantum mechanical singlet state (10) must comply with the heuristics of the contextual hidden variable theories we discussed in Section II. If it does not, then the bounds of \(\pm 2\) cannot be claimed to have any relevance for the viability of local hidden variable theories [23]. 
Therefore, as discussed in Section II, in a contextual hidden variable theory all of the observables \(\Omega_{i}(c_{i})\) of any physical system, _including_ their sum \(\widetilde{\Omega}(\tilde{c})=\sum_{i=1}^{n}\Omega_{i}(c_{i})\) (which also represents a physical quantity in the Hilbert space formulation of quantum mechanics [2] whether or not it is observed), must be assigned unique eigenvalues \(\omega_{i}(c_{i},\,\lambda)\) and \(\widetilde{\omega}(\tilde{c},\,\lambda)\), respectively, in the dispersion-free states \(|\,\psi,\,\lambda\rangle\) of the system, regardless of whether these observables are simultaneously measurable. Now, within quantum mechanics, expectation values do add in analogy with the equality (19) assumed by Bell for local hidden variable theories [2; 17]. In quantum mechanics, the statistical predictions of which any hidden variable theory is obliged to reproduce, the joint results \(\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)\) observed by Alice and Bob would be eigenvalues of the operators \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b},\) and the linearity in the rules of Hilbert space quantum mechanics ensures that these operators satisfy the additivity of expectation values. Thus, for any quantum state \(|\psi\rangle\), the following equality holds: \[\langle\psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma }_{2}\cdot\mathbf{b}\,|\psi\rangle+\langle\psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a }\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}\,|\psi\rangle+\langle\psi| \,\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot \mathbf{b}\,|\psi\rangle-\langle\psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime} \,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}\,|\psi\rangle\] \[=\,\langle\psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{ \sigma}_{2}\cdot\mathbf{b}+\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{ \sigma}_{2}\cdot\mathbf{b}^{\prime}+\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime} \,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}-\mathbf{\sigma}_{1}\cdot\mathbf{a}^{ \prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}\,|\psi\rangle. \tag{26}\] Comparing (19) and (26), the equality between the two sides of (19) seems reasonable, even physically. 
Furthermore, since the condition (15) for any hidden variable theory obliges us to set the four terms on the left-hand side of (26) as \[\langle\Psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}\,|\Psi\rangle=\int_{\mathscr{L}}\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)\,\,p(\lambda)\,d\lambda\,, \tag{27}\] \[\langle\Psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}\,|\Psi\rangle=\int_{\mathscr{L}}\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\,\lambda)\,\,p(\lambda)\,d\lambda\,, \tag{28}\] \[\langle\Psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}\,|\Psi\rangle=\int_{\mathscr{L}}\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)\,\,p(\lambda)\,d\lambda\,, \tag{29}\] \[\text{and}\,\,\,\langle\Psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}\,|\Psi\rangle=\int_{\mathscr{L}}\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\,\lambda)\,p(\lambda)\,d\lambda\,, \tag{30}\] it may seem reasonable that, given the quantum mechanical equality (26), any hidden variable theory should satisfy \[\langle\Psi|\,\widetilde{\Omega}(\tilde{c})\,|\Psi\rangle=\langle\Psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}+\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}+\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}-\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}\,|\Psi\rangle\] \[=\int_{\mathscr{L}}\bigl{\{}\,\mathscr{A}(\mathbf{a},\lambda)\,\mathscr{B}(\mathbf{b},\lambda)+\mathscr{A}(\mathbf{a},\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\lambda)+\mathscr{A}(\mathbf{a}^{\prime},\lambda)\,\mathscr{B}(\mathbf{b},\lambda)-\mathscr{A}(\mathbf{a}^{\prime},\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\lambda)\bigr{\}}\,\,p(\lambda)\,d\lambda\,, \tag{31}\] adhering to the prescription (7), which would then justify equality (19). Since hidden variable theories are required to satisfy the prescription (7), should they not also reproduce equation (31)? The answer to this is not straightforward. 
## VI Additivity of expectation values does not hold for dispersion-free states The problem with equation (31) is that, while the joint results \(\mathscr{A}(\mathbf{a},\lambda)\mathscr{B}(\mathbf{b},\lambda),\)_etc._ appearing on the left-hand side of equation (19) are possible eigenvalues of the products of spin operators \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b},\)_etc._, their summation \[\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)+\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\,\lambda)+\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)-\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\,\lambda) \tag{32}\] appearing as the integrand on the right-hand side of equation (31) or (19) is _not_ an eigenvalue of the summed operator \[\widetilde{\Omega}(\tilde{c})=\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}+\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}+\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}-\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}, \tag{33}\] because the spin operators \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\) and \(\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\), _etc._, and therefore \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}\), _etc._, do not commute with each other: \[\left[\,\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b},\,\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}\,\right]=2\,\mathbf{\sigma}\cdot\{(\mathbf{a}\times\mathbf{b}^{\prime})\times(\mathbf{a}\times\mathbf{b})\}\neq 0\,\,\,\text{if}\,\,\,\mathbf{b}^{\prime}\neq\mathbf{b}\neq\mathbf{a}. \tag{34}\] Consequently, equation (31) would hold within any hidden variable theory _only if_ the operators \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}\), _etc._ were commuting operators. This is well known from the famous criticisms of von Neumann's theorem against hidden variable theories (see, _e.g._, [8] and references therein). While the equality (19) of the sum of expectation values with the expectation value of the sum is respected in quantum mechanics, it does not hold for hidden variable theories [17]. In [17], Bell illustrates this problem using spin components of a spin-\(\frac{1}{2}\) particle. Suppose we make a measurement of the component \(\sigma_{x}\) of the spin with a Stern-Gerlach magnet suitably oriented in \(\mathbbm{R}^{3}\). That would yield an eigenvalue \(s_{x}\) of \(\sigma_{x}\) as a result. However, if we wish to measure the component \(\sigma_{y}\) of the spin, then that would require a different orientation of the magnet in \(\mathbbm{R}^{3}\), and would give a different eigenvalue, \(s_{y}\) of \(\sigma_{y}\), as a result. Moreover, a measurement of the sum of the \(x\)- and \(y\)-components of the spin, \(\sigma_{x}+\sigma_{y}\), would again require a very different orientation of the magnet in \(\mathbbm{R}^{3}\). Therefore, the result obtained as an eigenvalue of the summed operator \(\sigma_{x}+\sigma_{y}\) will not be the sum \(s_{x}+s_{y}\) of an eigenvalue of the operator \(\sigma_{x}\) added linearly to an eigenvalue of the operator \(\sigma_{y}\). 
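This can be verified directly (a standard computation, added here for concreteness rather than taken from the original text): in the Pauli matrix representation, \[\sigma_{x}+\sigma_{y}=\begin{pmatrix}0&1-i\\ 1+i&0\end{pmatrix},\qquad\det\big[\,\sigma_{x}+\sigma_{y}-\mu\,\mathbb{1}\,\big]=\mu^{2}-2=0\implies\mu=\pm\sqrt{2}\,,\] whereas the linear sums \(s_{x}+s_{y}\) of the individual eigenvalues \(s_{x}=\pm 1\) and \(s_{y}=\pm 1\) can only take the values \(-2\), \(0\), or \(+2\), none of which equals \(\pm\sqrt{2}\).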
As Bell points out in [17], the additivity of expectation values \(\langle\,\psi\,|\,\sigma_{x}\,|\,\psi\,\rangle+\langle\,\psi\,|\,\sigma_{y}\,|\,\psi\,\rangle=\langle\,\psi\,|\,\sigma_{x}+\,\sigma_{y}\,|\,\psi\,\rangle\) is a rather unusual property of the quantum states \(|\psi\rangle\). It does not hold for the dispersion-free states \(|\psi,\,\lambda\rangle\) of hidden variable theories because the eigenvalues of non-commuting observables such as \(\sigma_{x}\) and \(\sigma_{y}\) do not add linearly, as we noted at the end of Section II. Consequently, the additivity relation (19) that holds for quantum states would not hold for the dispersion-free states. This problem, however, suggests its own resolution. We can work out the correct eigenvalue \(\widetilde{\omega}(\tilde{c},\,\lambda)\) of the summed operator (33), at least formally, as shown in Appendix A below. The correct version of equation (31) is then \[\langle\Psi|\,\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}+\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}+\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}-\mathbf{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}^{\prime}\,|\Psi\rangle=\!\int_{\mathscr{L}}\widetilde{\omega}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime},\lambda)\,\,p(\lambda)\,d\lambda\,, \tag{35}\] where \[\widetilde{\omega}=\pm\sqrt{\left\{\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)+\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\,\lambda)+\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)-\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\,\lambda)\right\}^{2}+(\Psi,\,\lambda\,|\,\widetilde{\Theta}\,|\,\Psi,\,\lambda)} \tag{36}\] is the correct eigenvalue of the summed operator (33), with its non-commuting part (whose expectation value is in general nonzero) separated out as the operator \[\widetilde{\Theta}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime})=2\,\mathbf{\sigma}\cdot\mathbf{n}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime})\,, \tag{37}\] where the vector \[\mathbf{n}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime})=\big{\{}\,(\mathbf{a}\times\mathbf{b}^{\prime})\times(\mathbf{a}\times\mathbf{b})+(\mathbf{a}^{\prime}\times\mathbf{b})\times(\mathbf{a}\times\mathbf{b})+(\mathbf{a}^{\prime}\times\mathbf{b})\times(\mathbf{a}\times\mathbf{b}^{\prime})\] \[-(\mathbf{a}^{\prime}\times\mathbf{b}^{\prime})\times(\mathbf{a}\times\mathbf{b})-(\mathbf{a}^{\prime}\times\mathbf{b}^{\prime})\times(\mathbf{a}^{\prime}\times\mathbf{b})-(\mathbf{a}^{\prime}\times\mathbf{b}^{\prime})\times(\mathbf{a}\times\mathbf{b}^{\prime})\,\big{\}}. \tag{38}\] The details of how this separation is accomplished using (34) can be found in Appendix A below. From (36), it is now easy to appreciate that the additivity of expectation values (19) assumed by Bell can hold only if the expectation value \((\Psi,\lambda\,|\,\widetilde{\Theta}\,|\,\Psi,\lambda)=\pm 2\,||\mathbf{n}||\) of the non-commuting part within the eigenvalue \(\widetilde{\omega}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime},\lambda)\) of the summed operator (33) is zero. But that is possible only if the operators \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\otimes\mathbf{\sigma}_{2}\cdot\mathbf{b}\), _etc._ constituting the sum (33) commute with each other. 
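The role of the non-commuting part can also be seen compactly through a standard operator identity (not part of the original text): writing \(A=\boldsymbol{\sigma}_{1}\cdot\mathbf{a}\), \(A^{\prime}=\boldsymbol{\sigma}_{1}\cdot\mathbf{a}^{\prime}\), \(B=\boldsymbol{\sigma}_{2}\cdot\mathbf{b}\), and \(B^{\prime}=\boldsymbol{\sigma}_{2}\cdot\mathbf{b}^{\prime}\), with \(A^{2}=A^{\prime\,2}=B^{2}=B^{\prime\,2}=\mathbb{1}\), the square of the summed operator (33) works out to be \[\widetilde{\Omega}(\tilde{c})^{\,2}=4\,\mathbb{1}\otimes\mathbb{1}-\big[A,\,A^{\prime}\big]\otimes\big[B,\,B^{\prime}\big]\,,\] so that the eigenvalues of (33) reduce to \(\pm 2\) when the commutators vanish, and are otherwise bounded by \(\pm 2\sqrt{2}\) since \(\|[A,A^{\prime}]\|\leqslant 2\) and \(\|[B,B^{\prime}]\|\leqslant 2\).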
In general, if the operators \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}\), _etc._ in (33) do not commute with each other, then we would have \[\widetilde{\omega}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{ \prime},\lambda)\neq\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b}, \,\lambda)+\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime}, \,\lambda)+\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b}, \,\lambda)-\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b}^{ \prime},\,\lambda). \tag{39}\] But the operators \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}\), _etc._ indeed do not commute with each other, because the pairs of directions \(\{\mathbf{a},\,\mathbf{a}^{\prime}\}\), _etc._ in (33) are mutually exclusive directions in \(\mathbbm{R}^{3}\). Therefore, the additivity of expectation values assumed at step (19) in the derivation of (22) is unjustifiable. Far from being necessitated by realism, it actually contradicts realism. Since three of the four results appearing in the expression (32) can be realized only counterfactually, their summation in (32) cannot be realized _even_ counterfactually [8]. Thus, in addition to not being a correct eigenvalue of the summed operator (33) as required by the prescription (7) for hidden variable theories, the quantity appearing in (32) is, in fact, an entirely fictitious quantity, with no counterpart in any possible world, apart from in the trivial case when all observables are commutative. By contrast, the correct eigenvalue (36) of the summed operator (33) can be realized at least counterfactually because it is a genuine eigenvalue of that operator, thereby satisfying the requirement of realism correctly, in accordance with the prescription (7) for hidden variable theories. Using (36), all five of the observables appearing on both sides of the quantum mechanical equation (26) can be assigned unique and correct eigenvalues [8]. Once this oversight is ameliorated, it is not difficult to show that the conclusion of Bell's theorem no longer follows. For then, using the correct eigenvalue (36) of (33) instead of (32) on the right-hand side of (19), we have the equation \[\mathcal{E}(\mathbf{a},\,\mathbf{b})+\mathcal{E}(\mathbf{a},\,\mathbf{b}^{ \prime})+\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b})-\mathcal{E}(\mathbf{a }^{\prime},\,\mathbf{b}^{\prime})=\!\int_{\mathscr{L}}\widetilde{\omega}( \mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime},\lambda)\,\,p( \lambda)\,d\lambda \tag{40}\] instead of (19), which implements local realism correctly on both of its sides, as required by the prescription (7) we discussed in Section II. This equation (40) is thus the correct dispersion-free counterpart of the equivalence (26) for the quantum mechanical expectation values [8]. It can reduce to Bell's assumption (19) only when the expectation value \((\Psi,\lambda\,|\,\widetilde{\Theta}\,|\,\Psi,\lambda)\) of the non-commuting part within the eigenvalue \(\widetilde{\omega}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{ \prime},\lambda)\) of the summed operator (33) happens to be vanishing, and thus expresses the correct relationship among the expectation values for the singlet state (10) in the local hidden variable framework considered by Bell [1]. 
Recall again from the end of Section II that the quantum mechanical relation (26) is an unusual property of the quantum states \(|\psi\rangle\). As Bell stressed in [17], "[t]here is no reason to demand it individually of the hypothetical dispersion free states, whose function it is to reproduce the _measurable_ peculiarities of quantum mechanics _when averaged over_." Moreover, in Section V of [8] I have demonstrated that the bounds on the right-hand side of (40) are \(\pm 2\sqrt{2}\) instead of \(\pm 2\). An alternative derivation of these bounds follows from the magnitude \(||\mathbf{n}||\) of the vector defined in (38), which, as proved in Appendix B below, is bounded by \(2\), and therefore the eigenvalue \(\pm 2\,||\mathbf{n}||\) of the operator (37) obtained as its expectation value \((\Psi,\lambda\,|\,\widetilde{\Theta}\,|\,\Psi,\lambda)\) is bounded by \(\pm 4\), giving \[-4\,\leqslant(\Psi,\lambda\,|\,\widetilde{\Theta}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime})\,|\,\Psi,\lambda)\leqslant+4\,. \tag{41}\] Substituting these into (36), together with the bounds of \(\pm 2\) we worked out before on the commuting part (32), gives \(\widetilde{\omega}^{\,2}\leqslant 2^{2}+4=8\), and hence \[-2\sqrt{2}\,\leqslant\,\widetilde{\omega}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime},\lambda)\leqslant+2\sqrt{2}\,, \tag{42}\] which is constrained to be real despite the square root in the expression (36) because the operator (33) is Hermitian. Consequently, we obtain the following Tsirel'son's bounds in the dispersion-free state, on the right-hand side of (40): \[-2\sqrt{2}\,\leqslant\int_{\mathscr{L}}\widetilde{\omega}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime},\lambda)\,\,p(\lambda)\,d\lambda\,\leqslant+2\sqrt{2}\,. \tag{43}\] Given the correct relation (40) between expectation values instead of the flawed assumption (19), we thus arrive at \[-2\sqrt{2}\,\leqslant\,\mathcal{E}(\mathbf{a},\,\mathbf{b})+\mathcal{E}(\mathbf{a},\,\mathbf{b}^{\prime})+\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b})-\mathcal{E}(\mathbf{a}^{\prime},\,\mathbf{b}^{\prime})\leqslant+2\sqrt{2}\,. \tag{44}\] Since the bounds of \(\pm 2\sqrt{2}\) we have derived on the Bell-CHSH sum of expectation values are the same as those predicted by quantum mechanics and observed in the Bell-test experiments, the conclusion of Bell's theorem is mitigated. What is ruled out by these experiments is not local realism but the assumption of the additivity of expectation values, which does not hold for non-commuting observables in dispersion-free states of any hidden variable theories to begin with. ## VII Conclusion: Bell's theorem assumes its conclusion (_petitio principii_) Let me reiterate the main points discussed above. Together, they demonstrate that Bell's theorem begs the question. (1) The first point is that the derivation in Section IV of the bounds of \(\pm 2\) on (17) for the dispersion-free counterpart \(|\,\Psi,\,\lambda\rangle\) of the singlet state (10) must comply with the heuristics of the contextual hidden variable theories discussed in Section II. Otherwise, the stringent bounds of \(\pm 2\) cannot be claimed to have any relevance for hidden variable theories. This requires compliance with the prescription (7) that equates the quantum mechanical expectation values with their hidden variable counterparts for _all_ observables, including any sums of observables, pertaining to the singlet system. 
(2) The most charitable view of the equality (19) is that it is an _assumption_, over and above those of locality, realism, and all other auxiliary assumptions required for deriving the inequalities (22), because it is valid only for commuting observables. Far from being required by realism, it contradicts realism, because it fails to assign the correct eigenvalue (36) to the summed observable (33) as its realistic counterpart, as required by the prescription (7). Realism requires that all observables, including their sums, must be assigned unique eigenvalues, regardless of whether they are observed. (3) Expectation values in dispersion-free states of hidden variable theories do not add linearly for observables that are not simultaneously measurable. And yet, Bell assumed linear additivity (19) within a local hidden variable model. Conversely, in the light of the heuristics of contextual hidden variable theories we discussed in Section II, assuming (19) is equivalent to assuming that the spin observables \(\mathbf{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\mathbf{\sigma}_{2}\cdot\mathbf{b}\), _etc._ commute with each other, but they do not. (4) When the correct eigenvalue (36) is assigned to the summed operator (33) replacing the incorrect step (19), the bounds on Bell-CHSH sum (17) work out to be \(\pm 2\sqrt{2}\) instead of \(\pm 2\), thus mitigating the conclusion of Bell's theorem. (5) As we proved in Section IV, the assumption (19) of the additivity of expectation values is equivalent to assuming the strong bounds of \(\pm 2\) on Bell-CHSH sum (17) of expectation values. In other words, (19) and (22) are tautologous. The first four points above invalidate assumption (19), and thus inequalities (22) on physical grounds, and the last one demonstrates that Bell's theorem assumes its conclusion in a different guise, and is thus invalid on logical grounds. In this paper I have focused on a formal and logical critique of Bell's theorem. Elsewhere [9; 13; 15], I have developed a comprehensive local-realistic framework for understanding quantum correlations in terms of the geometry of the spatial part of one of the well-known solutions of Einstein's field equations of general relativity -- namely, that of a quaternionic 3-sphere -- taken as a physical space within which we are confined to perform Bell-test experiments. This framework is based on Clifford algebra and thus explicitly takes the non-commutativity of observables into account. It thus shows, constructively, that contextually local hidden variable theories are not ruled out by Bell-test experiments. Since, as we discussed in Section III, the formal proof of Bell's theorem is based on the entangled singlet state (10), in [4; 5; 7; 10; 11; 12; 14] I have reproduced the correlations predicted by (10) as a special case within the local-realistic framework proposed in [9; 13; 15]. I especially recommend the calculations presented in [7] and [14], which also discuss a macroscopic experiment that would be able to falsify the 3-sphere hypothesis I have proposed in these publications. 
## Appendix ### Separating the commuting and non-commuting parts of the summed operator (33) Before considering the specific operator (33), in this appendix let us prove that, in general, the eigenvalue of a sum \(r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\) of operators is not equal to the sum \(r\,\mathscr{R}+s\,\mathscr{S}+t\,\mathscr{T}+u\,\mathscr{U}\) of the individual eigenvalues of the operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\), unless these operators commute with each other. Here \(r\), \(s\), \(t\), and \(u\) are real numbers. It is not difficult to prove this known fact by evaluating the square of the operator \(\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}\) as follows: \[\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}=r^{2}\mathcal{R}^{2}+rs\,\mathcal{R}\mathcal{S}+rt\,\mathcal{R}\mathcal{T}+ru\,\mathcal{R}\mathcal{U}\] \[+sr\,\mathcal{S}\mathcal{R}+s^{2}\mathcal{S}^{2}+st\,\mathcal{S}\mathcal{T}+su\,\mathcal{S}\mathcal{U}\] \[+tr\,\mathcal{T}\mathcal{R}+ts\,\mathcal{T}\mathcal{S}+t^{2}\mathcal{T}^{2}+tu\,\mathcal{T}\mathcal{U}\] \[+ur\,\mathcal{U}\mathcal{R}+us\,\mathcal{U}\mathcal{S}+ut\,\mathcal{U}\mathcal{T}+u^{2}\mathcal{U}^{2}. \tag{A1}\] Now, assuming that the operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\) do not commute in general, let us define the following operators: \[\mathcal{L}:=\mathcal{S}\mathcal{R}-\mathcal{R}\mathcal{S}\iff\mathcal{S}\mathcal{R}=\mathcal{R}\mathcal{S}+\mathcal{L}, \tag{A2}\] \[\mathcal{M}:=\mathcal{T}\mathcal{R}-\mathcal{R}\mathcal{T}\iff\mathcal{T}\mathcal{R}=\mathcal{R}\mathcal{T}+\mathcal{M}, \tag{A3}\] \[\mathcal{N}:=\mathcal{T}\mathcal{S}-\mathcal{S}\mathcal{T}\iff\mathcal{T}\mathcal{S}=\mathcal{S}\mathcal{T}+\mathcal{N}, \tag{A4}\] \[\mathcal{O}:=\mathcal{U}\mathcal{R}-\mathcal{R}\mathcal{U}\iff\mathcal{U}\mathcal{R}=\mathcal{R}\mathcal{U}+\mathcal{O}, \tag{A5}\] \[\mathcal{P}:=\mathcal{U}\mathcal{T}-\mathcal{T}\mathcal{U}\iff\mathcal{U}\mathcal{T}=\mathcal{T}\mathcal{U}+\mathcal{P}, \tag{A6}\] \[\text{and}\quad\mathcal{Q}:=\mathcal{U}\mathcal{S}-\mathcal{S}\mathcal{U}\iff\mathcal{U}\mathcal{S}=\mathcal{S}\mathcal{U}+\mathcal{Q}. \tag{A7}\] These operators would be null operators with vanishing eigenvalues if the operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\) did commute with each other. 
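As a minimal two-dimensional illustration (standard Pauli algebra, not part of the original argument): taking \(\mathcal{R}=\sigma_{x}\) and \(\mathcal{S}=\sigma_{y}\) gives \[\mathcal{L}=\mathcal{S}\mathcal{R}-\mathcal{R}\mathcal{S}=\sigma_{y}\sigma_{x}-\sigma_{x}\sigma_{y}=-2i\,\sigma_{z}\neq 0\,,\] so \(\mathcal{L}\) is a non-null operator precisely because its constituents fail to commute.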
Using these relations for the products \(\mathcal{S}\mathcal{R}\), \(\mathcal{T}\mathcal{R}\), \(\mathcal{T}\mathcal{S}\), \(\mathcal{U}\mathcal{R}\), \(\mathcal{U}\mathcal{T}\), and \(\mathcal{U}\mathcal{S}\), equation (A1) can be simplified to \[\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}=r^{2}\mathcal{R}^{2}+2rs\,\mathcal{R}\mathcal{S}+2rt\,\mathcal{R}\mathcal{T}+2ru\,\mathcal{R}\mathcal{U}\] \[+rs\,\mathcal{L}+s^{2}\mathcal{S}^{2}+2st\,\mathcal{S}\mathcal{T}+2su\,\mathcal{S}\mathcal{U}\] \[+rt\,\mathcal{M}+st\,\mathcal{N}+t^{2}\mathcal{T}^{2}+2tu\,\mathcal{T}\mathcal{U}\] \[+ru\,\mathcal{O}+su\,\mathcal{Q}+tu\,\mathcal{P}+u^{2}\mathcal{U}^{2} \tag{A8}\] \[=\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}_{\mathbf{c}}^{2}+\,\mathcal{Y}, \tag{A9}\] where \[\mathcal{Y}:=rs\,\mathcal{L}+rt\,\mathcal{M}+st\,\mathcal{N}+ru\,\mathcal{O}+tu\,\mathcal{P}+su\,\mathcal{Q}\,. \tag{A10}\] We have thus separated out the commuting part \(\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}_{\mathbf{c}}\) and the non-commuting part \(\mathcal{Y}\) of the summed operator \(\mathcal{X}:=\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}\). Note that the operators \(\mathcal{L}\), \(\mathcal{M}\), \(\mathcal{N}\), \(\mathcal{O}\), \(\mathcal{P}\), and \(\mathcal{Q}\) defined in (A2) to (A7) will not commute with each other in general unless their constituents \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\) themselves are commuting. Next, we work out the eigenvalue \(\mathscr{X}\) of the operator \(\mathcal{X}\) in a normalized eigenstate \(|\,\xi\,\rangle\) using the eigenvalue equations \[\mathcal{X}\,|\,\xi\,\rangle=\mathscr{X}\,|\,\xi\,\rangle \tag{A11}\] and \[\mathcal{X}\,\mathcal{X}\,|\,\xi\,\rangle=\mathcal{X}\big{\{}\mathcal{X}\,|\,\xi\,\rangle\big{\}}=\mathcal{X}\,\big{\{}\mathscr{X}\,|\,\xi\,\rangle\big{\}}=\mathscr{X}\,\big{\{}\mathcal{X}\,|\,\xi\,\rangle\big{\}}=\mathscr{X}^{2}\,|\,\xi\,\rangle, \tag{A12}\] in terms of the eigenvalues \(\mathscr{R}\), \(\mathscr{S}\), \(\mathscr{T}\), and \(\mathscr{U}\) of the operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\) and the expectation value \(\langle\,\xi\,|\,\mathcal{Y}\,|\,\xi\,\rangle\): \[\mathscr{X}=\pm\sqrt{\langle\,\xi\,|\,\mathcal{X}\,\mathcal{X}\,|\,\xi\,\rangle}=\pm\sqrt{\langle\,\xi\,|\big{\{}r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\big{\}}_{\mathbf{c}}^{2}\big{|}\,\xi\,\rangle+\langle\,\xi\,|\,\mathcal{Y}\,|\,\xi\,\rangle}\,, \tag{A13}\] where we have used (A9). But the eigenvalue of the commuting part \(\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\}_{\mathbf{c}}\) of \(\mathcal{X}\) is simply the linear sum \(r\,\mathscr{R}+s\,\mathscr{S}+t\,\mathscr{T}+u\,\mathscr{U}\) of the eigenvalues of the operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\). 
Consequently, using the equation analogous to (105) for the square of the operator \(\big\{r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\big\}_{\mathbf{c}}\) we can express the eigenvalue \(\mathscr{X}\) of \(\mathcal{X}\) as \[\mathscr{X}=\pm\sqrt{\big\{r\,\mathscr{R}+s\,\mathscr{S}+t\,\mathscr{T}+u\,\mathscr{U}\big\}^{2}+\,\langle\,\xi\,|\,\mathcal{Y}\,|\,\xi\,\rangle}\,. \tag{107}\] Now, because the operators \(\mathcal{L}\), \(\mathcal{M}\), \(\mathcal{N}\), \(\mathcal{O}\), \(\mathcal{P}\), and \(\mathcal{Q}\) defined in (12) to (17) will not commute with each other in general if their constituent operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\) are non-commuting, the state \(|\,\xi\,\rangle\) will not be an eigenstate of the operator \(\mathcal{Y}\) defined in (103). Moreover, while a dispersion-free state \(|\,\psi,\,\lambda\,\rangle\) would pick out one of the eigenvalues \(\mathscr{Y}\) of \(\mathcal{Y}\), it will not be equal to the linear sum of the corresponding eigenvalues \(\mathscr{L}\), \(\mathscr{M}\), \(\mathscr{N}\), \(\mathscr{O}\), \(\mathscr{P}\), and \(\mathscr{Q}\) in general, \[\mathscr{Y}\neq rs\,\mathscr{L}+rt\,\mathscr{M}+st\,\mathscr{N}+ru\,\mathscr{O}+tu\,\mathscr{P}+su\,\mathscr{Q}, \tag{108}\] even if we assume that the operators \(\mathcal{X}\) and \(\mathcal{Y}\) commute with each other so that \((\,\psi,\,\lambda\,|\,\mathcal{Y}\,|\,\psi,\,\lambda\,)=\mathscr{Y}\) is an eigenvalue of \(\mathcal{Y}\). That is to say, just like the eigenvalue \(\mathscr{X}\) of \(\mathcal{X}\), the eigenvalue \(\mathscr{Y}\) of \(\mathcal{Y}\) is also a nonlinear function in general. On the other hand, because we wish to prove that the eigenvalue of the sum \(r\,\mathcal{R}+s\,\mathcal{S}+t\,\mathcal{T}+u\,\mathcal{U}\) of the operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\) is not equal to the sum \(r\,\mathscr{R}+s\,\mathscr{S}+t\,\mathscr{T}+u\,\mathscr{U}\) of the individual eigenvalues of the operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\) unless they commute with each other, we must make sure that the eigenvalue \(\mathscr{Y}\) does not vanish for the unlikely case in which the operators \(\mathcal{L}\), \(\mathcal{M}\), \(\mathcal{N}\), \(\mathcal{O}\), \(\mathcal{P}\), and \(\mathcal{Q}\) commute with each other. But even in that unlikely case, we would have \[\overline{\mathscr{Y}}=rs\,\mathscr{L}+rt\,\mathscr{M}+st\,\mathscr{N}+ru\,\mathscr{O}+tu\,\mathscr{P}+su\,\mathscr{Q} \tag{109}\] as eigenvalue of the operator \(\mathcal{Y}\) defined in (103), and consequently the eigenvalue \(\mathscr{X}\) in (107) will at best reduce to \[\mathscr{X}=\pm\sqrt{\big\{r\,\mathscr{R}+s\,\mathscr{S}+t\,\mathscr{T}+u\,\mathscr{U}\big\}^{2}+rs\,\mathscr{L}+rt\,\mathscr{M}+st\,\mathscr{N}+ru\,\mathscr{O}+tu\,\mathscr{P}+su\,\mathscr{Q}}\,. \tag{110}\] In other words, even in such an unlikely case \(\mathscr{Y}\) will not vanish, and consequently the eigenvalue \(\mathscr{X}\) will not reduce to the linear sum \[\overline{\mathscr{X}}=r\,\mathscr{R}+s\,\mathscr{S}+t\,\mathscr{T}+u\,\mathscr{U}. \tag{111}\]
Consequently, unless \((\,\psi,\,\lambda\,|\,\mathcal{Y}(c)\,|\,\psi,\,\lambda\,)\equiv 0\), the expectation value of \(\mathcal{X}(c)\), which equals the average of \(\mathscr{X}(c,\,\lambda)\), will be \[\langle\,\psi\,|\,\mathcal{X}(c)\,|\,\psi\,\rangle=\int_{\Lambda}\mathscr{X}(c,\,\lambda)\;p(\lambda)\,d\lambda \tag{119}\] \[=\int_{\Lambda}\left[\pm\sqrt{\left\{r\,\mathscr{R}(c,\,\lambda)+s\,\mathscr{S}(c,\,\lambda)+t\,\mathscr{T}(c,\,\lambda)+u\,\mathscr{U}(c,\,\lambda)\right\}^{2}+\,\left(\,\psi,\,\lambda\,|\,\mathcal{Y}(c)\,|\,\psi,\,\lambda\,\right)}\,\right]p(\lambda)\,d\lambda \tag{120}\] \[\neq\int_{\Lambda}\left[\pm\sqrt{\left\{r\,\mathscr{R}(c,\,\lambda)+s\,\mathscr{S}(c,\,\lambda)+t\,\mathscr{T}(c,\,\lambda)+u\,\mathscr{U}(c,\,\lambda)\right\}^{2}+\,\overline{\mathscr{Y}}(c,\,\lambda)}\,\right]p(\lambda)\,d\lambda\;\;\text{if}\;[\,\mathcal{X},\,\mathcal{Y}\,]\neq 0 \tag{121}\] \[\neq\int_{\Lambda}\pm\bigl\{r\,\mathscr{R}(c,\,\lambda)+s\,\mathscr{S}(c,\,\lambda)+t\,\mathscr{T}(c,\,\lambda)+u\,\mathscr{U}(c,\,\lambda)\bigr\}\;p(\lambda)\,d\lambda\;\;\text{if}\;\mathscr{L},\,\mathscr{M},\,\mathscr{N},\,\mathscr{O},\,\mathscr{P},\,\mathscr{Q}\neq 0, \tag{122}\] where \(c\) indicates the contexts of experiments as discussed in Section II. The above result confirms the inequality (39) we discussed in Section VI. Note that, because \(\mathscr{X}(c,\,\lambda)\) and \(\mathscr{Y}(c,\,\lambda)\) are highly _nonlinear_ functions in general (recall, _e.g._, that \(\sqrt{x^{2}\pm y^{2}}\neq\sqrt{x^{2}}\pm\sqrt{y^{2}}\,\)), the inequality in (122) can reduce to equality _if and only if_ the operators \(\mathcal{R}\), \(\mathcal{S}\), \(\mathcal{T}\), and \(\mathcal{U}\) commute with each other. In that case, the operators \(\mathcal{L}\), \(\mathcal{M}\), \(\mathcal{N}\), \(\mathcal{O}\), \(\mathcal{P}\), and \(\mathcal{Q}\) defined in (12) to (17) will also commute with each other, as well as being null operators, with each of the eigenvalues \(\mathscr{L}\), \(\mathscr{M}\), \(\mathscr{N}\), \(\mathscr{O}\), \(\mathscr{P}\), and \(\mathscr{Q}\) reducing to zero. Consequently, in that case \((\,\psi,\,\lambda\,|\,\mathcal{Y}(c)\,|\,\psi,\,\lambda\,)\) will vanish identically and the inequality in (122) will reduce to an equality. It is now straightforward to deduce the operator \(\widetilde{\Theta}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime})\) specified in (37) using (34). For this purpose, we first note that for the Bell-CHSH sum (17) the real numbers \(r=s=t=+1\) and \(u=-1\), and therefore the linear sum in (111) simplifies to \[\overline{\mathscr{X}}(c,\,\lambda)=\mathscr{R}(c,\,\lambda)+\mathscr{S}(c,\,\lambda)+\mathscr{T}(c,\,\lambda)-\mathscr{U}(c,\,\lambda). \tag{123}\]
This quantity is tacitly assumed in the derivation of Bell's theorem to be the eigenvalue of the summed operator (33), implying the following identifications: \[\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)\equiv\mathscr{R}(\mathbf{a},\,\mathbf{b},\,\lambda)=\pm 1\;\;\text{is an eigenvalue of the observable}\;\;\mathcal{R}(\mathbf{a},\,\mathbf{b})\equiv\boldsymbol{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\boldsymbol{\sigma}_{2}\cdot\mathbf{b}\,, \tag{124}\] \[\mathscr{A}(\mathbf{a},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\,\lambda)\equiv\mathscr{S}(\mathbf{a},\,\mathbf{b}^{\prime},\,\lambda)=\pm 1\;\;\text{is an eigenvalue of the observable}\;\;\mathcal{S}(\mathbf{a},\,\mathbf{b}^{\prime})\equiv\boldsymbol{\sigma}_{1}\cdot\mathbf{a}\,\otimes\,\boldsymbol{\sigma}_{2}\cdot\mathbf{b}^{\prime},\] (125) \[\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b},\,\lambda)\equiv\mathscr{T}(\mathbf{a}^{\prime},\,\mathbf{b},\,\lambda)=\pm 1\;\;\text{is an eigenvalue of the observable}\;\;\mathcal{T}(\mathbf{a}^{\prime},\,\mathbf{b})\equiv\boldsymbol{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\boldsymbol{\sigma}_{2}\cdot\mathbf{b}\,,\] (126) \[\text{and}\;\;\;\mathscr{A}(\mathbf{a}^{\prime},\,\lambda)\,\mathscr{B}(\mathbf{b}^{\prime},\,\lambda)\equiv\mathscr{U}(\mathbf{a}^{\prime},\,\mathbf{b}^{\prime},\,\lambda)=\pm 1\;\;\text{is an eigenvalue of the observable}\;\;\mathcal{U}(\mathbf{a}^{\prime},\,\mathbf{b}^{\prime})\equiv\boldsymbol{\sigma}_{1}\cdot\mathbf{a}^{\prime}\,\otimes\,\boldsymbol{\sigma}_{2}\cdot\mathbf{b}^{\prime}. \tag{127}\] The non-commuting part of the operator (33) can therefore be identified using (103) and the above identifications as \[\widetilde{\Theta}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime})=\bigl\{\mathcal{L}+\mathcal{M}+\mathcal{N}-\mathcal{O}-\mathcal{P}-\mathcal{Q}\bigr\}(\mathbf{a},\mathbf{a}^{\prime},\mathbf{b},\mathbf{b}^{\prime})\,, \tag{128}\] where the operators \(\mathcal{L}\), \(\mathcal{M}\), \(\mathcal{N}\), \(\mathcal{O}\), \(\mathcal{P}\), and \(\mathcal{Q}\) are defined in (12) to (17). The result is the operator specified in (37). ### Establishing bounds on the magnitude of the vector \(\mathbf{n}\) defined in (38) The vector \(\mathbf{n}\) defined in (38) is a function of four unit vectors, \(\mathbf{a}\), \(\mathbf{a}^{\prime}\), \(\mathbf{b}\), and \(\mathbf{b}^{\prime}\), in \(\mathrm{I\!R}^{3}\), and involves various cross products among these vectors. Consequently, as the vectors \(\mathbf{a}\), \(\mathbf{a}^{\prime}\), \(\mathbf{b}\), and \(\mathbf{b}^{\prime}\) vary in their directions within \(\mathrm{I\!R}^{3}\) due to various choices made by Alice and Bob, the extremum values of the magnitude \(||\mathbf{n}||\) are obtained by setting the vectors orthogonal to each other, with angles between them set to 90 or 270 degrees. However, in three dimensions that is possible only for three of the four vectors, so one of the four would have to be set either parallel or anti-parallel to one of the remaining three. Therefore, let us first choose to set \(\mathbf{b}^{\prime}=-\mathbf{b}\). Substituting this into (38) then gives \(\mathbf{n}=\mathbf{0}\), and thus \(||\mathbf{n}||=0\). We have thus found the lower bound on the magnitude \(||\mathbf{n}||\). To determine the upper bound on \(||\mathbf{n}||\), we set \(\mathbf{a}^{\prime}=-\mathbf{a}\) instead.
Substituting this into (38) reduces the vector \(\mathbf{n}\) to the following function of \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbf{b}^{\prime}\): \[\mathbf{n}=2\,\bigl\{\left(\mathbf{a}\times\mathbf{b}^{\prime}\right)\times\left(\mathbf{a}\times\mathbf{b}\right)\bigr\}. \tag{129}\] Consequently, in this case, the magnitude of the vector \(\mathbf{n}\) works out to be \[||\mathbf{n}||=2\,||(\mathbf{a}\times\mathbf{b}^{\prime})||\,||(\mathbf{a}\times\mathbf{b})||\,\sin\beta_{(\mathbf{a}\times\mathbf{b}^{\prime}),(\mathbf{a}\times\mathbf{b})} \tag{30}\] \[=2\,\big\{||\mathbf{a}||\,||\mathbf{b}^{\prime}||\,\sin\beta_{\mathbf{a},\mathbf{b}^{\prime}}\big\}\,\big\{||\mathbf{a}||\,||\mathbf{b}||\,\sin\beta_{\mathbf{a},\mathbf{b}}\big\}\,\big\{\sin\beta_{(\mathbf{a}\times\mathbf{b}^{\prime}),(\mathbf{a}\times\mathbf{b})}\big\}, \tag{31}\] where \(\beta_{\mathbf{a},\mathbf{b}}\) is the angle between \(\mathbf{a}\) and \(\mathbf{b}\), _etc_. But since the vectors \(\mathbf{a}\), \(\mathbf{a}^{\prime}\), \(\mathbf{b}\), and \(\mathbf{b}^{\prime}\) are all unit vectors and we have set them orthogonal to each other (apart from \(\mathbf{a}^{\prime}=-\mathbf{a}\)), we obtain \(||\mathbf{n}||=2\) as the maximum possible value for the magnitude of \(\mathbf{n}\). We have thus established the following bounds on the magnitude of the vector \(\mathbf{n}\) as specified in (38): \[0\,\leqslant\,||\mathbf{n}||\,\leqslant\,2. \tag{32}\]
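These bounds are easy to confirm numerically for the special case (129). The following short script is illustrative only: it samples random unit vectors in the \(\mathbf{a}^{\prime}=-\mathbf{a}\) configuration and checks that \(0\leqslant||\mathbf{n}||\leqslant 2\), with the upper bound saturated for mutually orthogonal \(\mathbf{a}\), \(\mathbf{b}\), \(\mathbf{b}^{\prime}\).

```python
# Sanity check of 0 <= ||n|| <= 2 for n = 2 (a x b') x (a x b), cf. (129).
import numpy as np

rng = np.random.default_rng(0)

def rand_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def n_vec(a, b, bp):
    return 2.0 * np.cross(np.cross(a, bp), np.cross(a, b))

# Random search never exceeds 2 ...
max_norm = max(np.linalg.norm(n_vec(rand_unit(), rand_unit(), rand_unit()))
               for _ in range(50000))
print(max_norm)                        # approaches, but does not exceed, 2

# ... and the mutually orthogonal configuration attains the bound:
a, b, bp = np.eye(3)                   # a, b, b' along x, y, z
print(np.linalg.norm(n_vec(a, b, bp))) # exactly 2.0
```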
2310.01354
Separation-of-charge confinement and the Higgs transition in SU(3) gauge Higgs theory
With SU(3) gauge Higgs theory as an example, we examine critically the idea that the confinement property in an SU(N) gauge-Higgs theory, with the Higgs field in the fundamental representation, persists in an unbroken SU(N-1) subgroup.
Jeff Greensite, Hou Y. Yau
2023-10-02T17:17:26Z
http://arxiv.org/abs/2310.01354v1
# Separation-of-charge confinement and the Higgs transition in SU(3) gauge Higgs theory ###### Abstract With SU(3) gauge Higgs theory as an example, we examine critically the idea that the confinement property in an SU(N) gauge-Higgs theory, with the Higgs field in the fundamental representation, persists in an unbroken SU(N-1) subgroup. ## I Introduction It has been argued elsewhere [1] that the transition to the Higgs phase of an SU(N) gauge Higgs theory, with the Higgs field in the fundamental representation of the gauge group, is characterized by the spontaneous breaking of the global center subgroup of the gauge group, with an associated non-local order parameter which is closely analogous to the Edwards-Anderson order parameter for a spin glass [2]. In the absence of a massless phase, the transition to the Higgs phase is accompanied by a transition in confinement type, from "separation-of-charge" (S\({}_{c}\)) confinement in the confinement phase, to a weaker "color" (C) confinement property in the Higgs phase. SU(3) gauge Higgs theory is a good testing ground for these assertions, in particular because of the idea that in this theory the SU(3) gauge symmetry is broken to SU(2), so that in some way the confinement property of the SU(3) theory is retained in a subgroup. The purpose of this article is to examine this idea critically; the object is to study whether the concept of separation-of-charge confinement applies to color charges (static "quark" sources) in color directions orthogonal to the color orientation of the Higgs field. In section II we review the order parameter for the confinement-to-Higgs transition presented in ref. [1], define and contrast the S\({}_{c}\) and C varieties of confinement, and expand on the nature of the problem we address. Section III is devoted to the results of our numerical simulations of SU(3) gauge Higgs theory concerning symmetry breaking and confinement. Section IV contains our conclusions. ## II Confinement in gauge Higgs theories In this article the expression "gauge Higgs theory" will refer specifically to SU(N) gauge theories with a single Higgs field in the fundamental representation of the gauge group. ### Separation of charge confinement The property of confinement in a gauge theory with either no matter fields, or with only matter fields of zero N-ality, can be formulated in this way: Let us consider, in lattice regularization, physical states of the form \[|\Psi_{V}\rangle\equiv\overline{\psi}^{a}(\mathbf{x})V^{ab}(\mathbf{x},\mathbf{y};U)\psi^{b}(\mathbf{y})|\Psi_{0}\rangle\, \tag{1}\] where \(\Psi_{0}\) is the ground state and \(\overline{\psi},\psi\) are operators creating static fermion-antifermion sources, transforming in the fundamental representation of the gauge group, at positions \(\mathbf{x},\mathbf{y}\). Superscripts are color indices. The operator \(V^{ab}(\mathbf{x},\mathbf{y};U)\) is a bicovariant functional of the lattice gauge field \(U\), transforming under a gauge transformation \(g\) as follows: \[V^{ab}(\mathbf{x},\mathbf{y};U)\to g^{ac}(\mathbf{x})V^{cd}(\mathbf{x},\mathbf{y};U)g^{\dagger db}(\mathbf{y}). \tag{2}\] If the energy expectation value above the vacuum energy \[E_{V}(R)=\langle\Psi_{V}|H|\Psi_{V}\rangle-\mathscr{E}_{vac} \tag{3}\] diverges to infinity as \(R=|\mathbf{x}-\mathbf{y}|\to\infty\), regardless of the choice of \(V\), then the gauge theory is said to be confining.
Of course there is one particular \(V\), a flux tube state of some kind, which minimizes the energy of a static quark-antiquark system, with \(E_{V}(R)\approx\sigma R\) at large \(R\). The proposal in [3] is that the very same condition can be used as a criterion for confinement in a gauge theory with matter fields in the fundamental representation of the gauge group. We call this property "separation of charge" or "S\({}_{c}\)" confinement. We are considering physical states containing isolated sources whose color charge is not screened by the matter field, and for this purpose it is understood that the \(V(\mathbf{x},\mathbf{y};U)\) operator is a functional of the gauge field alone, with no dependence on matter fields. In QCD, \(q\overline{q}\) states of this kind would contain, in addition to color electric fields collimated into flux tubes, also detectable abelian electric fields emanating from fractional electric charges at points \(\mathbf{x},\mathbf{y}\), rather than emanating from a set of integer charged particles. The point is that while physical states containing color charges unscreened by matter fields may be challenging to produce in practice and, if produced, would decay very rapidly into ordinary hadrons, states of this kind do exist in the physical Hilbert space, and it is reasonable to consider how their energy varies as the separation between the color sources increases. Indeed, if we imagine pair production followed by a very rapid separation of the quark and antiquark, then we expect that momentarily the physical state of the separated \(q\overline{q}\) pair would have the form (1). In the S\({}_{c}\) phase the energy of the unscreened state increases with separation, and this is the mechanism which underlies the existence of linear Regge trajectories in QCD. Of course such states exist only momentarily, until string breaking sets in. In a phase without the S\({}_{c}\) property there is no reason to expect that the optimal \(\Psi_{V}\) would be associated with linear Regge trajectories, so the loss of linear Regge trajectories is to be expected in a transition from an S\({}_{c}\) phase to a C confinement phase. The alternative to S\({}_{c}\) confinement, for SU(N) gauge theories with matter in \(D=2+1\) and \(3+1\) dimensions, is "color" or "C" confinement, meaning that the asymptotic spectrum consists of color neutral particles. This is a much weaker condition than S\({}_{c}\) confinement, and in fact it holds true in gauge Higgs theory in the Higgs regime, where there are no linear Regge trajectories or metastable flux tubes whatever. There will still exist \(V(\mathbf{x},\mathbf{y},U)\) operators such that \(E_{V}(R)\) diverges with \(R\). An example is a Wilson line running between points \(\mathbf{x},\mathbf{y}\). However, in the C confining phase (which we will identify with the Higgs phase) there also exist \(V(\mathbf{x},\mathbf{y},U)\) operators such that \(E_{V}(R)\) tends to a finite limit at large \(R\). It can be shown that a transition from S\({}_{c}\) confinement (the "confining" phase) to C confinement (the "Higgs" phase) must exist in the SU(N) gauge Higgs phase diagram [4]. It is natural to ask whether this transition is accompanied by the breaking of some symmetry. ### Charged states and global center gauge symmetry A state which is charged with respect to some symmetry of the Hamiltonian is a state which transforms covariantly, as a non-singlet, under those symmetry transformations.
So we might naively expect a state which is charged with respect to the gauge group to transform under gauge transformations, e.g. a state like \(\overline{\psi}^{a}(x)\Psi_{0}\), where \(\Psi_{0}\) is the ground state. But states of that type violate the Gauss law constraint, and are hence unphysical. Thus we are looking for a state of the form \(\overline{\psi}^{a}(x)\xi^{a}(x)\Psi_{0}\) which is invariant under infinitesimal gauge transformations, thereby satisfying the Gauss law, but still non-invariant under some subgroup of the gauge group. Charged states of that type can be constructed; they are states which are charged with respect to the global center subgroup of the gauge group, i.e. which transform under space-invariant gauge transformations \(g(x)=g\) where \(g\) is a center element. Note that transformations in this global subgroup will not transform the gauge field. Our first example of a charged state is a static electric charge coupled to the quantized electromagnetic field. The lowest energy eigenstate of this system is as follows [5]: \[|\Psi_{\rm chrg}\rangle=\overline{\psi}(\mathbf{x})\rho(\mathbf{x})|\Psi_{0}\rangle\, \tag{4}\] where \[\rho(\mathbf{x};A)\ =\ \exp\left[-i\frac{e}{4\pi}\int d^{3}z\,A_{i}(\mathbf{z}) \frac{\partial}{\partial z_{i}}\frac{1}{|\mathbf{x}-\mathbf{z}|}\right]. \tag{5}\] The operator \(\rho(\mathbf{x};A)\) is an example of what we have called a "pseudomatter" field, namely, a non-local functional of the gauge field which transforms like a matter field at point \(\mathbf{x}\), except under transformations in the global center subgroup (GCS) of the gauge group. It is in fact obvious that such an operator is invariant under the GCS, because the gauge field itself is invariant under such transformations. In the abelian case, consider a U(1) gauge transformation \(g(\mathbf{x})=e^{i\theta(\mathbf{x})}\) on a time slice, with \(\theta(x)=\theta_{0}+\tilde{\theta}(x)\), where \(\theta_{0}\) is the zero mode of \(\theta(x)\). The ground state \(\Psi_{0}\) is invariant under this transformation, and \(\overline{\psi}(x)\rightarrow\overline{\psi}e^{-i\theta(x)}\), but \[\rho(\mathbf{x};A)\to e^{i\tilde{\theta}(\mathbf{x})}\rho(\mathbf{x},A)\, \tag{6}\] As a result, \(|\Psi_{\rm chrg}\rangle\) is not entirely gauge invariant, but transforms as \(|\Psi_{\rm chrg}\rangle\to e^{-i\theta_{0}}|\Psi_{\rm chrg}\rangle\). In other words, it is covariant under transformations in the U(1) global center subgroup of the U(1) gauge group. If the theory contains a single-charged scalar field, then we may construct neutral states, invariant under the GCS, such as \[|\Psi_{\rm neutral}\rangle=\overline{\psi}(\mathbf{x})\phi(\mathbf{x})|\Psi_{0}\rangle. \tag{7}\] Providing the global center symmetry is unbroken, charged and neutral states are necessarily orthogonal in the confined and massless phases. But this is no longer true if the U(1) GCS is spontaneously broken, in which case we lose the sharp distinction between charged and neutral states. All of this extends to non-abelian gauge theories. In the case of SU(N) gauge Higgs theories, the GCS is the global \(Z_{N}\) center subgroup of the gauge group.1 Then we may construct charged states in lattice gauge theory of the form Footnote 1: In the special case of SU(2) gauge Higgs theory, there is a larger global SU(2) symmetry, known as “custodial symmetry,” which transforms the Higgs but not the gauge field. This symmetry includes the GCS symmetry. 
\[|\Psi_{\rm chrg}\rangle=\overline{\psi}^{a}(x)\xi^{a}(x;U)|\Psi_{0}\rangle\, \tag{8}\] where \(\xi^{a}(x;U)\) is a non-local functional of the link variables only which transforms like a field in the fundamental representation of the gauge group _except_ under transformations in the GCS, i.e. it is a pseudomatter field. Thus, under a gauge transformation \(g(x)=z\mathbb{1},\ z\in Z_{N}\), \[|\Psi_{\rm chrg}\rangle\to z^{*}|\Psi_{\rm chrg}\rangle. \tag{9}\] Examples of non-abelian pseudomatter fields include the eigenstates \(\zeta^{a}_{n}(\mathbf{x};U)\) of the covariant Laplacian operator, \(-D^{2}\zeta_{n}=\lambda_{n}\zeta_{n}\), where2 Footnote 2: Pseudomatter fields of this type can be combined to construct transformations to gauges which avoid the Gribov ambiguity, cf. [6]. \[(-D^{2})^{ab}_{xy}\ =\ \sum_{k=1}^{3}\left[2\delta^{ab}\delta_{xy}-U^{ab}_{k}(x)\delta_{y,x+\hat{k}}-U^{\dagger ab}_{k}(x-\hat{k})\delta_{y,x-\hat{k}}\right]. \tag{10}\] As in the abelian theory, one can also construct neutral states in which the charge of the fermion is entirely shielded by the Higgs field: \(|\Psi_{\rm neutral}\rangle=\overline{\psi}^{a}(\mathbf{x})\phi^{a}(\mathbf{x})|\Psi_{0}\rangle\), and these are invariant with respect to the GCS. Assuming that the GCS is not spontaneously broken, then it is obvious that \(\langle\Psi_{\rm neutral}|\Psi_{\rm chrg}\rangle=0\). The phase in which this is no longer true, and there is no longer a sharp distinction between charged and neutral states, is the Higgs phase. ### Order parameter It is not hard to construct an order parameter for the spontaneous breaking of a GCS. Consider \[e^{-H(U,\phi)} \equiv \Psi_{0}^{2}[U,\phi]\] \[Z[U] = \int D\phi(x)\ e^{-H(U,\phi)} \tag{11}\] where it is understood that all fields are defined on some time slice. Introduce \[\overline{\phi}(x;U)=\frac{1}{Z(U)}\int D\phi\ \phi(x)e^{-H(U,\phi)} \tag{12}\] Then the GCS is spontaneously broken in the background \(U\) field if \(\overline{\phi}(x;U)\neq 0\). Since the background \(U\) breaks translation symmetry we can expect that the spatial average of \(\overline{\phi}(x;U)\) will vanish even in the broken phase. So it is convenient to introduce \[\Phi[U]=\frac{1}{V}\sum_{x}|\overline{\phi}(x;U)| \tag{13}\] Now we take the expectation value \(\Omega=\langle\Phi[U]\rangle\). If \(\Omega\neq 0\), then the global center subgroup of the gauge group is spontaneously broken in the full theory. So \(\Omega\) is the desired order parameter.3 Footnote 3: There are strong similarities between \(\Omega\) and the Edwards-Anderson order parameter [2] for a spin glass. See [1]. The central result of [1] is that the spontaneous breaking of the global center subgroup of the gauge group, as detected by the order parameter \(\Omega\), is accompanied by a transition from \(\rm S_{c}\) to C confinement.4 Briefly, when \(\overline{\phi}(x;U)\) is nonzero, it may be used to define a gauge in which \(\overline{\phi}(x;U)\) points everywhere in a given color direction. Then one can construct a \(V(\mathbf{x},\mathbf{y};U)\) operator from the product of gauge transformations to this gauge, one transformation at point \(x\), and the conjugate at point \(y\). It can be shown that for this \(V\) operator, \(\Psi_{V}\) is no longer orthogonal to neutral states in which color is screened by the Higgs field, and \(E_{V}(R)\) tends to a finite constant as \(R\to\infty\). Then by definition the broken symmetry phase is not an \(\rm S_{c}\) confining phase. For details, cf. [1]. Footnote 4: Here we have ignored some technicalities. Formally there is no spontaneous symmetry breaking of a global symmetry in a finite volume; the rigorous procedure is to introduce a term, proportional to some parameter \(h\), which explicitly breaks the global symmetry. We then evaluate \(\Omega\) first taking the thermodynamic limit, and then the \(h\to 0\) limit. The rigorous statement is that the GCS is broken if \(\Omega\) is non-zero after the two limits, taken in this order. We refer the reader to [1] for a detailed discussion of the breaking term, but for the numerical treatment these formalities will not be necessary.
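For concreteness, here is a small numerical sketch of how pseudomatter fields of this type can be obtained in practice: assemble the covariant Laplacian (10) on a small spatial lattice and diagonalize it. Random SU(3) links stand in for a thermalized Monte Carlo configuration, and the lattice size is an illustrative choice; on a real configuration one would of course use the simulated links.

```python
# Build -D^2 of (10) on an L^3 lattice with random SU(3) links and take its
# lowest eigenvectors; these are the pseudomatter fields zeta_n.
import numpy as np

L, Nc = 4, 3                      # lattice extent and number of colors
V = L ** 3

def random_su3(rng):
    """A random SU(3) matrix via QR decomposition, determinant phased away."""
    z = rng.normal(size=(Nc, Nc)) + 1j * rng.normal(size=(Nc, Nc))
    q, _ = np.linalg.qr(z)
    return q / np.linalg.det(q) ** (1.0 / Nc)

rng = np.random.default_rng(1)
U = {(x, y, z, k): random_su3(rng) for x in range(L)
     for y in range(L) for z in range(L) for k in range(3)}

def site(x, y, z):
    """Lexicographic site index with periodic boundary conditions."""
    return ((x % L) * L + (y % L)) * L + (z % L)

steps = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
D2 = np.zeros((Nc * V, Nc * V), dtype=complex)     # the matrix -D^2
for x in range(L):
    for y in range(L):
        for z in range(L):
            i = site(x, y, z)
            D2[Nc*i:Nc*i+Nc, Nc*i:Nc*i+Nc] = 2 * len(steps) * np.eye(Nc)
            for k, (dx, dy, dz) in enumerate(steps):
                j = site(x + dx, y + dy, z + dz)
                D2[Nc*i:Nc*i+Nc, Nc*j:Nc*j+Nc] -= U[(x, y, z, k)]
                D2[Nc*j:Nc*j+Nc, Nc*i:Nc*i+Nc] -= U[(x, y, z, k)].conj().T

evals, evecs = np.linalg.eigh(D2)                  # -D^2 is Hermitian
zeta_1 = evecs[:, 0].reshape(V, Nc)                # lowest pseudomatter field
print(evals[:5])                                   # lowest eigenvalues lambda_n
```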
### GCS and \(\rm S_{c}\) confinement in SU(3) This finally brings us to the question we would like to address. In an SU(N) gauge Higgs theory, it is stated in the standard texts, e.g. [7], that the Higgs mechanism "breaks" the SU(N) gauge symmetry to SU(N-1). More precisely, what can be seen perturbatively in unitary gauge is that the Lagrangian in this gauge supplies a mass term for some of the gauge bosons, leaving the bosons corresponding to the generators of an SU(N-1) group massless, at least at the perturbative level. The implication is that, in \(D\leq 4\) dimensions, confinement doesn't disappear entirely; it should remain for the quark components which transform among themselves via the SU(N-1) group. But what is really meant by the word "confinement" in this situation? We have already defined confinement in a gauge theory with matter fields as \(\rm S_{c}\) confinement, as opposed to the weaker property of C confinement. Then the question is whether \(\rm S_{c}\) confinement can be seen in some way in the SU(N-1) subgroup. For example, in an SU(3) gauge theory with a unimodular constraint \(\phi^{\dagger}\phi=1\), in a unitary gauge which sets \[\phi(x)=\left[\begin{array}{c}0\\ 0\\ 1\end{array}\right]\, \tag{14}\] the fermion components orthogonal to the \(\phi\) color direction are simply the first two color components of the \(\psi^{a}(x)\) field, namely \(a=1\) and \(a=2\). These are the color components which, in unitary gauge, are said to be "confined" by the unbroken SU(2) gauge symmetry. We will refer to these components as "quarks" \(q\). The gauge invariant generalization is the fermion field multiplied by a color projection operator \(P^{ab}(x)\) \[q^{a}(x) = [\delta^{ab}-\phi^{a}(x)\phi^{\dagger b}(x)]\psi^{b}(x) \tag{15}\] \[= P^{ab}(x)\psi^{b}(x)\,\] and we consider the energy \(E_{V}^{q}(R)\) of physical states of the form \[|\Psi_{V}^{\prime}\rangle\equiv\overline{q}^{a}(\mathbf{x})V^{ab}(\mathbf{x},\mathbf{y};U)q^{b}(\mathbf{y})|\Psi_{0}\rangle. \tag{16}\] If in the Higgs phase \(E_{V}^{q}(R)\) diverges with \(R\) regardless of \(V\), then there is \(\rm S_{c}\) confinement of quarks transforming in the "unbroken" sector. On the contrary, if we can find some \(V\) operators such that \(E_{V}^{q}(R)\) converges to a finite constant at large \(R\), then it is hard to make sense of the claim that the quarks are confined, in any sense other than the property of C confinement which holds for all fermion components, and not just the quarks. ## III Numerical results The SU(3) lattice action is \[S=-\frac{\beta}{3}\sum_{\rm plaq}{\rm ReTr}[UUU^{\dagger}U^{\dagger}]-\gamma\sum_{x,\mu}{\rm Re}[\phi^{\dagger}(x)U_{\mu}(x)\phi(x+\hat{\mu})]\, \tag{17}\] where we impose, for convenience, the unimodular condition \(|\phi(x)|=1\).
At \(\gamma=\infty\) and unitary gauge this reduces to SU(2) gauge theory, since all but the SU(2) degrees of freedom are frozen. In this limit, the theory is certainly confining. But are the quark degrees of freedom also confined at finite \(\gamma\) and, if not, how is confinement regained as \(\gamma\to\infty\)? We begin with simulations at \(\beta=6.0\) and a variety of \(\gamma\) values on \(16^{4}\) lattice volumes, followed by simulations at \(\beta=3.6\), which is in the strong-coupling regime of pure SU(3) lattice gauge theory. A few details about the link updates, which greatly improve efficiency at large \(\gamma\), are provided in the Appendix. ### Symmetry breaking transition The first question is at which value of \(\gamma\) the symmetry breaking transition takes place, according to the order parameter \(\Omega\), and this is determined numerically as follows: The SU(3) gauge and scalar fields are updated in the usual way, but each data-taking sweep (separated by 100 update sweeps) actually consists of a set of \(n_{sym}\) sweeps in which the space-like links \(U_{i}(\mathbf{x},0)\) are held fixed on the \(t=0\) time slice. So data taking is, in a sense, a "Monte Carlo in a Monte Carlo." Let \(\phi(\mathbf{x},t=0,n)\) be the scalar field at site \(\mathbf{x}\) on the \(t=0\) time slice at the \(n\)-th sweep. Then we compute \(\overline{\phi}(\mathbf{x},U)\) from the average over \(n_{sym}\) sweeps \[\overline{\phi}(\mathbf{x},U)=\frac{1}{n_{sym}}\sum_{n=1}^{n_{sym}}\phi(\mathbf{x},0,n)\, \tag{18}\] and \(\Phi_{n_{sym}}(U)\) from (13). Here it is important to indicate the dependence on \(n_{sym}\). Then the procedure is repeated, updating links and the scalar field together, followed by another computation of \(\Phi_{n_{sym}}(U)\) from a simulation with spatial links at \(t=0\) held fixed, and so on. Averaging the \(\Phi_{n_{sym}}(U)\) obtained by these means results in an estimate for \(\langle\Phi_{n_{sym}}\rangle\). Since \(\Phi_{n_{sym}}(U)\) is a sum of moduli, it cannot be zero. Instead, on general statistical grounds, we expect \[\langle\Phi_{n_{sym}}\rangle=\Omega+\frac{\kappa}{\sqrt{n_{sym}}}\, \tag{19}\] where \(\kappa\) is some constant. By computing \(\langle\Phi_{n_{sym}}\rangle\) in independent runs at a range of \(n_{sym}\) values, and fitting the results to (19), we obtain an estimate for \(\Omega\) at any point in the \(\beta,\gamma\) plane of lattice couplings. In principle, in a finite volume, one should include an explicit GCS symmetry breaking term in the action, proportional to some parameter \(h\), and then compute \(\Omega\) by first taking the thermodynamic and then the \(h\to 0\) limit. For the numerical simulations this procedure is not really necessary, and we may set the symmetry breaking parameter \(h\) to zero, providing \(n_{sym}\) is not too large. Of course, at \(h=0\) and finite volume, \(\langle\Phi\rangle\) must vanish as \(n_{sym}\to\infty\), since a symmetry cannot break at finite volume. Nevertheless, for \(n_{sym}\) in the range we have used, (19) turns out to be a good fit to the data, and the extrapolation to \(n_{sym}=\infty\) should be reliable. Figure 1 shows typical data for \(\langle\Phi(n_{sym})\rangle\) vs. \(1/\sqrt{n_{sym}}\) at \(\gamma=1.0\) and \(\gamma=1.3\), with \(\langle\Phi\rangle\) determined from the intercept of the straight line fit with the y-axis.
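Schematically, extracting \(\Omega\) from (19) is just a straight-line fit in the variable \(1/\sqrt{n_{sym}}\). A minimal sketch follows; the data points are invented placeholders standing in for measured \(\langle\Phi_{n_{sym}}\rangle\) values.

```python
# Sketch of the extrapolation (19): fit <Phi_{n_sym}> = Omega + kappa/sqrt(n_sym)
# and read Omega off the intercept. The "measurements" below are placeholders.
import numpy as np

n_sym = np.array([100, 400, 900, 1600, 2500])
phi_avg = np.array([0.160, 0.110, 0.093, 0.085, 0.080])  # placeholder <Phi>

x = 1.0 / np.sqrt(n_sym)               # fit variable
A = np.vstack([np.ones_like(x), x]).T  # design matrix for Omega + kappa * x
(omega, kappa), *_ = np.linalg.lstsq(A, phi_avg, rcond=None)
print(f"Omega = {omega:.4f}, kappa = {kappa:.3f}")  # nonzero Omega: broken GCS
```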
The transition point, where \(\langle\Phi\rangle\) moves away from zero, appears to be for \(\gamma\) somewhere between \(\gamma=1.1\) and \(\gamma=1.15\), as shown in Fig. 2. It is worth noting that there does not appear to be any thermodynamic transition at \(\beta=6.0\) at any \(\gamma\), as we see from a plot of the link susceptibility \(\chi_{L}\) vs. \(\gamma\) in Fig. 3, where \[L=\sum_{x,\mu}{\rm Re}[\phi^{\dagger}(x)U_{\mu}(x)\phi(x+\hat{\mu})] \tag{20}\] \[\chi_{L}=4\,{\cal V}\left(\langle L^{2}\rangle-\langle L\rangle^{2}\right) \tag{21}\] where \({\cal V}\) is the lattice volume. The work of [8; 9; 10] tells us that the confinement and Higgs regions are not entirely isolated from one another by a line of thermodynamic transition, so while there could have been such a thermodynamic transition at \(\beta=6\) and some \(\gamma\), this is not required. Note that \(\Phi(U)\) is a non-local functional of the gauge field, and the expectation value of non-local functionals can have non-analytic behavior even when the free energy is analytic in that region. ### E\({}_{V}\) across the transition We first consider the S\({}_{\rm c}\) criterion with no projection to the quark fields. Let \[|\Psi_{n}(R)\rangle=Q_{n}(R)|\Psi_{0}\rangle\, \tag{22}\] where \[Q_{0}(R) = [\overline{\psi}^{a}(\mathbf{x})\phi^{a}(\mathbf{x})]\,\times\,[\phi^{\dagger b}(\mathbf{y})\psi^{b}(\mathbf{y})]\] \[Q_{n}(R) = [\overline{\psi}^{a}(\mathbf{x})\zeta_{n}^{a}(\mathbf{x})]\,\times\,[\zeta_{n}^{\dagger b}(\mathbf{y})\psi^{b}(\mathbf{y})]\quad(n>0) \tag{23}\] For \(n>0\), the \(Q_{n}\) are of the form \(\overline{\psi}V\psi\) with factorizable \(V(\mathbf{x},\mathbf{y};U)\) operators \[V^{ab}(\mathbf{x},\mathbf{y};U)=\zeta_{n}^{a}(\mathbf{x})\zeta_{n}^{\dagger b}(\mathbf{y})\, \tag{24}\] where \(\zeta_{n}\) is the \(n\)-th eigenstate of the covariant Laplacian operator (10). The fermions are static, and propagate only in the time direction. For continuous time, the energy expectation value above the vacuum energy \(\mathscr{E}_{0}\) is \[E_{n} = -\lim_{t\to 0}\frac{d}{dt}\log\left[\langle\Psi_{n}|e^{-(H-\mathscr{E}_{0})t}|\Psi_{n}\rangle\right] \tag{25}\] \[= -\lim_{t\to 0}\frac{d}{dt}\log\left[\langle Q_{n}^{\dagger}(R,t)Q_{n}(R,0)\rangle\right]\,\] where \(Q_{n}(R,t)\) is shown in (23), with operators on the right hand side defined on time slice \(t\). The corresponding expression in discretized time is \[E_{n}=-\log\left[\frac{\langle Q_{n}^{\dagger}(R,1)Q_{n}(R,0)\rangle}{\langle Q_{n}^{\dagger}(R,0)Q_{n}(R,0)\rangle}\right]. \tag{26}\] After integrating out the heavy quark fields, and dropping an \(R\)-independent constant, we have, for \(n>0\), \[\langle Q_{n}^{\dagger}(R,1)Q_{n}(R,0)\rangle=\left\langle\left[\zeta_{n}^{\dagger}(\mathbf{x},0)U_{0}(\mathbf{x},0)\zeta_{n}(\mathbf{x},1)\right][\zeta_{n}^{\dagger}(\mathbf{y},1)U_{0}^{\dagger}(\mathbf{y},0)\zeta_{n}(\mathbf{y},0)]\right\rangle\] \[\langle Q_{n}^{\dagger}(R,0)Q_{n}(R,0)\rangle=\left\langle\left[\zeta_{n}^{\dagger}(\mathbf{x},0)\zeta_{n}(\mathbf{x},0)\right]\left[\zeta_{n}^{\dagger}(\mathbf{y},0)\zeta_{n}(\mathbf{y},0)\right]\right\rangle\,. \tag{27}\] The expressions are the same for \(n=0\), with \(\zeta_{n}\) replaced by the Higgs field \(\phi\).
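In code, once the two Monte Carlo averages in (27) have been measured, the energy (26) is a one-line ratio; the numbers in the sketch below are placeholders.

```python
import numpy as np

def energy(G_t1: float, G_t0: float) -> float:
    """E_n = -log( <Q_n†(R,1) Q_n(R,0)> / <Q_n†(R,0) Q_n(R,0)> ), cf. (26)."""
    return -np.log(G_t1 / G_t0)

print(energy(G_t1=0.0183, G_t0=0.0914))   # ~1.61 in lattice units
```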
In Fig. 4 we see \(E_{0}(R)\) and \(E_{1}(R)\) just below (\(\gamma=1.1\)) and just above (\(\gamma=1.2\)) the symmetry breaking transition. \(E_{0}(R)\) is the energy of a pair of separated color neutral objects, and of course we do not expect any significant dependence on \(R\), as we see in the figure. In the symmetric phase, the claim is that any \(E_{V}(R)\) diverges with \(R\), and this is certainly true for \(E_{1}(R)\), as seen in Fig. 4(a). The data for \(E_{1}(R)\) is fit to the form \[E^{fit}(R)=a+bR-\frac{\pi}{12R}\, \tag{28}\] and the coefficient of the linearly rising term is \(b\approx 0.039\). This can be compared to the asymptotic string tension of the pure (\(\gamma=0\)) SU(3) gauge theory at \(\beta=6.0\), which is \(\sigma=0.048\) [11]. It should be understood that at any finite \(\gamma\) the asymptotic string tension, as extracted from, e.g., Wilson loops or Polyakov line correlators, is zero, due to string breaking. This was the motivation to construct the S\({}_{\rm c}\) criterion. In the broken phase there is no prediction; \(E_{V}(R)\) might diverge, or it might converge to a constant, depending on \(V\). The claim is only that there must exist _some_ \(V\) such that \(E_{V}(R)\) becomes flat at large \(R\). In fact \(E_{1}(R)\) has this convergence property only a little past the transition, as seen in Fig. 4(b). If we were ignorant of the order parameter \(\langle\Phi\rangle\), the behavior of \(E_{1}(R)\) at \(\gamma\geq 1.2\) would be sufficient to establish that the system is in the Higgs phase in that region. It is instructive to look at \(E_{n}(R)\) for other \(n\), below and above the symmetry breaking transition, and the results are seen in Fig. 5. As predicted, all \(E_{n>0}(R)\) rise with \(R\) in the symmetric region at \(\gamma=1.1\) (Fig. 5(a)). In the Higgs region at \(\gamma=1.2\), \(E_{1}(R)\) flattens out, while the other \(E_{n>0}\) continue to rise linearly (Fig. 5(b)). At still higher \(\gamma\), we find that more of the \(E_{n}(R)\) become flat (Fig. 5(c)). Figure 3: Gauge invariant link susceptibilities \(\chi\) vs. \(\gamma\) at \(\beta=6\) and several lattice volumes.
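As an aside, the string-tension extraction via (28) is straightforward to reproduce; in the sketch below the "measured" energies are synthetic numbers generated only to illustrate the mechanics of the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def e_fit(R, a, b):
    """Ansatz (28): constant + linear rise + a Luscher-type 1/R correction."""
    return a + b * R - np.pi / (12.0 * R)

R = np.arange(2.0, 13.0)
E_meas = e_fit(R, 0.55, 0.039) + 0.002 * np.random.default_rng(0).normal(size=R.size)

(a, b), _ = curve_fit(e_fit, R, E_meas, p0=(0.5, 0.05))
print(f"string tension b = {b:.4f}")  # compare sigma = 0.048 for pure SU(3), beta=6
```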
### Quark \(\mathbf{E}_{V}\) across the transition The intent of using \(\overline{\psi}V\psi\) operators to create physical states is to exclude "broken string" states, in which the color of the fermion is neutralized by the scalar. In the confinement phase the corresponding \(E_{V}(R)\) diverges with \(R\), and a few examples were presented in the previous section. The question of interest is whether the Higgs phase could still be a confinement phase for the quark operators \(q\), whose color orientation is, by definition, orthogonal to that of the scalar field. Therefore we consider just replacing the \(\overline{\psi}\psi\) operators by \(\overline{q}q\) operators, and computing the energy expectation values \(E_{n}^{q}(R)\) of states \[|\Psi_{n}^{q}(R)\rangle = \overline{q}(\mathbf{x})V_{n}(\mathbf{x},\mathbf{y})q(\mathbf{y})|\Psi_{0}\rangle \tag{29}\] \[= Q_{n}^{q}(R)|\Psi_{0}\rangle\;,\] where this time \[Q_{n}^{q}(R)=[\overline{q}^{a}(\mathbf{x})\zeta_{n}^{a}(\mathbf{x})]\;\times\;[\zeta_{n}^{\dagger b}(\mathbf{y})q^{b}(\mathbf{y})] \tag{30}\] and \(Q_{0}^{q}=0\) by definition. The superscript \(q\) indicates that there is a color projection to the quark operators. Then we have \[E_{n}^{q}(R)=-\log\left[\frac{\langle Q^{q\dagger}_{n}(R,1)Q_{n}^{q}(R,0)\rangle}{\langle Q^{q\dagger}_{n}(R,0)Q_{n}^{q}(R,0)\rangle}\right] \tag{31}\] as before, but this time, after integrating out the heavy quark fields, \[\langle Q^{q\dagger}_{n}(R,1)Q_{n}^{q}(R,0)\rangle=\left\langle\left[\zeta_{n}^{\dagger a}(\mathbf{x},0)P^{ab}(\mathbf{x},0)U_{0}^{bc}(\mathbf{x},0)P^{cd}(\mathbf{x},1)\zeta_{n}^{d}(\mathbf{x},1)\right]\right.\] \[\left.\times\left[\zeta_{n}^{\dagger e}(\mathbf{y},1)P^{ef}(\mathbf{y},1)U_{0}^{\dagger fg}(\mathbf{y},0)P^{gh}(\mathbf{y},0)\zeta_{n}^{h}(\mathbf{y},0)\right]\right\rangle\;, \tag{32}\] and \[\langle Q^{q\dagger}_{n}(R,0)Q^{q}_{n}(R,0)\rangle=\left\langle\,\left[\zeta^{\dagger a}_{n}(\mathbf{x},0)P^{ab}(\mathbf{x},0)\zeta^{b}_{n}(\mathbf{x},0)\right]\times\left[\zeta^{\dagger e}_{n}(\mathbf{y},0)P^{ef}(\mathbf{y},0)\zeta^{f}_{n}(\mathbf{y},0)\right]\,\right\rangle\,, \tag{33}\] where \(P^{ab}\) is the color projection operator defined in (15). Figure 4: Energy expectation values \(E_{0}(R)\) and \(E_{1}(R)\) for unprojected fermion states just below (subfigure (a)) and just above (subfigure (b)) the confinement to Higgs transition. \(E_{0}\) corresponds to a state where the fermionic color charges are neutralized by the Higgs field, while all \(E_{n}(R)\) with \(n>0\) are derived from operators \(V(\mathbf{x},\mathbf{y};U)\) built from eigenstates of the lattice Laplacian operator. The fact that \(E_{1}(R)\) diverges with \(R\) is required in the confinement phase, while the fact that \(E_{1}(R)\) converges to a constant, while not required, implies that the system is in the Higgs phase. The term "unprojected" means that the fermions are not projected to the quark states, with color orthogonal to the Higgs field. Figure 5: Same as Fig. 4 for \(E_{n}\) with \(n=1-5\) at (a) \(\gamma=1.1\), in the confined phase; and (b) \(\gamma=1.2\), (c) \(\gamma=1.8\) both in the Higgs phase. Note that while only \(E_{1}\) goes "flat" at \(\gamma=1.2\), also \(E_{2}\) and \(E_{3}\) are flat at the higher \(\gamma=1.8\) coupling, where \(E_{0}\) is also displayed for comparison. Figure 6: Same as Fig. 4, this time for fermions projected to quark components orthogonal to the color direction of the Higgs field. The important point here is that in contrast to subfigure (a) in the confined phase, \(E_{1}(R)\) goes flat in the Higgs phase (subfigure (b)), meaning that the quarks which transform into themselves under an SU(2) subgroup do not have the S\({}_{\rm c}\) confinement property. Figure 7: Same as Fig. 5 but for fermions projected to quarks as described in the text.
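The projector \(P^{ab}(x)\) in (15) is simple enough to verify directly; a minimal numerical check, with a random placeholder standing in for the Higgs field \(\phi(x)\):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi /= np.linalg.norm(phi)                 # unimodular Higgs field, |phi(x)| = 1

P = np.eye(3) - np.outer(phi, phi.conj())  # projector onto colors orthogonal to phi

assert np.allclose(P @ P, P)               # idempotent, as a projector must be
assert np.allclose(P @ phi, 0.0)           # annihilates the Higgs color direction

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
q = P @ psi                                # the "quark" components of (15)
print(abs(np.vdot(phi, q)))                # ~0: q is orthogonal to phi(x)
```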
Then the question is whether the energies \(E^{q}_{n}(R)\) of the \(\overline{q}q\) states satisfy the S\({}_{\rm c}\) criterion in the Higgs phase. If so, this would supply a gauge-invariant meaning to the claim that confinement somehow exists, in the Higgs phase, in an "unbroken" SU(2) subgroup. At \(\beta=6.0\) there is no evidence for this claim. Figure 6 shows the results of a calculation of \(E_{0}(R)\) (as defined above) along with \(E^{q}_{1}(R)\) using quark sources, both in the confinement (\(\gamma=1.1\)) and Higgs (\(\gamma=1.2\)) phases. As in Fig. 4, \(E^{q}_{1}(R)\) rises linearly in the confined phase, with a string tension (extracted from a fit to (28)) of \(\sigma\approx 0.04\). But in the Higgs phase \(E^{q}_{1}(R)\) is flat, with no discernible linear component, as found in Fig. 4 for the unprojected fermion field. This implies the absence of S\({}_{\rm c}\) confinement for quarks in the Higgs phase. For completeness we show, in Fig. 7, the results for \(E^{q}_{n}(R)\) in a range of \(n\geq 1\), at \(\gamma=1.1,1.2,1.8\). However, the interpretation of these results is not entirely clear, due to the fact that a pure SU(2) gauge theory on a \(16^{4}\) lattice, at \(\beta=6.0\), would be deep in the deconfined phase of the theory. Suppose we go to unitary gauge and assume that, in the Higgs phase, the gauge bosons of the SU(2) subgroup are approximately decoupled from the other gauge bosons of the SU(3) theory. Denote by \(\tilde{U}_{\mu}\) the \(2\times 2\) sub-matrix of components \(U^{ab}_{\mu}\) with indices \(a,b\) in the range \(1,2\). Then the part of the Lagrangian involving only the degrees of freedom of the SU(2) subgroup is \[S_{eff}\approx-\frac{\beta}{3}\sum_{plaq}\text{Re}\text{Tr}[UUU^{\dagger}U^{\dagger}]=-\frac{\beta_{eff}}{2}\sum_{plaq}\text{Tr}[\tilde{U}\tilde{U}\tilde{U}^{\dagger}\tilde{U}^{\dagger}]\, \tag{34}\] with an effective SU(2) coupling \[\beta_{eff}=\frac{2}{3}\beta \tag{35}\] which, at \(\beta=6\), corresponds to \(\beta_{eff}=4\). But at this effective SU(2) coupling, on a \(16^{4}\) lattice, the pure SU(2) theory is in the deconfined phase. Hence the lack of S\({}_{\rm c}\) confinement might be attributable to finite size effects. To eliminate this source of ambiguity, we repeat our calculation for \(\beta=3.6\), which is in the strong coupling regime of pure SU(3) gauge theory, but which corresponds to \(\beta_{eff}=2.4\). On a \(16^{4}\) volume, at this effective coupling, pure SU(2) gauge theory is in the confined phase, and the test of S\({}_{\rm c}\) confinement is less susceptible to finite size effects. ### Numerical results at \(\beta=3.6\) At \(\gamma=\infty\) we expect that the SU(3) gauge Higgs theory reduces exactly to a pure SU(2) gauge theory, with an effective SU(2) Wilson coupling \(\beta_{eff}\) of (35), and this theory must have the property of S\({}_{\rm c}\) confinement for the quark degrees of freedom. The question is whether this property exists at finite \(\gamma\) in the Higgs phase and, if not, how the S\({}_{\rm c}\) property is approached in the continuum limit. For this purpose it is sufficient to compute the \(E_{n}(R)\) at \(\beta=3.6,\ \beta_{eff}=2.4\) and at \(\gamma\) values which are deep within the Higgs regime. The data shown in Fig. 8 for \(E_{1}(R)\) at various \(\gamma\), without projection to the quark states, shows that there is no evidence of S\({}_{\rm c}\) confinement, even at very large \(\gamma\) values. In fact, these energies are not only \(R\) independent, but drop to zero as \(\gamma\) increases. This is unsurprising, since there is no projection here to the "unbroken" SU(2) subgroup. The quark projection data is more interesting. From the data it is fairly clear that, as at \(\beta=6.0\), there is also no S\({}_{\rm c}\) confinement of quarks in the Higgs phase at \(\beta=3.6\), as we see from plots of \(E_{1}(R)\) in Fig. 9. Figure 8: \(E_{1}(R)\) vs. \(R\) for unprojected fermions in the Higgs phase, at \(\beta=3.6\) and various \(\gamma\) ranging up to \(\gamma=200\). Note that \(E_{1}(R)\) converges to zero with increasing \(\gamma\). Figure 9: \(E_{1}(R)\) vs. \(R\) for quarks in the Higgs phase, at \(\beta=3.6\) and various \(\gamma\) ranging up to \(\gamma=200\). At each \(\gamma\) there is no S\({}_{\rm c}\) confinement. However, the behavior with increasing \(R\) is opposite to the unprojected case, in that the value at which \(E_{1}(R)\) flattens out tends to increase with \(\gamma\).
At any of the \(\gamma\) values shown in the figure, up to a very large value of \(\gamma=200\), the S\({}_{\rm c}\) quark confinement criterion is violated for \(E_{V}^{q}(R)\) by the \(V\) operator built from the lowest Laplacian eigenmode \[V(\mathbf{x},\mathbf{y},U)=\zeta_{1}(\mathbf{x})\zeta_{1}^{\dagger}(\mathbf{y})\;. \tag{36}\] But this raises the question of how the \(\gamma\to\infty\) limit recovers S\({}_{\rm c}\) confinement. The first observation is that while the energy \(E_{1}^{q}(R)\) is seen to plateau at fairly small separations \(R\), the value of \(E_{1}^{q}(R)\) at the plateau appears to increase with \(\gamma\). It is reasonable to conclude that the plateau rises to infinity as \(\gamma\to\infty\). But this is not sufficient; one would certainly expect there to be \(V\) operators, e.g. flux tube states of some kind, such that \(E_{V}^{q}(R)\) rises linearly, at some slope compatible with the SU(2) string tension of the effective SU(2) theory, as \(\gamma\to\infty\). And we indeed see evidence of this happening in Fig. 10, where some of the \(E_{n}^{q}(R)\) at larger \(n\) values appear to have exactly this behavior. Of course these are not minimal energy flux tube states, so the string tensions of the \(E_{n}(R)\) must be greater than the string tension of the minimal energy flux tube state, which is \(\sigma=0.071\) in the \(\gamma=\infty\) limit at \(\beta_{eff}=2.4\). It is interesting to compare Fig. 10 with the same results at \(\gamma=\infty\). In unitary gauge, this simply amounts to initializing link variables to a \(3\times 3\) unit matrix, and then only updating the upper left \(2\times 2\) matrix (the first step of link updates described in the appendix). The results at \(\beta=3.6\Longrightarrow\beta_{eff}=2.4\) are shown in Fig. 11. The most striking difference is in the behavior of \(E_{1}(R)\) in the two figures. The distinction between the \(\gamma=100\) and \(\gamma=\infty\) data is not so dramatic for \(n=19,33\), but still noticeable. The comparison suggests that although the \(\gamma\to\infty\) limit recovers confinement for the quarks, as expected, this limit is still not quite the same as \(\gamma=\infty\). ## IV Conclusions "Confinement" is a word that must be carefully defined in an SU(N) gauge theory with matter in the fundamental representation of the gauge group. Here and elsewhere [1; 3] we have argued that it is important to distinguish between color (C) confinement, which means that there is a color neutral particle spectrum, and the much stronger condition of separation-of-charge (S\({}_{\rm c}\)) confinement. The latter condition means that the energy of physical states with separated color charges unscreened by matter fields increases without limit as the color charge separation increases. The transition from the confining to the Higgs phase of an SU(N) gauge theory corresponds to a transition from S\({}_{\rm c}\) to C confinement. The question we have addressed in this article is whether, in an SU(N) gauge Higgs theory, the separation of charge property persists in an SU(N-1) subgroup or, more precisely, for "quark" sources with color orthogonal to that of the scalar field, transforming among themselves, in a unitary gauge, via an SU(N-1) subgroup.
The answer which we find numerically for SU(3) gauge Higgs theory is that S\({}_{\rm c}\) confinement is lost in the Higgs phase also for the quarks transforming among themselves in the SU(2) subgroup. Since it is certain that S\({}_{\rm c}\) confinement must reappear in a certain limit (\(\gamma\to\infty\) for the action in (17)), it is of interest to see how S\({}_{\rm c}\) confinement is regained in that limit. To investigate this, we have constructed quark-antiquark states with unscreened color, made gauge invariant by the use of eigenmodes of the covariant Laplacian operator. In the Higgs phase there are always quark-antiquark states which violate the S\({}_{\rm c}\) condition, i.e. the energy of such states rises to a value which is constant with charge separation. This constant value, however, rises with \(\gamma\). There are other gauge invariant states whose energy increases with charge separation, but which, for some range of quark separation, is less than that constant value. Thus the evidence suggests that, while there is strictly speaking no S\({}_{\rm c}\) confinement at all in the Higgs phase, including for quark-antiquark states, the energy of states which violate the S\({}_{\rm c}\) condition goes to infinity as \(\gamma\to\infty\), leaving only states which satisfy the S\({}_{\rm c}\) condition. Figure 10: \(E_{n}^{q}\) at selected values of \(n\) at \(\beta=3.6\) and a large \(\gamma=100\). We observe that while \(E_{1}(R)\) rises abruptly and flattens out at \(E_{1}=8\), \(E_{19}(R)\) rises linearly in the range shown, with string tension \(\sigma_{19}=1.19(4)\), while \(E_{33}(R)\) also rises linearly in this range, with string tension \(\sigma_{33}=0.172(1)\). For comparison, in the \(\gamma\to\infty\) limit, the coupling of the effective SU(2) pure gauge theory is \(\beta_{SU(2)}=2.4\), and the corresponding string tension of the minimal energy flux tube state is \(\sigma=0.071\)[12] in lattice units. ###### Acknowledgements. This research is supported by the U.S. Department of Energy under Grant No. DE-SC0013682.
2308.03709
Prototype Learning for Out-of-Distribution Polyp Segmentation
Existing polyp segmentation models from colonoscopy images often fail to provide reliable segmentation results on datasets from different centers, limiting their applicability. Our objective in this study is to create a robust and well-generalized segmentation model named PrototypeLab that can assist in polyp segmentation. To achieve this, we incorporate various lighting modes such as White light imaging (WLI), Blue light imaging (BLI), Linked color imaging (LCI), and Flexible spectral imaging color enhancement (FICE) into our new segmentation model, that learns to create prototypes for each class of object present in the images. These prototypes represent the characteristic features of the objects, such as their shape, texture, color. Our model is designed to perform effectively on out-of-distribution (OOD) datasets from multiple centers. We first generate a coarse mask that is used to learn prototypes for the main object class, which are then employed to generate the final segmentation mask. By using prototypes to represent the main class, our approach handles the variability present in the medical images and generalize well to new data since prototype capture the underlying distribution of the data. PrototypeLab offers a promising solution with a dice coefficient of $\geq$ 90\% and mIoU $\geq$ 85\% with a near real-time processing speed for polyp segmentation. It achieved superior performance on OOD datasets compared to 16 state-of-the-art image segmentation architectures, potentially improving clinical outcomes. Codes are available at https://github.com/xxxxx/PrototypeLab.
Nikhil Kumar Tomar, Debesh Jha, Ulas Bagci
2023-08-07T16:30:24Z
http://arxiv.org/abs/2308.03709v1
# Prototype Learning for Out-of-Distribution Polyp Segmentation ###### Abstract Existing polyp segmentation models from colonoscopy images often fail to provide reliable segmentation results on datasets from different centers, limiting their applicability. Our objective in this study is to create a robust and well-generalized segmentation model named _PrototypeLab_ that can assist in polyp segmentation. To achieve this, we incorporate various lighting modes such as White light imaging (WLI), Blue light imaging (BLI), Linked color imaging (LCI), and Flexible spectral imaging color enhancement (FICE) into our new segmentation model that learns to create prototypes for each class of object present in the images. These prototypes represent the characteristic features of the objects, such as their shape, texture, and color. Our model is designed to perform effectively on out-of-distribution (OOD) datasets from multiple centers. We first generate a coarse mask that is used to learn prototypes for the main object class, which are then employed to generate the final segmentation mask. By using prototypes to represent the main class, our approach handles the variability present in the medical images and generalizes well to new data, since prototypes capture the underlying distribution of the data. PrototypeLab offers a promising solution with a dice coefficient of \(\geq 90\%\) and mIoU \(\geq 85\%\) with a near real-time processing speed for polyp segmentation. It achieved superior performance on OOD datasets compared to 16 state-of-the-art image segmentation architectures, potentially improving clinical outcomes. Codes are available at [https://github.com/xxxxx/PrototypeLab](https://github.com/xxxxx/PrototypeLab). Keywords: Polyp segmentation, Prototype learning, Out-of-Distribution, Robust segmentation, Prototype segmentation ## 1 Introduction The Cancer Statistics 2023 report [15] estimates that colorectal cancer (CRC) will be the third leading cause of cancer-related incidence and death in the United States. The United States Preventive Services Task Force (USPSTF) recommends CRC screening at 45 years of age [11]. Thus, regular screening is essential as CRC does not show symptoms (bleeding in the stool, constipation or diarrhoea) at an early stage. Studies have shown the lesion miss rate to be 26% [3]. This emphasizes the need for an accurate and reliable screening computer aided diagnosis (CADx) method for reducing the polyp miss-rate and contributing to the reduction of CRC related death. Encoder-decoder based networks are widely used for automatic polyp segmentation [4; 5; 6; 12; 14; 16; 17; 21; 22]. Dong et al. [4] proposed Polyp-PVT, which is based on a pyramid vision transformer, for automatic polyp segmentation. Wang et al. [19] proposed SSFormer, which uses a pyramid Transformer encoder and progressive locality decoder to improve the performance of polyp segmentation. Encoder-decoder based methods have achieved improved accuracy on regular or large-sized polyps. However, most methods are developed on the white light imaging modality and are tested on an in-distribution dataset (tested on the same center). They readily fail on Out-Of-Distribution (OOD) data, such as small, diminutive, flat, sessile, or partially visible polyps, and in the presence of camouflage and noisy images. Moreover, the complex morphological structures, indistinct boundaries between polyps and mucosa, and varying sizes make polyp segmentation more challenging.
Therefore, there is an urgent need to develop a more generalizable and robust polyp segmentation method. We hypothesize that a prototype based segmentation can be a strong approach for OOD generalization because it relies on the creation of prototypes that capture the essential features of each class of objects in the images. These prototypes can be used to identify and segment objects even in images that differ significantly from those used during training. An algorithm performing well on OOD datasets could prevent models from making inaccurate diagnoses or treatment plans. Failure to do so can result in false positives or false negatives leading to misdiagnosis, which might have adverse consequences for the patient. Therefore, the model must perform well on OOD datasets to be useful in clinical settings. In this work, we evaluate the proposed model on three datasets collected in different countries captured under different conditions and image enhancement techniques, including OOD datasets to study the generalization ability of the model, which is critical for the development of a CADx system. **Summary of our contributions: (1)** We propose to develop a _prototype learning_ algorithm for medical image segmentation. To generate multiple prototypes, we design a Prototype Generation Module (PGM) by capturing the underlying data distribution. These prototypes handle variability and exhibit strong generalization capabilities when applied to novel data, **(2)** We present a _Coarse Mask Generation Module (CMGM)_ to improve the accuracy, efficiency, and generalization capabilities of the polyp segmentation network, **(3)** We encourage our network to operate in a multi-scale fashion through an _Encoder Feature Fusion Module (EFFM)_, **(4)** We devise a _Prototype Mask Generation Module (PMGM)_ to generate a final prototype mask using the prototypes generated by the PGM and the output feature map of the EFFM, and **(5)** We conduct a thorough analysis and evaluation on multi-center datasets and perform an OOD generalization test on different datasets. Our experiments and results show that PrototypeLab outperforms 16 SOTA medical image segmentation methods on three publicly available polyp datasets in terms of accuracy and efficiency. ## 2 Method The proposed PrototypeLab is a new image segmentation architecture with integrated _prototype learning_, generating high-quality segmentation masks. PrototypeLab consists of five key components: Pyramid Vision Transformer (PVT) encoder, Coarse Mask Generation Module (CMGM), Prototype Generation Module (PGM), Decoder, and Prototype Mask Generation Module (PMGM) (Figure 1). The CMGM, PGM, and PMGM are integral components that effectively contribute to the prototype learning process and collectively enhance the overall performance of the proposed architecture. ### Pyramid Vision Transformer (PVT) Encoder In PrototypeLab, the pyramid vision transformer (PVT) is used as a pre-trained encoder, extracting multi-scale features from an input image. An input image \(I\in\mathbb{R}^{H\times W\times 3}\) is fed to the PVT-encoder to obtain four distinct feature maps \(X_{i}\in\mathbb{R}^{\frac{H}{2^{i+1}}\times\frac{W}{2^{i+1}}\times C_{i}}\), where \(i\in\{1,2,3,4\}\) and \(C_{i}\in\{64,128,320,512\}\). These feature maps are then used in the other modules of PrototypeLab.
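As a quick arithmetic check of these feature-map sizes, assuming the \(256\times 256\) input resolution used in the experiments reported below:

```python
# Feature-map shapes X_i for a 256x256 input, per the formula above.
H = W = 256
for i, C in enumerate([64, 128, 320, 512], start=1):
    print(f"X{i}: {H // 2**(i+1)} x {W // 2**(i+1)} x {C}")
# X1: 64 x 64 x 64, X2: 32 x 32 x 128, X3: 16 x 16 x 320, X4: 8 x 8 x 512
```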
Figure 1: The overall architecture of the proposed PrototypeLab. The input image is fed to the PVT-encoder to generate a coarse mask, which is combined with encoder features in the prototype generation module to generate various prototypes. Subsequently, the decoder is employed, which produces a feature map that is used to create the prototype masks. Finally, the 'output mask' is generated.

### Coarse Mask Generation Module (CMGM)

The CMGM generates a coarse mask using the features learned by the PVT encoder. It begins with an upsampling of \(X_{4}\) to double its spatial dimensions, followed by concatenation with \(X_{3}\) and passing through \(3\times 3\) Conv2D-BN-ReLU layers. The module incorporates a novel _Large Kernel Dilated Convolution (LKDC)_ block (Supp. Figure 1) consisting of two parallel convolution layers with large kernel sizes of \(7\times 7\) and \(13\times 13\), factorized into \(\{7\times 1\}\{1\times 7\}\) and \(\{13\times 1\}\{1\times 13\}\), respectively. The outputs of these layers are concatenated and fed into parallel dilated convolution layers with dilation rates of \(r=1,2,4\). The use of large kernels along with dilated convolutions enhances the receptive field for capturing contextual information and handling objects of varying scales. The output is processed through a \(1\times 1\) Conv2D-BN-ReLU layer and upsampled by a factor of two. A final \(1\times 1\) convolution layer with sigmoid activation generates the coarse mask. The LKDC block enhances the accuracy, efficiency, and generalization capability of the segmentation network.

### Prototype Generation Module (PGM)

The PGM utilizes PVT encoder features and the coarse mask to generate multiple prototypes. It applies \(3\times 3\) Conv2D-BN-ReLU layers to the PVT-encoder features \(X_{i}\). The feature maps are then element-wise multiplied with the coarse mask, followed by mask average pooling. This process yields four prototypes \(P=\{p_{1},p_{2},p_{3},p_{4}\}\) from \(X_{i}=\{X_{1},X_{2},X_{3},X_{4}\}\). Learning multiple prototypes allows for a richer representation of diverse visual patterns. By utilizing different features from the PVT encoder, we can capture a wider range of information and enhance the model's ability to recognize various objects and scenarios. This approach improves the robustness of the system by increasing its adaptability to different input conditions, resulting in more accurate and reliable predictions. The number of prototypes can be adjusted based on the problem complexity.

For multi-scale feature fusion of PVT encoder features, we introduce the Encoder Feature Fusion Module (EFFM) (Supp. Figure 1). It involves upsampling \(X_{4}\) and concatenating it with \(X_{3}\), followed by \(3\times 3\) Conv2D-BN-ReLU layers. This process is repeated for upsampling and concatenation with \(X_{2}\), and again with \(X_{1}\). Four parallel convolution layers are used, including a \(1\times 1\) convolution layer and three factorized convolution layers with dilated convolutions. The outputs of these layers are concatenated and passed through a final \(1\times 1\) convolution layer to generate the final prototype \(p_{5}\).

### Decoder

In the decoder, the upsampled feature map from the CMGM is first concatenated with \(X_{2}\) and passed through a residual block. Then, the feature map is upsampled by a factor of two and concatenated with \(X_{1}\), followed by another residual block. Subsequently, we upsample the feature map to increase the spatial dimensions by a factor of four, and then concatenate it with the original input image. This is followed by another residual block. Finally, the output of this block is used to generate the prototype masks.
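As a concrete reading of the LKDC description in Section 2.2, the following PyTorch sketch implements the factorized large-kernel branches and the parallel dilated convolutions. The channel widths are hypothetical placeholders; the paper does not pin them down at this level of detail.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """Conv2D -> BatchNorm -> ReLU, the basic unit used throughout the paper."""
    def __init__(self, in_ch, out_ch, kernel_size, padding, dilation=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding,
                      dilation=dilation, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class LKDC(nn.Module):
    """Large Kernel Dilated Convolution block (our sketch of Sec. 2.2)."""
    def __init__(self, in_ch, mid_ch):
        super().__init__()
        # 7x7 factorized into (7x1) then (1x7)
        self.branch7 = nn.Sequential(
            ConvBNReLU(in_ch, mid_ch, (7, 1), padding=(3, 0)),
            ConvBNReLU(mid_ch, mid_ch, (1, 7), padding=(0, 3)),
        )
        # 13x13 factorized into (13x1) then (1x13)
        self.branch13 = nn.Sequential(
            ConvBNReLU(in_ch, mid_ch, (13, 1), padding=(6, 0)),
            ConvBNReLU(mid_ch, mid_ch, (1, 13), padding=(0, 6)),
        )
        # parallel dilated 3x3 convolutions with r = 1, 2, 4
        self.dilated = nn.ModuleList([
            ConvBNReLU(2 * mid_ch, mid_ch, 3, padding=r, dilation=r)
            for r in (1, 2, 4)
        ])
        # final 1x1 Conv2D-BN-ReLU
        self.fuse = ConvBNReLU(3 * mid_ch, mid_ch, 1, padding=0)

    def forward(self, x):
        y = torch.cat([self.branch7(x), self.branch13(x)], dim=1)
        y = torch.cat([d(y) for d in self.dilated], dim=1)
        return self.fuse(y)

# shape check with placeholder channels
x = torch.randn(1, 64, 32, 32)
print(LKDC(64, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```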
### Prototype Mask Generation Module and Final Mask

The PMGM utilizes cosine similarity to generate five sets of masks by comparing the decoder's output feature map with the multiple prototypes from the PGM. To create the final segmentation mask, we concatenate the decoder's output feature map with the prototype masks. This concatenated feature map undergoes a residual block, followed by a \(1\times 1\) convolution and a sigmoid activation function. The proposed framework effectively incorporates the CMGM, PGM, decoder, and PMGM to generate accurate and efficient polyp segmentation masks.
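A minimal sketch of the two prototype operations, mask average pooling in the PGM and cosine-similarity mask generation in the PMGM, is given below. Channel widths are placeholders, and we assume (as the PGM's \(3\times 3\) Conv2D-BN-ReLU layers suggest, though the paper does not state it explicitly) that encoder features are projected to the decoder's channel width before pooling, so that cosine similarity is well-defined.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, coarse_mask, eps=1e-6):
    """PGM: multiply features element-wise with the coarse mask, then pool
    over the masked region to obtain one prototype vector per image."""
    mask = F.interpolate(coarse_mask, size=feat.shape[-2:],
                         mode="bilinear", align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + eps)  # (B, C)

def prototype_masks(decoder_feat, prototypes):
    """PMGM: one similarity map per prototype, via cosine similarity between
    the prototype vector and every spatial location of the decoder feature."""
    maps = [F.cosine_similarity(decoder_feat, p[:, :, None, None], dim=1)
            for p in prototypes]          # each map is (B, H, W)
    return torch.stack(maps, dim=1)       # (B, num_prototypes, H, W)

# toy shapes (placeholders, not the paper's exact configuration)
feat = torch.randn(2, 64, 32, 32)          # projected encoder feature X_i
coarse = torch.rand(2, 1, 64, 64)          # coarse mask from the CMGM
decoder_feat = torch.randn(2, 64, 64, 64)  # decoder output feature map
protos = [masked_average_pooling(feat, coarse) for _ in range(5)]  # p_1..p_5
print(prototype_masks(decoder_feat, protos).shape)  # torch.Size([2, 5, 64, 64])
```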
## 3 Experiments and Results

**Datasets:** We use the BKAI-IGH [9], Kvasir-SEG [7], and PolypGen [1] datasets for experimentation. BKAI-IGH consists of 1000 images with the corresponding ground truth. The dataset was collected from two medical centers in Vietnam and contains images captured using WLI, FICE, BLI and LCI. To train all the algorithms, we used 800 images for training, 100 for validation, and 100 for testing. Similarly, Kvasir-SEG consists of 1000 images collected from four hospitals in Norway. We used 880 images in the training set and the remaining 120 images in the validation and test sets. Moreover, we use the PolypGen [1] dataset, collected from six medical centers in _Norway, United Kingdom, France, Italy and Egypt_, as the OOD dataset. PolypGen is collected from multiple hospitals that cover different clinical patient populations and modalities, which makes it diverse and useful for generalizability tests.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline
**Method** & **Publication** & **mDSC** & **mIoU** & **Recall** & **Precision** & **F2** & **HD** \\ \hline
U-Net [13] & MICCAI 2015 & 0.8286 & 0.7599 & 0.8295 & 0.8999 & 0.8264 & 3.17 \\
DeepLabV3+ [2] & ECCV 2018 & 0.8938 & 0.8314 & 0.8870 & 0.9333 & 0.8882 & 2.90 \\
PraNet [5] & MICCAI 2021 & 0.8904 & 0.8264 & 0.8901 & 0.9247 & 0.8885 & 2.94 \\
MSNet [22] & MICCAI 2021 & 0.9013 & 0.8402 & 0.8868 & 0.9423 & 0.8913 & 2.85 \\
TransFuse-S [21] & MICCAI 2021 & 0.8599 & 0.7819 & 0.8531 & 0.9075 & 0.8530 & 3.04 \\
TransFuse-L [21] & MICCAI 2021 & 0.8747 & 0.8105 & 0.8736 & 0.9235 & 0.8723 & 2.96 \\
Polyp-PVT [4] & ArXiv 2021 & 0.8995 & 0.8379 & 0.9016 & 0.9238 & 0.8986 & 2.88 \\
UACANet [8] & ACMMM 2021 & 0.8945 & 0.8275 & 0.8870 & 0.9297 & 0.8882 & 2.86 \\
DuAT [16] & Arxiv 2022 & 0.9140 & 0.8563 & 0.9038 & 0.9437 & 0.9066 & 2.77 \\
CaraNet [10] & MIIP 2022 & 0.8962 & 0.8329 & 0.8939 & 0.9273 & 0.8937 & 2.91 \\
SSFormer-S [19] & MICCAI 2022 & 0.9111 & 0.8527 & 0.9043 & 0.9391 & 0.9060 & 2.81 \\
SSFormer-L [19] & MICCAI 2022 & 0.9124 & 0.8508 & 0.9005 & 0.9400 & 0.9041 & 2.74 \\
UNeXt [18] & MICCAI 2022 & 0.4758 & 0.3797 & 0.5814 & 0.5820 & 0.5132 & 4.49 \\
LDNet [20] & MICCAI 2022 & 0.8927 & 0.8254 & 0.8867 & 0.9153 & 0.8874 & 2.94 \\
TGANet [17] & MICCAI 2022 & 0.9023 & 0.8409 & 0.9025 & 0.9208 & 0.9002 & 2.84 \\
PVT-Cascade [12] & WACV 2023 & 0.9123 & 0.8534 & **0.9223** & 0.9212 & 0.9167 & 2.81 \\
**PrototypeLab** & & **0.9243** & **0.8744** & 0.9194 & **0.9494** & **0.9202** & **2.70** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Result of models trained and tested on BKAI-IGH [9]. 'Red', 'Green' and 'Blue' colors represent the highest, second highest and third highest scores.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline
**Method** & **mDSC** & **mIoU** & **Recall** & **Precision** & **F2** & **HD** \\ \hline
U-Net [13] & 0.8264 & 0.7472 & 0.8503 & 0.8703 & 0.8352 & 4.57 \\
DeepLabV3+ [2] & 0.8837 & 0.8172 & 0.9014 & 0.9028 & 0.8900 & 4.10 \\
PraNet [5] & 0.8943 & 0.8296 & 0.9060 & 0.9126 & 0.8976 & 4.00 \\
MSNet [22] & 0.8859 & 0.8217 & 0.9006 & 0.9110 & 0.8901 & 4.01 \\
TransFuse-S [21] & 0.8780 & 0.8079 & 0.8898 & 0.9090 & 0.8813 & 4.09 \\
TransFuse-L [21] & 0.8768 & 0.8115 & 0.8842 & 0.9198 & 0.8771 & 4.05 \\
Polyp-PVT [4] & 0.8960 & 0.8328 & **0.9440** & 0.8811 & 0.9164 & 3.91 \\
UACANet [8] & 0.8835 & 0.8133 & 0.9085 & 0.8947 & 0.8937 & 4.19 \\
DuAT [16] & 0.8903 & 0.8294 & 0.9186 & 0.9019 & 0.8999 & 3.87 \\
CaraNet [10] & 0.8707 & 0.7958 & 0.9203 & 0.8621 & 0.8935 & 4.11 \\
SSFormer-S [19] & 0.8994 & 0.8363 & 0.9194 & 0.9086 & 0.9076 & 3.88 \\
SSFormer-L [19] & 0.9060 & 0.8500 & 0.9213 & **0.9199** & 0.9107 & 3.77 \\
UNeXt [18] & 0.7318 & 0.6284 & 0.7840 & 0.7656 & 0.7507 & 5.18 \\
LDNet [20] & 0.8881 & 0.8208 & 0.9063 & 0.9046 & 0.8946 & 4.09 \\
TGANet [17] & 0.8982 & 0.8330 & 0.9132 & 0.9123 & 0.9029 & 3.96 \\
PVT-Cascade [12] & 0.8950 & 0.8329 & 0.9355 & 0.8888 & 0.9103 & 3.90 \\
**PrototypeLab** & **0.9086** & **0.8544** & 0.9344 & 0.9136 & **0.9194** & **3.71** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Quantitative results of the model trained and tested on Kvasir-SEG [7].

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline
**Method** & **mDSC** & **mIoU** & **Recall** & **Precision** & **F2** & **HD** \\ \hline
U-Net [13] & 0.5841 & 0.5102 & 0.6142 & 0.7746 & 0.5739 & 4.36 \\
DeepLabV3+ [2] & 0.6757 & 0.6051 & 0.7074 & 0.8237 & 0.6732 & 4.03 \\
PraNet [5] & 0.7330 & 0.6659 & 0.7825 & 0.8182 & 0.7391 & 3.71 \\
MSNet [22] & 0.6777 & 0.6133 & 0.6811 & **0.8881** & 0.6657 & 4.06 \\
TransFuse-S [21] & 0.6510 & 0.5720 & 0.6894 & 0.7952 & 0.6416 & 4.04 \\
TransFuse-L [21] & 0.6592 & 0.5881 & 0.6792 & 0.8289 & 0.6487 & 4.10 \\
Polyp-PVT [4] & 0.7421 & 0.6746 & 0.7717 & 0.8494 & 0.7358 & 3.67 \\
UACANet [8] & 0.7063 & 0.6404 & 0.7265 & 0.8519 & 0.7016 & 3.87 \\
DuAT [16] & 0.7225 & 0.6553 & 0.7710 & 0.8081 & 0.7204 & 3.73 \\
CaraNet [10] & 0.6977 & 0.6329 & 0.7035 & 0.8830 & 0.6873 & 3.99 \\
SSFormer-S [19] & 0.7332 & 0.6664 & 0.7672 & 0.8386 & 0.7288 & 3.72 \\
SSFormer-L [19] & 0.7426 & 0.6732 & 0.7901 & 0.8209 & 0.7418 & 3.60 \\
UNeXt [18] & 0.3484 & 0.2669 & 0.4519 & 0.4834 & 0.3492 & 5.09 \\
LDNet [20] & 0.6922 & 0.6193 & 0.7389 & 0.8013 & 0.6836 & 3.83 \\
TGANet [17] & 0.6925 & 0.6206 & 0.7466 & 0.7833 & 0.6891 & 3.79 \\
PVT-Cascade [12] & 0.7271 & 0.6572 & **0.8154** & 0.7672 & 0.7358 & **3.58** \\
**PrototypeLab** & **0.7583** & **0.6957** & 0.7897 & 0.8456 & **0.7563** & 3.68 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Result of models trained on BKAI-IGH [9] and tested on PolypGen [1].

**Experimental Setup:** We trained all the models on an NVIDIA GeForce RTX 3090 GPU. All images are first resized to \(256\times 256\) pixels. Simple data augmentation strategies, including random rotation, vertical flipping, horizontal flipping, and coarse dropout, are applied to the training images to improve generalization and prevent overfitting. All models are trained with a similar hyperparameter configuration: a learning rate of \(1e^{-4}\), a batch size of 16, and the Adam optimizer. We use a combination of binary cross-entropy and dice loss with equal weights as the loss function. In addition, we use early stopping and _ReduceLROnPlateau_ to avoid overfitting.
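A sketch of the training objective described in the experimental setup, the equal-weight combination of binary cross-entropy and soft Dice, together with the stated optimizer and scheduler; details beyond those stated in the paper (e.g., the Dice smoothing constant and scheduler patience) are our assumptions.

```python
import torch
import torch.nn as nn

def dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss on sigmoid probabilities; `smooth` is a common
    stabilizer (our assumption, not specified in the paper)."""
    pred, target = pred.flatten(1), target.flatten(1)
    inter = (pred * target).sum(dim=1)
    dice = (2 * inter + smooth) / (pred.sum(dim=1) + target.sum(dim=1) + smooth)
    return 1.0 - dice.mean()

class BCEDiceLoss(nn.Module):
    """Equal-weight BCE + Dice, as stated in the experimental setup."""
    def __init__(self):
        super().__init__()
        self.bce = nn.BCELoss()

    def forward(self, pred, target):
        return self.bce(pred, target) + dice_loss(pred, target)

# usage with a placeholder model standing in for PrototypeLab
model = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")

x = torch.randn(2, 3, 256, 256)
y = torch.randint(0, 2, (2, 1, 256, 256)).float()
loss = BCEDiceLoss()(model(x), y)
loss.backward()
optimizer.step()
scheduler.step(loss.item())
```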
**Results on BKAI-IGH dataset:** Table 1 shows the results of all the models on the BKAI-IGH dataset. The table demonstrates the superiority of PrototypeLab, with a high Dice Coefficient (DSC) of 0.9243, mean Intersection over Union (mIoU) of 0.8744, a high recall of 0.9194, a precision of 0.9494, and a low Hausdorff distance (HD) of 2.70. It outperforms 16 state-of-the-art (SOTA) methods. DuAT [16] and SSFormer-L [19] are the most competitive networks, and ours still outperforms DuAT and SSFormer-L by 1.03% and 1.19% in DSC, respectively. These results suggest that PrototypeLab is highly effective in segmenting polyps across different endoscopic imaging techniques such as WLI, BLI, LCI, and FICE.

**Results on Kvasir-SEG dataset:** Table 2 shows the results of all the models on the Kvasir-SEG dataset. PrototypeLab obtains a high DSC of 0.9086, mIoU of 0.8544, recall of 0.9344, precision of 0.9136, and a low HD of 3.71. The most competitive network to PrototypeLab is SSFormer-L [19]. Our model surpasses SSFormer-L by 0.26% in DSC and 0.44% in mIoU, and improves HD by 0.06. Thus, PrototypeLab surpasses all SOTA methods in both overlap-based and distance-based metrics.

Figure 2: Qualitative results of models trained on BKAI-IGH and tested on PolypGen. It can be observed that PrototypeLab produces a more accurate segmentation map in all the centers from C1 to C6.

**Results of BKAI-IGH models tested on PolypGen:** Table 3 shows the results of the model trained on BKAI-IGH and tested on PolypGen (all centers combined). PrototypeLab obtains the highest DSC, mIoU, and F2 of 0.7583, 0.6957 and 0.7563, respectively. SSFormer-L and Polyp-PVT are the most competitive networks; PrototypeLab outperforms them by 1.57% and 1.62%, respectively. Although PVT-Cascade has the lowest HD, our result is very competitive. We have similar findings when the model is trained on Kvasir-SEG and tested on PolypGen (Table 5), which suggests that PrototypeLab is more effective in handling OOD datasets.

Figure 4: Qualitative results of the models trained on Kvasir-SEG and tested on BKAI-IGH.

Figure 3: The diagram of the Large Kernel Dilated Convolution block and Encoder Feature Fusion Module.

Figure 2 and _Supp. Figure 2_ show a qualitative comparison of different models on diminutive, flat and regular polyps. The most competitive networks, DuAT and Polyp-PVT, exhibit over-segmentation or under-segmentation in different scenarios. However, PrototypeLab segments diminutive, flat and noisy images more accurately than the SOTA baselines. Table 4 shows the number of parameters, flops and processing speed. It shows that PrototypeLab has 51.34 Million parameters and 174.87 GMac Flops with a processing speed of 23.85 FPS.
Although the processing speed is close to near real-time (\(\approx 30\) fps), our architecture is more accurate, which is essential in clinical settings for early diagnosis and treatment. Therefore, the loss in speed is compensated by the increased accuracy.

\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline
**Method** & **Publication Venue** & **Backbone** & **Param. (Million)** & **Flops (GMac)** & **FPS** \\ \hline
TransFuse-S & MICCAI 2021 & ResNet34 + DeiT-S & 26.35 & 11.5 & 34.76 \\
TransFuse-L & MICCAI 2021 & ResNet50 + DeiT-B & 143.74 & 82.71 & 37.6 \\
Polyp-PVT & ArXiv 2021 & PVTv2-B2 & 25.11 & 5.30 & 47.54 \\
UACANet & ACMMM 2021 & Res2Net50 & 69.16 & 31.51 & 26.04 \\
DuAT & Arxiv 2022 & PVTv2-B2 & 24.97 & **5.24** & 44.56 \\
CaraNet & MIIP 2022 & Res2Net101 & 46.64 & 11.48 & 20.67 \\
SSFormer-S & MICCAI 2022 & MiT-PLD-B2 & 29.57 & 10.1 & 49.11 \\
SSFormer-L & MICCAI 2022 & MiT-PLD-B4 & 66.22 & 17.28 & 28.21 \\
UNeXt & MICCAI 2022 & - & **1.47** & 0.5695 & 88.01 \\
LDNet & MICCAI 2022 & Res2Net50 & 33.38 & 33.14 & 31.14 \\
TGANet & MICCAI 2022 & ResNet50 & 19.84 & 41.88 & 26.21 \\
PVT-Cascade & WACV 2023 & PVTv2-B2 & 35.27 & 8.15 & 39.09 \\
**PrototypeLab** & - & PVTv2-B2 & 51.34 & 174.87 & 23.85 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: The study provides model parameters, flops, and FPS for both SOTA methods and the proposed PrototypeLab. 'Red', 'Green', and 'Blue' represent the best, second best and third best scores.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline
**Method** & **mDSC** & **mIoU** & **Recall** & **Precision** & **F2** & **HD** \\ \hline
TransFuse-S & 0.6956 & 0.6229 & 0.8079 & 0.7481 & 0.7180 & 3.66 \\
TransFuse-L & 0.7273 & 0.6621 & 0.8060 & 0.7822 & 0.7404 & 3.53 \\
Polyp-PVT & 0.7424 & 0.6763 & 0.8369 & 0.7985 & 0.7557 & 3.44 \\
UACANet & 0.7185 & 0.6517 & 0.7972 & 0.8007 & 0.7288 & 3.61 \\
DuAT & 0.7332 & 0.6654 & 0.8664 & 0.7342 & 0.7609 & 3.48 \\
CaraNet & 0.6976 & 0.6240 & 0.8264 & 0.7419 & 0.7285 & 3.68 \\
SSFormer-S & 0.7389 & 0.6734 & 0.8625 & 0.7496 & 0.7634 & 3.43 \\
SSFormer-L & 0.7556 & 0.6926 & 0.8530 & 0.7910 & 0.7703 & 3.37 \\
UNeXt & 0.4552 & 0.3761 & 0.6135 & 0.5600 & 0.4805 & 4.55 \\
LDNet & 0.7273 & 0.6604 & 0.8336 & 0.7685 & 0.7462 & 3.53 \\
TGANet & 0.7030 & 0.6386 & 0.8030 & 0.7654 & 0.7177 & 3.59 \\
PVT-Cascade & 0.7151 & 0.6460 & 0.8651 & 0.7108 & 0.7396 & 3.48 \\
**PrototypeLab** & **0.7560** & **0.6966** & 0.8603 & 0.7846 & **0.7745** & **3.35** \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Results of models trained on Kvasir-SEG and tested on PolypGen. 'Red', 'Green', and 'Blue' represent the best, second best and third best scores.

_Ablation study:_ Table 7 shows the ablation study of PrototypeLab on Kvasir-SEG. The results show that the baseline (#1) obtains a DSC of 0.8971, mIoU of 0.8399, and HD of 3.88. In setting #2, a slight performance improvement was observed. This is because the masks generated by the CMGM were not used in the decoder, but were used by the PGM to generate multiple prototypes. These prototypes were then utilized by the PMGM to generate multiple prototype masks in setting #6, which explains the limited performance improvement when comparing setting #2 to setting #1. To demonstrate the impact of the LKDC block and EFFM, we conducted three experiments in settings #3, #4 and #5, where we observed a drop in performance compared with setting #6.
Specifically, when both the LKDC block and EFFM were removed in setting #5, a 0.71% decrease in DSC, a 0.80% decrease in mIoU, and an increase of 0.11 in HD were observed compared to setting #6. The table shows that the baseline is improved by adding the CMGM and further improved by adding the PGM + PMGM. PrototypeLab (#6) offers an improvement of 1.15% in DSC, 1.45% in mIoU and 0.17 in HD when compared with the baseline.

\begin{table}
\begin{tabular}{l|l|c|c|c} \hline \hline
**No** & **Method** & **mDSC** & **mIoU** & **HD** \\ \hline
\#1 & Baseline (PVT-encoder + Decoder) & 0.7503 & 0.6879 & **3.36** \\
\#2 & Baseline + CMGM & **0.7585** & **0.6956** & **3.36** \\
\#3 & Baseline + (CMGM w/o LKDC) + PGM + PMGM & 0.7234 & 0.6566 & 3.49 \\
\#4 & Baseline + CMGM + (PGM w/o EFFM) + PMGM & **0.7538** & 0.6909 & **3.36** \\
\#5 & Baseline + (CMGM w/o LKDC) + (PGM w/o EFFM) + PMGM & 0.7521 & **0.6910** & **3.39** \\
\#6 & Baseline + CMGM + PGM + PMGM (PrototypeLab) & **0.7560** & **0.6966** & **3.35** \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Ablation study of PrototypeLab on the OOD dataset. The methods are trained on the Kvasir-SEG dataset and tested on the PolypGen dataset.

\begin{table}
\begin{tabular}{l|l|c|c|c|c} \hline \hline
**No** & **Method** & **mDSC** & **mIoU** & **Recall** & **HD** \\ \hline
\#1 & BL (PVT-encoder + Decoder) & 0.8971 & 0.8399 & 0.9203 & 3.88 \\
\#2 & BL + CMGM & 0.8971 & 0.8402 & 0.9199 & 3.84 \\
\#3 & BL + (CMGM w/o LKDC) + PGM + PMGM & 0.8935 & 0.8332 & 0.9282 & 4.02 \\
\#4 & BL + CMGM + (PGM w/o EFFM) + PMGM & 0.9012 & 0.8440 & 0.9284 & 3.82 \\
\#5 & BL + (CMGM w/o LKDC) + (PGM w/o EFFM) + PMGM & 0.9015 & 0.8464 & 0.9228 & 3.82 \\
\#6 & BL + CMGM + PGM + PMGM (PrototypeLab) & **0.9086** & **0.8544** & **0.9344** & **3.71** \\ \hline \hline
\end{tabular}
\end{table}
Table 7: Ablation study of PrototypeLab on Kvasir-SEG. Here, BL = Baseline.

## 4 Conclusion

We propose a new prototype-learning-based segmentation model, called PrototypeLab, for in-distribution and out-of-distribution polyp segmentation. The use of prototypes in the proposed architecture helps in dealing with the variability present in medical images, making the model more robust to inter-patient variations. It helps the model perform well on diminutive, flat, partially visible, and noisy images and on the camouflage properties of polyps. The proposed architecture obtains a high DSC of 0.9243, mIoU of 0.8744, and a low HD of 2.70 on the BKAI-IGH dataset. Our extensive experiments revealed that PrototypeLab exhibits superior performance compared to 16 state-of-the-art (SOTA) methods across three distinct datasets, including notoriously difficult multi-center out-of-distribution (OOD) datasets. In the future, we aim to develop PrototypeLabV2 by further optimizing speed and accuracy for mobile applications.
2307.15936
A Theory for Emergence of Complex Skills in Language Models
A major driver of AI products today is the fact that new skills emerge in language models when their parameter set and training corpora are scaled up. This phenomenon is poorly understood, and a mechanistic explanation via mathematical analysis of gradient-based training seems difficult. The current paper takes a different approach, analysing emergence using the famous (and empirical) Scaling Laws of LLMs and a simple statistical framework. Contributions include: (a) A statistical framework that relates cross-entropy loss of LLMs to competence on the basic skills that underlie language tasks. (b) Mathematical analysis showing that the Scaling Laws imply a strong form of inductive bias that allows the pre-trained model to learn very efficiently. We informally call this {\em slingshot generalization} since naively viewed it appears to give competence levels at skills that violate usual generalization theory. (c) A key example of slingshot generalization, that competence at executing tasks involving $k$-tuples of skills emerges essentially at the same scaling and same rate as competence on the elementary skills themselves.
Sanjeev Arora, Anirudh Goyal
2023-07-29T09:22:54Z
http://arxiv.org/abs/2307.15936v2
# A Theory for Emergence of Complex Skills in Language Models

###### Abstract

A major driver of AI products today is the fact that new skills emerge in language models when their parameter set and training corpora are scaled up. This phenomenon is poorly understood, and a mechanistic explanation via mathematical analysis of gradient-based training seems difficult. The current paper takes a different approach, analysing emergence using the famous (and empirical) Scaling Laws of LLMs and a simple statistical framework. Contributions include: (a) A statistical framework that relates cross-entropy loss of LLMs to competence on the basic skills that underlie language tasks. (b) Mathematical analysis showing that the Scaling Laws imply a strong form of inductive bias that allows the pre-trained model to learn very efficiently. We informally call this _slingshot generalization_ since naively viewed it appears to give competence levels at skills that violate usual generalization theory. (c) A key example of slingshot generalization, that competence at executing tasks involving \(k\)-tuples of skills emerges essentially at the same scaling and same rate as competence on the elementary skills themselves.

Footnote 1: Princeton University

Footnote 2: Google DeepMind

## 1 Introduction

_Teach a man to read and write, and you have put into his hands the great keys of the wisdom box._ **Thomas Huxley**

As language models scale up, via an increase in both the number of parameters and the size of the training datasets, they exhibit remarkable new behaviors Brown et al. (2020); Ganguli et al. (2022); Srivastava et al. (2022); Wei et al. (2022). _Emergence_ is a term used to describe this fascinating phenomenon. Some emergent properties were noticed by early model designers and have since been confirmed through experiments with substantially larger models such as GPT (Brown et al., 2020; OpenAI, 2023), PaLM (Chowdhery et al., 2022) and PaLM-2 (Anil et al., 2023). From a scientific perspective, "emergence" appears to encompass a range of phenomena rather than a singular phenomenon. Attempting to explain it solely through a mathematical analysis of gradient-based training proves to be challenging, and there have been suggestions that such a mechanistic approach may fall short--i.e., new scientific theories may be needed. A related question--of interest in discussions about AI safety and Alignment--concerns the extent of the machine's ability to exhibit "new behaviors" that it did not encounter in the training data. If all learned behaviors were already present in the (massive) training corpus--the so-called "stochastic parrots" critique--then the AI safety issue becomes less pressing. The current paper seeks a better mathematical understanding of such phenomena. There are well-known challenges with such a line of inquiry.

1. The phenomenon is not well-defined! We lack a mathematical formulation for the set of language tasks of interest, and also a full catalog of the skills and abilities that the theory should apply to. Linguistics research suggests useful mathematical formalizations such as _Probabilistic Context-Free Grammars_, _Dependency Grammars_, _Gricean theories_, and _Frame Theory_. But it is a challenge to integrate these into a single framework and connect it to statistical learning, which underlies the cross-entropy loss used in language models.

2. Explaining the co-emergence of skills: why do multiple skills tend to emerge roughly _together_? What connects them?
3. Finally, a theory has to explain how the ability to solve tasks combining natural skills (whatever "combination" means--note that prior attempts to formalize compositions get highly technical, e.g., involving category theory) emerges just as naturally from the theory1. One mystery here is that the number of skill combinations seems too large for all of them to have been seen in the training data (this is a concrete example of the old _Poverty of the Stimulus_ question; see Section 2).

Footnote 1: For example, popular chat agents can give reasonable responses to questions meant to test this ability to combine skills, e.g., _Please give me a couple lines of text that illustrate all of the following language understanding skills: Anaphora resolution, simple logical reasoning, simple understanding of physics, and understanding of sentiment_. See Appendix A.1 for more examples.

The current paper introduces new types of mathematically rigorous--yet elementary--analyses that stay recognizably close to frameworks of statistical learning, and yet offer plausible explanations for some of the above phenomena. Mathematically, the theory imagines a bipartite graph with skills on one side and text-pieces on the other. Text-pieces are generated by an unknown process that takes random tuples of skills and converts them into a text-piece whose comprehension requires those skills, and an unknown process that inserts multiple-choice prompts in text-pieces to test that understanding. A key assumption is that prediction loss on the cloze prompts captures the average cross-entropy loss of the model. Thus "understanding" gets connected to prediction in a statistical task, except that the statistical tasks corresponding to skills (and tuples of skills) have overlaps created by the random mixing of skills in text. Even though the framework feels very general and unspecified (with several unknown processes in the picture), the very fact of text-pieces having been produced using random combinations of skills is powerful enough (as captured by our lemmas about random graphs) to imply that reduction in cross-entropy manifests itself as improvement in competence on the basic skills (measured by success at answering cloze questions) as well as competence on _combinations_ of basic skills. The rate of emergence is tightly governed by random graph theory, which we easily adapted to allow arbitrary measures on text and skills.

Potential conceptual frameworks for next-generation AI models: The ongoing reworking of Language Models into AI agents is accompanied by a shift away from the old paradigm of simply training a model to predict the next word in a text corpus. Instead, the "corpus" is a carefully weighted/curated combination of data, which could include code, math/logical reasoning, images, etc. Training could involve new kinds of losses. Our conceptual framework seems applicable to these new settings, since it is agnostic about what "skills" and "text" are, and how they compose. The analysis is also adaptable to prediction losses other than cross-entropy. For simplicity, we choose to describe the framework in the context of vanilla language modeling.

Paper organization: Section 2 provides a brief introduction to scaling laws, emergence, and random bipartite graphs. Section 3 explores the connection between the reduction in excess cross-entropy and learning. Section 4 presents a key insight about scaling laws, namely that they imply a robust inductive bias for the emergence of complex skills.
Section 5 presents our conceptual (and statistical) framework for measuring skills via next-word prediction. Sections 6 and 7 give concrete results about emergence of skills as a result of scaling. Section 8 presents a brief experiment highlighting the (predicted) phenomenon of ease of learning skill combinations that have not been seen in the training data.

## 2 Preliminaries

Language modeling follows a statistical paradigm: occurrences of linguistic units (say, words, sentences, paragraphs) are assumed to fit a statistical profile, and thus pieces of text from a sufficiently large and diverse corpus are assumed to be samples from a probability distribution Bengio et al. (2000). Language models are trained to solve _Next-word-prediction_ tasks: Given, say, the past 500 words2 in the text, predict the next word. Faced with such a task, humans may be able to give many completions, and the modeling assumes that frequencies of these completions can be modeled via probability. Thus the model \(M\) computes a probability \(\Pr_{M}[w_{i+1}\mid w_{1}w_{2}\ldots w_{i}]\) for all possible words \(w_{i+1}\). The goodness of a model is computed by its _cross entropy loss_, which for a sequence of words \(w_{1}w_{2}\ldots w_{t}\) is:

\[\ell(M)=-\sum_{i}\log\Pr_{M}[w_{i+1}\mid w_{1}w_{2}\ldots w_{i}]\qquad\text{(Cross Entropy)}\tag{1}\]

Footnote 2: Actual training involves breaking up words into smaller _tokens_, which allows a single model to handle all human languages, math formulae, computer code, etc. For simplicity, our discussion will refer to "words."

Models are trained by minimizing (via gradient descent) this training loss on a text corpus, and their goodness is computed by their _test loss_--evaluating the same loss expression on held-out text from the same corpus. Often the training corpus is so large that the model trains only once (or a few times) on each piece of text, and by the end, the test loss on held-out text is almost the same as the training loss.

Scaling laws: These empirically-derived expressions describe how test cross-entropy loss on held-out data scales (in experiments) with the number of model parameters (\(N\)) and the size of the dataset (\(D\)) Cortes et al. (1993); Hestness et al. (2017); Kaplan et al. (2020); Bahri et al. (2021). For Chinchilla models Hoffmann et al. (2022) the law is as follows (the precise constants in (2) hold only for the specific architecture and training strategy--even the constant \(A\) depends upon the tokenization):

\[L(N,D)=A+\frac{B}{N^{0.34}}+\frac{C}{D^{0.28}}\qquad A=1.61\quad B=406.4\quad C=410.7. \tag{2}\]

This description of macro behavior using two basic parameters--reminiscent of the 2nd Law of Thermodynamics--will help us circumvent the need for a mechanistic understanding of training. Our theory will only rely upon the general form of the equations, specifically, that the dependence is inverse polynomial in \(N,D\). So it applies to other frameworks of training (e.g., overtrained models) where scaling laws have also been found.
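To make the magnitudes concrete, here is a small numeric sketch of (2), separating the irreducible term \(A\) from the excess loss. The baseline sizes \(N=70\)B, \(D=1.4\)T are illustrative (they are Chinchilla's published configuration, not something this framework fixes).

```python
# Chinchilla-style scaling law (2): L(N, D) = A + B/N^0.34 + C/D^0.28.
# Only the B and C terms are "excess" cross-entropy; A is irreducible.
A, B, C = 1.61, 406.4, 410.7

def excess_loss(N, D):
    return B / N**0.34 + C / D**0.28

for scale in (1, 10, 100):
    N, D = 70e9 * scale, 1.4e12 * scale   # illustrative baseline, scaled together
    print(f"{scale:>3}x: excess = {excess_loss(N, D):.4f}")
# Each 10x step shrinks the D-term by 10**0.28 ~ 1.9 and the N-term by
# 10**0.34 ~ 2.2, so the excess roughly halves even though the total L barely moves.
```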
Emergence: Emergence refers to an interesting empirical phenomenon: as \(D,N\) are increased together, the model's performance (zero-shot or few-shot) on a _broad range_ of language tasks improves in a correlated way. The improvement can appear as a quick transition when \(D,N\) are plotted on a log scale (which is often the case), but it is now generally accepted that for most tasks the performance improves gradually when \(D,N\) are scaled up. Thus the term _slow emergence_ is more correct. Furthermore, it is known that emergence happens at different rates for different tasks, and is often quickest for tasks where the text is plausibly close to text found in training data Wei et al. (2022). Plenty of tasks are known that stump current models, and they usually tend to be very different from what one would find in usual text corpora. See Wei et al. (2022); Srivastava et al. (2022) for a discussion of emergence rates of the broad range of language tasks in the Big-Bench dataset. One might thus posit, with some justification from the above-mentioned studies, that the emergence of skills arises from training on similar tasks that were implicitly solved while solving next-word prediction in the training dataset. This is indeed our starting point.

Poverty of Stimulus with respect to skill combinations: Any expert who has conversed with popular chat agents quickly discovers that at the very least they seem to flexibly solve tasks that require _combinations_ of simple skills. However, the number of \(t\)-wise combinations of elementary skills feels too large (say, with \(t=4\) and tens of thousands of elementary skills) for all of the combinations to even be present in the training dataset. In other words, if the model indeed acquires the ability to solve tasks involving \(t\)-tuples of skills, we are looking at the familiar _Poverty of the Stimulus_ issue3, which we return to in Section 6.1.

Footnote 3: Chomsky coined the phrase _Poverty of the Stimulus_ to emphasize that babies do not have enough data to learn language, and concluded that evolution must have led to a "universal grammar" that humans must be born with. In our framework, the language model (whose learning is initialized using Gaussian noise) also seems to defy paucity of stimulus with respect to skill combinations. The driver of this counterintuitive fact is the Scaling Laws; see Section 4.

## 3 Cross-Entropy vs Excess Cross-Entropy: A Clarifying Example

Thinking about emergence and Scaling Laws, researchers sometimes get confused as follows: _"The loss decreases by a tiny amount when we increase \(D\) from \(10^{11}\) to \(10^{12}\). According to (2) this changes cross-entropy by a tiny amount. Why does it lead to big changes in macroscopic behavior?"_ Let us understand the flaw in this reasoning. Language has an inherent (i.e., irreducible) cross-entropy--arising from the existence of many possible correct choices for the next word in an average place in text--and this is captured by the \(A\) term4 in (2). No model, however good, can achieve lower cross-entropy than \(A\). Below, we argue that the rate at which the model makes errors (arising from its misunderstandings of the presented text) is connected to the _excess_ cross-entropy over this minimum amount, which is captured by the second and third terms of (2). This excess does reduce noticeably from scaling: e.g., when \(N,D\) are increased by a factor of \(10\) it reduces by a factor of roughly \((10)^{0.28}\approx 2\).

Footnote 4: Here we're assuming that as the model and dataset size tend to infinity in tandem, the model will perfectly learn the language distribution.

Excess Cross Entropy Drives Learning: We present a key idea in our theory: reduction in excess cross-entropy drives improvement on language tasks. We illustrate using a classic example from Winograd [1971] that inspired the _Winograd Schema Challenge (WSC)_ Levesque et al. [2012]: The city councilmen refused the demonstrators a permit because they feared violence.
Here the pronoun they is ambiguous--grammar rules allow it to refer to either demonstrators or city councilmen. Winograd pointed out that disambiguating it (i.e., anaphora resolution) requires world knowledge that is unavailable in the text itself, namely that demonstrations can get violent, and city councilmen don't like violence. The WSC contains many such examples from varied contexts, thus testing the model's world knowledge and common sense. A key idea in designing testbeds for language understanding such as WSC is the **Cloze Procedure5**, popular also for testing language development in children Brown [2018]. To test the model's understanding of they in this sentence, we can append a _prompt_: Q. Who feared violence? This is followed by either a blank, or a choice of multiple answers: A. city councilmen. B. demonstrators. For WSC examples, even though a human would be a hundred percent sure of the answer, language models circa 2016 were roughly \(50/50\) confused between the two options.

Footnote 5: Although cloze procedures allow testing most language skills, they get a bit artificial for testing the quality of language productions, or testing understanding of irony, because the setup requires presenting multiple choices, one of which already explains the joke to the model.

In the above example, the human is \(100\%\) certain of the answer, which implies their cross-entropy here is \(\log 1\), namely \(0\). However, if the model is split \(50\)-\(50\) between the two options, this implies it has cross-entropy \(\log 2\), all of which is _excess cross-entropy_! Given the frequency of ambiguous pronouns in usual English, one concludes that a model that has not learned pronoun disambiguation will display huge excess cross-entropy at many places in text6, and thus reductions in excess cross-entropy will tend to squeeze out such errors.

Footnote 6: Note that quantifying "excess" cross-entropy requires humans in the picture. It is not possible to look at the model's cross-entropy loss on the \(i\)th word according to (1) and know--without asking humans--whether it is inherent cross-entropy or excess.

Of course, text corpora do not normally contain such artificial cloze questions. But one could imagine that the model's basic misunderstanding of the above type could, often, lead to prediction mistakes in neighboring text. Our theory will make this precise in Section 5.

## 4 Scaling Law implies Slingshot Generalization

This section is a warmup for our theory, highlighting that the Scaling Law implies a strong inductive bias--a learning capability that can seem shockingly high from a naive application of learning theory. We use a simplistic thought experiment that assumes the Scaling Law continues to hold for text data derived from multiple sources. (This seems roughly true in practice.) Assume there are \(S\) elementary skills in language, and each piece of text applies exactly one of these skills. Then the training and test datasets consist of \(S\) (roughly) equal portions, one per sub-language. If \(D\) is the total dataset size then the size per sub-language is \(D/S\). Section 3 suggests that excess cross-entropy roughly captures the rate of learning, so the average error on applying the _union_ of \(S\) skills on test tasks (involving any of the \(S\) skills) is the per-word excess cross-entropy of the model trained on the full language, which scales as \(1/D^{0.28}\) according to the Scaling Law.
(While this is the average per word, for at least \(1/2\) of the sub-languages the per-word cross-entropy is at most twice this quantity.) By contrast, if we were to train the model using just data from a single skill, the scaling would be \(S^{0.28}/D^{0.28}\), which is worse by a multiplicative factor \(S^{0.28}\), provided the model size is the same in all cases. Since \(S\) is presumably large, training on the full dataset gives hugely better performance on individual skills than if we trained separate models on individual skills. We call this effect _Slingshot Generalization_7 and it requires no mechanistic understanding of gradient descent or transformers.

Footnote 7: Economists might call this phenomenon _increasing returns to scale_.

The above effect is closely related to _Effective Data Transfer_, a concept used to empirically quantify the efficacy of pre-training. Suppose we use a small labeled dataset of size \(n\) for supervised training of a classifier by fine-tuning a pretrained LLM. This usually yields final accuracy much better than training the classifier using vanilla supervised training on the labeled dataset. The improvement can be thought of as an effective transfer of knowledge (= additional equivalent labeled data) from the model's pretraining. Our discussion here begins to make this precise as a form of inductive bias.

**View from Learning theory:** A naive application of classical learning theory implies that if the model is trained on a single skill (i.e., sub-language) then its error on downstream tasks on that skill should be at least the reciprocal of the square root of the dataset size, which is \(1/\sqrt{D/S}\). However, the above analysis suggests that when the model is trained on the _union_ of \(S\) sub-languages, the Scaling Law makes the error on the average skill decrease as \(1/D^{0.28}\). When \(S\gg D^{0.44}\), we have \(1/\sqrt{D/S}\gg 1/D^{0.28}\), which suggests at first sight that the Scaling Law violates learning theory, at least in our toy setting. This inference is erroneous, however. Learning theory carves out an exception for the inductive bias of the model--the error estimate of \(1/\sqrt{D/S}\) for training on a single sub-language assumes no inductive bias8. The correct conclusion is that training using the union of \(S\) datasets provides a strong inductive bias for learning on each individual dataset. This inductive bias presumably arises from the model's architecture and the fact that its parameters can aggregate structural information from all \(S\) sub-languages, which improves learning on individual languages. (This effect is intuitively clear to humans: learning your \(6\)th language is easier than learning your \(2\)nd.)

Footnote 8: To give a trivial example, if we were to commence training with a model that already has minimum training and test loss, then for sure its error is zero, not \(1/\sqrt{D/S}\).

## 5 Statistical Formalization of Skills

This section gives a mathematical framework for thinking about skills and how they might relate to language comprehension. Since our theory assumes scaling laws such as (2), it will have the luxury of ignoring issues of training and generalization, allowing us to reason directly about the model's behavior on the test distribution, i.e., the distribution from which the training data was drawn. Furthermore, this distribution is assumed to consist of a long stream of text-pieces, and the model's comprehension is being tested via a suitable _prediction loss_.
**Definition 1** (Test stream and skill graph).: Test data consists of an arbitrarily large set of _text-pieces_, denoted by \(T\), each consisting of \(C_{test}\) tokens. Each text piece has an associated _prediction loss_. Language has an underlying set \(S\) of _skills_. The _skill graph_ is a bipartite graph \((S,T,E)\) where nodes in \(S\) correspond to skills, nodes in \(T\) correspond to text-pieces, and \((s,t)\) is in the edge set \(E\) if attaining low prediction loss on (i.e., "comprehending") text-piece \(t\) requires using skill \(s\).

In this full generality no theory seems possible, since the full skill set and the skill graph are unknown to us humans. So our theory makes some mild assumptions. First, we assume that the model's "comprehension" of a text piece is testable via suitable Cloze prompts9, analogous to the Winograd example in Section 3. Specifically, we assume that an (unknown) process cloze has been used to add such suitable Cloze prompts to the test stream (see Assumption 3).

Footnote 9: Note that cloze prompts are popular for evaluating language development in children Brown et al. (2020), and most language comprehension skills are believed to be testable via Cloze prompts. See Saunshi et al. (2021) for earlier use of Cloze prompts in theory of LLMs.

Second, we make an assumption about how skills are mixed up in text-pieces. Such mixing is evident from Winograd's example: _The city councilmen refused the demonstrators a permit because they feared violence_. Winograd implicitly assumes that the trickiest skill needed here is pronoun/anaphora resolution, but of course, applying that skill in this context requires other skills: understanding of causality (i.e., interpretation of "because") as well as world knowledge about "city councilmen," "permit," "demonstrators," etc. This example highlights that if we were to look at random places in text that require pronoun disambiguation, we would encounter random real-world scenarios, whose comprehension requires very different sets of skills. Moreover, the scenarios (and hence the relevant skills) could have different probabilities of occurring in the corpus.

A crucial assumption in our theory is that text is generated using random combinations of skills10. For simplicity we assume that each text-piece requires exactly \(k\) skills for some \(k\). (Allowing \(k\) to be a random variable is doable, but makes notation more complicated.) Thus the skill graph has random edges--each text piece is connected to a set of \(k\) skills, which were chosen iid from an underlying skill distribution.

Footnote 10: While our framework is suggested as a plausible way to think about a diverse text corpus, it seems a particularly good match for productions of a chat agent, since it has to generate short pieces of text in response to a user query, which usually requires a subset of skills.

Third, to confront the issue that the distribution of text-pieces has no simple description, we assume there is an _unknown_ procedure gen for converting a randomly sampled tuple of skills into a text-piece exhibiting those skills, as well as an associated measure (i.e., probability) for that text piece. The next definition formalizes the above framework in the form of a _skill cluster_. A text corpus will in general consist of many skill clusters (e.g., wikipedia, science, coding, etc.). We assume each text-piece appears in only a single cluster, but each skill may appear in text-pieces of more than one cluster.
Extensions of this framework are left for future work.

**Definition 2** (Degree-\(k\) skill cluster).: This is a collection of text pieces generated by "nature" as follows. It picks a subset of \(k\) skills via iid sampling from an underlying measure \(\mu_{1}\) on skills, and then uses a procedure gen to create a text-piece \(t\) whose comprehension requires these skills, as well as a measure \(\mu_{2}(t)\) associated11 with this text piece \(t\). Then it uses process cloze to add cloze prompts to test comprehension on \(t\). The _prediction loss_ on the text-piece is the cross-entropy loss on predicting the answers to the cloze questions. The average prediction loss over all text-pieces is computed with respect to this measure \(\mu_{2}()\). We call the skill-graph thus produced a _degree-\(k\) skill cluster_.

Footnote 11: Note that the measure on text-pieces has to have the correct marginals, e.g., the \(\mu_{2}\)-measure of all text-pieces containing a skill \(s\) is \(\mu_{1}(s)\). There are many measures satisfying this weak condition, since the number of text pieces is way larger than the number of skills.

**Note:** (1) The prediction loss does not require (i.e., penalize) the model to predict the location or contents of the prompt in the cloze question: the model has to merely predict which of the presented multiple choices in the cloze question is correct. This setup implicitly assumes that the cloze questions are easy to comprehend for a reasonably capable language model--the only tricky part about cloze questions is answering the multiple-choice question at the end. (2) As discussed earlier in Section 3, if the cloze prompt is assumed to be perfectly answerable by a human, then any incorrect answers by the model can be interpreted as contributing to excess cross-entropy.

The next assumption says that the cloze questions capture the excess cross-entropy of the model as defined in (1), and from now on the theory will assume that this excess loss goes down in accordance with the scaling law.

Figure 1: Skill Graph.

**Assumption 3**.: _[Cloze tasks are meaningful]_ The model's average prediction loss, averaged on the distribution of text pieces, closely tracks (within a small multiplicative factor) the overall excess cross-entropy of the pre-trained model for next-word prediction.

Definition 2 allows us to interpret "skill" in the more familiar setting of statistical learning theory, specifically by letting us associate a statistical task with it. The task involves predicting cloze questions in a sub-distribution of text pieces that contain that skill.

**Definition 4** (Statistical view of Skill).: In the setting of Definition 2, for each skill cluster and each skill \(s\in S\), the _statistical task \(T_{s}\) corresponding to \(s\) and this cluster_ is defined as follows. The learner is given a text-piece created by sampling \(s_{1},\ldots,s_{k-1}\) via iid sampling \((k-1)\) times from measure \(\mu_{1}\), and applying gen and cloze to the skill-tuple \((s,s_{1},\ldots,s_{k-1})\) to convert it into a text piece \(t\) with an associated measure \(\mu_{2}(t)\) (but the measure is re-scaled so that the total measure of the inputs to this task \(T_{s}\) is \(1\)). The _error rate_ of the model at the statistical task is the expected prediction loss on text-pieces drawn from the above distribution.
For every \(k^{\prime}\)-tuple of skills \((s_{1},s_{2},\ldots,s_{k^{\prime}})\) (where \(k^{\prime}\leq k\)) the statistical task \(T_{s_{1},s_{2},\ldots,s_{k^{\prime}}}\) corresponding to that \(k^{\prime}\)-tuple is similarly defined. The inputs to the task are generated by completing the \(k^{\prime}\)-tuple to a \(k\)-tuple \(\vec{s}\) by iid sampling of \(k-k^{\prime}\) additional skills from \(\mu_{1}\) and then using gen and cloze to convert it into a text-piece.

To illustrate with an example, if the text-piece consists of the Winograd sentence and it involves \(5\) skills, then that text-piece will appear in \(5\) statistical tasks corresponding to individual skills, \(\binom{5}{2}\) tasks corresponding to pairs of skills, and so on. However, our method of measuring the loss incurred on these statistical tasks implicitly assumes that if the model incorrectly answered this cloze question (i.e., it assigned significant probability to the wrong answer), then that loss was incurred in _all_ these statistical tasks. This accounting ignores the possibility that a prediction error may occur because just a subset of the skills were misapplied rather than all. For example, a model could in principle have a perfect understanding of the meaning of "city councilmen" but still fail the cloze question because it doesn't correctly apply the "causality" skill. (That said, we don't think there is a general method--even for a human--to partition "blame" for wrong answers among misapplications of individual skills.)

Definition 4 can be thought of as a lower bound on the model's "competence" on task \(T_{s}\). In practice, the model may be able to apply the skill on distributions of text-pieces other than the one seen in training, which is--as usual--ignored in the statistical view.
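The framework above is concrete enough to simulate. The sketch below (our illustration, not from the paper) builds a degree-\(k\) skill cluster with uniform measures, marks a set \(Y\) of "misunderstood" text-pieces, and measures each skill's task error as the fraction of its text-pieces landing in \(Y\); treating gen and cloze as black boxes, only the random bipartite structure matters. For a _random_ \(Y\), concentration makes every skill's error close to \(\theta\); the content of the next section is that a similar statement holds for _every_ \(Y\) simultaneously, with high probability over the graph.

```python
# A toy degree-k skill cluster (Definitions 2 and 4) with uniform mu_1, mu_2.
import numpy as np

rng = np.random.default_rng(0)
num_skills, num_texts, k = 1_000, 100_000, 8

# Each text-piece draws k skills iid from the uniform skill measure
# (repeats within a piece are possible, which we tolerate in this sketch).
edges = rng.integers(0, num_skills, size=(num_texts, k))

# Error set Y: text-pieces on which the model "makes a mistake".
theta = 0.10
Y = rng.random(num_texts) < theta

# Per-skill task error: fraction of a skill's text-pieces that land in Y.
bad = np.zeros(num_skills)
tot = np.zeros(num_skills)
np.add.at(bad, edges[Y].ravel(), 1)
np.add.at(tot, edges.ravel(), 1)
task_error = bad / np.maximum(tot, 1)

print("mean per-skill task error:", round(task_error.mean(), 3))  # ~ theta
print("worst skill:", round(task_error.max(), 3))
```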
## 6 Deriving Emergence (uniform cluster)

Having set up a framework for modeling skills and (via Assumption 3) connecting them to the cross-entropy loss of the model, we have arrived at the core mathematical issue around emergence: _As the model's excess cross-entropy goes down (due to scaling), this improves the model's performance on cloze tasks inserted in the test stream. How does this improve performance on the statistical tasks connected with skills as well as on tuples of skills?_

This section analyzes a simple setting where the test-stream consists of a single degree-\(k\) skill cluster, and the skills are uniformly distributed and so are the text-pieces--in other words, the distributions \(\mu_{1}\) and \(\mu_{2}\) in Definition 2 are uniform. Section 7 will extend the analysis to the general setting. First we point out the naive but incorrect way to reason about this. Since each text piece is connected to a random \(k\)-tuple of skills, say \(\vec{s}\), one is tempted to reason about emergence via linearity of expectations, specifically, via the following relation about prediction loss:

\[k\cdot E_{t}[\text{loss}(t)]=E_{s}[\text{failure rate of statistical task }T_{s}].\qquad(\textbf{Incorrect!})\tag{3}\]

To see that this is incorrect, let \(Y\) be the subset of such text pieces where the model makes mistakes on cloze questions. This \(Y\) depends upon the skill graph, and the unknown processes gen and cloze of Definition 2, which may introduce arbitrary correlations on text-pieces. Since the model "saw" part of the test stream (namely, the portion corresponding to training data), it has picked up some information about the skill cluster, which is unknown to us. Thus at the end of training, the locations of errors in the test stream--i.e., the set \(Y\)--are arbitrary and dependent upon the skill-cluster. Our analysis is therefore allowed to assume an upper bound on the test loss, but nothing about which text-pieces it occurs on. Hence (3) cannot be inferred. This is the key mathematical hurdle, and our proof will surmount it using random graph theory.

Let's say the model _makes a mistake_ on a text-piece if the prediction loss on the included cloze-questions is at least \(1/2\) (which is the kind of error incurred if the incorrect answer is chosen with noticeable probability on a single multiple-choice question). Since the average cross-entropy loss for the text-pieces is \(\delta\), we conclude \(Y\) consists of at most a \(2\delta\) fraction of text pieces. The following result guarantees that the statistical tasks corresponding to most skills do not assign significant probability to text pieces in \(Y\)--in other words, the model has good performance on statistical tasks connected with these skills. The theorem follows from (and is a simple rephrasing of) Lemma 10 in the appendix.

**Theorem 5** (Main).: _Irrespective of the details of the gen and cloze processes, the following property holds with very high probability in a single skill-cluster with uniform distribution of skills and text pieces. Let \(Y\) be an arbitrary subset of text pieces consisting of a \(\theta\) fraction of text pieces. Let \(\alpha,\beta>0,\beta>1,\alpha\beta<1\) satisfy_

\[H(\theta)+k\theta(H(\beta\alpha)-\beta\alpha\log\frac{1}{\alpha}-(1-\beta\alpha)\log(\frac{1}{1-\alpha}))<0 \tag{4}\]

_Then there are at least a \(1-\alpha\) fraction of skills that have at most a \(\beta\theta\) fraction of their edges connected to \(Y\)._

Note that as the model is scaled up, the set \(Y\) will shrink--i.e., \(\theta\) will go down. Since edges between a skill node \(s\) and the set \(Y\) correspond to errors in the statistical task \(T_{s}\), the theorem is giving an upper bound on the prediction error in statistical tasks corresponding to a \((1-\alpha)\) fraction of skills. Figure 2 gives _performance curves_, i.e., the contour plot of the set of \(\alpha,\beta\) combinations satisfying Theorem 5 for a given \(\theta,k\). The horizontal axis plots \((1-\alpha)\) and the vertical axis plots \(\beta\theta\), so point \((0.8,0.16)\) on a curve means at least a \(0.8\) fraction of skills have at most a \(0.16\) fraction of their edges in the "error set" \(Y\) (hence a \(0.84\) fraction of their edges are outside the error set). The emergence curves shift down noticeably (i.e., imply emergence of more skills) as we increase \(k\). The next lemma shows this trend always holds; it follows from the fact that \(H(\theta)/\theta\) is a decreasing function in the interval \((0,1)\).

**Lemma 6** (Monotonicity).: _If \(\theta^{\prime}<\theta\) then the performance curve for \(\theta^{\prime},k\) lies below that for \(\theta,k\). If \(k^{\prime}>k\) then the performance curve of \(\theta,k^{\prime}\) lies below that for \(k,\theta\)._

**Note:** Theorem 5 shows that for a fixed prediction loss \(\theta\), using higher \(k\) implies better emergence of skills. Since \(k\) is the number of skills being used in a text-piece, it intuitively measures how _complex_ the text is (e.g., a college text would be expected to have higher \(k\) than a primary school text). Since \(\theta\) scales as per the scaling law, our theorem predicts that more complex text will be more effective at inducing skills, provided the scaling laws don't change. This prediction generally matches experts' intuition, although we are not aware of a study of scaling laws for text of different quality.
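Condition (4) is easy to evaluate numerically. The sketch below (ours, for illustration) scans \(\beta\) for a few values of \(\alpha\) and reports the smallest feasible \(\beta\), which is how performance curves like those in Figure 2 can be traced out.

```python
# Numerically checking condition (4) of Theorem 5. Logs are natural here; any
# fixed base works, since every term scales by the same constant and only the
# sign of the left-hand side matters.
import numpy as np

def H(p):
    """Binary entropy."""
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def condition4(theta, k, alpha, beta):
    ab = alpha * beta
    lhs = H(theta) + k * theta * (
        H(ab) - ab * np.log(1 / alpha) - (1 - ab) * np.log(1 / (1 - alpha))
    )
    return lhs < 0

theta, k = 0.1, 8
for alpha in (0.1, 0.2, 0.3):
    betas = np.arange(1.01, 1 / alpha - 1e-9, 0.01)   # need beta > 1, alpha*beta < 1
    feasible = [b for b in betas if condition4(theta, k, alpha, b)]
    if feasible:
        # at least (1 - alpha) of skills have <= beta*theta of their edges in Y
        b = feasible[0]
        print(f"alpha={alpha}: smallest beta = {b:.2f} (beta*theta = {b*theta:.3f})")
    else:
        print(f"alpha={alpha}: no feasible beta")
```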
### Emergence for \(k^{\prime}\)-tuples of skills

Now we estimate the model's failure rate on statistical tasks corresponding to \(k^{\prime}\)-tuples for \(k^{\prime}\leq k\). The basic idea is to consider \(k^{\prime}\)-tuples of skills as 'composite skills,' and then re-do the calculation.

**Naive estimate:** This consists of observing that a random \(k\)-tuple is a union of \(k/k^{\prime}\) random \(k^{\prime}\)-tuples. Considering \(k^{\prime}\)-tuples as 'composite skills,' we get a skill-graph with degree \(k/k^{\prime}\), and Theorem 5 can let us derive performance curves for \(k^{\prime}\)-tuples. However, this is a weak and often trivial estimate.

Figure 2: Performance Curves: Left plot has \(\theta=0.1\) and varies \(k=2,4,8,16\). Higher values of \(k\) greatly improve performance (for \(k=2\) valid \(\alpha,\beta\) did not exist). The right plot has \(k=8\) and \(\theta=0.05,0.1,0.2\). Section 6.1 clarifies that it also describes the model's performance curve for \(t\)-tuples of skills for \(\theta=0.05\) and \(t=1,2,4\) respectively (e.g., blue curve for \(4\)-tuples).

**2nd estimate (better):** Consider the following \(k^{\prime}\)-_wise recombination_ operation on the test stream. First randomly partition the test stream into subsets of size \(k^{\prime}\), and then concatenate the \(k^{\prime}\) text pieces within each subset to create a larger piece of text that we refer to as a "\(k^{\prime}\)-piece." All cloze questions for the old text-pieces are retained and no new cloze questions are inserted. Clearly, if the error of the model per average text-piece was \(\delta\), then the error per average \(k^{\prime}\)-piece is \(k^{\prime}\delta\). However, each \(k^{\prime}\)-piece is now using a random \(k^{\prime}k\)-tuple of skills, which we can alternatively view as \(k\) random \(k^{\prime}\)-tuples. Thus, viewing \(k^{\prime}\)-tuples of skills as 'composite skills,' we can use these as the skill set in the setting of Theorem 5, which gives us an easy corollary quantifying the performance on tasks corresponding to \(k^{\prime}\)-tuples of skills.

**Lemma 7** (Emergence for \(k^{\prime}\)-tuples of skills).: _Consider the skill-graph \((S^{\prime},T^{\prime},E)\) where \(S^{\prime}\) consists of all \(k^{\prime}\)-tuples of skills, \(T^{\prime}\) consists of \(k^{\prime}\)-pieces, and \(E\) consists of \((s^{\prime},t^{\prime})\) where \(s^{\prime}\) is a \(k^{\prime}\)-tuple of skills and \(t^{\prime}\) is a \(k^{\prime}\)-piece where this tuple of skills is used. Let \(Y\) consist of a \(\theta\) fraction of \(k^{\prime}\)-pieces. Then for any \(\alpha,\beta>0,\beta>1,\alpha\beta<1\) satisfying (4), there are at least a \(1-\alpha\) fraction of \(k^{\prime}\)-tuples of skills that have at most a \(\beta\theta\) fraction of their edges connected to \(Y\)._

The next corollary presents a somewhat surprising general principle that's also hinted at in the caption of Figure 2. Assume (for simplicity) a Chinchilla-like scaling law in which \(10\)x up-scaling leads to a factor-\(2\) reduction in excess cross-entropy. If a model is considered to have reasonable performance on individual skills at the current scaling, then after further up-scaling of \(10\)x one would see similar reasonable performance on skill-pairs, and scaling up by yet another \(10\)x after that will yield similar reasonable performance on \(4\)-tuples of skills, etc.
The next corollary presents a somewhat surprising general principle that is also hinted at in the caption of Figure 2. Assume (for simplicity) a Chinchilla-like scaling law where \(10\)x up-scaling leads to a factor-\(2\) reduction in excess cross-entropy. If a model is considered to have reasonable performance on individual skills at the current scale, then after further up-scaling by \(10\)x one would see similar reasonable performance on skill-pairs, and scaling up by yet another \(10\)x after that will yield similar reasonable performance on \(4\)-tuples of skills, etc. Note that these are _provable lower bounds_ on performance gains--actual gains could be higher. Figure 2 illustrates the phenomenon.

**Corollary 8**.: _When the model \(M_{1}\) with loss \(\delta\) is scaled up (e.g., as per equation (2)) so that the new model \(M_{2}\) has loss \(\delta/k^{\prime}\), then the performance curve inferred by our method for \(k^{\prime}\)-tuples of skills using \(M_{2}\) is identical to the curve inferred for individual skills on model \(M_{1}\)._

Proof.: As noted above, a loss of \(\delta\) still allows the model to make significant mistakes on a \(2\delta\) fraction of test pieces, which we denote by \(\theta\). Thus Theorem 5 describes the performance curve for skills. Making the loss drop to \(\delta/k^{\prime}\) but creating \(k^{\prime}\)-pieces makes the fraction of errors \(\theta=2\delta\) again. (Note that "error" now means an erroneous answer on _any_ cloze question in the entire \(k^{\prime}\)-piece--again, this is a conservative definition of error.) Applying Lemma 7 we get the same emergence curve as in Theorem 5.

**Discussion of paucity of stimulus.** We now point out how this result leads to a paucity-of-stimulus situation. Suppose we trained a language model with \(D\) tokens. After scaling by \(k^{\prime}\) orders of magnitude (i.e., increasing the dataset size to \(c^{k^{\prime}}D\) tokens, where in the Chinchilla framework \(c\) is around \(10\)), the performance on \(k^{\prime}\)-tuples of skills is as good as the performance on individual skills before the scaling. Note that the number of \(k^{\prime}\)-tuples of skills is around \(|S|^{k^{\prime}}\), where \(S\) is the set of skills. This quickly leads to paucity of stimulus for some fixed \(k^{\prime}\), specifically when \(Dc^{k^{\prime}}\ll|S|^{k^{\prime}}\). To give an example, suppose \(c=10\) and \(|S|=10^{4}\), and the model's proficiency on individual skills was considered good when it was trained with \(D=10^{10}\) tokens (roughly the dataset size for GPT-2 style models). Then a larger model trained with \(10\) trillion tokens (\(10^{13}\))--closer to the size of corpora used in training today's models--should display proficiency in most \(8\)-tuples of skills, despite never having seen most of those combinations in training (which we can be sure of because \(10^{10}\times 10^{8}\ll(10^{4})^{8}\)). See also Section 8 for a toy example.
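The paucity-of-stimulus arithmetic can be checked in a few lines; the constants below are the ones assumed in the example above (\(c=10\), \(|S|=10^{4}\), \(D=10^{10}\)).

```python
import math

c, S, D = 10, 10**4, 10**10    # scaling base, number of skills, baseline tokens
for kp in (2, 4, 8):
    tokens = D * c**kp         # dataset after scaling, following the text
    tuples = S**kp             # roughly the number of k'-tuples of skills
    print(f"k'={kp}: tokens ~ 1e{round(math.log10(tokens))}, "
          f"tuples ~ 1e{round(math.log10(tuples))}, paucity: {tokens < tuples}")
```

Already at \(k^{\prime}=4\) the number of tuples exceeds the number of tokens ever seen, and the gap widens rapidly with \(k^{\prime}\).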
## 7 Emergence analysis with general measure on text and skills

Now we turn to the analysis of the general setting of Definition 2, where text piece \(t\) has measure \(\mu_{2}(t)\) and skill \(s\) has measure \(\mu_{1}(s)\). In this setup, our lemma statements (e.g., Lemma 10 as well as the ones in Sections 6 and 6.1) still hold: the claims are the same, but with cardinalities replaced by measure.

**Theorem 9** (Emergence of skills and \(k^{\prime}\)-tuples of skills).: _Let \(Y\) be any subset of text pieces with total measure \(\theta\), where every text-piece has measure substantially less than \(\theta\). Let \(\alpha,\beta>0,\beta>1,\alpha\beta<1\) satisfy_

\[H(\theta)+k\theta(H(\beta\alpha)-\beta\alpha\log\frac{1}{\alpha}-(1-\beta \alpha)\log(\frac{1}{1-\alpha}))<0 \tag{5}\]

_Then the measure of skills that have at most a \(\beta\theta\) fraction of their edges connected to \(Y\) is at least \(1-\alpha\)._

_For \(k^{\prime}\)-tuples of skills, the statement of Lemma 7 holds with the same modification of cardinality to measure._

Proof.: The measure \(\mu_{1}\) on skills is trivial to reason about: just replace each skill \(s\) by a number of copies proportional to \(\mu_{1}(s)\). This converts the measure to a uniform measure; specifically, \(k\) iid draws from this uniform measure are equivalent to \(k\) iid draws from \(\mu_{1}\). For the measure \(\mu_{2}(\cdot)\) on texts, the above trick doesn't work. Recall that a text-piece is connected in the skill graph to a random \(k\)-tuple of skills. If we try to replace \(\mu_{2}(\cdot)\) with a uniform measure by replacing the text piece with identical copies, then these copies must still connect to the _same_ subset of \(k\) skills, meaning these connections are correlated and not random. We need a more subtle argument. The key step in the proof of Lemma 10 is where we choose a random subset \(Y\) of text-pieces of size \(\theta|T|\) and a subset \(Z\) of skills of size \(\alpha|S|\), and then upper bound the probability of the event that the latter has more than an \(\alpha\beta\theta k\) fraction of its edges going to \(Y\). In the presence of the measure \(\mu_{2}(\cdot)\), we pick \(Y\) as follows: independently pick text-pieces, choosing \(t\) with probability \(\theta\mu_{2}(t)\). (Note: \(|Y|\) is tightly concentrated around \(\theta|T|\).) We still pick \(Z\) randomly as before. Then we apply Jensen's inequality to the same calculation to end up with the same upper bound as before. See Lemma 11 in the Appendix.

### Extending theory to multiple clusters

Above we assumed a single skill cluster in the language. Real-life text might contain multiple skill clusters. For example, standard corpora must contain a large skill cluster involving "everyday" text pieces and the set of basic language skills and world knowledge needed to comprehend them. Smaller clusters may correspond to specialized topics, e.g., finance, science, mathematical reasoning, etc. We assume each piece of text appears in only one cluster, but skills may appear in different clusters. When each text-piece appears in a single cluster, the analysis of Section 4 continues to apply. The overall loss is the weighted sum of the measures of text in the individual clusters. Thus, an overall reduction in loss will drive emergence within individual clusters. But lacking any mechanistic insight, our theory cannot predict the rate at which the loss decreases (and hence emergence happens) within clusters. This pertains to the point made earlier in the paper about the lack of a detailed study of scaling laws for different kinds of corpora, as well as for training on mixes of corpora. We leave a more fine-grained analysis, possibly allowing hierarchical structure in clusters, for future work. As usual, simpler settings probably give the main insight.

## 8 Toy Illustration

Our theory can explain the surprising phenomenon that the inductive bias of pretraining implies that combinations of skills emerge as naturally as the individual skills. Now we give a simple experiment illustrating such a phenomenon, involving vision tasks on pretrained ViT models.
Our labeled dataset consisted of \(10^{3}\) composite images, each created by randomly selecting four images from the MNIST dataset of handwritten digit images and putting each in one quadrant of the composite image. The composite image was assigned a fixed label, which is the label of one of the four digits, randomly selected. This is the "target" label, while the remaining three digits were considered "background" labels not made available during training. Supervised training on this dataset used a linear probe on top of the embeddings of the composite images output by a pre-trained Vision Transformer (ViT-CLIP) model (pre-trained on a custom image dataset). The training used mini-batch SGD. Testing was done using held-out images, which were composites constructed from MNIST images that had not been used to create the training set. This also ensured the model faced novel composite images during evaluation.

Figure 3: Composite image setup. The number of underlying skills is \(10\), and the model learns to apply all \(4\) skills needed for the composite image.

The final classifier, being softmax, can be used to output the top-\(4\) labels just as easily as the top-\(1\). Doing so achieved approximately \(92\)% accuracy in classifying the four-digit tuples, even though it was trained using only \(10^{3}\) examples labeled with a single digit. Full fine-tuning of the model yielded around \(95\)% accuracy with \(10^{3}\) labeled examples. To understand why this happened, note that since the provided label is fixed by randomly picking one of the four digits in the image, the optimal softmax output should learn to give equal logit values to the labels of all four digits present in the image. Figure 3 describes the skill-cluster implicit here.
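A minimal sketch of this experiment is given below. It substitutes torchvision's ImageNet-pretrained ViT-B/16 for the paper's ViT-CLIP model (which was pretrained on a custom dataset), reuses the same MNIST pool for train and test composites rather than holding out the underlying digits, and uses full-batch rather than mini-batch SGD, so the exact accuracies will differ from the \(92\)% reported above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

# Stand-in frozen feature extractor (the paper's ViT-CLIP is not public).
vit = torchvision.models.vit_b_16(weights="IMAGENET1K_V1").eval()
vit.heads = nn.Identity()                      # expose the 768-d CLS embedding
for p in vit.parameters():
    p.requires_grad_(False)

mnist = torchvision.datasets.MNIST("data", train=True, download=True)
imgs, labels = mnist.data.float() / 255.0, mnist.targets

def make_composites(n, rng):
    """n 2x2 composite images; one randomly chosen digit is the visible label."""
    idx = torch.randint(0, len(imgs), (n, 4), generator=rng)
    q = imgs[idx]                                              # (n, 4, 28, 28)
    comp = torch.cat([torch.cat([q[:, 0], q[:, 1]], 2),
                      torch.cat([q[:, 2], q[:, 3]], 2)], 1)    # (n, 56, 56)
    all4 = labels[idx]                                         # hidden "background" labels
    target = all4[torch.arange(n), torch.randint(0, 4, (n,), generator=rng)]
    return comp, target, all4

def embed(comp, bs=50):
    mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
    outs = []
    for i in range(0, len(comp), bs):
        x = comp[i:i + bs].unsqueeze(1).repeat(1, 3, 1, 1)     # gray -> 3 channels
        x = F.interpolate(x, size=224, mode="bilinear", align_corners=False)
        with torch.no_grad():
            outs.append(vit((x - mean) / std))
    return torch.cat(outs)

rng = torch.Generator().manual_seed(0)
x_tr, y_tr, _ = make_composites(1000, rng)     # 10^3 single-label examples
feats = embed(x_tr)
probe = nn.Linear(768, 10)
opt = torch.optim.SGD(probe.parameters(), lr=0.1)
for _ in range(200):                           # full-batch SGD for brevity
    opt.zero_grad()
    F.cross_entropy(probe(feats), y_tr).backward()
    opt.step()

# Held-out composites: compare the top-4 logits against all four digits.
x_te, _, all4 = make_composites(500, rng)
top4 = probe(embed(x_te)).topk(4, dim=1).indices
hits = [set(t.tolist()) == set(a.tolist()) for t, a in zip(top4, all4)]
print("exact 4-set accuracy:", sum(hits) / len(hits))   # conservative check:
# composites with repeated digits can never match a 4-element top-4 set.
```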
## 9 Conclusions

We have proposed a theoretical framework for understanding the emergence of skills when language models are scaled up. A key insight (see Figure 2) is that reduction in excess cross-entropy loss drives skill acquisition, together with the assumption that normal language--down to the short-paragraph level--already utilizes multiple skills, mixed up randomly. The need for mechanistic insight is sidestepped using the Scaling Law, which quantifies a powerful inductive bias in pre-trained models and is the basis of _slingshot generalization_ (Section 4). One concrete example of this inductive bias is that, in our framework, proficiency in combinations of skills arises just as naturally as proficiency in the individual skills themselves, and need not require seeing examples of all (or even most) of these combinations in the training set. This has relevance to the ongoing debate about the extent of "understanding" that current models have, and their ability to address novel settings. We hope the simplicity of our framework will also encourage further experimental and theoretical study, including extensions to more general language skills such as generation and dialog, and modeling inductive bias at a finer level than the Scaling Laws. (For example, what are the scaling laws for interesting parts of language such as math or coding?) It is also possible that our theory underestimates the rate of emergence, due to unknown mechanisms (e.g., having to do with the workings of transformers) that are left out of our theoretical framework.

The simple and statistical nature of our theory should be seen as a plus: it helps identify which emergence phenomena should not be considered surprising, most notably the emergence of competence on skills as well as on their combinations. But it shares limitations with other statistical frameworks. Competence is guaranteed only on text-pieces drawn from the data distribution, and is governed by the usual \(\epsilon\)-\(\delta\) guarantees: many skills as well as combinations of skills may not get learnt, and the ones that do get learnt may be incorrectly applied on some fraction of the data distribution. Nevertheless, we hope this inspires a more thorough experimental study (our simple experiments give a starting point) of whether or not current language models have capabilities that go beyond simple statistical explanations. Any phenomena not derivable in our framework (or its natural extensions) may be of interest for AI alignment as well as for the better design and understanding of language models.

**Acknowledgements:** We are very grateful to Jonah Brown-Cohen and Timothy Lillicrap for many discussions that motivated us to improve the theory and its exposition. We thank Boaz Barak, Rong Ge, and Nikunj Saunshi for their feedback on the manuscript.
2303.16710
An intelligent modular real-time vision-based system for environment perception
A significant portion of driving hazards is caused by human error and disregard for local driving regulations; consequently, an intelligent assistance system can be beneficial. This paper proposes a novel vision-based modular package to ensure drivers' safety by perceiving the environment. Each module is designed based on accuracy and inference time to deliver real-time performance. As a result, the proposed system can be implemented on a wide range of vehicles with minimum hardware requirements. Our modular package comprises four main sections: lane detection, object detection, segmentation, and monocular depth estimation. Each section is accompanied by novel techniques to improve the accuracy of others along with the entire system. Furthermore, a GUI is developed to display perceived information to the driver. In addition to using public datasets, like BDD100K, we have also collected and annotated a local dataset that we utilize to fine-tune and evaluate our system. We show that the accuracy of our system is above 80% in all the sections. Our code and data are available at https://github.com/Pandas-Team/Autonomous-Vehicle-Environment-Perception
Amirhossein Kazerouni, Amirhossein Heydarian, Milad Soltany, Aida Mohammadshahi, Abbas Omidi, Saeed Ebadollahi
2023-03-29T14:04:59Z
http://arxiv.org/abs/2303.16710v1
# An Intelligent Modular Real-Time Vision-Based System for Environment Perception

###### Abstract

A significant portion of driving hazards is caused by human error and disregard for local driving regulations; consequently, an intelligent assistance system can be beneficial. This paper proposes a novel vision-based modular package to ensure drivers' safety by perceiving the environment. Each module is designed based on accuracy and inference time to deliver real-time performance. As a result, the proposed system can be implemented on a wide range of vehicles with minimum hardware requirements. Our modular package comprises four main sections: lane detection, object detection, segmentation, and monocular depth estimation. Each section is accompanied by novel techniques to improve the accuracy of others along with the entire system. Furthermore, a GUI is developed to display perceived information to the driver. In addition to using public datasets, like BDD100K, we have also collected and annotated a local dataset that we utilize to fine-tune and evaluate our system. We show that the accuracy of our system is above 80% in all the sections. Our code and data are available on GitHub.

## 1 Introduction

As the number of vehicles on the road has grown in recent years, traffic violations, accidents, and fatalities have increased considerably [13]. Alongside the growth in urban traffic, human error plays a significant role in the frequency of road casualties and offenses: careless driving and disregard for traffic signs account for more than 70% of street accidents [6]. Hence, providing a realistic solution to improve driving accuracy and road safety could be highly beneficial. Artificial intelligence has advanced many fields, including the automotive industry [2; 22; 23]. Some companies have offered autonomous vehicles as an alternative to human driving [24], but they are still not commonly used by the public due to their high cost.

Road lane detection, a significant part of the perception system, determines the vehicle's position and its desired trajectory on the road, and remains a challenging problem in intelligent vehicles. The challenges of this problem include disordered, faded, and irregular road markings, along with varying lighting conditions. Furthermore, road lines might be occluded by other cars and obstacles. Early lane detection methods include classical image processing techniques based on image edges and the Hough transform [32; 38; 42; 59; 54]. Despite their simplicity, these algorithms require environment-dependent hyperparameter adjustments. [41] presents a hybrid method based on object detection to improve lane detection accuracy. On the other hand, deep learning-based methods such as the Point Instance Network (PINet) [31] and LaneATT [55] offer precise and robust performance in multiple scenarios with different lighting conditions and occlusions [44; 36; 1; 20]. As a result, deep-learning-based approaches are considered a better choice for practical applications.

Object detection methods are used to detect various objects in the environment. Most detectors feed an image into an artificial neural network and output bounding box locations for each object present in the image, along with probability scores corresponding to each object class. The detector's head connecting to the backbone is usually one- or two-stage.
One-stage detectors, for instance YOLO and SSD, are generally much faster than their two-stage counterparts [26; 7; 47], while two-stage detectors, like the R-CNN family, tend to yield higher accuracy scores [37; 34; 16; 15; 50]. Our first use of object detection is detecting pedestrians and other vehicles, both static and dynamic; this is done to avoid collisions with other objects on the road. Secondly, traffic indicators are detected to ensure vehicles follow the regulations on different roads [12].

An intelligent system should accurately identify the road-sidewalk boundaries and measure its distance from detected objects to prevent accidents. In this case, pixel-wise classification of an image (semantic segmentation) offers impressive accuracy in determining the boundary by recognizing the sidewalk [40; 8; 53; 61; 51; 5]. In addition, perspective-transform approaches [28; 57], which estimate the distance based on the warped image's pixel spacing, and monocular depth estimation methods [58; 39; 10; 33], which measure the distance based on the depth map and camera parameters, have shown promising results. However, the accuracy of perspective-transform-based approaches is highly situation-dependent, and they cannot perform well in all conditions. Moreover, it is not a computationally cost-effective solution to measure the distance and detect the sidewalk through separate models. For this purpose, models based on multi-task learning can be advantageous, since they output both semantic segmentation and depth estimation from one model [64; 62; 9]. These models simultaneously enhance the accuracy of segmentation and depth estimation through their mutual effect during training. For instance, SGDepth [30] outputs the estimated depth map and semantic segmentation from a single input image.

In this research, we have developed a comprehensive package for real-time environmental perception based on computer vision techniques to assist drivers in minimizing driving faults and violations. To the best of our knowledge, we provide a novel combination of traits, and only a few studies are available on the permutations of features mentioned in this paper. For instance, the YOLOP [60] network lacks depth estimation and distance measurement, whereas our package is more extensive. There are two aspects to our real-time intelligent package: software and hardware. The software section consists of four main phases. For the road lane detection part, our module benefits from the PINet [31] neural network, which is trained on the CULane [52] dataset and fine-tuned on our collected local dataset. For the object detection section, we utilize YOLOv5 [27] for detecting vehicles, pedestrians, and traffic signs. In the third phase, we use the SGDepth [30] network for segmentation tasks and recognizing sidewalks. Finally, we introduce a novel approach for measuring the distance to the surrounding cars based on the monocular depth estimation output of SGDepth. In terms of hardware, this module uses only one camera and a mid-range GPU, reducing costs and making it effortless to implement on various vehicles.

## 2 Background

### PINet

The Point Instance Network (PINet) detects traffic lines regardless of their number [31]. It generates points on lanes and separates them into distinct instances.
Figure 1: **Overview of the proposed system.** Continuous lines represent the system modules' main data path, and dashed lines denote the additional data, which aids the accuracy of the system.

As illustrated in Fig. 2, three output branches are included in this network: a confidence branch, an offset branch, and an embedding branch. The confidence and offset branches predict the exact points on the traffic lines. The embedding branch creates the embedding features of the predicted points, which are used in the clustering process to differentiate each instance. First, the input RGB image enters the resizing network, where a sequence of three convolution layers compresses the image to a smaller size. After each convolution layer, PReLU [21] and Batch Normalization [25] are used. Then, the predicting network receives the resizing network's output. It predicts the exact points on the traffic lines as well as the embedding features. There are multiple hourglass modules in this network, each with an encoder, a decoder, three output branches, and some skip-connections. The predicting network can include any number of hourglass modules, and all of them are trained concurrently with the same loss function. Therefore, in the case of running the trained model on a system with limited computing power, the network can be cut down and transferred without extra training. Hourglass blocks have three types of bottlenecks: same, down, and up bottlenecks. The output of the same bottleneck is the same size as the input. In the encoder, the down bottleneck performs down-sampling with its first convolution layer, while the up bottleneck uses a transposed convolution for up-sampling. Each output branch has its own number of channels (confidence: 1, offset: 2, embedding: 4). The loss function associated with each output branch's goal is applied to it, and each confidence output is passed on to the following block.

The PINet loss function consists of four different loss components. Three of them (the confidence, offset, and feature loss functions) are applied in parallel to the network's output branches. The other one (the distillation loss function) optimizes the knowledge-learning process from the deepest hourglass, which is beneficial if the prediction network is cut down to a lighter one. The total loss equals the weighted sum of the above loss functions, as shown below:

\[L_{total}=\gamma_{e}L_{exist}+\gamma_{n}L_{non-exist}+\gamma_{o}L_{offset}+ \gamma_{f}L_{feature}+\gamma_{d}L_{distillation}, \tag{1}\]

where the constant coefficients are obtained experimentally.

### YOLO

The YOLOv5 [27] network is a single-stage detector which consists of three vital parts. The backbone, the initial element, extracts image features at various scales. The neck, which merges the retrieved features, is the second element. The network head predicts the box and class of each object in the picture using the features coming from the neck. Eventually, YOLO encodes all the information into a single tensor for each image [48]. YOLO models the detection problem as regression by dividing the input image into an \(S\times S\) grid. Each grid cell predicts \(B\) bounding boxes with \(x\), \(y\), \(w\), and \(h\), and an "objectness" score \(P(Object)\), which indicates whether or not the grid cell contains an object. In addition, a conditional probability \(P(Class\mid Object)\) is predicted for the class of the object associated with each grid cell. Therefore, YOLO outputs \(B\times 5\) parameters and \(C\) class probabilities for each grid cell. These predictions are represented by a tensor of size \(S\times S\times(B\times 5+C)\). Non-Maximum Suppression (NMS) and thresholding are the last steps in generating the final object detection predictions [48]. The network parameters are trained by minimizing a three-element loss function consisting of GIoU, objectness, and classification terms. The GIoU element minimizes the bounding box prediction error, and the objectness element determines the existence of an object in a grid cell. Finally, the classification error is responsible for object class prediction. YOLOv5 benefits from the previous versions' features, but it is implemented in PyTorch rather than Darknet, making it more flexible with computer vision libraries.
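The grid encoding can be made concrete with a short sketch. The grid size \(S\) and boxes-per-cell \(B\) below are illustrative choices (they are not stated in this paper), while \(C=15\) matches the fifteen traffic-sign classes used later.

```python
import numpy as np

S, B, C = 7, 2, 15                               # grid, boxes/cell, classes
pred = np.random.rand(S, S, B * 5 + C)           # stand-in network output

boxes = pred[..., :B * 5].reshape(S, S, B, 5)    # (x, y, w, h, objectness)
cls = pred[..., B * 5:]                          # P(class | object) per cell

# Class-specific confidence per box: P(object) * P(class | object).
conf = boxes[..., 4:5] * cls[:, :, None, :]      # shape (S, S, B, C)
print(pred.shape, conf.shape)
```

Thresholding `conf` and running NMS over the surviving boxes would then yield the final detections.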
### SGDepth

SGDepth is a novel monocular self-supervised approach for estimating depth information from single images. The process is semantically guided, meaning it utilizes information obtained from segmentation maps and does not require depth labels. As shown in Fig. 3, the approach consists of two main components: the depth part, which is trained in a self-supervised fashion, and the segmentation part, which utilizes a supervised training scheme. For models to estimate depth maps from sequences of images, the world must be static, i.e., between two consecutive frames of a video, all objects must remain in their absolute positions. Moving dynamic-class (DC) entities, including passing vehicles and pedestrians, violate the static-world assumption. Hence, the correct projections between sequences of frames, which are necessary for self-supervised depth estimation, cannot be calculated. In the training phase, SGDepth ignores such objects in its optimization.

**Self-Supervised Monocular Depth Estimation.** Instead of training the model on depth labels in a supervised manner, the predicted depth maps are treated as geometric properties of the environment and used to warp the preceding and succeeding frames \(\mathbf{x}_{t^{\prime}}\), with \(t^{\prime}\in\mathcal{T}^{\prime}=\{t-1,t+1\}\), to \(\mathbf{x}_{t}\) at time \(t\). Afterward, a photometric loss \(J_{t}^{\mathrm{ph}}\) is computed between \(\mathbf{x}_{t}\) and \(\mathbf{x}_{t^{\prime}\to t}\); this ensures that the transformed images \(\mathbf{x}_{t^{\prime}\to t}\) are as close as possible to \(\mathbf{x}_{t}\). On top of that, a smoothness loss is responsible for making sure that nearby pixels have similar depth values [17; 18].

**Supervised Semantic Segmentation.** A segmentation mask \(\mathbf{m}_{t}\in\mathcal{S}^{H\times W}\), where \(\mathcal{S}\) is a set of classes, is obtained by assigning a class to each pixel coordinate. This is done through a supervised training method, and the segmentation head's outputs (Fig. 3) \(\mathbf{y}_{t}\in\mathbb{I}^{H\times W\times S}\) are compared to the ground truth labels \(\mathbf{\overline{y}}_{t}\) using a weighted cross-entropy loss [43].

**Semantic Guidance.** As shown in Fig. 3, two decoders are attached to one encoder (the Feature Extractor block). In the backward propagation stage, the gradients returning from the two decoders are scaled to form \(\mathbf{g}^{\mathrm{total}}\), which is propagated back into the encoder. This is how multi-task training is done across the two domains. To deal with moving DC objects, projected segmentation maps are calculated using nearest-neighbor sampling.
In addition, a DC object mask \(\mathbf{\mu}_{t}\in\{0,1\}^{H\times W}\) is computed, which has zero values for all pixel coordinates that belong to a DC class \(\mathcal{S}_{\mathrm{DC}}\) in any one of the three frames. This mask is then applied to compute a semantically-masked photometric loss. If a DC object has moved between two consecutive frames, the warped semantic map \(m_{t^{\prime}\to t,i}\) and the semantic mask \(m_{t}\) will be inconsistent. To measure this, the intersection over union (IoU) of DC objects in \(m_{t^{\prime}\to t,i}\) and \(m_{t}\) is calculated and denoted as \(\Lambda_{t,t^{\prime}}\). Then a threshold \(\theta_{\Lambda}\in[0,1]\) is used to decide whether a frame is static or dynamic, in which case the photometric loss or the masked photometric loss is applied, respectively. After this, the total loss is a combination of the cross-entropy loss and the smoothness loss, as well as the photometric losses.

## 3 Method

As previously mentioned, our modular intelligent system includes four parts. In the lane detection section, a series of operations are carried out to display the road lines on the module screen. This section aims to help drivers keep their position in the lane and notice their deviation toward the off-road side. In the object detection section, surrounding objects, such as obstacles on the road and traffic signs, are detected so that the system alerts drivers to route appropriately and follow traffic laws. In the segmentation section, the sidewalk segments produced by the semantic segmentation process are shown on the screen, which aids drivers in perceiving their environment. Finally, in the distance measurement part, the distance from nearby cars is estimated using a monocular depth estimation approach to avoid collisions and accidents. Each of the module's four parts utilizes a base deep learning model, but some do not produce robust and proper results on their own, necessitating some creativity to solve these problems. This section describes how the issues are overcome and what novelties are employed to boost the models' performance and create an intelligent modular system. Fig. 1 illustrates the overall system diagram.

### Lane Detection

We have used the PINet model for the lane detection part because, as observed from the experiments reported in Table 5, it demonstrates greater accuracy than its rivals while maintaining the required frames per second (FPS). However, the model requires considerable processing power, which results in a noticeable FPS drop in the performance of our entire module, so PINet is considered a trade-off between accuracy and FPS. Although PINet shows accurate results for determining road lines, some environmental conditions, like poor illumination, cause severe errors in PINet's predictions. In these cases, traffic lines are detected outside the road space. As a result, a novel dynamic region of interest (ROI) is designed and applied to the image to avoid off-road lines being identified. Our experiments show that all such issues are addressed after adding this modified ROI to the PINet output, as can be seen in Fig. 4. Building the modified ROI begins with deriving the road mask from the segmentation map output of SGDepth. The road mask may have some empty areas caused by removing vehicles and objects from the road. Next, the convex hull approach is applied to the road segment to obtain a convex space covering all the road mask edges [4]. After this, a modified and seamless ROI is obtained from the road by filling the convex space gaps.
The utilized convex hull formula is as follows:

\[\text{ROI }=\text{Co}(X)=\left\{\sum_{i=1}^{q}\alpha_{i}x_{i}\mid x_{i}\in X, \alpha_{i}\geqslant 0,\sum_{i=1}^{q}\alpha_{i}=1\right\} \tag{2}\]

where \(X=\{x_{1},x_{2},\ldots,x_{q}\}\) denotes the set of points whose covered area is extracted by the convex hull. Morphological erosion and dilation are also applied for noise cancellation and to create soft margins for the mask [19]. The erosion of the binary mask by the structuring element \(B\) is defined by:

\[\text{ROI }\ominus B=\{z\in E\mid B_{z}\subseteq\text{ ROI }\} \tag{3}\]

where \(E\) is a Euclidean space, the structuring element \(B\) is a circular disc in the plane, and \(B_{z}\) is the translation of \(B\) by the vector \(z\). Also, the dilation of the binary mask by the structuring element \(B\) is defined by:

\[\text{ROI }\oplus B=\{z\in E\mid\left(B^{s}\right)_{z}\cap\text{ ROI }\neq\emptyset\} \tag{4}\]

where \(E\) and \(B\) are the same as above, and \(B^{s}\) denotes the symmetric of \(B\). Finally, by applying the modified ROI to the input image, the surroundings are removed and only the in-road detected lines, which specify the lanes, remain. Another challenge at this stage is handling roads and ROIs that do not contain traffic lines. To solve this, we introduce a new dataset of locally collected images in which the roads do not include traffic lines. Traffic lines are then labeled manually, and the network is fine-tuned on a combination of our introduced dataset and pre-existing datasets. As a result, due to the key-point-extraction nature of PINet, key points are extracted well even in the absence of white lines and in noisy conditions, and the fine-tuned PINet is able to divide the road into hypothetical lanes. The dataset introduced in this section is available on the project's GitHub.
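A minimal OpenCV sketch of this ROI construction is given below; the structuring-element size is our own assumption, as the paper does not state it.

```python
import cv2
import numpy as np

def build_roi(road_mask, kernel_size=15):
    """Dynamic ROI: convex hull of the SGDepth road mask, then morphology."""
    mask = (road_mask > 0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros_like(mask)
    points = np.vstack(contours)             # all road-mask edge points
    hull = cv2.convexHull(points)            # Co(X) in equation (2)
    roi = np.zeros_like(mask)
    cv2.fillConvexPoly(roi, hull, 255)       # fill gaps inside the convex space
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size,) * 2)
    roi = cv2.erode(roi, kernel)             # equation (3): noise cancellation
    roi = cv2.dilate(roi, kernel)            # equation (4): soft margins
    return roi

# Keeping only PINet points that fall inside the ROI, e.g.:
# lane_pts = [(x, y) for (x, y) in lane_pts if roi[y, x] > 0]
```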
### Object Detection

For the object detection part, the YOLO network has been used, which is a state-of-the-art, real-time object detection system. We have chosen the YOLOv5 version because, as shown in Table 6, it provides significantly higher accuracy and FPS compared to its counterparts. Utilizing the pre-trained YOLO object detection model, vehicles and pedestrians are detected, and bounding boxes are drawn around them on the monitor to alert the driver. However, identifying traffic signs is challenging, since solely detecting them is not sufficient; it is necessary to recognize their meaning so that the driver can follow them. As a result, the YOLO network has been fine-tuned on a combination of multiple traffic sign datasets to both detect traffic signs and identify their type. The final acquired dataset includes fifteen traffic signs, most of which are similar in shape (e.g., circular with a red margin) but different in meaning. As a result of fine-tuning the model on this dataset, the traffic signs are accurately identified. After a sign is detected, its name is displayed on the driver's monitor, depending on the type of detected sign. Also, for identifying traffic light colors, after detecting the traffic lights with YOLO, we use a lightweight CNN-based classifier based on RegNetY002 [45] to classify four states: red, yellow, green, and off. When a traffic light is detected, the identified color of the light determines the displayed message: if the light is green, the word "pass" is displayed, and if the light is yellow or red, the word "warning" or "stop" is displayed, respectively. The results of the object detection section are improved compared to the network's initial version after gathering the datasets and applying the modifications mentioned above, as shown in Table 2.

Figure 4: Summary of the implemented method in the lane detection section.

### Segmentation

We utilize image segmentation for two primary purposes: a) avoiding curb collisions by detecting sidewalks, and b) accurately measuring the distance from the vehicle to other objects. In the SGDepth model, which is a multi-task learning network, both the task of measuring the distance through the monocular depth estimation approach and the task of separating the sidewalk segment from the road are performed at the same time. Typically, environmental conditions have a significant influence on the accuracy of the model. Still, by employing this multi-tasking method, the extracted features of both tasks are shared in the network, leading to better generalization and results. SGDepth is trained on the Cityscapes dataset [11], and we use the sidewalk, car, and bus classes to segment the scene with respect to these class labels. In addition, sidewalks are colorized and displayed on the monitor. Since the segmentation is usually quite noisy, the approach still needs some modifications to perform appropriately. We deploy contour analysis and convex hulls to smooth the sharp edges of the segmentation map. First, we use morphological functions to dilate the segmented area for each class and fill the holes inside the map that are the outcome of environmental noise. After that, the contours belonging to each semantic class are detected and sorted based on their enclosed area, and contours whose area is smaller than a constant threshold are discarded. The remaining contours are then converted to convex hulls to compensate for the concavity that might occur in some instances. This way, the imperfections present in the initial segmentation map are resolved considerably, and a much smoother segmentation map is obtained.

### Distance Measurement

In this section, we propose a novel hierarchical approach that exploits the SGDepth and YOLO outputs to attain an accurate distance measurement. Our method consists of two major stages. The first stage is responsible for perceiving the object of interest in the scene. A more in-depth object analysis is performed in the second stage to obtain the final distance measurement. Since SGDepth operates on the entire scene, it is vital to identify the surrounding objects first. Therefore, as seen in Fig. 5, the object detection output is used to crop the detected object's depth map and its segmentation mask based on the bounding box coordinates. We then take advantage of the acquired mask and apply it to the detected depth map to remove the background. In order to eliminate the effect of zero-pixel values, we replace them with NaN so they do not affect the distance measurement calculations. The obtained mask may contain pixels from the background, which are assumed to be outliers; their presence may negatively affect the distance measurement. To mitigate their impact, we apply min pooling on the depth map with a kernel size of \(3\times 3\) in the second stage.
The min pooling operation with kernel size \(K\) is defined as follows:

\[\hat{I}_{s,p}=\min_{i,j}\left(I_{i+K\times s,\,j+K\times p}\right)\qquad\text{Min Pooling}:I^{H\times W}\longrightarrow\hat{I}^{\lceil\frac{H}{K}\rceil\times\lceil\frac{W}{K}\rceil} \tag{5}\]

for \(i,j=0,1,...,K-1\), where \(s\in S=\left\{0,1,...,\left\lceil\frac{H}{K}\right\rceil-1\right\}\) and \(p\in P=\left\{0,1,...,\left\lceil\frac{W}{K}\right\rceil-1\right\}\) determine the set of new coordinates after applying min pooling. To ensure that noisy depth points are not involved in the measurement procedure, we propose a Gaussian Noise Removal module that fits a Gaussian distribution to the depth values. Points whose distances are within the interval \(\left[\mu-2\sigma,\mu+2\sigma\right]\) are regarded as inliers and remain, while the rest of the points are excluded and given NaN values so they do not participate in the distance measurement process. The goal of the Gaussian Noise Removal module is to find the inlier points such that

\[\text{Inlier Points}=\left\{\hat{I}_{s,p}\mid\mu-2\sigma<\hat{I}_{s,p}<\mu+2\sigma\right\} \tag{6}\]

where \(\sigma\) and \(\mu\) denote the standard deviation and mean, respectively.

Figure 5: **The overview of the distance measurement approach. In the first stage, the object of interest is derived from the input image. Then the distance from the perceived object is determined in the second stage.**

In addition, it is necessary to apply average pooling to the acquired depth map for two main reasons: (i) to smooth the depth values and suppress the remaining sharp values, and (ii) to obtain an accurate value to report as the distance. However, one of the challenging problems in estimating the distance is the object's image size, and applying the prior min pooling reduces its spatial dimensions by a factor of three. As a result, the kernel size of the average pooling should be compatible with the object's depth map size to guarantee that we have enough depth points for the measurement step. Hence, to accommodate objects with both large and small depth maps, we propose a grouped average pooling with different kernel sizes: \(2\times 2\), \(3\times 3\), and \(5\times 5\). The output of each average pooling is then flattened and concatenated. The NaN Removal module is subsequently employed to eliminate the NaN values. Eventually, the distance to the surrounding objects is obtained by globally averaging the resulting depth points.
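The second stage of Fig. 5 can be sketched in NumPy as follows; the padding behaviour and other small details are our reading of the figure and equations (5)-(6), not the authors' code.

```python
import numpy as np

def measure_distance(depth_crop, mask_crop):
    """Distance to one detected object from its cropped depth map and mask."""
    d = np.where(mask_crop > 0, depth_crop, np.nan)   # background/zeros -> NaN

    # Stage 2a: 3x3 min pooling (equation (5)) to suppress background outliers.
    K = 3
    ph, pw = -d.shape[0] % K, -d.shape[1] % K         # pad so dims divide by K
    d = np.pad(d, ((0, ph), (0, pw)), constant_values=np.nan)
    d = np.nanmin(d.reshape(d.shape[0] // K, K, d.shape[1] // K, K),
                  axis=(1, 3))                        # all-NaN blocks stay NaN

    # Stage 2b: Gaussian noise removal, keep [mu - 2 sigma, mu + 2 sigma] (eq. (6)).
    mu, sigma = np.nanmean(d), np.nanstd(d)
    d = np.where((d > mu - 2 * sigma) & (d < mu + 2 * sigma), d, np.nan)

    # Stage 2c: grouped average pooling (2x2, 3x3, 5x5), flatten, concatenate.
    pooled = []
    for k in (2, 3, 5):
        ph, pw = -d.shape[0] % k, -d.shape[1] % k
        dk = np.pad(d, ((0, ph), (0, pw)), constant_values=np.nan)
        dk = np.nanmean(dk.reshape(dk.shape[0] // k, k, dk.shape[1] // k, k),
                        axis=(1, 3))
        pooled.append(dk.ravel())

    # NaN removal and global averaging yield the reported distance.
    return np.nanmean(np.concatenate(pooled))
```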
## 4 Experiments

For traffic sign detection, we needed both richness and diversity in the signs. To this end, we used a combination of the DFG Traffic Sign Dataset [56] and the Traffic-Sign Detection and Classification in the Wild dataset [66]. Furthermore, we used the Common Objects in Context (COCO) dataset [35] for the detection of pedestrians and other vehicles. To evaluate the overall performance of our proposed system in all sections, we combined the BDD100K dataset with our collected local dataset. BDD100K [63] is an extensive collection of 100,000 short videos of driving in various weather and driving conditions. Our comprehensive local dataset contains 100 fifteen-second 30-FPS videos of vehicles, roads, traffic signs, pedestrians, and sidewalks.

### Hardware Configuration

The proposed modular package requires a high-resolution camera, a GPU, and a display. A 1080p camera captures images from the vehicle's perspective, an Nvidia GTX 1660 Ti GPU is in charge of tensor processing in the neural networks, and a display presents the system's results and any necessary alarms to the user. In real-world experiments, some additional hardware components are required for implementation, so a laptop is employed as the hardware platform in the vehicle to avoid this complexity. Fig. 6 shows the system implemented in a standard vehicle.

Figure 6: Implementation of the system in the urban environment. The hardware components include the camera and the laptop, which serves as the platform for running the package. The output information is displayed to the driver using the GUI.

### Experimental Results

Our intelligent system evaluation consists of three phases: assessment of the area recognition sections, the object detection parts, and the distance measurement. Evaluations are performed on 500 frames extracted from the system's output on the BDD100K video dataset and our local dataset. The area recognition section includes detecting sidewalks and traffic lanes. The object detection section comprises identifying pedestrians, parked and moving vehicles, traffic signs, and traffic lights with their colors. Finally, distance labels have been utilized to evaluate the distance measurement section. The assessment methods for each step and the result tables are explained below.

**Lane and sidewalk segmentation.** The segmentation of lane areas and sidewalks is evaluated using the Intersection over Union (IoU) metric, i.e., the intersection of the predicted and labeled masks over their union. The road lane masks are derived from the regions between the detected lines located in the modified ROI, and the sidewalk masks are the modified SGDepth outputs. These masks are compared to their labels based on the IoU criterion, and a 0.5 threshold is utilized to calculate the output accuracy. Predictions with IoU greater than 0.5 are regarded as valid, and those with IoU less than 0.5 are considered false. The acquired results are displayed in Table 1.

\begin{table}
\begin{tabular}{l l c c}
 & & **IoU** & **Confidence** \\ \hline \hline
\multirow{2}{*}{**Lane**} & _Local_ & 0.861 & 91.38\% \\
 & _BDD100K_ & 0.842 & 89.11\% \\ \hline
\multirow{2}{*}{**Sidewalk**} & _Local_ & 0.795 & 83.54\% \\
 & _BDD100K_ & 0.762 & 80.23\% \\ \hline \hline
\end{tabular}
\end{table} Table 1: Evaluation of the sidewalk and lane segmentation sections.

**Object detection.** The performance of our system in object detection is evaluated using the precision, recall, F1-score, and accuracy criteria. Notably, detected traffic lights are evaluated as correct predictions only if their color is also classified correctly, as recognizing traffic lights without interpreting their color would not deliver the right traffic message. The findings are shown in Table 2, which indicates that the system's accuracy is not less than 80% in any subsection.

\begin{table}
\begin{tabular}{l l c c c c}
 & & **Precision** & **Recall** & **F1-score** & **Accuracy** \\ \hline \hline
\multirow{2}{*}{**Vehicles**} & _Local_ & 90.43\% & 83.71\% & 90.75\% & 83.08\% \\
 & _BDD100K_ & 94.92\% & 85.62\% & 90.03\% & 81.87\% \\ \hline
\multirow{2}{*}{**Pedestrians**} & _Local_ & 95.24\% & 90.91\% & 93.02\% & 86.96\% \\
 & _BDD100K_ & 92.99\% & 98.29\% & 90.91\% & 83.33\% \\ \hline
\multirow{2}{*}{**Traffic Signs**} & _Local_ & 93.31\% & 87.94\% & 93.28\% & 93.28\% \\
 & _BDD100K_ & 88.88\% & 94.12\% & 91.43\% & 82.41\% \\ \hline
\multirow{2}{*}{**Traffic Lights**} & _Local_ & 94.74\% & 90.00\% & 92.31\% & 85.71\% \\
 & _BDD100K_ & 93.55\% & 85.29\% & 89.23\% & 80.56\% \\ \hline \hline
\end{tabular}
\end{table} Table 2: Evaluation of the object detection section.
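The mask-level IoU criterion used for Table 1 reduces to a few lines:

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU between two binary masks, as used for Table 1."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

# A prediction counts as valid when its IoU exceeds the 0.5 threshold.
is_valid = lambda pred, gt: mask_iou(pred, gt) > 0.5
```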
**Distance measurement.** We utilize relative accuracy (RA) as the metric for the distance measurement section of our proposed system. Relative accuracy is defined using the following formula:

\[RA=1-\frac{|Distance_{actual}-Distance_{predicted}|}{Distance_{actual}} \tag{7}\]

If \(RA\) is greater than 0.8, then the predicted distance is considered correct. The accuracy of our system is then the percentage of correct predictions over the entire dataset. The system accuracy and mean relative accuracy are reported for both the local and BDD100K datasets in Table 3.

Figs. 7-8 show the performance of the proposed perception system on the local and BDD100K datasets. A graphical user interface (GUI) is placed on the right side to present traffic information to the driver. The GUI information includes the traffic light status, traffic signs, and the distance to the nearest car, so that the driver can be aware of the situation at a glance. Human reaction time (HRT) in driving is the time between when the driver is placed in a critical situation and when they decide to take action [29]. While a constant value for HRT cannot be reported, different methods report values between 1.27 s and 1.55 s [46; 3]. We consider our system real-time because its overall response time is more than six times shorter than that of humans. It is possible to connect our module to the vehicle's braking system so that it can slow down in critical situations; this adds the benefit of much faster response times compared to those of humans, rendering our system real-time.

**Ablation on Lane Detection.** We evaluate different SOTA lane detection approaches on our proposed local and BDD100K datasets in terms of F1 accuracy, IoU, confidence, and FPS. IoU is first computed between predictions and ground truth, and lanes whose IoU exceeds a threshold (0.5) are considered true positives (TP). It is evident from Table 5 that despite CondLaneNet's [36] and CLRNet's [65] high FPS, their accuracy is significantly inferior to PINet's. This suggests that such approaches are not generalizable across different environments. Specifically, they failed when no lines were present in the scene or when other environmental objects obscured the lines. PINet, however, has shown promising results when tested in a new setting, despite some errors that often occur under poor lighting conditions, which we discussed how to mitigate in Section 3.1. Overall, we selected PINet as our lane detector since it delivers the requisite FPS for the system's real-time performance while attaining high accuracy in recognizing lanes.

**Ablation on Object Detection.** Table 6 shows the performance of current SOTA approaches for the object detection task on our proposed local dataset and BDD100K. The results indicate that, despite the slight differences between approaches, YOLOv5 [27] outperforms all the latest object detection methods in terms of mAP, mAR, and FPS, making it the best choice for our object detection module.
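For reference, equation (7) and the 0.8 acceptance threshold translate directly into code; the sample distances below are made up purely for illustration.

```python
import numpy as np

def distance_accuracy(actual, predicted, ra_threshold=0.8):
    """Mean relative accuracy (equation (7)) and fraction of correct predictions."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ra = 1 - np.abs(actual - predicted) / actual
    return ra.mean(), (ra > ra_threshold).mean()

mean_ra, acc = distance_accuracy([10.0, 25.0, 40.0], [9.1, 26.5, 31.0])
print(f"mean RA = {mean_ra:.3f}, accuracy = {acc:.0%}")
```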
## 6 Discussion and Limitations

Our experimental results demonstrate the power of our proposed modular system in perceiving the environment and supporting safe driving. The selected networks were chosen based on the trade-off between accuracy and FPS relative to their counterparts and show highly accurate results. The package FPS could be even higher: PINet alone has only moderate FPS, and when this network is added to the package its speed is reduced, even though the total package stays real-time. Also, we may encounter missing frames on rare occasions, but because it is only one frame out of 30, for instance, it is hardly visible. Nevertheless, resolving these minor missing frames would further improve the package's accuracy. In addition, with further development, we could also connect the package to the brake and gas pedals, such that after detecting an object in front of the car at a distance below a threshold, the brake is activated, and the gas pedal is actuated according to the distance from the cars ahead and the speed limit. This brings us closer to safe driving without the high costs of self-driving vehicles. Overall, the seamless integration of multiple networks, the appropriate changes applied to improve their performance, and the designed GUI have created a system that gives the necessary warnings to the driver while driving and reduces the risks of driving.

## 7 Conclusions

In this paper, we have proposed an intelligent modular vision-based system to assist drivers in safe driving by alerting them at critical moments. There are four main stages in our proposed system: lane detection, object and sign detection, sidewalk segmentation, and distance measurement. PINet is used for lane detection and fine-tuned with a combination of our presented local dataset and BDD100K to alleviate its line-free road estimation issue. To prevent PINet from being impacted by environment-related disturbances, a novel dynamic ROI is applied to the PINet output. YOLOv5 has also been utilized to detect 3 class labels and 15 distinct traffic signs. Moreover, having leveraged the SGDepth outputs, monocular depth estimation and segmentation, we have developed a novel hierarchical method to precisely calculate the distance from neighboring vehicles. In addition, a graphical user interface (GUI) is designed to inform the driver about the traffic light status, the nearest distance to the adjacent vehicles, and information about traffic signs ahead. Extensive experiments on our local and BDD100K datasets demonstrate that our proposed system performs noticeably well in different environments with different scenarios and conditions and can be an excellent tool to reduce human errors. In the future, we will concentrate on making our system more efficient so that it can operate on embedded boards.

\begin{table}
\begin{tabular}{l l c c c}
 & & **mAP** & **mAR** & **FPS** \\ \hline
\multirow{2}{*}{**YOLOv3 [14]**} & _Local_ & 95.90\% & 89.27\% & \multirow{2}{*}{59.6} \\
 & _BDD100K_ & 90.72\% & & \\ \hline
\multirow{2}{*}{**YOLOv3 [49]**} & _Local_ & 90.76\% & 87.94\% & \\
 & _BDD100K_ & 85.84\% & 91.43\% & \\ \hline
\multirow{2}{*}{**YOLOv4 [7]**} & _Local_ & 93.33\% & 87.50\% & \multirow{2}{*}{32.1} \\
 & _BDD100K_ & 88.88\% & 94.12\% & \\ \hline
\multirow{2}{*}{**YOLOv5 [27]**} & _Local_ & 96.01\% & 90.91\% & \\
 & _BDD100K_ & 92.42\% & 96.28\% & \\
\end{tabular}
\end{table} Table 6: Evaluation of different object detection approaches. An NVIDIA Tesla T4 is used for evaluation.
2308.14275
A search for a muon to electron conversion in COMET
The COMET experiment at J-PARC, Japan, aims to search for muon to electron conversion with aluminium nuclei, achieving a sensitivity four orders of magnitude higher than the current upper limit at a 90% confidence level. The proton beam line has recently been completed, and muons have been successfully transported through the curved solenoid in the Phase-alpha of the experiment. In this paper, we will present preliminary results from the Phase-alpha beam measurement, the status of the intermediate sensitivity experiment (COMET Phase-I), and the ultimate goal of COMET Phase-II.
Yuki Fujii
2023-08-28T03:13:57Z
http://arxiv.org/abs/2308.14275v1
# A search for a muon to electron conversion in COMET

###### Abstract

The COMET experiment at J-PARC, Japan, aims to search for muon to electron conversion with aluminium nuclei, achieving a sensitivity four orders of magnitude higher than the current upper limit at a 90% confidence level. The proton beam line has recently been completed, and muons have been successfully transported through the curved solenoid in the Phase-alpha of the experiment. In this paper, we will present preliminary results from the Phase-alpha beam measurement, the status of the intermediate sensitivity experiment (COMET Phase-I), and the ultimate goal of COMET Phase-II.

Particle detectors; Spectrometers

## 1 Introduction

The COMET experiment searches for the transition of a muon to an electron without the emission of neutrinos, also known as \(\mu\)-\(e\) conversion. This process violates the conservation law of lepton flavour and is strictly forbidden in the Standard Model of particle physics (SM). Even when considering the minimal extension of the SM with neutrino oscillations, the conversion rate (\(CR(\mu^{-}N\to e^{-}N)\)) is strongly suppressed to approximately \(10^{-54}\), which is completely undetectable. On the other hand, a large enhancement of \(CR(\mu^{-}N\to e^{-}N)\), up to \(10^{-15}\), is predicted in many BSM models, e.g., Leptoquark models [1]. The signal electron is essentially monochromatic, with an energy \(E_{\mu e}=M_{\mu}-B_{\mu}-E_{recoil}\approx 105\) MeV in the case of muonic aluminium. This peculiar feature, together with the low background level, makes \(\mu\)-\(e\) conversion an ideal tool to investigate BSM physics. The current upper limit on \(CR(\mu^{-}N\to e^{-}N)\) is \(7\times 10^{-13}\) at 90% confidence level, set by the SINDRUM II experiment [3]. In COMET, our primary target is to achieve a 100 times better single event sensitivity of \(10^{-15}\), which is called COMET Phase-I [2], followed by another two orders of magnitude improvement in sensitivity in Phase-II. This allows us to indirectly explore new-physics energy scales up to 100-1000 TeV. Furthermore, complementary searches with other muon CLFV modes (\(\mu\to e\gamma\) and \(\mu\to eee\)) will provide a more detailed investigation of new physics by looking at slightly different parameter space. In addition to these two stages, we recently performed the first muon beam delivery to the COMET experimental area with a set of detectors, which is called COMET Phase-alpha. In this paper, we first introduce the event features of the signal and backgrounds; then we report details of the experimental apparatus used in Phase-I and Phase-II and the current preparation status. The preliminary results from the first muon beam delivery and profile measurement in COMET Phase-alpha are also briefly presented.

## 2 Signal and Backgrounds in COMET

In the SM, a muon decays into an electron and two neutrinos to conserve lepton flavour: \(\mu^{-}\to e^{-}\nu_{\mu}\overline{\nu_{e}}\). If this happens while the muon is bound to an atom, it is known as muon decay in orbit (DIO). Due to the presence of a nucleus, DIO electrons receive recoil energy and can have a broad energy spectrum with an endpoint of \(\approx\)105 MeV; hence, DIO is an intrinsic background for the signal event. Alternatively, a muon is captured by a nucleus, emitting a neutrino and neutrons/protons/alpha particles. The capture process also includes a radiative process called radiative muon capture (RMC).
This process produces a photon or an \(e^{+}e^{-}\) pair in the final state. However, the maximum energy of the gamma-rays is sufficiently lower than the conversion electron energy to make this background negligible. The DIO events can only be discriminated by their energy; therefore, a precise momentum measurement is a key design factor. According to our simulation-based studies, a momentum resolution of 200 keV/\(c\) is required to achieve a single event sensitivity of \(10^{-15}\). To lower the sensitivity by another factor of 100, this resolution has to be improved further, down to 150 keV/\(c\). In the case of a signal event, a muon stops inside the target and forms a muonic atom in the \(1s\) ground state. The electron emitted from the muonic aluminium has a monochromatic energy of 105 MeV, as already explained, with a decay time of \(\tau_{\mu}=864\) ns. Other sources of background are the electrons produced by the in-flight decays of muons or pions. In this case, the time delay due to the muonic atom lifetime is not present, so this prompt background can be suppressed by a timing cut. We therefore adopt a pulsed beam structure with a separation of \(\sim\)1.2 \(\mu\)s between the pulses and use a delayed measurement window between 700 ns and 1170 ns after the bunch arrival on the target to avoid those prompt backgrounds in COMET. In this scheme, it is also important to maintain a highly pulsed structure, since any leakage protons appearing between bunches can create a signal-like secondary particle in this time window. We quantify this with an "extinction" factor, defined as \(R_{\rm ext}=(\text{\# of protons between bunches})/(\text{\# of protons in a main bunch})\); this \(R_{\rm ext}\) should be less than \(10^{-10}\) (\(10^{-11}\)) to accomplish the Phase-I (Phase-II) sensitivity. The last background is the cosmic-ray-induced background, randomly produced inside the detector with an energy close to 105 MeV. The cosmic-ray background will be suppressed by covering the entire detector with a cosmic-ray veto detector.

## 3 Apparatus

**Muon beam.** To produce the world's most intense muon beam, we use a proton beam accelerated up to 8 GeV in the main ring (MR) of the proton synchrotron at J-PARC (Japan Proton Accelerator Research Complex), Japan. To realise the ideal bunch separation timing, only one of every two buckets is filled with protons at the rapid cycling synchrotron, which accelerates the proton beam from 400 MeV to 3 GeV before the MR. This corresponds to 4 out of 9 total buckets in the main ring, with a bunch separation of 1.17 \(\mu\)s. After the acceleration, the protons are slowly extracted towards the J-PARC Hadron Experimental Facility using one or two electrostatic septa (ESS) and multiple magnetic septa (SMS), while keeping the bunched structure (BSX: Bunched Slow Extraction) [4]. The protons are delivered to the COMET beam area and impinge on the pion production target, which is surrounded by the 5 T pion capture solenoid. Through hadronic interactions, pions are produced, and the low-momentum backscattered pions are efficiently transferred towards the 3 T muon transport solenoid (MTS). Most of the pions decay into muons before reaching the exit of the MTS.
## 3 Apparatus

**Muon beam:** To produce the world's most intense muon beam, we use the proton beam accelerated up to 8 GeV in the main ring (MR) of the proton synchrotron at J-PARC (Japan Proton Accelerator Research Complex), Japan. To realise the ideal bunch-separation timing, only one of the two buckets is filled with protons at the rapid cycling synchrotron, which accelerates the proton beam from 400 MeV to 3 GeV before the MR. This corresponds to 4 filled out of 9 total buckets in the main ring, with a bunch separation of 1.17 \(\mu\)s. After the acceleration, protons are slowly extracted towards the J-PARC Hadron experimental facility by using one or two electrostatic septa (ESS) and multiple magnetic septa (SMS) while keeping the bunched structure (BSX: Bunched Slow Extraction) [4]. The protons are delivered to the COMET beam area and impinge on the pion production target, which is surrounded by the 5 T pion capture solenoid. Through the hadronic interaction, pions are produced, and low-momentum backward-scattered pions are efficiently transferred towards the 3 T muon transport solenoid (MTS). Most of the pions decay into muons before reaching the exit of the MTS. Due to the curved structure of the MTS, charged particles drift vertically while travelling inside, depending on their charge and transverse momentum; an additional dipole field is used to compensate the drift of low-momentum negative muons and keep them around the centre at the end of the MTS.

**Phase-I:** In Phase-I, we will conduct the physics measurement using a Cylindrical Detector (CyDet) system placed at the end of the first 90 degrees of the MTS (see Figure 1), inside the 1 T solenoidal magnetic field of the Detector Solenoid (DS). The muon stopping target, a series of 200 \(\mu\)m thick aluminium disks, is located inside the CyDet. The cylindrical shape helps to avoid the remaining beam particles, such as pions and their secondaries, that concentrate on the beam axis with small transverse momentum, as well as filtering out most of the DIO electrons, whose momentum is below 70 MeV/\(c\). The CyDet system consists of a Cylindrical Drift Chamber (CDC) and a set of Cylindrical Trigger Hodoscope (CTH) detectors, as shown in Figure 2(a). The CDC, filled with a He:\(i\)C\({}_{4}\)H\({}_{10}\) (90:10) gas mixture, contains nearly 5,000 anode wires, all in a stereo configuration, and is used to reconstruct the momentum of charged particles [5]. The wires are read out by 105 RECBE boards with minor adaptations. The expected momentum resolution is about 200 keV/\(c\), which is good enough to achieve the Phase-I physics sensitivity. The CTH detectors are located at both ends of the CDC and consist of two layers of concentric rings containing 64 counters each [7]. Each counter is a plastic scintillator (BC-408) with height\(\times\)width\(\times\)length of \(5\times 88\times 360\) (\(10\times 88\times 340\)) mm\({}^{3}\) for the inner (outer) layer. All scintillators are connected to fibre bundles so that the scintillation photons can be read out by silicon photomultipliers (SiPMs); since SiPMs are relatively vulnerable to radiation damage, they are placed in a cooling box outside the detector solenoid, where the neutron level is an order of magnitude lower. Due to the extremely high hit-rate environment, fake triggers could be an issue, especially in the Phase-I physics measurement. To suppress the trigger rate caused by non-signal-like fake events as much as possible, an online trigger system utilising machine-learning-based algorithms inside field-programmable gate arrays (FPGAs) is introduced in the CyDet system, which is called COTTRI (COmeT TRIgger) [11]. This will keep the trigger rate below 13 kHz, which is within the maximum available rate for our data acquisition (DAQ) system.

Figure 1: Overview of the COMET Phase-I and Phase-II conceptual design.

We will also perform a direct beam measurement by replacing the CyDet with a set of planar-shaped detectors called StrECAL (Straw+ECAL, see Figure 2(b)). The beam measurement detector consists of a series of straw tube tracker (StrawTrk) stations and a full-absorption-type electron calorimeter (ECAL). The StrawTrk has five stations, each consisting of one vertical and one horizontal layer. Each layer has staggered straw tubes in order to minimise gaps and reduce inefficiency. A straw is made of 20 \(\mu\)m thick aluminised Mylar with a 10 mm diameter, filled with Ar:CO\({}_{2}\)=50:50. The ECAL is composed of arrays of LYSO (Lu\({}_{2(1-x)}\)Y\({}_{2x}\)SiO\({}_{5}\)) crystals with dimensions \(2\times 2\times 12\) cm\({}^{3}\) [9].
This length corresponds to 10.5 radiation lengths, to contain the signal-like electrons with minimal leakage. In each crystal, the scintillation photons will be read out by an avalanche photodiode (APD, Hamamatsu S8664-1010, \(10\times 10\) mm\({}^{2}\)). Both the StrawTrk and ECAL signals are read out by specially designed waveform digitisers [10] based on domino ring sampler chips (DRS4). The beam measurement detectors are also regarded as detector prototypes for the Phase-II physics measurement. The DS will be covered by an iron yoke and cosmic-ray veto (CRV) counters made of plastic scintillator. There is also a possibility of introducing resistive plate chambers (RPC) around the solenoid bridging the MTS and the DS, where plastic scintillators cannot work properly due to the higher expected neutron levels.

**Phase-II:** To achieve another factor of 100 in sensitivity beyond Phase-I, the production target material will be changed from graphite to tungsten to increase the pion yield, and the shielding materials will be reinforced. The MTS will be extended by another 90 degrees to further reduce pions, and there will be an additional electron spectrometer after the muon stopping target to filter out most DIO electrons. This electron spectrometer enables the use of a detector system covering almost the full \(2\pi\) forward direction, increasing the signal acceptance. In COMET Phase-II, it is crucial to improve the momentum resolution, which is dominated by multiple-scattering effects. Therefore, all straw tubes will be replaced by thinner ones, with a 12 \(\mu\)m wall thickness and half the diameter (5 mm), to reduce the material budget inside the tracking volume. The ECAL will cover a 50 cm radius with 1,920 crystals to increase the acceptance.

Figure 2: The two detector systems used in COMET Phase-I.

## 4 Physics Sensitivity
In Table 1, we summarise the breakdown of the expected signal acceptance in each phase together with the expected number of background events. The efficiencies have been obtained from a detailed simulation using the results of the tests on the prototypes. The estimate of the number of events assumes 150 (230) days of data acquisition with 3.2 kW (56 kW) beam power in Phase-I (Phase-II) and \(4.7\times 10^{-4}\) (\(1.6\times 10^{-3}\)) stopped muons per proton on target. The single event sensitivity (SES) is defined as: \[SES(\mu^{-}N\to e^{-}N)=\frac{1}{N_{\mu}\cdot f_{cap}\cdot f_{grad}\cdot A_{sig}}, \tag{1}\] where \(N_{\mu}\) is the total number of stopped muons, \(f_{cap}=0.61\) is the fraction of nuclear muon captures, \(f_{grad}=0.9\) is the model-dependent fraction of muons in the \(1s\) ground state, and \(A_{sig}\) is the total signal acceptance from Table 1. Given the signal efficiencies reported in Table 1, the expected SES for Phase-I (Phase-II) is \(3\times 10^{-15}\) (\(1.4\times 10^{-17}\)), which is 100 (10,000) times better than the one achieved by SINDRUM II.
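Equation (1) can be cross-checked at the back-of-the-envelope level with the numbers quoted above. The sketch below derives \(N_{\mu}\) for Phase-I from the beam power, run time and muon-stopping rate; a 100% beam duty factor is an assumption, and the official estimate comes from a full simulation, so this is only an order-of-magnitude check.

```python
# Order-of-magnitude cross-check of Eq. (1) for Phase-I, using only numbers
# quoted in the text; a 100% beam duty factor is assumed.
e = 1.602e-19                             # J per eV
protons_per_s = 3.2e3 / (8e9 * e)         # 3.2 kW beam of 8 GeV protons
stopped_mu_per_p = 4.7e-4                 # stopped muons per proton on target
seconds = 150 * 86400                     # 150 days of data taking
N_mu = protons_per_s * stopped_mu_per_p * seconds   # ~1.5e16 stopped muons

f_cap, f_grad, A_sig = 0.61, 0.9, 0.04    # capture fraction, ground-state
                                          # fraction, total acceptance (Table 1)
SES = 1.0 / (N_mu * f_cap * f_grad * A_sig)
print(f"N_mu ~ {N_mu:.2e}, SES ~ {SES:.1e}")        # ~3e-15, as quoted
```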
\begin{table} \begin{tabular}{l|c|c} \hline Items & Phase-I & Phase-II \\ \hline Signal acceptance & 0.2 & 0.18 \\ Trigger + DAQ efficiency & 0.8 & 0.87 \\ Track finding efficiency & 0.99 & 0.77 \\ Track selection & 0.9 & 0.94 \\ Momentum window & 0.93 & 0.62 \\ Timing window & 0.3 & 0.49 \\ \hline Total & 0.04 & 0.034 \\ \hline Sources of background & & \\ \hline Muon decay in orbit & 0.01 & 0.068 \\ Radiative muon capture & 0.0019 & Negligible \\ Neutron emission after muon capture & \textless{}0.001 & Negligible \\ Charged particle emission after muon capture & \textless{}0.001 & Negligible \\ Prompt beam induced electrons & \textless{}0.0038 & 0.002 \\ Radiative pion capture & 0.0028 & 0.001 \\ Delayed beam induced & Negligible & 0.001 \\ Antiproton induced & 0.0012 & 0.30 \\ Cosmic ray induced & \textless{}0.01 & 0.29 \\ \hline Total & \textless{}0.032 & 0.662 \\ \hline \end{tabular} \end{table} Table 1: Signal efficiency and expected number of background events for COMET Phase-I [2] / Phase-II [12, 13].

## 5 Status
The tests of the 8 GeV BSX proton beam have shown that the extinction factor is sufficient to keep the out-of-pulse background at a negligible level [14]. The C-line construction was recently completed up to the COMET beam area and is ready to provide the beam. The coils for the pion capture solenoid are all wound and the cryostat cold-bore part has been constructed [15]. The first 90 degrees of the muon transport solenoid, i.e. the full MTS for Phase-I, have been constructed and tested up to 1.5 T. The coil winding for the detector solenoid has been completed and the cryostat will be completed later this year. The CDC has already been constructed, and its performance has been evaluated with cosmic rays. It meets the performance requirement, a spatial resolution of 170 \(\mu\)m [5]. We have almost completed the R&D for the CTH and the design has been fixed. The support structure has been partially purchased and the detector construction is underway. A trigger system for COMET Phase-I is being prepared and we have recently performed a successful test of a small trigger electronics chain. The first station of the straw tube tracker has been completed [16] and was commissioned in the Phase-alpha beam measurement very recently. The LYSO crystals are almost fully purchased, and the ECAL support structure has been manufactured. The CRV design is almost finalised and the first module has recently been produced. For Phase-II, thinner straw tubes with a 12 \(\mu\)m wall thickness and half the diameter will be used. The test production of the thin straw tubes was successful, and they are undergoing long-term stability tests [17]. In February and March 2023, the proton beam was successfully delivered to the COMET beam area as COMET Phase-alpha. In this measurement, pions produced in a thin graphite target were transported through the MTS and the muons coming from their decays were successfully observed by a set of dedicated detectors. Preliminary results can be found in [18], while the final ones will be published soon.

## 6 Summary and Prospects
The COMET experiment aims to search for the \(\mu\)-\(e\) conversion with 100 and 10,000 times better sensitivities than the present upper limit. The COMET beam line was completed in 2022. More recently, the proton beam was successfully delivered for COMET Phase-alpha and the first muons were observed. The Phase-I preparation is well on track and the first physics measurements are expected for 2024-2025.
## Acknowledgments This was written on behalf of, and the author acknowledges strong support from, the COMET collaboration. We acknowledge support from JSPS, Japan; ARC, Australia; Belarus; NSFC, China; IHEP, China; IN2P3-CNRS, France; CC-IN2P3, France; SRNSF, Georgia; DFG, Germany; JINR; IBS, Korea; RFBR, Russia; STFC, United Kingdom; and the Royal Society, United Kingdom. The views expressed herein are those of the author and are not necessarily those of the Australian Government or Australian Research Council.
2310.15309
Lead-free Magnetic Double Perovskites for Photovoltaic and Photocatalysis Applications
The magnetic spin degrees of freedom in magnetic materials serve as an additional capability to tune materials properties, thereby invoking a magneto-optical response. Herein, we report the magneto-optoelectronic properties of a family of lead-free magnetic double perovskites Cs$_{2}$AgTX$_{6}$ (T = Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu; X = Cl, Br, I). This turns out to provide an extremely fertile series, giving rise to potential candidate materials for photovoltaic (PV) applications. In conjunction with a high absorption coefficient and a high simulated power conversion efficiency for PV applications, a few compounds in this series exhibit novel magnetic character useful for spintronic applications. The interaction between magnetism and light can have far-reaching results on the photovoltaic properties as a consequence of the shift in the defect energy levels due to the Zeeman effect. This subsequently affects the recombination rate of minority carriers, and hence the photoconversion efficiency. Moreover, the distinct ferromagnetic and anti-ferromagnetic ordering driven by hybridization and super-exchange mechanisms can play a significant role in breaking the time-reversal and/or inversion symmetry. Such a coalescence of magnetism and efficient optoelectronic response has the potential to trigger a magnetic/spin anomalous photovoltaic (non-linear optical) effect in this Cs$_{2}$AgTX$_{6}$ family. These insights can thus channelize the advancement of lead-free double perovskites in the magnetic/spin anomalous photovoltaic field as well.
Muskan Nabi, Sanika S. Padelkar, Jacek J. Jasieniak, Alexandr N. Simonov, Aftab Alam
2023-10-23T19:22:16Z
http://arxiv.org/abs/2310.15309v1
# Lead-free Magnetic Double Perovskites for Photovoltaic and Photocatalysis Applications

###### Abstract
The magnetic spin degrees of freedom in magnetic materials serve as an additional capability to tune materials properties, thereby invoking a magneto-optical response. Herein, we report the magneto-optoelectronic properties of a family of lead-free magnetic double perovskites Cs\({}_{2}\)AgTX\({}_{6}\) (T = Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu; X = Cl, Br, I). This turns out to provide an extremely fertile series, giving rise to potential candidate materials for photovoltaic (PV) applications. In conjunction with a high absorption coefficient and a high simulated power conversion efficiency for PV applications, a few compounds in this series exhibit novel magnetic character useful for spintronic applications. The interaction between magnetism and light can have far-reaching results on the photovoltaic properties as a consequence of the shift in the defect energy levels due to the Zeeman effect. This subsequently affects the recombination rate of minority carriers, and hence the photoconversion efficiency. Moreover, the distinct ferromagnetic and anti-ferromagnetic ordering driven by hybridization and super-exchange mechanisms can play a significant role in breaking the time-reversal and/or inversion symmetry. Such a coalescence of magnetism and efficient optoelectronic response has the potential to trigger a magnetic/spin anomalous photovoltaic (non-linear optical) effect in this Cs\({}_{2}\)AgTX\({}_{6}\) family. These insights can thus channelize the advancement of lead-free double perovskites in the magnetic/spin anomalous photovoltaic field as well.

## I Introduction
Organic-inorganic halide double perovskites have emerged as a promising class of materials in various fields such as ferroelectrics [1], spintronics [2], photovoltaics [3; 4], and optoelectronic devices such as light-emitting diodes (LEDs) [5], sensors [6], X-ray detectors [7] and photo-detectors [8]. Lead-free halide double perovskites (DPs) with general formula A\({}_{2}\)BB\({}^{\prime}\)X\({}_{6}\), formed by a combination of one monovalent and one trivalent ion, have emerged ubiquitously as a stable and green alternative to toxic lead-based halide perovskites. The excellent optoelectronic properties of these materials can be associated with their compositional flexibility, dielectric properties and exciton binding energies ranging over several orders of magnitude [9; 10]. Amongst the DP family, materials with A = Cs\({}^{+1}\), B = Ag\({}^{+1}\) or Cu\({}^{+1}\) and B\({}^{\prime}\) = Bi\({}^{3+}\), Sb\({}^{3+}\) or In\({}^{3+}\) have been proposed to be environmentally friendly alternatives to lead-based perovskites [11]. Some of the experimentally studied materials, e.g., Cs\({}_{2}\)AgBiX\({}_{6}\) (X = Cl, Br, I) [12; 13], Cs\({}_{2}\)AgSbX\({}_{6}\) [14; 15], Cs\({}_{2}\)AgInX\({}_{6}\) [16; 17] and Cs\({}_{2}\)InB\({}^{3+}\)X\({}_{6}\) (B\({}^{3+}\) = Sb, Bi) [18; 19], have received substantial interest because of their promising properties. Cs\({}_{2}\)AgBiBr\({}_{6}\) [20] is one of the most frequently investigated materials in this class, having high thermodynamic stability, although with an indirect band gap of about 2 eV and a power conversion efficiency of only ca. 3%. However, recent experimental and theoretical evidence has shown that the Ag-Bi variants exhibit intrinsic and strong electronic confinement, which is manifested in very large exciton binding energies (hundreds of meV), strong carrier localization and reduced free-carrier mobility [21; 22].
The exciton binding energies in halide DPs are influenced by the electronic structure of the alternating B and B\({}^{\prime}\) site cations. Hence, via chemical substitution at the B and B\({}^{\prime}\) sites, the existing set of halide DPs can be optimized towards better performance. Efforts to modulate this class of materials have been underpinned by several viable strategies, such as alloying/doping-mediated band-gap engineering and optimized synthesis processes aimed at tackling critical challenges [23]. However, an aspect that remains underexplored but presents a lot of promise is the use of the magnetic spin degrees of freedom available in magnetic perovskites to tune the photovoltaic (PV) properties, which can in all likelihood lead to striking spin-related properties. In this regard, a few halide perovskites, viz., Cs\({}_{2}\)AgT\({}^{3+}\)Cl\({}_{6}\) (T = Fe, Cr) [24; 25], Cs\({}_{2}\)NaT\({}^{3+}\)Cl\({}_{6}\) (T = Fe, V, Mn, Ni) [26; 27], Cs\({}_{2}\)KT\({}^{3+}\)Cl\({}_{6}\) (T = Mn, Co, Ni) [28], Cs\({}_{2}\)GeT\({}^{3+}\)X\({}_{6}\) (T = Ti, V, Cr, Mn, Fe, Co, Ni, or Cu) [29] and various oxide perovskites [30; 31] have been reported to show interesting magnetic properties. Amongst these compounds, cubic Cs\({}_{2}\)AgFeCl\({}_{6}\) [24] in particular has been experimentally reported to exhibit promising optoelectronic characteristics and PV performance. Correspondingly, hexagonal Cs\({}_{2}\)AgCrCl\({}_{6}\) [25] was synthesized in the paramagnetic phase, exhibiting appreciable optoelectronic properties. Yet, the interconnection between the magnetic degrees of freedom and the optical properties is still to be explored. The interplay of the two spin degrees of freedom in these magnetic systems gives a wider platform for modulating the absorption range in an attempt to harvest the entire solar radiation spectrum. The oxide perovskite Bi\({}_{2}\)FeCrO\({}_{6}\) [32] is another state-of-the-art magnetic material, which has been experimentally reported with a remarkable efficiency of ca. 8%. The interaction between a magnetic field and light is called the magneto-optical effect, which includes the Zeeman effect, the Faraday effect and the magneto-optical Kerr effect. The exceptional performance of Bi\({}_{2}\)FeCrO\({}_{6}\) is attributed to two prime magneto-optical effects, viz., magnetoelectric coupling and the Zeeman effect. This system has gained attention on account of its coexistent ferroelectric properties with inbuilt polarization, thus leading to magnetoelectric coupling. The Zeeman effect, in turn, causes the energy levels to split in an applied magnetic field. As a consequence, defect energy levels depart from the band-gap centre, reducing the recombination rate of minority carriers and thus prolonging the minority-carrier lifetime. This capability of magnetism in tuning the optoelectronic properties is widely explored in oxide perovskites, while not much has been reported for halide perovskites. The latter are good photovoltaic materials with promising photovoltaic power conversion efficiencies [33]. So, combining the magnetic effect with halide perovskites can provide new opportunities for highly efficient devices. Herein, we primarily focus on proposing a family of halide DPs in which an amalgam of magnetic and optoelectronic properties is discussed with a view towards magneto-photovoltaic applications.
With that perspective, we studied a set of 27 compounds Cs\({}_{2}\)AgT\({}^{3+}\)X\({}_{6}\) (T=Sc,Ti,V,Cr,Mn,Fe,Co,Ni,Cu; X=Cl,Br,I) using _ab-initio_ density functional theory (DFT) simulations to explore the interplay between different magnetic orderings and optoelectronic properties. For example, a magnetic semiconductor with unequal band gaps in the two spin channels can capture two different ranges of the solar spectrum and hence has the capability to provide a higher quantum yield. A detailed structural and chemical phase stability calculation confirms 14 out of 27 compounds to be stable in a single phase, each with a different magnetic ordering. A few others also show robust stability but with the possibility for the formation of a secondary phase. Due to the presence of 3d transition elements (T\({}^{3+}\)), this family of compounds provides fertile ground to realize several interesting property classes, such as half-metallic ferromagnets, antiferromagnetic semiconductors, and ferromagnetic and nonmagnetic semiconductors. Based on these versatile properties, these compounds are classified for different renewable energy applications. Such a detailed study of synthesizability, electronic/magnetic structure and optoelectronic properties paves a guiding path for experimentalists towards the future exploration of these novel magnetic perovskites.

## II Computational details
All calculations are performed using DFT as implemented in the Vienna ab-initio simulation package (VASP) [34; 35]. For spin-polarized calculations, the Perdew-Burke-Ernzerhof (PBE) [36] exchange-correlation functional was used within the generalized gradient approximation (GGA) [37], along with projector augmented wave (PAW) pseudopotentials [38; 39]. For the wavefunction expansion, a plane-wave energy cutoff of about 450 eV was used for all calculations. The Brillouin zone integration was done within the tetrahedron method using an 8\(\times\)8\(\times\)8 k-point grid for structural optimization, while for self-consistent field (SCF) calculations a \(\Gamma\)-centered k-mesh of size 12\(\times\)12\(\times\)12 was used. Keeping in mind the possibility of both cubic and hexagonal phases (as experimentally reported for two different systems belonging to this class) along with different magnetic orderings, structural optimization for all the compounds was carried out in both these phases considering non-magnetic (NM), ferromagnetic (FM) and antiferromagnetic (AFM) ordering. The force (energy) was converged to 10\({}^{-3}\) eV Å\({}^{-1}\) (10\({}^{-6}\) eV). Due to the strongly correlated nature of the 3d transition elements, an "on-site" Hubbard potential (U) [40] was applied to capture the intra-atomic interactions between these strongly correlated electrons. The U values were calculated using the linear-response ansatz of Cococcioni et al. [41]. The calculation procedure and the simulated U values for different systems are presented in Sec. S1 (see Figure S1 and Table S1) of the Supplementary Information (SI) [42]. To estimate the theoretical photoconversion efficiency, we report the spectroscopic limited maximum efficiency (SLME) of the semiconducting systems [43].
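For concreteness, the cubic setup described above can be sketched with ASE's VASP interface (a working VASP installation is required). This is only a sketch under stated assumptions: the Wyckoff positions are those quoted in the next section, while the lattice constant (10.2 Å) and the Hubbard U for Fe (4.0 eV) are placeholder values, since the paper determines U self-consistently (Table S1 of its SI).

```python
from ase.spacegroup import crystal
from ase.calculators.vasp import Vasp

# Sketch of the Sec. II settings for cubic Cs2AgFeCl6. Lattice constant and
# U value are placeholder assumptions, not the paper's computed values.
atoms = crystal(['Cs', 'Ag', 'Fe', 'Cl'],
                basis=[(0.25, 0.25, 0.25), (0.5, 0.5, 0.5),
                       (0.0, 0.0, 0.0), (0.24, 0.0, 0.0)],
                spacegroup=225,                   # Fm-3m
                cellpar=[10.2, 10.2, 10.2, 90, 90, 90])

calc = Vasp(xc='PBE', encut=450,                  # PBE-GGA, 450 eV cutoff
            ispin=2,                              # spin-polarised
            kpts=(12, 12, 12), gamma=True,        # Gamma-centred SCF mesh
            ldau=True,                            # DFT+U for the 3d element
            ldau_luj={'Fe': {'L': 2, 'U': 4.0, 'J': 0.0}},
            ediff=1e-6)                           # eV, energy convergence
atoms.calc = calc
```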
## III Results and discussion

### Structural and Chemical phase stability
The structural stability of the halide DPs [44] is dictated via a geometrical tolerance factor (\(t\)) defined as \[t=\frac{r_{A}+r_{X}}{\sqrt{2}(r_{avg}+r_{X})} \tag{1}\] where \(r_{A}\) and \(r_{X}\) are the Shannon ionic radii of the A cation and X anion, and \(r_{avg}\) is the average ionic radius of the B and B\({}^{\prime}\) cations. Table S2 (see supplementary) [42] shows the tolerance factor of all the compounds, which are found to lie in the range \(t\)=0.91-0.98, predicting these compounds to stabilize in the cubic structure. Interestingly, Cs\({}_{2}\)AgCrCl\({}_{6}\) [25] was experimentally reported to crystallize in the hexagonal phase, which is normally associated with values of \(t\) greater than one. Certainly, there exist other instances where the ideal cubic structure is not observed experimentally, departing from these purely geometrical tolerance factor guidelines [45; 46]. There are also other factors, such as B cation off-centering and the presence and stereo-activity of a lone pair of electrons, which are suggested to have a strong impact on the geometrical stability of perovskites in general [47; 48; 49]. However, for transition-metal-based perovskites, the non-bonding (lone) pairs of electrons in transition metals do not influence molecular geometry and are said to be stereochemically inactive. Hence, we believe that the presence of lone pairs in transition metals will not affect the molecular geometry of the BX\({}_{6}\) octahedra in A\({}_{2}\)BB'X\({}_{6}\) DPs. To address the above-mentioned ambiguities, we investigated the chemical/thermodynamic stability in both cubic as well as hexagonal phases for the entire series. For the cubic structure of Cs\({}_{2}\)AgTX\({}_{6}\), a 40-atom conventional unit cell (space group Fm\(\overline{3}\)m (#225)) was considered, in which Cs is enclosed by a cage of 12 X atoms while Ag and T adopt corner-sharing AgX\({}_{6}\) and TX\({}_{6}\) octahedra, as shown in Fig. 1(a). The Wyckoff positions of Cs, Ag, T and X were 8c(0.25,0.25,0.25), 4b(0.5,0.5,0.5), 4a(0,0,0) and 24e(0.24,0,0) respectively. For the hexagonal phase, we used a 20-atom primitive unit cell (space group R\(\overline{3}\)m (#166)) adopting a Ba\({}_{2}\)NiTeO\({}_{6}\)-type structure [50], see Fig. 1(b). The Wyckoff positions in this case were 6c(0,0,0.21), 6c(0,0,0.37), 3a(0,0,0) and 8h(0.48,\(\overline{x}\),0.24) for Cs, Ag, T and X respectively. To assess the thermodynamic stability, we calculated the formation energy (\(\Delta\)E\({}_{F}\)) of all the compounds in both cubic as well as hexagonal phases considering three different magnetic orderings (NM, FM and AFM), as depicted in Fig. 2. The asterisk (*) indicates the energetically most stable phase for a given compound. A negative \(\Delta\)E\({}_{F}\) indicates the thermodynamic possibility of the formation of these compounds. Herein, a comparison of the data for materials with different halides implies that chloride-based compounds are more robust than those based on bromide and iodide. This trend validates the available experimental findings, according to which bromide- and chloride-based halide DPs are easier to synthesize than their iodide counterparts. Unsurprisingly, the experimental reports on the latter are quite rare [51]. The actual values of the formation energies of all the examined compounds in their respective stable structural/magnetic phase are provided in Table S3 of the SI [42].
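As a worked example of the tolerance-factor criterion in Eq. (1), the sketch below evaluates \(t\) for Cs\({}_{2}\)AgFeCl\({}_{6}\). The Shannon radii used are standard tabulated values, not quoted in the text, so the result may differ slightly from the paper's Table S2.

```python
import math

# Worked example of Eq. (1) for Cs2AgFeCl6. The Shannon ionic radii below
# are assumed standard values (XII-coordinated Cs+, VI-coordinated cations).
r_Cs, r_Cl = 1.88, 1.81          # Angstrom: Cs+, Cl-
r_Ag, r_Fe = 1.15, 0.645         # Angstrom: Ag+, high-spin Fe3+

r_avg = 0.5 * (r_Ag + r_Fe)      # average B/B' cation radius
t = (r_Cs + r_Cl) / (math.sqrt(2.0) * (r_avg + r_Cl))
print(f"t = {t:.3f}")            # ~0.96, inside the quoted 0.91-0.98 range
```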
Experimentally, Cs\({}_{2}\)AgFeCl\({}_{6}\) [24; 52] and Cs\({}_{2}\)AgCrCl\({}_{6}\) [25] are reported to crystallize in cubic (with antiferromagnetic nature at low temperature) and hexagonal structures respectively. Due to the involvement of 3d transition elements (Fe, Cr), these compounds are expected to show rich magnetic phase diagrams, including the possibility of distinct magnetic orderings such as FM and AFM in the low-temperature range. This aspect of the aforementioned systems is overlooked in the literature, but it can be extremely important in dictating their overall magneto-optical properties. As such, we have studied all the compounds in different magnetic configurations, namely NM, FM and AFM. Remarkably, the magnetic as well as structural phases of these compounds are not significantly affected by the chemical nature of the halide anions, indicating that perhaps the transition element T\({}^{3+}\) plays a pivotal role in determining the crystal/magnetic structure of these systems. However, the tolerance factor opens up uncertainty in the experimentally stable structural phase, as observed in the case of Cs\({}_{2}\)AgCrCl\({}_{6}\) [25]. Apparently, the simulated thermodynamic chemical phase diagram gives more accurate information about the degree of synthesizability of such compounds (i.e. whether in the cubic/hexagonal and/or NM/FM/AFM phase). The chemical phase diagrams of the experimentally synthesized systems Cs\({}_{2}\)AgFeCl\({}_{6}\) [24] and Cs\({}_{2}\)AgCrCl\({}_{6}\) [25] are shown in Fig. 3. They display a narrow stable region (shown by the shaded grey area) in comparison with the competing secondary phases. For each of these two compounds, 10 secondary phases are simulated to draw their phase diagrams (Table S4) [42]. The narrow stable region indicates that it may be an arduous task to synthesize these systems in a single phase. Analyzing the energetics of our calculations, the most competitive secondary phases for the two systems are Cs\({}_{3}\)Fe\({}_{2}\)Cl\({}_{9}\) and Cs\({}_{3}\)Cr\({}_{2}\)Cl\({}_{9}\), which, depending on the synthesis conditions, may prevent the respective target phases Cs\({}_{2}\)AgFeCl\({}_{6}\) and Cs\({}_{2}\)AgCrCl\({}_{6}\) from remaining stable as a single phase. There is a possibility of the co-existence of binary phases in the synthesized compounds, with a slight dominance of Cs\({}_{3}\)Fe\({}_{2}\)Cl\({}_{9}\) over Cs\({}_{2}\)AgFeCl\({}_{6}\). Recently, phase segregation and the existence of secondary phases along with the target phase have also been observed in Cs-Pb-Br thin films [53]. For example, Caicedo-Davila et al. [54] reported the coexistence of CsPb\({}_{2}\)Br\({}_{5}\) and CsPbBr\({}_{3}\) during the synthesis of CsPb\({}_{2}\)Br\({}_{5}\) based on a competing phase diagram analysis, which was further validated by DFT calculations. Also, Yu et al. [55] confirmed that the coexistence of CsPb\({}_{2}\)Br\({}_{5}\) and CsPbBr\({}_{3}\) is inevitable during the synthesis of CsPb\({}_{2}\)Br\({}_{5}\). However, the presence of binary phases can be negated by optimizing the synthesis process and varying environmental parameters like pressure. There are reports where pressure is used as an important parameter to study structure-property relationships [56].

Figure 1: Crystal structure of Cs\({}_{2}\)AgTX\({}_{6}\) (T = transition elements) in (a) cubic and (b) hexagonal phases. FM and AFM indicate ferromagnetic and antiferromagnetic ordering.

To
crosscheck the pressure effect, we performed DFT calculations for the two compounds Cs\({}_{2}\)AgFeCl\({}_{6}\) and Cs\({}_{2}\)AgCrCl\({}_{6}\) by systematically varying the lattice constant, which in turn changes the unit-cell volume and hence the pressure. In the case of Cs\({}_{2}\)AgFeCl\({}_{6}\) and Cs\({}_{2}\)AgCrCl\({}_{6}\), we have studied the chemical stability over a pressure range from 0 kbar to 44 kbar and from 0 kbar to 18.82 kbar respectively. The chemical phase stability diagrams of these compounds in the above pressure ranges are presented in the SI (see Fig. S3) [42]. It can be observed that with increasing pressure the stability region of the target phase decreases. Although the synthesizability of Cs\({}_{2}\)AgFeCl\({}_{6}\) and Cs\({}_{2}\)AgCrCl\({}_{6}\) decreases with increasing pressure, it is important to note that both Cs\({}_{2}\)AgFeCl\({}_{6}\) and Cs\({}_{2}\)AgCrCl\({}_{6}\) have been synthesized experimentally under ambient conditions. So, from the chemical stability point of view, we confirm that Cs\({}_{2}\)AgFeCl\({}_{6}\) [52] crystallizes in the cubic structure with AFM ordering, while Cs\({}_{2}\)AgCrCl\({}_{6}\) crystallizes in the hexagonal structure with ferromagnetic ordering. We have simulated the chemical phase diagrams of the rest of the compounds in the series as well, and found 14 out of 27 to stabilize in a single phase (Fig. S2) [42]. The respective competing secondary phases used for each target compound are provided in Table S4 [42]. These 14 compounds are Cs\({}_{2}\)AgScI\({}_{6}\), Cs\({}_{2}\)AgScBr\({}_{6}\), Cs\({}_{2}\)AgScCl\({}_{6}\), Cs\({}_{2}\)AgVBr\({}_{6}\), Cs\({}_{2}\)AgVCl\({}_{6}\), Cs\({}_{2}\)AgCrBr\({}_{6}\), Cs\({}_{2}\)AgCrCl\({}_{6}\), Cs\({}_{2}\)AgMnBr\({}_{6}\), Cs\({}_{2}\)AgMnCl\({}_{6}\), Cs\({}_{2}\)AgFeBr\({}_{6}\), Cs\({}_{2}\)AgFeCl\({}_{6}\), Cs\({}_{2}\)AgCoBr\({}_{6}\), Cs\({}_{2}\)AgCoCl\({}_{6}\), and Cs\({}_{2}\)AgNiCl\({}_{6}\). Note that, out of the 14 compounds, only one iodide-based perovskite (Cs\({}_{2}\)AgScI\({}_{6}\)) is predicted to be stable, again confirming the difficulty in stabilizing the DPs with this anion.

### Electronic structure and magnetic properties
Table 1 shows the band gap (E\({}_{g}\)), the difference between the direct and indirect band gap (\(\Delta\)) and the atom-projected magnetic moments on the transition elements (T) for all the 14 stable compounds mentioned in the previous section. Similar details for all the 27 compounds (Cs\({}_{2}\)AgTX\({}_{6}\)) are presented in Table S5 (see supplementary) [42]. The energetically most stable structure and the corresponding magnetic phase for each compound are mentioned within parentheses in the first columns of Tables 1 and S5. In order to correctly capture the effect of electron-electron correlation arising from the transition elements, we have applied an on-site Hubbard U correction for each compound, which is calculated self-consistently. Upon inclusion of the Hubbard potential, the degeneracy of the d-states found around the Fermi level is lifted in some of the examined compounds, resulting in an increased band gap. As follows from the data in Table 1, this set of compounds shows diverse electronic/magnetic properties, ranging from nonmagnetic metals/semiconductors to ferromagnetic half-metals to antiferromagnetic semiconductors and ferromagnetic semiconductors. Cs\({}_{2}\)AgFeX\({}_{6}\) (X = Cl, Br, I) shows a direct band gap (E\({}_{g}\)), with E\({}_{g}\) values ranging from 1.17 eV (Cl) to 0.64 eV (Br) to 0.12 eV (I).
Though the other compounds show an indirect band gap, the difference (\(\Delta\)) between the direct (E\({}_{g}^{d}\)) and indirect (E\({}_{g}\)) gaps is quite small (0.01-0.09 eV). As expected, the band gap value decreases owing to the decrease in the electronegativity of the halogen elements, as observed in other halide perovskites as well [57].

Figure 2: Formation energies (\(\Delta\)E\({}_{F}\)) of Cs\({}_{2}\)AgTX\({}_{6}\) (T=Sc,Ti,V,Cr,Mn,Fe,Co,Ni,Cu; X=Cl,Br,I) in two different structures (cubic and hexagonal) and three different magnetic phases (NM, FM and AFM). The asterisk (*) indicates the energetically most stable phase for each compound.

Figure 3: Chemical phase diagram of (a) Cs\({}_{2}\)AgFeCl\({}_{6}\) and (b) Cs\({}_{2}\)AgCrCl\({}_{6}\). The grey shaded area indicates the extent of the stability region for the target systems, whereas the cyan shaded area represents secondary phases.

The magnetism in some of these compounds mainly arises from the partially filled d-shells of the transition elements at the T site (Table 1). Based on these electronic/magnetic properties, one can classify these materials into different categories of compounds with (1) moderate band gap values, (2) large band gap values, and (3) metallic nature in one spin channel and semiconducting in the other (half-metals). Such varying properties can make these materials useful for various applications such as photovoltaics, photo(electro)catalysis and spintronics, which need further in-depth analysis (see Section 3 of the SI for more details). To explore the potential of these materials for the magneto-photovoltaic application, two experimentally synthesized compounds, Cs\({}_{2}\)AgFeCl\({}_{6}\) [24] and Cs\({}_{2}\)AgCrCl\({}_{6}\) [25], which show two distinct magnetic orderings (AFM and FM, respectively), were chosen for deeper examination. Figures 4(a) and 4(b) show the bulk electronic band structures of these two materials. The magnetic ordering preferentially shows up at low temperature, which, to the best of our knowledge, has not been experimentally verified. Our simulation confirms Cs\({}_{2}\)AgFeCl\({}_{6}\) to be antiferromagnetic with a band gap of around 1.17 eV, which agrees reasonably well with the experimental value of 1.55 eV [24]. Both the valence band maximum (VBM) and the conduction band minimum (CBM) lie at the common high-symmetry X-point, resulting in a direct band gap.
\begin{table} \begin{tabular}{l l l l} \hline \hline \(\mathbf{Cs}_{2}\)**AgTX\({}_{6}\)** & **Band gap (E\({}_{g}\))** & \(\Delta\) & \(m_{\mathbf{T}}\) \\ & **(eV)** & **(eV)** & **(\(\mu_{B}\))** \\ \hline Cs\({}_{2}\)AgScCl\({}_{6}\) (Hex NM) & 3.64 (indirect) & 0.01 & - \\ Cs\({}_{2}\)AgVCl\({}_{6}\) (Hex FM) & (\(\uparrow\)) 2.40 (indirect) & 0.01 & 1.93 \\ & (\(\downarrow\)) 2.46 (indirect) & 0.07 & \\ Cs\({}_{2}\)AgCrCl\({}_{6}\) (Hex FM) & (\(\uparrow\)) 1.69 (indirect) & 0.02 & 3.15 \\ & (\(\downarrow\)) 3.51 (indirect) & 0.07 & \\ Cs\({}_{2}\)AgMnCl\({}_{6}\) (Cubic FM) & (\(\uparrow\)) metallic & - & 4.3 \\ & (\(\downarrow\)) 3.81 (direct) & - & \\ Cs\({}_{2}\)AgFeCl\({}_{6}\) (Cubic AFM) & 1.17 (direct) & - & 4.12 \\ Cs\({}_{2}\)AgCoCl\({}_{6}\) (Hex NM) & 1.27 (indirect) & 0.03 & - \\ Cs\({}_{2}\)AgScBr\({}_{6}\) (Hex NM) & 2.89 (indirect) & 0.03 & - \\ Cs\({}_{2}\)AgVBr\({}_{6}\) (Hex FM) & (\(\uparrow\)) 1.95 (indirect) & 0.13 & 1.99 \\ & (\(\downarrow\)) 3.00 (indirect) & 0.11 & \\ Cs\({}_{2}\)AgCrBr\({}_{6}\) (Hex FM) & (\(\uparrow\)) 1.09 (indirect) & 0.05 & 3.33 \\ & (\(\downarrow\)) 3.19 (indirect) & 0.09 & \\ Cs\({}_{2}\)AgMnBr\({}_{6}\) (Cubic FM) & (\(\uparrow\)) metallic & - & 4.32 \\ & (\(\downarrow\)) 2.87 (direct) & - & \\ Cs\({}_{2}\)AgFeBr\({}_{6}\) (Cubic AFM) & 0.64 (direct) & - & 3.98 \\ Cs\({}_{2}\)AgCoBr\({}_{6}\) (Hex NM) & 0.96 (indirect) & 0.05 & - \\ Cs\({}_{2}\)AgNiBr\({}_{6}\) (Hex FM) & (\(\uparrow\)) metallic & - & 1.66 \\ & (\(\downarrow\)) 1.93 (direct) & 0.03 & \\ Cs\({}_{2}\)AgScI\({}_{6}\) (Hex NM) & 2.51 (indirect) & 0.08 & - \\ \hline \hline \end{tabular} \end{table} Table 1: Magnitude and nature of the band gap (E\({}_{g}\)), difference between direct and indirect band gap (\(\Delta\)) and local magnetic moment on T atoms (\(m_{\mathrm{T}}\)) of the 14 chemically stable Cs\({}_{2}\)AgTX\({}_{6}\) compounds. \(\uparrow\) and \(\downarrow\) stand for the spin-up and spin-down channels respectively. The description within parentheses for each system indicates its stable phase.

Figure 4: Spin-resolved band structure and partial density of states (PDOS) for (a) cubic AFM Cs\({}_{2}\)AgFeCl\({}_{6}\) and (b) hexagonal FM Cs\({}_{2}\)AgCrCl\({}_{6}\). The former is an AFM semiconductor with a band gap of 1.17 eV, while the latter is a FM semiconductor with band gaps of 3.51 and 1.69 eV for the spin \(\downarrow\) and \(\uparrow\) channels respectively.

From the partial density of states (pDOS), one can see that the CBM is mostly composed of the Fe-d states hybridized with some of the Cl-p states, with a slight contribution from the Ag-d states. The VBM is mostly composed of the Cl-p and Ag-d states. The strong hybridization between these states near the VBM is responsible for a strong band dispersion, leading to a low hole effective mass and hence a higher hole mobility compared to electrons, indicating the p-type semiconducting behaviour of Cs\({}_{2}\)AgFeCl\({}_{6}\). Under the constant relaxation-time approximation, we have simulated the carrier mobilities of Cs\({}_{2}\)AgFeCl\({}_{6}\) and Cs\({}_{2}\)AgCrCl\({}_{6}\). Figure S8 (see supplementary) [42] shows a comparative plot of the electron and hole mobilities of Cs\({}_{2}\)AgFeCl\({}_{6}\) and Cs\({}_{2}\)AgCrCl\({}_{6}\) across a range of temperatures at a carrier concentration of 10\({}^{10}\) cm\({}^{-3}\). For comparison, the carrier mobilities of the well-studied DP Cs\({}_{2}\)AgBiBr\({}_{6}\) are also shown.
The Cs-s states are scant around the Fermi level and dominantly lie deep in the conduction band, and hence play a passive role in defining the electronic properties. Due to the AFM ordering, the net magnetization of the unit cell is zero, while the atom-projected Fe moment is \(\sim\)4.12 \(\mu_{B}\). In the case of the Br- and I-based analogues of this material, the nature of the orbital contributions is similar, but the band gap decreases, as expected. Next we consider Cs\({}_{2}\)AgCrCl\({}_{6}\), which is a ferromagnetic semiconductor (Figure 4(b)). It has two different semiconducting gaps (\(E_{g}\)) in the two spin channels. In the spin-down channel, the CBM lies at the F high-symmetry k-point while the VBM lies at the T-point, producing an indirect band gap (\(E_{g}\)=3.51 eV). In the spin-up channel, the CBM also lies at the F-point but the VBM occurs at the L-point, with an indirect band gap of 1.69 eV. From the pDOS plot, one can notice a hybridization between the Cl-p and Ag-d states in the valence band in both spin channels. In the spin-up channel, the conduction band is mostly composed of the Cr-d states with some contribution from the p-states. In both spin channels, the Ag-d states reside near the valence band because Ag has a filled d shell. The Cr d-orbitals host three unpaired electrons (half-filled t\({}_{2g}\)), giving rise to a net moment of \(\sim\)3.1 \(\mu_{B}\). The electronic band structures along with the DOS for the rest of the chemically stable compounds (Cs\({}_{2}\)AgTX\({}_{6}\)) are given in the SI (see Figs. S4-S7) [42]. Robust chemical stability, optimal band gaps and varying magnetic orderings make these materials potential candidates for magneto-photovoltaic applications.

### Optical properties
Hybrid halide perovskites have proven to be potential candidates for photovoltaic applications owing to various factors like excellent absorption coefficients and suitable band gaps. The optical absorption coefficient (\(\alpha\)) is calculated from the dielectric function using the following equation: \[\alpha(\omega)=\frac{\sqrt{2}\,\omega}{c}\sqrt{\sqrt{\epsilon_{1}(\omega)^{2}+\epsilon_{2}(\omega)^{2}}-\epsilon_{1}(\omega)} \tag{2}\] where \(c\) is the speed of light, \(\epsilon_{1}\) and \(\epsilon_{2}\) represent the real and imaginary parts of the dielectric function, and \(\omega\) is the frequency. \(\epsilon_{2}\) was calculated using the independent-particle approximation [58], and \(\epsilon_{1}\) was obtained from \(\epsilon_{2}\) using the Kramers-Kronig relation [59]. In the case of the hexagonal structure, the dielectric tensor is diagonal with two distinct components: \(\epsilon_{2,zz}\) (along the z-axis of the crystal), and \(\epsilon_{2,xx}\) and \(\epsilon_{2,yy}\) (along any direction in the plane perpendicular to the hexagonal z-axis). Based on the optimal band gap values in the visible range, a few of the compounds, viz., Cs\({}_{2}\)AgCoI\({}_{6}\), Cs\({}_{2}\)AgCoBr\({}_{6}\), Cs\({}_{2}\)AgCoCl\({}_{6}\), Cs\({}_{2}\)AgFeBr\({}_{6}\), Cs\({}_{2}\)AgFeCl\({}_{6}\), Cs\({}_{2}\)AgCrI\({}_{6}\), Cs\({}_{2}\)AgCrBr\({}_{6}\), Cs\({}_{2}\)AgCrCl\({}_{6}\), and Cs\({}_{2}\)AgVBr\({}_{6}\), were found to be promising for PV applications. The optical absorption is relatively high (\(>3\times 10^{5}\) cm\({}^{-1}\)) for most of these compounds, as shown in Fig. 5(a-f). The onset of these absorption curves corroborates their respective band gaps (Table 1).
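Equation (2) is straightforward to evaluate numerically once \(\epsilon_{1}(\omega)\) and \(\epsilon_{2}(\omega)\) are available. A minimal sketch is given below; the dielectric data are made-up placeholders, since in practice they would come from the DFT dielectric-function output discussed above.

```python
import numpy as np

# Numerical form of Eq. (2): absorption coefficient from the real (eps1) and
# imaginary (eps2) parts of the dielectric function. The dielectric data here
# are placeholders; in practice they come from the DFT output.
c = 2.998e10                       # speed of light, cm/s
hbar = 6.582e-16                   # eV s

E = np.linspace(0.5, 4.0, 8)       # photon energies, eV (placeholder grid)
omega = E / hbar                   # angular frequency, 1/s
eps1 = np.full_like(E, 4.0)        # placeholder Re(eps)
eps2 = np.full_like(E, 1.0)        # placeholder Im(eps)

alpha = (np.sqrt(2.0) * omega / c) * np.sqrt(np.sqrt(eps1**2 + eps2**2) - eps1)
print(alpha)                       # absorption coefficient, 1/cm
```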
For example, in the case of Cs\({}_{2}\)AgFeCl\({}_{6}\) the optical absorption onset is around 1.17 eV, which matches our simulated electronic band gap. In order to quantify the power conversion efficiency, a parameter known as the spectroscopic limited maximum efficiency (SLME), introduced by Yu et al. [43], is estimated (see Sec. 3.1 of the SI for more details). For the SLME, the input parameters are the band gap, the absorption coefficient, the standard solar spectrum and the absorber thickness.

Figure 5: Absorption coefficient (\(\alpha_{ii}\)) along the in-plane and out-of-plane directions for selected (a,b) non-magnetic, (c,d) antiferromagnetic, and (e,f) ferromagnetic systems. Spectroscopic limited maximum efficiency (SLME) of the same set of (g) non-magnetic, (h) anti-ferromagnetic, and (i) ferromagnetic systems.

Figures 5(g,h,i) show the simulated SLME for all the compounds shortlisted for PV applications. The calculated SLME at 1 \(\mu\)m absorber layer thickness is 23.09%, 30.57%, 32.41%, 25.52%, 28.82%, 17.42%, 31.72%, 28.45%, and 20.93% for Cs\({}_{2}\)AgCoI\({}_{6}\), Cs\({}_{2}\)AgCoBr\({}_{6}\), Cs\({}_{2}\)AgCoCl\({}_{6}\), Cs\({}_{2}\)AgFeBr\({}_{6}\), Cs\({}_{2}\)AgFeCl\({}_{6}\), Cs\({}_{2}\)AgCrI\({}_{6}\), Cs\({}_{2}\)AgCrBr\({}_{6}\), Cs\({}_{2}\)AgCrCl\({}_{6}\), and Cs\({}_{2}\)AgVBr\({}_{6}\) respectively. Interestingly, all these materials have high values of the SLME even at smaller absorber layer thicknesses. This, along with their suitable band gaps and high absorption coefficients, makes them potential candidates for thin-film solar cells. The highest value of the SLME is achieved for Cs\({}_{2}\)AgCoCl\({}_{6}\), which could serve as the best material for PV applications. In particular, Cs\({}_{2}\)AgCrBr\({}_{6}\), which is a ferromagnetic semiconductor, exhibits high absorption coefficients and a high SLME (\(\sim\)31%), and hence can be a promising candidate for magneto-photovoltaic applications. Yet another interesting candidate is Cs\({}_{2}\)AgFeCl\({}_{6}\), which shows highly dispersive bands around the high-symmetry X-point (Fig. 4(a)), implying a low effective mass of hole charge carriers and a high hole mobility, as mentioned above. Here, both the VBM and the CBM are dominated by T-d states located at the X-point, giving rise to a direct band gap. In contrast, for Cs\({}_{2}\)AgCrCl\({}_{6}\), two peaks are observed in the optical absorption spectrum, which correspond to the two different band gaps in the two spin channels. These two peaks can be assigned to the d-d transitions at 3.51 eV and 1.69 eV respectively. With varying halides (Cl to Br to I), there is a significant shift in the optical absorption onset, indicating a change in the electronic band gaps.

## IV Conclusion
The interplay of magnetic and optical properties can lead to a new avenue for finding candidate materials with promising photovoltaic (PV) performance. The purpose of this manuscript is to explore the said objective in a new family of lead-free magnetic DPs Cs\({}_{2}\)AgTX\({}_{6}\) (where T = Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu and X = Cl, Br, I). The combination of two transition elements (Ag and T) offers a wide range of intriguing magneto-optoelectronic properties.
Out of 27 compounds, seven prospective compounds, Cs\({}_{2}\)AgCoBr\({}_{6}\), Cs\({}_{2}\)AgCoCl\({}_{6}\), Cs\({}_{2}\)AgFeBr\({}_{6}\), Cs\({}_{2}\)AgFeCl\({}_{6}\), Cs\({}_{2}\)AgCrBr\({}_{6}\), Cs\({}_{2}\)AgCrCl\({}_{6}\), and Cs\({}_{2}\)AgVBr\({}_{6}\), have shown visible-range band gaps, good absorption coefficients and high power conversion efficiencies, substantiating their potential as PV absorbers. The rest of the compounds in this series show significant promise for photo(electro)catalysis and spintronics, based on their electronic properties. The driving mechanisms for the distinct magnetic properties in these compounds are hybridization and super-exchange. Amongst the magnetic compounds in this series, Cs\({}_{2}\)AgCrI\({}_{6}\), Cs\({}_{2}\)AgCrBr\({}_{6}\), Cs\({}_{2}\)AgCrCl\({}_{6}\), and Cs\({}_{2}\)AgVBr\({}_{6}\) stabilize in hexagonal symmetry with ferromagnetic or antiferromagnetic ordering. This can incite broken time-reversal or inversion symmetry and lead to a bulk spin photovoltaic effect, thereby enhancing the PV performance due to non-linear optical effects. For instance, a large photoconductivity has been reported in CrI\({}_{3}\) due to the magnetism-mediated asymmetry in its antiferromagnetically ordered structure. Such asymmetry has paved a unique way to explore the novel properties of magnetic materials for the anomalous photovoltaic effect (APVE). Hence, the insights on the magnetic DPs Cs\({}_{2}\)AgTX\({}_{6}\) reported in our work can channelize the advancement of perovskites in the field of magnetic/spin APVE. The theoretical reaffirmation of the synthesizability of our proposed compounds also paves the way to expedite experimental research on this emerging Cs\({}_{2}\)AgTX\({}_{6}\) family.

## Acknowledgment
A.A. acknowledges the computing facility (spacetime2) provided by IIT Bombay to support this research. S.S.P. acknowledges the computational support by the MASSIVE HPC facility (www.massive.org.au) and the Monash eResearch Centre and eSolutions-Research Support Services through the use of the MonARCH HPC Cluster.

## Author Contributions
M.N. and S.S.P. contributed equally to this work.
2302.02464
On the numerical stability of discretised Optimal Control Problems
Optimal Control Problems consist of the optimisation of an objective functional subject to a set of Ordinary Differential Equations. In this work, we consider the effects on the stability of the numerical solution when this optimisation is discretised in time. In particular, we analyse an OCP with a quadratic functional and a linear ODE, discretised with the mid-point and implicit Euler schemes. We show that the numerical stability and the presence of numerical oscillations depend not only on the time-step size, but also on the parameters of the objective functional, which measures the amount of control input. Finally, we also show with an illustrative example that these results carry over to non-linear optimal control problems.
Ashutosh Bijalwan, Jose J Muñoz
2023-02-05T19:21:07Z
http://arxiv.org/abs/2302.02464v1
# On the numerical stability of discretised Optimal Control Problems

###### Abstract
Optimal Control Problems consist of the optimisation of an objective functional subject to a set of Ordinary Differential Equations. In this work, we consider the effects on the stability of the numerical solution when this optimisation is discretised in time. In particular, we analyse an OCP with a quadratic functional and a linear ODE, discretised with the _mid-point_ and _implicit Euler_ schemes. We show that the numerical stability and the presence of numerical oscillations depend not only on the time-step size, but also on the parameters of the objective functional, which measures the amount of control input. Finally, we also show with an illustrative example that these results carry over to non-linear optimal control problems.

## 1 Introduction
Optimal control is a class of mathematical optimisation problems where the desired system state is determined by minimising/maximising a cost functional subject to path constraints, written as ordinary differential equations (ODEs) and initial conditions. In real-world applications, closed-form solutions of these problems are difficult to obtain, and they are often solved numerically with non-linear programming techniques [3; 4]. Based on the sequence of optimisation and time discretisation, solution procedures for Optimal Control Problems (OCPs) can be classified as indirect and direct approaches. The indirect method first derives the necessary optimality conditions and forms the Differential-Algebraic Equations (DAEs) with the two-point boundary conditions, also known as the Two-Point Boundary Value Problem
2306.10597
Mxenes for CO$\rm_{2}$ reduction and catalytically improved liquid hydrogen storage via reverse water gas shift reaction
The catalytic reduction of $\mathrm{CO_{2}/CO}$ is an appealing approach for reducing greenhouse gas concentrations while also producing renewable energy. We used two-dimensional transition metal carbides known as Mxenes as the most promising catalysts for a boosted water-gas-shift reaction for the conversion of $\mathrm{CO_{2}}$ to chemical fuel and liquid hydrogen. Our findings reveal that the $\mathrm{Ti_{2}C}$ surface collects $\mathrm{CO_{2}}$ and converts it to reactive carbon monoxide gas and an oxygen termination. Surface catalytic reactions always start with $\mathrm{CO}$ hydrogenation, which is sustained by a continual supply of water at the optimum temperature. $\mathrm{Ti_{2}C}$ surface terminations are responsible for the formation of molecules, free radicals, and alcohols, and the conversion reaction is cycled frequently, producing methanol, methane, water, and hydrogen molecules with each cycle. Furthermore, once water is injected for system hydrogenation, the $\mathrm{Ti_{2}C}$ surface has the ability to hydrogenate itself, because water breaks down into its constituents $\mathrm{O}$ and $\mathrm{OH}$ in the presence of free radicals such as $\mathrm{H_{2}CO}$. Thus, self-hydrogenation increases liquid hydrogen generation in addition to the usage of water for hydrogen supply.
Tewodros Eyob Ada, Kenate Nemera Nigussa, Cecil N. M. Ouma
2023-06-18T16:50:56Z
http://arxiv.org/abs/2306.10597v1
Mxenes for CO\({}_{2}\) reduction and catalytically improved liquid hydrogen storage via reverse water gas shift reaction.

###### Abstract
The catalytic reduction of CO\({}_{2}\)/CO is an appealing approach for reducing greenhouse gas concentrations while also producing renewable energy. We used two-dimensional transition metal carbides known as MXenes as the most promising catalysts for a boosted water-gas-shift reaction for the conversion of CO\({}_{2}\) to chemical fuel and liquid hydrogen. Our findings reveal that the Ti\({}_{2}\)C surface collects CO\({}_{2}\) and converts it to reactive carbon monoxide gas and an oxygen termination. Surface catalytic reactions always start with CO hydrogenation, which is sustained by a continual supply of water at the optimum temperature. Ti\({}_{2}\)C surface terminations are responsible for the formation of molecules, free radicals, and alcohols, and the conversion reaction is cycled frequently, producing methanol, methane, water, and hydrogen molecules with each cycle. Furthermore, once water is injected for system hydrogenation, the Ti\({}_{2}\)C surface has the ability to hydrogenate itself, because water breaks down into its constituents O and OH in the presence of free radicals such as H\({}_{2}\)CO. Thus, self-hydrogenation increases liquid hydrogen generation in addition to the usage of water for hydrogen supply.

## Introduction
Due to their numerous applications, 2D materials have emerged as a cutting-edge research field. The newest developments in the 2D universe include transition metal carbides, carbonitrides, and nitrides (MXenes). MXenes are composed of layers of transition metal carbides or nitrides, M\({}_{n+1}\)X\({}_{n}\), where M stands for a transition metal (Sc, Ti, Zr, Hf, V, Nb, Ta, Cr, Mo, and so on), and X is either carbon or nitrogen. Various MXene characteristics, from metallic to semiconducting, depend on the nature of M, X, and the surface termination. High mechanical stability, high electronic conductivity, chemical stability, ion intercalation, and tunable band gaps are just a few of the MXenes' many desirable properties. These properties have important implications for energy applications like fuel cells, hydrogen storage, and lithium-ion batteries. Furthermore, encouraging outcomes have been attained in a variety of fields, including spintronics, wearable electronics, environmental remediation, and biomedicine. MXenes are enticing prospects for catalysis because of the strong reactivity of the resulting surface and the naturally large area of these materials. MXenes meet several of the characteristics that are frequently needed for catalysts, such as stability under reaction conditions and selectivity towards the reaction product. The exothermic adsorption of CO\({}_{2}\) on MXenes suggests that they could be used as CO\({}_{2}\) conversion catalysts, and they obviously show promise as materials for CO\({}_{2}\) capture. The over-reliance on fossil fuels in modern society has negative effects on climate change and ocean acidification. These two phenomena have a strong correlation with CO\({}_{2}\) levels in the Earth's atmosphere. A currently popular option for minimizing these effects is the employment of solid collectors to reduce the CO\({}_{2}\) concentration in accordance with the so-called carbon capture and storage (CCS) plan.

Figure 1: Schematic of syngas catalytic activity on the MXene Ti\({}_{2}\)C surface. Colour code: silver for titanium (the transition metal element), brown for carbon, red for oxygen, and white for hydrogen.
Since CO\({}_{2}\) has a high degree of chemical stability and interacts only weakly with most substrates, active substrates for the selective, robust adsorption of CO\({}_{2}\) are necessary. To put it another way, CO\({}_{2}\) capture is conceptually straightforward, but really putting it into practice on a large enough scale is rather difficult. Several attempts have been made over the years to address this issue, and Jorchick _et al._ proposed a two-in-one reactor that performs both hydrogenation and dehydrogenation, saving catalyst and equipment costs for stationary energy storage applications in remote areas. For example, by simply altering the pressure, the dehydrogenation reaction could be switched to the hydrogenation reaction using the same catalyst [1]. The most inefficient technique is to use the electrical output from a proton exchange membrane fuel cell (PEMFC) to dehydrogenate a charged liquid organic hydrogen carrier (LOHC) system. A hydrogen burner, however, can be used to provide the dehydrogenation heat. A portion of the hydrogen produced by the LOHC is used to heat the dehydrogenation system, while the remainder is used to power the PEMFC. This technique has a maximum efficiency of roughly 34%, making it preferable to electrical heating [2]. CO\({}_{2}\) reduction by the reverse water gas shift (RWGS) reaction, on the other hand, creates a gas mixture composed of CO and H\({}_{2}\), which is commonly referred to as synthesis gas or syngas. The CO\({}_{2}\) dissociation is regarded as the rate-determining step in the RWGS reaction, and its dissociative adsorption heat on the metal governs the reaction rate. The RWGS reaction requires a very high temperature to reach a favourable thermodynamic equilibrium, whereas the chemical equilibrium itself is pressure independent. Furthermore, the chemical kinetics of the reactions are heavily influenced by the catalyst structure, gas composition, and coating processes [3]. Reverse water gas shift catalysis looks to be the dominant method for converting CO\({}_{2}\) to CO, which can subsequently be transformed into a liquid fuel of choice, such as diesel, gasoline, or alcohols, via CO hydrogenation. This technique has the advantage of high rates, selectivity, and technological readiness, but it requires renewable hydrogenation via direct photocatalysis or indirect sources such as electricity and electrolysis. The synthesis of liquid hydrogen fuel from a CO\({}_{2}\)/CO/H\({}_{2}\) feed is a significant method that is heavily reliant on the reaction conditions and the type of catalyst used. Liquid yields, such as liquid hydrogen, are far more advantageous in terms of reducing the shortcomings of fossil fuels. CO\({}_{2}\)/CO/H\({}_{2}\) is used in the reaction technique to synthesise hydrogen fuel, which is at the same time the reverse process of hydrogen fuel reforming. When H\({}_{2}\) is introduced for material reduction, a reverse water-gas shift chemical looping reaction occurs [4]. The reversible hydrogenation of CO\({}_{2}\) to create CO and H\({}_{2}\)O is represented by the RWGS reaction (Eq. 1). CO\({}_{2}\) is a relatively nonreactive molecule due to its chemical stability, and hence the reaction to convert it to the more reactive CO is energy intensive.
\[\text{CO}_{2}+\text{H}_{2}\rightleftharpoons\text{CO}+\text{H}_{2}\text{O},\qquad\Delta H_{298~K}^{\circ}=+42.1~\frac{\text{kJ}}{\text{mol}} \tag{1}\] \[\text{CO}_{2}+4\text{H}_{2}\rightleftharpoons\text{CH}_{4}+2\text{H}_{2}\text{O},\qquad\Delta H_{298~K}^{\circ}=-165~\frac{\text{kJ}}{\text{mol}} \tag{2}\] Since the reaction is endothermic, greater temperatures are thermodynamically advantageous. An increased H\({}_{2}\)/CO\({}_{2}\) ratio maximizes CO\({}_{2}\) conversion and favors the RWGS reaction [5]. As a result, at lower temperatures the equilibrium will gradually favor the reverse reaction of Eq. (1) and the methanation reaction, Eq. (2), which are exothermic and the most notable side reactions under these conditions. However, this ratio and temperature are restricted to ensure that the parameters used during the experimental stages are economically viable for industrial applications. Previous research has demonstrated that changing the pressure has no effect on the reaction activity or the position of the equilibrium, owing to the stoichiometry of the reaction [6]. Two temporal steps can be recognized in the RWGS process: reduction of titanium carbide with H\({}_{2}\), followed by re-oxidation with CO\({}_{2}\), generates the target product, CO. The streamlined reaction scheme is Reduction \[\text{Ti}_{2}\text{C}+2\text{H}_{2}\to 2\text{Ti}+\text{CH}_{4}\] (3) Oxidation \[2\text{Ti}+2\text{CO}_{2}+\text{CH}_{4}\to\text{Ti}_{2}\text{C}+2\text{CO}+2\text{H}_{2}\text{O}\] (4) Direct CO\({}_{2}\) hydrogenation is more thermodynamically favorable than the RWGS and thus more promising for industrialized methanol synthesis; however, methanol synthesis via the reverse water gas shift route was found to give 20% greater methanol yields when CO\({}_{2}\) is first converted to CO than when CO\({}_{2}\) is hydrogenated directly to methanol [7]. Catalysts such as Fe, Co, Ni, Cu, or noble metals are used depending on the end product. Because CO generated from CO\({}_{2}\) is more reactive than CO\({}_{2}\), it can engage in additional hydrogenation processes. Thus, controlling product selectivity in CO\({}_{2}\) hydrogenation on noble-metal-based substrates is highly problematic due to the multifaceted reaction network, which includes three major reactions: CO\({}_{2}\) reduction to CO in the reverse water gas shift process, CO hydrogenation to methanol, and CO/CO\({}_{2}\) hydrogenation to CH\({}_{4}\). ## Results and Discussion We investigated the decomposition and formation of adsorbates and intermediates on the clean Ti\({}_{2}\)C(\(2\times 2\times 1\)) surface, whose active sites are indicated in Fig. 2: B, bridge site; O, off-site; T, top site; and H, hollow site. We studied the step-by-step hydrogenation of CO\({}_{2}\) and CO; the reaction schemes are shown in Fig. 3. Here we discuss the importance of MXenes [8] in enhancing catalytic reactions in relation to adsorption and activation energy; the Brønsted–Evans–Polanyi (BEP) relation describes the linear relation that exists between activation energy and reaction energy [9]. In the electrochemical reduction of CO\({}_{2}\), the surface properties often determine the main product of CO\({}_{2}\) reduction [10]. The Ti\({}_{2}\)C surface adsorbs CO\({}_{2}\) at the bridge site, which is highly reactive; the molecule thus dissociates into CO and O, and this turns on the reverse water-gas shift reaction with hydrogenation of CO, resulting in subsequent chain reactions. The reduction of CO\({}_{2}\) to CO on a Ti-metal surface has been experimentally verified in the literature, Ref.
[11], which agrees with our work. The Ti\({}_{2}\)C bridge site is highly reactive and sufficient to dissociate CO\({}_{2}\), breaking its binding into CO and O with an energy of 4.09 eV. The binding energy (BE) of surface intermediates was determined according to the following equation, \[\mathrm{BE_{x}}=\mathrm{E_{slab+x}}-\mathrm{E_{slab}}-\mathrm{E_{x}} \tag{5}\] where \(\mathrm{E_{slab+x}}\), \(\mathrm{E_{slab}}\) and \(\mathrm{E_{x}}\) are the total energies of the adsorbate-plus-slab system, the clean slab, and the gas-phase intermediate X, respectively. According to this equation, a negative value of \(\mathrm{BE_{x}}\) signifies an exothermic adsorption. Table 1 demonstrates that the hydrogen atom is especially reactive at all sites, which facilitates the start of the water-gas shift reaction [12], consuming the reactive hydrogen ion quickly after CO\({}_{2}\) breaks down into its constituents. The competitive hydrogen evolution reaction occurs at multiple active Ti\({}_{2}\)C surface regions, as illustrated in the flow chart. Thus, reactive CO and O reacting with H\({}_{2}\) result in formic acid; the hydrogen molecule may require solar energy to ionize itself for the reaction to proceed, while hydrogenation via the water gas shift reaction is another viable approach for the production of formic acid, which is a natural fuel. Similarly, sequential CO hydrogenation results in the formation of methanol, methane, water, and the hydrogen molecule. However, because the oxidation reaction rate drops at low temperatures, the reaction should be controlled by the supply of water and the temperature. The Ti\({}_{2}\)C surface terminations, such as O and OH, are critical to the formation of reactive intermediates as well as to the inhomogeneous charge arrangement on the MXene layers, impacting the surface's shape and microstructure. As a result, CO\({}_{2}\) is dissociated at the surface, starting the conversion reaction, and the catalytic cycle must be maintained. Hydrogen and oxygen have to be produced in a molecular ratio of two, or else the system will become unstable and the catalyst will be consumed during the reaction [13]. Direct hydrogenation from free hydrogen molecules or from water at moderate temperatures is thus required for splitting the constituent atoms, culminating in the sequential synthesis of methane, methanol, hydrogen gas, and water. In our scenario, continuous hydrogenation is necessary since the reaction cycle would not otherwise be sustained; this is mostly owing to the hydrogen supply from water splitting, where continual watering in the presence of free radicals such as H\({}_{2}\)CO leads to water splitting into H and OH, as one can note from Table 2. This stage is critical because a constant supply of OH terminations comes from water, and OH is responsible for the creation of intermediates, as is the O termination from CO\({}_{2}\) dissociation, both of which are crucial to the stabilization of the Ti\({}_{2}\)C surface. As a result, H and OH play important roles in cycling the entire water-gas shift reaction process. ## Conclusion In conclusion, we investigated the multi-active single-layered Ti\({}_{2}\)C surface from first principles. We find that the bridge site is the only location where CO\({}_{2}\) dissociates into CO gas and an O termination, and that the gate for the reverse water-gas shift process opens with hydrogen ignition and is kept open by the water supply for hydrogenation and OH termination.
Figure 2: Top views of the clean Ti\({}_{2}\)C(\(2\times 2\times 1\)) surface; B, O, T and H denote possible adsorption sites. \begin{table} \begin{tabular}{c c c c c} \hline & \multicolumn{4}{c}{Adsorption energy sites} \\ Adsorbate & Bridge [eV] & Hollow [eV] & Off-site [eV] & On-top [eV] \\ \hline CO & -2.991 & -1.915 & -1.702 & -1.456 \\ CO\({}_{2}\) & CO + O, & CO\({}_{2}\), & CO\({}_{2}\), & CO\({}_{2}\), \\ & -4.09 & -0.482 & -0.293 & -0.331 \\ H\({}_{2}\) & -0.021 & -0.022 & -0.022 & -0.021 \\ H\({}_{2}\)O & -0.888 & -0.219 & -0.191 & -0.163 \\ CHO & -5.074 & -5.073 & -5.065 & -5.07 \\ cis – COOH & cis – COOH, & cis – COOH, & CO + OH, & CO + OH, \\ & -4.386 & -4.940 & -7.016 & -7.014 \\ H & -4.530 & -4.530 & -4.274 & -3.061 \\ O & -8.673 & -8.675 & -8.575 & -6.339 \\ OH & O + H, & OH, & OH, & OH, \\ & -6.886 & -3.776 & -4.047 & -0.989 \\ Intermediates & & & & \\ \hline CH & C + H, & CH, & CH, & CH, \\ & -6.39 & -4.74 & -4.60 & -1.42 \\ CH\({}_{3}\) & -3.24 & -3.18 & -2.77 & -1.73 \\ CH\({}_{4}\) & -0.16 & -0.16 & -0.16 & -0.17 \\ CH\({}_{3}\)O & -5.04 & -5.10 & -5.04 & -5.11 \\ CH\({}_{3}\)OH & CH\({}_{3}\)OH, & CH\({}_{3}\)O + H, & CH\({}_{3}\)OH, & CH\({}_{3}\)OH, \\ & -0.87 & -4.12 & -0.75 & -0.75 \\ H\({}_{2}\)COH & CH\({}_{2}\) + OH, & H\({}_{2}\)COH, & CH\({}_{2}\) + OH, & CH\({}_{2}\) + OH, \\ & -5.71 & -3.20 & -5.95 & -3.20 \\ HCOOH & HCOOH, & HCOOH, & CHO + OH, & HCOOH, \\ & -2.34 & -2.67 & -4.59 & -2.48 \\ \hline \end{tabular} \end{table} Table 1: Calculated adsorption energies of various intermediates and adsorbates on the bridge, hollow, off-site and on-top sites of Ti\({}_{2}\)C using the van der Waals exchange functional optPBE\(-\)vdW. Figure 3: Flow chart of the step-by-step hydrogenation of CO\({}_{2}\) and CO, the selectivity of the catalyst in building stable intermediates or molecules, and the preferred reduction reaction pathway for liquid hydrogen production. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{4}{c}{Adsorption energy sites} \\ Anion–cation interaction & Bridge [eV] & Hollow [eV] & Off-site [eV] & On-top [eV] \\ \hline H + H \(\rightarrow\) & H\({}_{2}\), & H\({}_{2}\), & H\({}_{2}\), & H\({}_{2}\), \\ & -0.015 & -0.017 & -0.016 & -0.14 \\ C + H \(\rightarrow\) & CH, & CH, & CH, & CH, \\ & -7.00 & -6.96 & -7.00 & -7.00 \\ O + H \(\rightarrow\) & OH, & OH, & OH, & OH, \\ & -5.70 & -5.77 & -5.70 & -5.70 \\ H + OH \(\rightarrow\) & H\({}_{2}\)O, & H\({}_{2}\)O, & H\({}_{2}\)O, & H\({}_{2}\)O, \\ & -1.03 & -0.83 & -0.85 & -0.84 \\ OH + OH \(\rightarrow\) & OH + H + O, & OH + H + O, & OH + OH, & OH + OH, \\ & -9.96 & -9.85 & -8.85 & -8.84 \\ H + OH + O \(\rightarrow\) & H\({}_{2}\)O + O, & OH + O + H, & H\({}_{2}\)O + O, & H + O + OH, \\ & -9.33 & -7.81 & -10.57 & -9.71 \\ H + OH + O + CO \(\rightarrow\) & OH + trans\(-\)COOH, & OH + trans\(-\)COOH, & HCO + OH + O, & OH + trans\(-\)COOH, \\ & -9.33 & -7.81 & -10.57 & -9.71 \\ \hline Water splitting & & & & \\ H\({}_{2}\)O + H\({}_{2}\)CO \(\rightarrow\) & H + OH + H\({}_{2}\)CO, & H + OH + H\({}_{2}\)CO, & H + OH + H\({}_{2}\)CO, & \\ & -3.69 & -4.02 & -4.17 & -6.84 \\ Free hydrogen molecule & & & & \\ H\({}_{2}\) + H\({}_{2}\)CO \(\rightarrow\) & CH\({}_{3}\)OH, & CH\({}_{3}\)OH, & CH\({}_{3}\)OH, & CH\({}_{3}\)OH, \\ & -0.34 & -0.08 & -0.37 & -0.28 \\ \hline \end{tabular} \end{table} Table 2: Calculated binding energies of anion, cation, and radical interactions on the bridge, hollow, off-site and on-top sites of Ti\({}_{2}\)C using the van der Waals exchange functional optPBE\(-\)vdW.
Surface terminations play a critical part in the overall cycling of the reaction route and in the production of liquid hydrogen and methanol fuel. Our research reveals that Ti\({}_{2}\)C surfaces are capable of self-hydrogenation due to free radicals such as H\({}_{2}\)CO generated at the onset of hydrogenation. Hydrogen molecules may be formed from hydrogen ions during the conversion process, but these molecules are promptly consumed in the creation of methanol. ## Methods All calculations were performed using the GPAW code. The Ti\({}_{2}\)C surface was modeled with a one-layer slab using a (\(2\times 2\times 1\)) surface unit cell. The projector augmented-wave method was applied to describe the ionic cores, and the Kohn–Sham valence electron states were expanded in a basis of plane waves with kinetic energy below 400 eV. The surface Brillouin zone was sampled at \(2\times 2\times 1\) k-points. We employed the optPBE\(-\)vdW exchange-correlation functional, which has much better chemical accuracy than vdW-DF for the S22 benchmark set of weakly interacting dimers and for water clusters, with improved performance for the adsorption of water [14].
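To make the workflow concrete, the sketch below evaluates Eq. (5) with an ASE/GPAW setup mirroring the Methods section. The exact exchange-correlation keyword string, the slab construction, and all object names are illustrative assumptions, not the authors' production scripts.

```python
# Minimal sketch of a binding-energy evaluation following Eq. (5):
# BE_x = E_{slab+x} - E_{slab} - E_x (negative => exothermic adsorption).
# Settings mirror the Methods section (400 eV plane-wave cutoff, 2x2x1
# k-points, optPBE-vdW); the Atoms objects for the Ti2C slab and the
# adsorbates are assumed to be built elsewhere.
from gpaw import GPAW, PW

def make_calc(label):
    """Plane-wave GPAW calculator with the settings reported in Methods."""
    return GPAW(mode=PW(400), kpts=(2, 2, 1), xc='optPBE-vdW',
                txt=f'{label}.txt')

def total_energy(atoms, label):
    atoms.calc = make_calc(label)
    return atoms.get_potential_energy()  # eV

def binding_energy(e_slab_plus_x, e_slab, e_x):
    """Eq. (5)."""
    return e_slab_plus_x - e_slab - e_x

# Usage (slab, slab_with_co, co_molecule are hypothetical ase.Atoms objects):
# be_co = binding_energy(total_energy(slab_with_co, 'slab+CO'),
#                        total_energy(slab, 'slab'),
#                        total_energy(co_molecule, 'CO'))
```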
2310.12466
Higher Level Completeness for Permutation Polynomials
Generalising the concept of a complete permutation polynomial over a finite field, we define completeness to level $k$ for $k\ge1$ in fields of odd characteristic. We construct two families of polynomials that satisfy the condition of high level completeness for all finite fields, and two more families complete to the maximum level possible for a large collection of finite fields. Under the binary operation of composition of functions one family of polynomials is an abelian group isomorphic to the additive group, while the other is isomorphic to the multiplicative group.
S. Rajagopal, P. Vanchinathan
2023-10-19T04:47:53Z
http://arxiv.org/abs/2310.12466v1
# Higher Level Completeness for Permutation Polynomials ###### Abstract Generalising the concept of a complete permutation polynomial over a finite field, we define completeness to level \(k\) for \(k\geq 1\) in fields of odd characteristic. We construct two families of polynomials that satisfy the condition of high level completeness for all finite fields, and two more families complete to the maximum level possible for a large collection of finite fields. Under the binary operation of composition of functions one family of polynomials is an abelian group isomorphic to the additive group, while the other is isomorphic to the multiplicative group. Division of Mathematics VIT University Vandalur-Kelambakkam Road Chennai, 600 127 INDIA [email protected], [email protected] **Keywords**: permutation polynomial; complete permutation polynomial AMS Subject Classification: 11T06 ## 1 Introduction A polynomial over a finite field is said to be a permutation polynomial if it gives rise to a bijective function on that field. The monograph [8] has a whole chapter devoted to this, and the handbook [9] has a large collection of results and an extensive bibliography. The survey articles by Hou [6], [7] give the status of the theory of permutation polynomials in general and of binomials and trinomials in particular. Due to their applications to combinatorics, cryptology and coding theory, a lot of effort has been put into constructing them. A permutation polynomial \(f(x)\in{\bf F}_{q}[x]\) is called a complete permutation polynomial if \(f(x)+x\) is also a permutation polynomial. Complete permutation polynomials seem to have certain advantages; in applications to cryptography they are used for constructing bent functions. Results on complete permutation polynomials can be found, for example, in [1], [2], [3], [4], [10]. In this paper our objective is to find a large family of complete permutation polynomials in each finite field. We achieve this and, as a bonus, they are higher level complete polynomials: **Definition 1**: _For a positive integer \(k\), we say a polynomial \(f(x)\in{\bf F}_{q}[x]\) is complete to level \(k\), or simply \(k\)-complete, if along with \(f(x)\), the polynomials \(f(x)+x,f(x)+2x,\ldots,f(x)+kx\) are also permutation polynomials._ Clearly, only completeness up to level \(p-1\), where \(p\) is the characteristic of \({\bf F}_{q}\), needs to be studied. Here are some easy-to-verify statements about the higher level completeness property of a permutation polynomial: * Higher completeness is the same as usual completeness in characteristic 2. * Usual complete polynomials are 1-complete. * If a permutation polynomial is \(k\)-complete, then it is also \(k^{\prime}\)-complete for positive integers \(k^{\prime}<k\). **Definition 2**: _A permutation polynomial that is complete to level \(k\) for \(k=p-1\) is said to be a maximally complete permutation polynomial._ **Example:** Any linear polynomial \(ax+b\in{\bf F}_{q}[x]\) is maximally complete if \(a\) is not in the prime subfield; in case \(a\) is in the prime subfield, it will be complete to level \(k=p-1-a^{\prime}\), with \(a^{\prime}\) being the least positive integer representing \(a\in{\bf F}_{p}\).
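As a concrete illustration of Definition 1 and the example above, here is a minimal brute-force check over a prime field \({\bf F}_{p}\); it is our own verification sketch and not part of the paper's constructions.

```python
# Brute-force check of Definition 1 over F_p: f is k-complete iff
# f(x) + j*x is a bijection of F_p for every j = 0, 1, ..., k.
# A polynomial is given by its coefficient list (constant term first).
def is_permutation(coeffs, p, slope=0):
    """Is x -> f(x) + slope*x a bijection of F_p?"""
    values = {(sum(c * pow(x, i, p) for i, c in enumerate(coeffs))
               + slope * x) % p for x in range(p)}
    return len(values) == p

def completeness_level(coeffs, p):
    """Largest k with f(x)+j*x a permutation for all 0 <= j <= k (-1 if none)."""
    if not is_permutation(coeffs, p):
        return -1
    k = 0
    while k + 1 <= p - 1 and is_permutation(coeffs, p, slope=k + 1):
        k += 1
    return k

# Example: f(x) = 2x + 1 over F_7 is complete to level p-1-a' = 7-1-2 = 4,
# matching the linear-polynomial example above.
assert completeness_level([1, 2], 7) == 4
```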
**Question**: _Do complete permutation polynomials of level \(k>1\) and degree \(>1\) exist?_ Our main results in this paper are the following affirmative answers to the above question: **Theorem A**: _For any prime \(p\geq 3\) and a composite integer \(n\), finite fields of order \(p^{n}\) admit maximally complete permutation polynomials which are non-linear._ In case the finite field does not have a proper subfield bigger than the prime subfield (the case where \(n\) in Theorem A is a prime) we can find polynomials that are almost maximally complete: **Theorem B**: _In all finite fields of characteristic \(\geq 3\), there exist non-linear complete permutation polynomials of level \(p-2\)._ _For more precise and constructive versions of the above, see Theorems 1, 2 and 3_ in the next section. ## 2 Main Results and Proofs Notation: \(\mathbf{F}_{q}\) will denote a finite field of prime power order \(q\), assumed to be of odd characteristic \(p\). For an extension field \(\mathbf{F}_{q^{n}}\) of \(\mathbf{F}_{q}\), we denote \(m=q+q^{2}+q^{3}+\cdots+q^{n-1}\). Note that \((m+1)(q-1)=q^{n}-1\) and that \(m\) is a multiple of the characteristic \(p\). **Lemma 1**: _For a finite field \(\mathbf{F}_{q}\), an extension field \(\mathbf{F}_{q^{n}}\) and any \(c\in\mathbf{F}_{q}\), the following polynomial_ \[f_{c+}(x)=x+c\sum_{j=1}^{m}x^{j(q-1)}\] _is a permutation polynomial over \(\mathbf{F}_{q^{n}}\)._ Proof: It will follow once we show that the polynomial \(f_{c+}(x)\) actually is the function given below: \[f_{c+}(a)=\left\{\begin{array}{ll}a-c&\mbox{for}\quad a\in\mathbf{F}_{q^{n}} \setminus\mathbf{F}_{q}\\ a&\mbox{for}\quad a\in\mathbf{F}_{q}\end{array}\right.\] When \(a\) is a nonzero element of the base field \(\mathbf{F}_{q}\) (the case \(a=0\) being immediate), we claim that all the terms in the summation are 1. That is so because we are summing the \(r\)-th powers of \(a\) where \(r\) is always a multiple of \(q-1\). Thus \(f_{c+}(a)=a+cm=a\), as \(m\) is a multiple of \(p\). Next we move on to evaluating it on the remaining elements. So consider \(a\in\mathbf{F}_{q^{n}}\setminus\mathbf{F}_{q}\); note that \(a^{q-1}\neq 1\) in this case. The summation actually is a sum of \(m\) terms of a geometric progression. So \[f_{c+}(a) =a+c\left[a^{(q-1)}+a^{2(q-1)}+\cdots+a^{m(q-1)}\right]\] \[=a+ca^{q-1}\left[\frac{\left(a^{q-1}\right)^{m}-1}{a^{q-1}-1}\right]\] \[=a+c\left[\frac{a^{q^{n}-1}-a^{q-1}}{a^{q-1}-1}\right]\text{ as }(m+1)(q-1)=q^{n}-1\] \[=a+c\left[\frac{1-a^{q-1}}{a^{q-1}-1}\right]\] \[=a-c\qed\] Summarising, we see that \(f_{c+}(x)\), when restricted to the base field, is the identity function and, on the elements not in the base field, is a translation by an element of the base field. Thus \(f_{c+}(x)\) is a permutation polynomial of \(\mathbf{F}_{q^{n}}\) for all \(c\in\mathbf{F}_{q}\). _Though we make no use of it further, it is now clear that the permutation given by this polynomial for \(c\neq 0\) has order \(p\), with exactly \(q\) fixed points_. Next we provide a multiplicative analogue of the polynomials constructed in the previous lemma. **Lemma 2**: _Under the same notation as earlier, we define for \(c\in\mathbf{F}_{q},c\neq 1\) a polynomial by_ \[f_{c*}(x)=x+cx\sum_{j=1}^{m}x^{j(q-1)}.\] _This is a permutation polynomial over \(\mathbf{F}_{q^{n}}\)._ In fact this polynomial is the function given by the following description: \[f_{c*}(x)=\left\{\begin{array}{ll}x\left(1-c\right)&\text{for}\quad x\in \mathbf{F}_{q^{n}}\setminus\mathbf{F}_{q}\\ x&\text{for}\quad x\in\mathbf{F}_{q}\end{array}\right.\] **Case 1:** To prove \(f_{c*}(a)=a\) for all \(a\in\mathbf{F}_{q}\).
As \(f_{c*}(0)=0\), we consider \(a\in\mathbf{F}_{q}^{*}\). \[f_{c*}(a)=a+c\left[a^{q}+a^{2q-1}+\cdots+a^{mq-(m-1)}\right]\] \[=a+ca\left[a^{q-1}+a^{2(q-1)}+a^{3(q-1)}+\cdots+a^{m(q-1)}\right]\] \[=a+ca\left[\underbrace{1+1+\cdots+1}_{m\text{ times}}\right]\] \[=a+cma=a\] **Case 2:** \(a\in\mathbf{F}_{q^{n}}\setminus\mathbf{F}_{q}\). \[f_{c*}(a) =a+ca\left[a^{(q-1)}+a^{2(q-1)}+a^{3(q-1)}+\cdots+a^{m(q-1)}\right]\] \[=a+ca\left[\frac{a^{(m+1)(q-1)}-a^{q-1}}{a^{q-1}-1}\right]\] \[=a+ca\left[\frac{1-a^{q-1}}{a^{q-1}-1}\right]\] \[=a\left(1-c\right)\qed\] _Again we can specify the cycle type of this permutation \(f_{c*}\) in terms of the order of \((1-c)\) as an element of the multiplicative group \(\mathbf{F}_{q}^{*}\)._ The permutation polynomials constructed above are actually complete to a higher level. Here is the precise result: **Theorem 1**: _The polynomial \(f_{c+}(x)\) is \((p-2)\)-complete; that is, \(f_{c+}(x)\) along with the set of \(p-2\) polynomials \(x+f_{c+}(x),2x+f_{c+}(x),3x+f_{c+}(x),\ldots,(p-2)x+f_{c+}(x)\) are all permutation polynomials._ Proof: For a positive integer \(k\), from the definition we get \[kx+f_{c+}(x) =(k+1)x+c\left[x^{q-1}+x^{2(q-1)}+\cdots+x^{m(q-1)}\right]\] \[=(k+1)\left[x+\frac{c}{k+1}\left(x^{q-1}+x^{2(q-1)}+\cdots+x^{m(q-1)}\right)\right]\] Visual inspection makes it clear that the expression inside the square brackets is actually \(f_{c^{\prime}+}\) for \(c^{\prime}=c/(k+1)\). Thus we arrive at the identity \[kx+f_{c+}(x)=(k+1)\,f_{c^{\prime}+}(x)\] By Lemma 1 the RHS is a permutation polynomial, and hence the LHS is too. Of course we need \(k+1\neq 0\), and so this is true for \(k=1,2,\ldots,p-2\). So \(f_{c+}(x)\) is \((p-2)\)-complete. \(\Box\) **Theorem 2**: _The polynomial \(f_{c*}(x)\) is complete to level \(p-2\), i.e., \(f_{c*}(x)\) along with the set of \(p-2\) polynomials \(x+f_{c*}(x),2x+f_{c*}(x),3x+f_{c*}(x),\ldots,(p-2)x+f_{c*}(x)\) are permutation polynomials._ Proof: Using arguments similar to the ones employed in the proof of Theorem 1 we easily arrive at \[kx+f_{c*}(x)=(k+1)\,f_{c^{\prime}*}(x)\] for \(c^{\prime}=c/(k+1)\). Again we conclude that \(kx+f_{c*}\left(x\right)\) is also a permutation polynomial for \(0<k<p-1\). \(\Box\) Next we move on to finding maximally complete permutation polynomials. First we need a temporary definition of a _middle subfield_ of a field: by that we mean a proper subfield which is not the prime subfield. Middle subfields exist in finite fields of order \(p^{n}\) iff \(n\) is a composite number. **Theorem 3**: _Let \(\mathbf{F}_{q}\) be a finite field admitting at least one middle subfield. Choose \(b,c\in\mathbf{F}_{q}\) such that \(b\) is in some middle subfield but not in the prime subfield, and \(c\) is in any proper subfield of \(\mathbf{F}_{q}\). Then the polynomials \(bf_{c+}(x)\) and \(bf_{c*}(x)\) are maximally complete permutation polynomials._ Proof: As \(c\) is in a proper subfield of \(\mathbf{F}_{q}\), Lemma 1 ensures \(f_{c+}(x)\) is a permutation polynomial and hence the scalar multiple \(bf_{c+}(x)\) is too. We will now show for any \(k=1,2,\ldots,p-1\) that \(bf_{c+}(x)+kx\) is a permutation polynomial. We simply rewrite this as \[bf_{c+}(x)+kx=(b+k)\biggl{(}x+\frac{bc}{b+k}\sum_{j}x^{j(q-1)}\biggr{)}\] The latter polynomial is the same as \((b+k)f_{c^{\prime}+}(x)\) where \(c^{\prime}=bc/(b+k)\), and hence a permutation polynomial. We need \(b+k\) to be nonzero for this to be true.
That follows from the hypothesis that \(b\) is not in the prime field, while \(k\) is always in the prime field. This completes the proof that \(bf_{c+}(x)\) is maximally complete. The same arguments prove the maximal completeness property for the polynomials \(bf_{c*}\) and so are omitted. Next we discuss the inter- and intra-relationships among the members of the two families of polynomials \(f_{c+}(x)\) and \(f_{c*}(x)\). **Theorem 4**: _The two collections of functions \(f_{c+}(x)\) and \(f_{c*}(x)\) behave well under composition. In fact,_ * \(\left\{\,f_{c+}(x)\mid c\in\mathbf{F}_{q}\,\right\}\) _is an abelian group under composition, and is isomorphic to the additive group of_ \(\mathbf{F}_{q}\)_._ * \(\left\{\,f_{c*}(x)\mid c\in\mathbf{F}_{q}-\left\{1\right\}\,\right\}\) _is also an abelian group under composition, and is isomorphic to the multiplicative group_ \(\mathbf{F}_{q}^{*}\)_._ * \(f_{c+}(x)\) _and_ \(f_{c*}(x)\) _have the following relationship:_ \[x(f_{c+}(x)-x+1)=f_{c*}(x)\] Proof: Verifying (i) is straightforward. For part (iii) the given relationship between \(f_{c+}(x)\) and \(f_{c*}(x)\) follows directly from the definitions. For (ii), to see that it is a group one can easily check that \[f_{c*}\circ f_{d*}(x)=f_{(c+d-cd)*}(x)=\left\{\begin{array}{ll}(\,1-c-d+cd\,)x& \mbox{for}\quad x\in\mathbf{F}_{q^{n}}\setminus\mathbf{F}_{q}\\ x&\mbox{for}\quad x\in\mathbf{F}_{q}\end{array}\right.\] and that the compositional inverse of \(f_{c*}(x)\), namely \(f_{c^{\prime}*}(x)\) for \(c^{\prime}=c/(c-1)\), is \[f_{c*}^{-1}(x)=\left\{\begin{array}{ll}x/(1-c)&\mbox{for}\quad x\in\mathbf{ F}_{q^{n}}\setminus\mathbf{F}_{q}\\ x&\mbox{for}\quad x\in\mathbf{F}_{q}\end{array}\right.\] Now the statement that it is isomorphic to \(\mathbf{F}_{q}^{*}\) can be deduced from the following lemma, whose proof is a simple exercise: **Lemma 3**: _For any field \(K\), the set \(K\setminus\left\{1\right\}\) is a group under the binary operation \(a*b=a+b-ab\), and this is isomorphic to \(K^{*}\) under the map \(a\mapsto 1-a\)._ **Examples:** The first two polynomials given below are 3-complete permutation polynomials over \(\mathbf{F}_{25}\), and the others are 5-complete over \(\mathbf{F}_{49}\). * \(f_{2+}(x)=2x^{20}+2x^{16}+2x^{12}+2x^{8}+2x^{4}+x\in\mathbf{F}_{5^{2}}[x]\) * \(f_{4*}(x)=4x^{21}+4x^{17}+4x^{13}+4x^{9}+4x^{5}+x\in\mathbf{F}_{5^{2}}[x]\) * \(f_{4+}(x)=4x^{42}+4x^{36}+4x^{30}+4x^{24}+4x^{18}+4x^{12}+4x^{6}+x\in\mathbf{ F}_{7^{2}}[x]\) * \(f_{6*}(x)=6x^{43}+6x^{37}+6x^{31}+6x^{25}+6x^{19}+6x^{13}+6x^{7}+x\in\mathbf{ F}_{7^{2}}[x]\)
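The first example can be checked mechanically. Below is a short verification sketch (our own, not from the paper) that realises \(\mathbf{F}_{25}\) as \(\mathbf{F}_{5}[t]/(t^{2}-3)\), which is legitimate since 3 is a non-square mod 5, and confirms that \(f_{2+}\) is 3-complete.

```python
# Verify that f_{2+}(x) = 2x^20+2x^16+2x^12+2x^8+2x^4+x is 3-complete over
# F_25, represented as pairs (a0, a1) = a0 + a1*t with t^2 = 3 over F_5.
P = 5

def add(a, b):
    return ((a[0] + b[0]) % P, (a[1] + b[1]) % P)

def mul(a, b):
    # (a0 + a1 t)(b0 + b1 t) with t^2 = 3
    return ((a[0]*b[0] + 3*a[1]*b[1]) % P, (a[0]*b[1] + a[1]*b[0]) % P)

def power(a, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, a)
    return r

def scale(c, a):
    return ((c*a[0]) % P, (c*a[1]) % P)

field = [(i, j) for i in range(P) for j in range(P)]

def f2plus(x):
    # x + 2*(x^4 + x^8 + x^12 + x^16 + x^20): c = 2, q = 5, n = 2, m = 5
    s = (0, 0)
    for j in range(1, 6):
        s = add(s, power(x, 4*j))
    return add(x, scale(2, s))

for k in range(4):  # f, f+x, f+2x, f+3x must all permute F_25
    images = {add(f2plus(x), scale(k, x)) for x in field}
    assert len(images) == 25
print("f_{2+} is 3-complete over F_25")
```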
2306.00784
Interpretable Math Word Problem Solution Generation Via Step-by-step Planning
Solutions to math word problems (MWPs) with step-by-step explanations are valuable, especially in education, to help students better comprehend problem-solving strategies. Most existing approaches only focus on obtaining the final correct answer. A few recent approaches leverage intermediate solution steps to improve final answer correctness but often cannot generate coherent steps with a clear solution strategy. Contrary to existing work, we focus on improving the correctness and coherence of the intermediate solution steps. We propose a step-by-step planning approach for intermediate solution generation, which strategically plans the generation of the next solution step based on the MWP and the previous solution steps. Our approach first plans the next step by predicting the necessary math operation needed to proceed, given history steps, then generates the next step, token-by-token, by prompting a language model with the predicted math operation. Experiments on the GSM8K dataset demonstrate that our approach improves the accuracy and interpretability of the solution on both automatic metrics and human evaluation.
Mengxue Zhang, Zichao Wang, Zhichao Yang, Weiqi Feng, Andrew Lan
2023-06-01T15:16:18Z
http://arxiv.org/abs/2306.00784v1
# Interpretable Math Word Problem Solution Generation Via Step-by-step Planning ###### Abstract Solutions to math word problems (MWPs) with step-by-step explanations are valuable, especially in education, to help students better comprehend problem-solving strategies. Most existing approaches only focus on obtaining the final correct answer. A few recent approaches leverage intermediate solution steps to improve final answer correctness but often cannot generate coherent steps with a clear solution strategy. Contrary to existing work, we focus on improving the correctness and coherence of the intermediate solution steps. We propose a step-by-step planning approach for intermediate solution generation, which strategically plans the generation of the next solution step based on the MWP and the previous solution steps. Our approach first _plans_ the next step by predicting the necessary math operation needed to proceed, given history steps, then _generates_ the next step, token-by-token, by prompting a language model with the predicted math operation. Experiments on the GSM8K dataset demonstrate that our approach improves the accuracy and interpretability of the solution on both automatic metrics and human evaluation. ## 1 Introduction Arithmetic math word problems (MWPs) consist of natural language statements describing real-world scenarios that involve numerical quantities, followed by a question asking for an unknown value. Solving MWPs requires parsing the textual statements and carrying out the corresponding calculations Kumar et al. (2022). MWPs are an important educational tool that helps assess and improve student knowledge in basic mathematical concepts and skills Walkington (2013); Verschaffel et al. (2020). They also represent a long-standing interest in artificial intelligence (AI) research since correctly solving them serves as a key benchmark task for testing and improving the mathematical reasoning skills of AI models Feigenbaum and Feldman (1995); Bommasani et al. (2021); Cobbe et al. (2021); Lewkowycz et al. (2022). There is a large body of literature that focuses on automatically solving MWPs. Earlier works took a modular approach that first analyzes unconstrained natural language and then maps intricate text patterns onto mathematical vocabulary Sundaram et al. (2022). As a result, this approach relies heavily on hand-crafted rules to fill the gap between natural language and symbolic mathematical vocabulary Sundaram et al. (2022). Recent works leverage advances in natural language processing and take a neural network-based, end-to-end approach, where a neural network encodes a numerical representation of the MWP (and the underlying equation), from which a decoder generates the final answer Zou and Lu (2019); Wang et al. (2017); Wu et al. (2020); Chen et al. (2020); Cao et al. (2021); Shen et al. (2021); Shao et al. (2022); Jie et al. (2022). Unfortunately, the vast majority of these works focus on generating and predicting _a single final answer_, since answer correctness is often the only evaluation metric. Therefore, these works do not provide any insights or explanations into how the models arrive at the answer. As a result, it is often difficult, if not entirely impossible, to explain the model's behavior, especially when it produces a wrong answer. The lack of interpretability of these methods makes it challenging to analyze them and unsafe to use them in real-world applications. This interpretability issue has attracted increasing interest in MWP solving research.
Recent works have shifted to designing models that not only generate the final answer for an MWP, _but also the intermediate steps_. The ability to generate intermediate steps not only enables researchers to investigate model behavior but also enables new applications. For example, in personalized education and intelligent tutoring systems, these models have the potential to generate detailed, personalized solution steps as feedback to improve student understanding of the mathematical concepts and resolve misconceptions (Walkington, 2013; Karpicke, 2012; Koedinger et al., 2015). The recent GSM8K (Cobbe et al., 2021) dataset contains MWPs that come with 2 to 8 intermediate steps described in natural language, which provides a good resource to study step-by-step solution generation. Many works apply (large) language models (LMs) to this dataset and achieve high accuracy in final answer generation, without studying the quality of intermediate steps (Wei et al., 2022; Wang et al., 2022; Chowdhery et al., 2022; Lewkowycz et al., 2022; Uesato et al., 2022; Kojima et al., 2022; Li et al., 2022). These works use verifiers, a self-consistency decoding strategy (majority votes), chain-of-thought prompting, or calculators; see Section 4 for a detailed discussion. However, existing LMs are still prone to generating incorrect intermediate steps despite yielding the correct final answer. The models are not competent at numerical reasoning, possibly because they generate intermediate steps word by word (or token by token) and cannot look far ahead. As a result, they only use shallow heuristics based on word occurrence (Li et al., 2021) and lack the multi-step mathematical reasoning capabilities that solving an MWP requires. A recent study that experiments on GPT-4 also points out that the next-word prediction architecture precludes any "inner dialog" and prevents the model from really planning ahead (Bubeck et al., 2023). ### Contributions In this paper, we study the problem of generating accurate and high-quality intermediate solution steps with natural language explanations via step-by-step planning using LMs. We formulate this problem as a controllable generation problem where the LM aims to generate the correct intermediate solution at each solution step, given the MWP and previous solution steps. This problem is particularly challenging since _the generated solution steps need to be accurate_, i.e., each intermediate step must be mathematically valid and on the path to the correct answer. We need an approach different from widely-adopted, attribute-controlled generation approaches for topic or sentiment, where the attribute is nuanced and cannot be matched exactly (Dathathri et al., 2020; Krause et al., 2020; Shirish Keskar et al., 2019). To overcome these challenges, we introduce a _planning-LM_ approach, where we plan the strategy for the next solution step and then use the plan to guide LMs to generate the step. Since symbols and patterns are crucial to the effectiveness of chain-of-thought prompting (Madaan and Yazdanbakhsh, 2022), we design plans in the form of _mathematical operations_ to prompt the model to generate the next intermediate step. We summarize our contributions as follows. **[C1]** We explore the use of a planning approach for step-by-step solution generation for MWPs. To the best of our knowledge, our work is the first to focus on generating high-quality intermediate solution steps via LMs.
**[C2]** We first predict the mathematical operation to be applied in the next solution step using a small model and then apply a carefully-constructed prompt to control an LM to generate the next solution step. Our approach can be extended to many downstream applications due to its interpretability and high controllability. **[C3]** We evaluate our planning-LM approach on the GSM8K dataset to demonstrate its effectiveness, both quantitatively and qualitatively. With minimal additional parameters (0.02%), it outperforms existing approaches on both final answer accuracy and intermediate step quality. Moreover, by manually changing the math operation prompt, we can control our approach to generate _different correct solution paths_ for the same MWP. ### Notation We first define all of the terms and components in our approach. We define an MWP as \(Q=\{q_{1},q_{2},\ldots,q_{n}\}\) where \(q_{i}\) represents a token, which is either a numerical value, a mathematical operator, or a word/sub-word. The corresponding step-by-step solution is \(S=\{S^{1},S^{2},\ldots\}\), where \(S^{i}\) denotes the \(i^{th}\) step of the solution. For any step \(S^{i}\), we denote it as \(S^{i}=\{s^{i}_{1},s^{i}_{2},\ldots\}\), consisting of a sequence of tokens. Next, we define our prompt in two parts. The first part is the textual instruction prompt, which contains words that LMs can understand, and the second part is the mathematical operation prompt, which is a special token that instructs the LM on which mathematical operation to perform in the next solution step. We denote the instruction prompt as \(P=\{p_{1},p_{2},\ldots\}\), where \(p_{i}\) represents a word/sub-word token, and the operation prompt as \(O=\{o\}\), where \(o\) is a categorical variable indicating the math operation token. We define \(H_{i}\) as the solution context, i.e., the history at step \(S^{i}\), which consists of the problem \(Q\) and all previous steps, \(\{S^{1},\ldots,S^{i-1}\}\). \(\mathcal{M}\) denotes the base LM and \(e\) is its corresponding token embedding function. Finally, we define \(f\) as the prompt embedding function. Both \(e\) and \(f\) map tokens into \(\mathcal{R}^{K}\), where \(K\) is the hidden state dimension of the LM. ## 2 Methodology We now define our MWP solution generation task and detail the specifics of our approach. Our task is: given a question \(Q\), generate a step-by-step solution \(S=S^{1},S^{2},\ldots\), with each step consisting of a combination of textual and mathematical tokens, to reach the final answer. We formulate the problem as a step-wise controllable generation task using prompt-based LM fine-tuning. Figure 1 shows an overview of our approach\({}^{1}\), including its two main components: First, we utilize the MWP and the solution history to plan and predict the next mathematical operation to apply in the next step. Second, we use the predicted operation prompt together with the instruction prompt to guide the next-step generation process. Our key technical challenges are (i) how to learn a solution planning strategy to transition from step to step and (ii) once we have the next operation, how to design and apply prompts that guide the generative LM to generate a next step that follows the plan. Footnote 1: For clarity, we discuss our methodology based on decoder-only Transformer-based LMs. However, our methodology also generalizes to encoder-decoder-type LMs, such as T5, which we experimentally verify (see Table 1). More details can be found in Appendix E.
Figure 1: An overview of our step-by-step MWP solution generation approach. Planning-LM first predicts the next-step operation hint **(a-1)** and controls the next-step generation via the predicted operation hint **(a-2)**. **Figure (b)** shows the overall generation process given the question \(Q\). ### Operation Prediction Our first step is to predict the mathematical operation to be applied in the next step. To achieve this, we concatenate the solution history \(H\) and a crafted instruction prompt \(P\) (e.g., "What is the next operation?") followed by the special token "\([cls]\)" as input to a (not necessarily large) LM. We encode solution history tokens with a vocabulary embedding function \(e_{\beta}\) and instruction prompt tokens with a separate prompt embedding function \(f_{\theta}\); \(\beta\) and \(\theta\) are the parameters of these parts, i.e., the embedding layers in an LM. Then, we obtain the representation of the solution history as the final-layer hidden state of the LM \(\mathcal{M}\) at the \([cls]\) position, denoted \(h_{[cls]}\). To predict the operation action of the next step, we use a one-layer, fully-connected network as the classifier, with weight \(w_{\gamma}\), to obtain an operation score vector \(s\in[0,1]^{|O|}\) over the valid math operations, where \(|O|\) is the number of operation classes, as \[s=w_{\gamma}h_{[cls]},\] where \(\gamma\) is the set of parameters for the classifier. Since we need to use an LM for step generation, introducing a separate LM for operation prediction would lead to a large number of parameters. Therefore, we use the same LM for both operation planning and solution step generation. The objective function for operation planning is the cross-entropy loss on operators, i.e., \[\mathcal{L}_{CE}=-\sum_{i=1}^{|O|}t_{i}\log\left(\frac{\exp s_{i}}{\sum_{j=1}^{|O|}\exp s_{j}}\right),\] where \(s_{i}\) is the score of operation class \(i\) and \(t_{i}\) is an indicator such that \(t_{i}=1\) when \(i\) is the true label and \(t_{i}=0\) otherwise. We obtain true labels by extracting mathematical operations from each step of the solution in the training data, which we detail below in Section 2.3. ### Controllable Step Generation Once we have the predicted operation \(O\), we append the corresponding prompt to the instruction prompt \(P\) to form our final prompt for step generation. Our task becomes a controllable generation task: given history \(H\) and the prompt \([P;O]\) that plans the next step, our goal is to generate the next step \(S\) token-by-token. We generate a step \(S_{i}=\{s_{1}^{i},\ldots,s_{T}^{i}\}=\{s_{j}^{i}\}_{j=1}^{T}\) according to \[p(S_{i}|[P_{i};O_{i}],H_{i})=\prod_{j=1}^{T}p(s_{j}^{i}|[P_{i};O_{i}],H_{i},\{s_{k}^{i}\}_{k=1}^{j-1}).\] Then, the overall step-by-step solution \(S\) with \(N\) steps is generated according to \[p(S)=\prod_{i=1}^{N}p(S_{i}|[P_{i};O_{i}],H_{i})p(O_{i}|H_{i}).\] The step generation objective is given by the negative log-likelihood objective function \[\mathcal{L}_{LM}=-\sum_{i=1}^{N}\log p_{\beta,\theta,\gamma,\psi}(S_{i}|[P_{i};O_{i}],H_{i}),\] where the set of parameters includes the previously defined \(\beta,\theta,\gamma\) and the LM parameters \(\psi\). \(\beta\) and \(\psi\) are fine-tuned while \(\theta\) and \(\gamma\) are learned from scratch. We also investigate two ways to position the prompt in the LM input: as prefix, where we place it at the beginning, i.e., the input is given by \([P;O;H]\), and as infix, where we append the prompt after the history, i.e., the input is given by \([H;P;O]\).
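To make the equations above concrete, here is a minimal PyTorch sketch of the operation-prediction head; it is our own illustration, and the model wrapper, tensor shapes, and Hugging Face-style interface are assumptions, not the authors' released code.

```python
# Sketch of operation prediction: the final-layer hidden state at the
# appended [cls] position is mapped by a one-layer classifier (w_gamma)
# to scores over |O| = 20 operation classes, trained with cross-entropy.
import torch
import torch.nn as nn

class OperationPredictor(nn.Module):
    def __init__(self, lm, hidden_size, num_operations=20):
        super().__init__()
        self.lm = lm  # shared base LM (e.g., GPT-2), also used for step generation
        self.classifier = nn.Linear(hidden_size, num_operations)  # w_gamma

    def forward(self, input_ids, attention_mask, cls_positions):
        # input_ids encodes [H; P; [cls]] (infix placement of the prompt).
        out = self.lm(input_ids=input_ids, attention_mask=attention_mask,
                      output_hidden_states=True)
        hidden = out.hidden_states[-1]        # (B, L, K)
        batch = torch.arange(input_ids.size(0))
        h_cls = hidden[batch, cls_positions]  # h_[cls], shape (B, K)
        return self.classifier(h_cls)         # scores s, shape (B, |O|)

# Training step (labels are operations extracted from the training solutions):
# scores = predictor(input_ids, attention_mask, cls_positions)
# loss = nn.functional.cross_entropy(scores, operation_labels)  # L_CE
```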
### Prompt Design Our prompt consists of two parts: the instruction prompt gives the LM general instructions on what to generate, while the operation prompt provides specific guidelines for the mathematical calculation involved in the next step. For the instruction prompt, we apply prompt mining Yuan et al. (2021) to find good instructions, i.e., word tokens that are the most informative for the LM to accomplish the desired task. See Section D.2 for details. For the operation prompt, we extract 20 common operations from the training data, such as one-step addition \([n+n]\), subtraction \([n-n]\), multiplication \([n*n]\), etc., and use them as prompts. We note that these operators are easy to find and can be automatically extracted, which means that there is no need to manually create labels to train the operation prediction LM. The instruction tokens and operation action tokens form the entire vocabulary of the prompt function \(f_{\theta}\). The prompt function is a two-layer perceptron with a ReLU activation function. ### Optimization Although our entire approach can be trained end-to-end, we found that optimizing the operation prediction model and fine-tuning the LM/prompts for step generation asynchronously leads to better performance. Our intuition is that the operation predictor is a high-level decision-making policy for the entire solution while the LM generation process is a low-level (token-by-token) decision-making process for the current step. Optimizing these two modules simultaneously may cause inconsistency since the operation predictor may make a decision based on LM parameters that also need to be updated. Therefore, we first optimize the parameters of the generation LM and prompts with the step generation task loss, using ground truth operation labels, which we extract from the mathematical part of each step in the training data. Then, we iterate between freezing both the LM \(\mathcal{M}\) and the prompt function \(f\) while tuning the operation predictor, and switching the two. In this way, the whole model converges in a stable process Wang et al. (2020). ## 3 Experiments We now detail a series of experiments that we conducted to validate the effectiveness of our proposed planning-LM approach on step-by-step MWP solution generation. Since our focus is on MWP solution generation with explanations, GSM8K Cobbe et al. (2021) is a good fit for our purpose. This dataset contains 8.5K high-quality and linguistically diverse MWPs, where each MWP has 2-8 solution steps. See Section C for details on data preprocessing. ### Automated Metrics We need a variety of different metrics to understand the effectiveness of our planning-LM approach. For the final answer, we use the **solve rate** metric to evaluate whether the model generates the final correct answer to each MWP. Since generating meaningful steps is also key, we use the **BLEU** metric Papineni et al. (2002) to evaluate language generation quality. For intermediate steps, we use the equation match accuracy (**ACC-eq**) metric to evaluate whether a generated step contains a math expression (including numbers) that matches the ground truth. Since LMs generate math equations as strings, we decompose the equation string into tokens and calculate the token-level match rate instead of the overall string match. We also use the operation match accuracy (**ACC-op**) metric to evaluate whether a generated step's operation label matches the ground truth.
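A minimal sketch of how the ACC-eq computation just described could be implemented; the paper does not publish its exact tokenization, so the regex tokenizer below is our own assumption for illustration.

```python
# Token-level equation match (ACC-eq): split each equation string into
# number/operator tokens and score the match rate rather than exact string
# equality.
import re

def eq_tokens(equation):
    """Split an equation string like '8/6 = 1.33' into number/operator tokens."""
    return re.findall(r"\d+\.?\d*|[+\-*/=()]", equation)

def acc_eq(generated, reference):
    """Fraction of reference equation tokens reproduced in order."""
    gen, ref = eq_tokens(generated), eq_tokens(reference)
    if not ref:
        return 1.0
    matches = sum(g == r for g, r in zip(gen, ref))
    return matches / len(ref)

assert acc_eq("4*2 = 8", "4*2 = 8") == 1.0
assert acc_eq("4+2 = 8", "4*2 = 8") == 0.8  # 4 of 5 tokens match
```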
### Human Evaluation Our proposed planning-LM framework cannot be accurately evaluated using only automated metrics since text similarity metrics such as BLEU do not accurately reflect the mathematical validity of intermediate solution steps. To address this limitation, we implemented a human evaluation protocol with three metrics: _reasoning strategy_, _clear explanation_, and _overall preference_. Ten raters with a good understanding of fundamental mathematics concepts evaluated 50 randomly selected MWPs using the protocol, where their task was to compare two different step-by-step solutions; each MWP received at least three ratings. The full evaluation template can be found in Section G. ### Experimental Settings We conduct two experiments to verify the effectiveness of our planning-LM framework. In the first, single-step experiment, we input the question and ground-truth solution steps to the model, let it generate the next step, and calculate the ACC-eq and ACC-op metrics for each generated step. Since some of the steps are too short, yielding a high variance in BLEU scores, we concatenate all generated steps and calculate the overall BLEU metric between the ground truth solution and this true-history-informed solution. In the second, all-step experiment, we only provide the model with the MWP and ask it to generate all solution steps. We then calculate the solve rate metric to evaluate whether the final answer is correct. We choose GPT-2 (117M parameters) and GPT-2-medium (345M) as our base models and compare the generation results between LM fine-tuning and planning-LM. Meanwhile, we perform another experiment using the ground truth operation prompt as input for planning-LM to generate the next step. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Model** & **BLEU** & **ACC-eq** & **ACC-op** & **Solve Rate** \\ \hline Chain-of-thought-tuning GPT-2 (117M) & 34.3 & 49.4 & 55.1 & 8.1 \\ Planning-GPT-2 with operation classifier (117M) & 35.4 & 56.7 & 61.6 & 14.1 \\ \hline Chain-of-thought-tuning GPT-2-medium (345M) & 38.1 & 58.1 & 61.1 & 16.1 \\ Planning-GPT-2-medium with operation classifier (345M) & 39.5 & 61.8 & 65.2 & 20.1 \\ \hline Chain-of-thought-tuning T5 (220M) & 30.3 & 45.4 & 52.1 & 3.1 \\ Planning-T5 with operation classifier (220M) & 34.4 & 55.7 & 60.6 & 13.9 \\ \hline Chain-of-thought-tuning T5-large (770M) & 35.3 & 58.9 & 63.1 & 17.0 \\ Planning-T5-large with operation classifier (770M) & 40.5 & 62.3 & 66.3 & 21.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Planning-LM outperforms fine-tuning LMs for both small and medium-sized GPT-2. Moreover, planning-LM with a small GPT-2 achieves performance comparable to fine-tuning medium GPT-2, implying that our method can make a smaller model rival larger ones fine-tuned in the traditional way. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{**Method Component**} & \multicolumn{4}{c|}{**Metric**} \\ \hline Infix & Prefix & Prompt function & Prompt mining & Operation Predictor & BLEU & ACC-eq & ACC-op & Solve Rate \\ \hline ✓ & & ✓ & ✓ & ✓ & **35.4** & **56.7** & 61.6 & **14.1** \\ \hline & ✓ & ✓ & ✓ & ✓ & 33.7 & 52.1 & **63.2** & 10.4 \\ \hline ✓ & & & ✓ & ✓ & 33.1 & 51.9 & 58.4 & 10.2 \\ \hline ✓ & & ✓ & & ✓ & 33.9 & 55.1 & 59.9 & 13.2 \\ \hline ✓ & & ✓ & ✓ & & 34.1 & 54.2 & 60.1 & 13.5 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation results for different components of our approach. Most components contribute significantly.
The result, an upper bound on the performance of planning-LM, reflects the effectiveness of low-level token-by-token generation in each step, while ACC-eq and ACC-op reflect the effectiveness of high-level mathematical operation planning across steps. We also conduct the above experiments on encoder-decoder LMs: T5-base (220M) and T5-large (770M). The decoder architecture is the same as in the GPT-2 models, but instead of treating the question as history input, T5 contains an extra encoder to encode the question and uses cross-attention to the question to generate results. To fairly compare planning-LM with other works on LLM prompting such as chain-of-thought, instead of prompt-tuning on a relatively small LM, we adapt our approach for in-context learning. We select five examples with a specific format \((Q,P,O_{1},S_{1},P,O_{2},S_{2},\ldots)\), i.e., the question followed by a number of prompt-operation-solution triples. We use the examples with GPT-3 ("text-davinci-003") for in-context learning. An example of the prompt we use is shown in Table 6. ### Quantitative Results #### 3.4.1 Prompt-tuning Table 1 shows the experimental results for all prompt-tuning-based approaches across the two experiments. We see that planning-GPT-2 and planning-T5 with our operation classifier outperform chain-of-thought-tuning on both GPT-2 and T5. We also observe that a similar trend holds for the larger models, GPT-2-medium and T5-large. We highlight that with the planning component, which introduces only around 10K new parameters for the MWP solving task, a base GPT-2 model with 117M parameters performs similarly to a much larger base GPT-2-medium model with 345M parameters. This observation shows that our planning approach is highly parameter-efficient for MWP solving. The other observation is that our approach seems to adapt better to decoder-only LMs than to encoder-decoder LMs, even ones with more parameters; T5-base yields almost the same performance as GPT-2, with twice as many parameters. To validate the effectiveness of each component in our planning-LM approach, we conduct an ablation study on four different components: using prefix or infix prompts, fixed or fine-tuned mathematical operation prompts, instruction prompt mining, and the operation predictor. We see that using infix prompts, fine-tuned mathematical prompts, and the operation predictor improves performance the most across different settings. We also see that infix prompts are significantly better than prefix prompts, which is different from the observation made in prior work Li and Liang (2021). One possible explanation is the incompatibility between prefix prompting and step-by-step generation: prefix prompts put the most important instruction at the front of the LM input, making all generated tokens attend to it, which leads to higher operation prediction accuracy but worse generation performance on other tokens. #### 3.4.2 In-context learning We conduct experiments by giving in-context prompting examples to GPT-3 in different formats, and the results are shown in Table 3. We see that planning-LM yields the best solve rate, significantly higher than other approaches. We further analyze the human evaluation results in Section 3.5. ### Human Evaluation Results Figure 2 shows the distributions of participants' selections on human evaluation metrics for the generated solutions.
Figure 2: The distributions of participants' selections in our human evaluation experiment. We see that solutions generated by planning-LM are significantly better than those produced by chain-of-thought on all three metrics, proving that our approach leads to solutions with more precise language and better problem-solving strategies. Providing math operation information to the LM as hints on the next step also helps the model generate clearer and sounder explanations in the intermediate solution steps. \begin{table} \begin{tabular}{l c} \hline \hline **Model** & **Solve Rate (\%)** \\ \hline Standard prompting & 21.4 \\ Chain-of-thought & 68.5 \\ Planning-LM & 72.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Solve rate (\%) of different prompting methods on GSM8K. All the in-context prompting methods use text-davinci-003 with 4 examples. ### Qualitative Analysis Table 4 shows two examples that compare the full step-by-step solutions generated by our planning-LM approach and chain-of-thought prompting. For Example 1, we see that although chain-of-thought happens to produce the correct answer, the reasoning starts to fall apart at Step 3. It generated the correct final answer only because the question mentioned rounding the answer to the nearest integer; its intermediate answer 1.33 is wrong. For Example 2, the answer generated by chain-of-thought does not have detailed wording explanations, whereas planning-LM's solution details each step of the solving strategy, making the solution much easier to understand. Perhaps surprisingly, we observe that planning-LM can generate multiple solutions if it predicts a different math operation in the next step compared to the ground truth solution. Therefore, we conduct a follow-up experiment by giving the model a hand-crafted plan via operation prompts to see whether it can generate an alternative correct solution strategy. Table 5 further demonstrates that our approach can generate multiple correct solution paths for the same problem. For example, feeding Plans I and II enables the model to generate the correct final answer among the four strategies we used; the generated solutions follow the operation steps given, indicating that the model has some reasoning ability and can extract meaningful patterns from data. Plan III results in a flawed solution, and Plan IV fails since we do not have an operation class that matches the step. For Plan III, the first step, \([n+n+\ldots]\), is not seen often enough in the training data. For Plan IV, \((n+n)\times n\) is not seen in the training data either. However, we note that in this case, using the closest operation, \([n+n\times n]\), results in a solution that gets very close to the correct final answer. These results suggest that a better representation of the operation prompt is crucial for future work since our current approach is limited to a finite number of predefined operations; a prompt operation _generator_ rather than a classifier could be a better choice for a wide variety of mathematical operations. We also note that this flexibility gives our planning-LM approach the potential to be useful in real-world applications. For example, these solution plan controls may encourage math students to explore different solution strategies and be more creative. ## 4 Related work **MWP solvers.** A large body of recently proposed MWP solvers parses an MWP into its underlying equation, which has been a very active research area with a plethora of related work. These works differ mainly in their technical approaches, which broadly fall into three categories.
First, some works explore MWP solving via reinforcement learning, which rewards the model when the correct answer is generated (Huang et al., 2018; Wang et al., 2018). RL methods generally require a sizable dataset and can be unstable to train, which may not be suitable for most MWP datasets that are only of modest sizes. Second, some works exploit the combination of symbolic- and neural-network-based approaches, e.g., by combining pre-defined symbolic patterns such as solution templates (Wang et al., 2019) and symbolic tree structures of equations (Xie and Sun, 2019; Li et al., 2020; Qin et al., 2020; Wang et al., 2018; Wu et al., 2020; Zhang et al., 2021). These methods can be significantly constrained by these patterns, and it may be challenging to generalize them to other MWPs whose solutions are not expressed by these patterns. Lastly, some works build on large LMs (LLMs) via special fine-tuning or inference techniques. Chain-of-thought prompting (Wei et al., 2022) prompts LLMs to generate intermediate steps before reaching the final answer. Cobbe et al. (2021) fine-tune a model as a verifier and apply the verifier to rank outputs in the decoding phase. Wang et al. (2022) use a majority vote among outputs to select the best answer. Lewkowycz et al. (2022) fine-tune an LLM on a large collection of math-specific datasets, combining existing techniques. There are also some extension works based on CoT, like least-to-most prompting Zhou et al. (2022), which decomposes a complicated question into simpler single-hop questions, and STaR Zelikman et al. (2022), which performs iterative rationale generation using a small number of examples and a large dataset. Our work differs from previous studies by not only prioritizing final solution accuracy but also emphasizing the generation quality of individual solution steps. Additionally, we introduce a novel hierarchical planning method for fine-tuning, in contrast to previous approaches that rely solely on ordinary language modeling techniques. **Controllable text generation.** Given the rise of LLMs, controllable generation methods that guide these large models to generate desirable content and avoid potential pitfalls such as bias Bender et al. (2021) have been a recent research trend. These controllable generation methods generally fall into two categories. Works in the first category modify the token distribution at each time step to achieve controllable generation via gradient-based methods Dathathri et al. (2020), external classifier-based methods Krause et al. (2020); Liu et al. (2021), or resampling Clark et al. (2020); Bhattacharyya et al. (2021); Bakhtin et al. (2021). Works in the second category fine-tune the LLM via either language modeling Shirish Keskar et al. (2019) or reinforcement learning Khalifa et al. (2021). These works focus on controllable generation for natural language and study nuanced control attributes such as topic and sentiment that can only be matched implicitly. In contrast, our work focuses on both natural and mathematical language, which involves control attributes, e.g., math operation hints in the form of equations, that need to be matched exactly. \begin{table} \begin{tabular}{l} \hline \hline **1. Question:** Antoine's french onion soup recipe calls for 2 pounds of onions. He likes to double that amount. His soup serves 6 people. The onions are currently on sale for $2.00 a pound.
He also needs 2 boxes of beef stock, which are also on sale for $2.00 a box. What is the cost per serving? (Round to the nearest integer.) \\ \hline **Chain-of-thought:** The cost of the onions is 2 pounds times $2.00 per pound, which is 2*2 = 4 dollars. The cost of the beef stock is 2 boxes times $2.00 per box, which is 2*2 = 4 dollars. The total cost of the ingredients is 4+4 = 8 dollars. The cost per serving is 8 divided by 6 servings, which is 8/6 = 1.33 dollars. \\ Rounded to the nearest integer, the cost per serving is $2. **Planning-LM:** Antoine needs 4 pounds of onions, which costs 4*2 = 8 dollars. He also needs 2 boxes of beef stock, which costs 2*2 = 4 dollars. The total cost for the ingredients is 8+4 = 12 dollars. The cost per serving is 12/6 = 2 dollars. \\ \hline **2. Question:** Judy teaches 5 dance classes, every day, on the weekdays and 8 classes on Saturday. If each class has 15 students and she charges $15.00 per student, how much money does she make in 1 week? \\ **Chain-of-thought:** On the weekdays, Judy teaches 5 classes per day, with 15 students in each class. This means she has 5*15 = 75 students per day. At a rate of $15 per student, she makes 75*15 = 1125 dollars per day. For the weekdays, she makes 5*1125 = 5625 dollars. On Saturday, Judy teaches 8 classes with 15 students in each class. This means she has 8*15 = 120 students. At a rate of $15 per student, she makes 120*15 = 1800 dollars. For the week, Judy makes 5625+1800 = 7425 dollars. **Planning-LM:** Judy teaches 5 classes per weekday, so 5*5 = 25 classes. Judy teaches 8 classes on Saturday, so 8*1 = 8 classes. The total number of classes she teaches in a week is 25+8 = 33 classes. Each class has 15 students, so 15*33 = 495 students. Judy charges $15.00 per student, so 495*15 = $7,425. \\ \hline \hline \end{tabular} \end{table} Table 4: Demonstrations of generated solutions comparing planning-LM and chain-of-thought. Question 1 shows that an intermediate step of chain-of-thought has wrong reasoning but still reaches the final answer. Question 2 shows that planning-LM has a better reasoning strategy since the calculation process is simpler and more concrete.

\begin{table} \begin{tabular}{l} \hline \hline **Question:** Two trains leave San Rafael at the same time. \\ They begin traveling westward, both traveling for 80 miles. \\ The next day, they travel northwards, covering 150 miles. \\ What's the distance covered totally in the two days? Ans: \\ \hline **Plan I:** \([n+n]\triangleright[n*n]\triangleright[Ans]\) \\ **Result:** The total distance covered in the two days is 80 + 150 = 230 miles. The total distance covered in the two days is 230 * 2 = 460. Answer is 460. \\ \hline **Plan II:** \([n*n]\triangleright[n*n]\triangleright[n+n]\triangleright[Ans]\) \\ **Result:** The first train covers 80 * 2 = 160 miles. On the second day, the trains covered 2 trains * 150 miles/train = 300 miles. Total distance covered is 160 + 300 = 460 miles. Answer is 460. \\ \hline **Plan III:** \([n+n+\ldots]\triangleright[n+n]\triangleright[Ans]\) \\ **Result:** The total distance covered in the two days is 80 + 80 + 150 = 310 miles. The total distance covered in one day is 310 + 150 = 460 miles. Answer is 460.
\\ \hline **Plan IV:** \([n+n*n]\triangleright[Ans]\) \\ **Result:** The total distance covered by trains in the two days is 150 + 80 * 2 = 310 miles. Answer is 310. \\ \hline \hline \end{tabular} \end{table} Table 5: Qualitative examples of using our planning-LM to plan for different but valid solution strategies to achieve the same correct result for a given MWP. Plan IV failed since we do not have an operation class that exactly matches the step.

## 5 Conclusions

In this paper, we addressed the new problem of performing fine-grained, step-by-step controllable solution generation for math word problems. We proposed an approach combining planning and language models to generate interpretable solution steps. Our approach leverages pre-trained language models in two ways: at each step, it plans the mathematical operation to be applied and then uses these plans as prompts to control the token-by-token generation of each step. We demonstrated that, with minimal additional parameters introduced, our approach significantly improves math word problem-solving performance over simply fine-tuning language models. We also showed that, due to the interpretability and high controllability of operation prompts, we can use our approach to generate solutions with alternative strategies by giving it different solution plans. Future work can further explore generating an entire solution path by predicting math operators for each step and revising the plan after each step is generated. We can also explore the application of our approach in real-world educational settings, e.g., for open-ended answer scoring (Lan et al., 2015; Zhang et al., 2022).

## 6 Limitations

First, our work applies hand-crafted action labels as operation hints, which limits its ability to represent more complex operation steps. In future work, we can use a generator instead of a classifier to produce a more flexible set of operation prompts, making them more representative and meaningful. Second, due to the highly controllable generation of our approach, a wrong operation step prediction will further mislead the intermediate step generation. To eliminate the drawback where inaccurately generated operation prompts mislead the next step, we can apply a verifier (Cobbe et al., 2021) to evaluate the reliability of the generated operation prompts. When the reliability is low, we discard the operation prompt to prevent it from guiding the model into an incorrect path.

## 7 Ethics Statement

Currently, most existing works leverage the capability of large, pre-trained language models to generate intermediate reasoning steps either for understanding the models' behaviors (e.g., models' moral judgments; Jin et al., 2022) or for improving their problem-solving accuracies (e.g., MWP solving; Lewkowycz et al., 2022). Few works focus on the quality of the generated intermediate reasoning steps themselves. These generated steps have potentially significant real-world applications, such as providing feedback automatically in large-scale education scenarios, but they are not yet of high enough quality to be readily utilized in practice. Our work contributes to the important direction of making such generated intermediate steps more accurate, coherent, and high-quality. However, language models equipped with our approach may still generate intermediate steps that are unreasonable, even though our approach improves upon existing ones.
These unreasonable generated steps may be misleading to students when they are learning, posing a potential risk to their usage. As a result, more work is required before our approach can be readily deployed in practice. We believe that, in its current form, our work is best suited for use by experts, i.e., education subject-matter experts or instructors, to help them write solution steps for new MWPs more efficiently.

## Acknowledgement

The authors thank the NSF (under grants 1917713, 2118706, 2202506, 2215193, 2237676) for partially supporting this work.
2307.10866
Probabilistic Compute-in-Memory Design For Efficient Markov Chain Monte Carlo Sampling
Markov chain Monte Carlo (MCMC) is a widely used sampling method in modern artificial intelligence and probabilistic computing systems. It involves repetitive random number generations and thus often dominates the latency of probabilistic model computing. Hence, we propose a compute-in-memory (CIM) based MCMC design as a hardware acceleration solution. This work investigates SRAM bitcell stochasticity and proposes a novel ``pseudo-read'' operation, based on which we offer a block-wise random number generation circuit scheme for fast random number generation. Moreover, this work proposes a novel multi-stage exclusive-OR gate (MSXOR) design method to generate strictly uniformly distributed random numbers. The probability error deviating from a uniform distribution is suppressed under $10^{-5}$. Also, this work presents a novel in-memory copy circuit scheme to realize data copy inside a CIM sub-array, significantly reducing the use of R/W circuits for power saving. Evaluated in a commercial 28-nm process development kit, this CIM-based MCMC design generates 4-bit$\sim$32-bit samples with an energy efficiency of $0.53$~pJ/sample and high throughput of up to $166.7$M~samples/s. Compared to conventional processors, the overall energy efficiency improves $5.41\times10^{11}$ to $2.33\times10^{12}$ times.
Yihan Fu, Daijing Shi, Anjunyi Fan, Wenshuo Yue, Yuchao Yang, Ru Huang, Bonan Yan
2023-07-16T23:10:51Z
http://arxiv.org/abs/2307.10866v1
# Probabilistic Compute-in-Memory Design For Efficient Markov Chain Monte Carlo Sampling

###### Abstract.

Markov chain Monte Carlo (MCMC) is a widely used sampling method in modern artificial intelligence and probabilistic computing systems. It involves repetitive random number generations and thus often dominates the latency of probabilistic model computing. Hence, we propose a compute-in-memory (CIM) based MCMC design as a hardware acceleration solution. This work investigates SRAM bitcell stochasticity and proposes a novel "pseudo-read" operation, based on which we offer a block-wise random number generation circuit scheme for fast random number generation. Moreover, this work proposes a novel multi-stage exclusive-OR gate (MSXOR) design method to generate strictly uniformly distributed random numbers. The probability error deviating from a uniform distribution is suppressed under \(10^{-5}\). Also, this work presents a novel in-memory copy circuit scheme to realize data copy inside a CIM sub-array, significantly reducing the use of R/W circuits for power saving. Evaluated in a commercial 28-nm process development kit, this CIM-based MCMC design generates 4-bit to 32-bit samples with an energy efficiency of 0.53 pJ/sample and high throughput of up to 166.7M samples/s. Compared to conventional processors, the overall energy efficiency improves \(5.41\times 10^{11}\) to \(2.33\times 10^{12}\) times.

The proposed design shows great potential as a standalone domain-specific core to be integrated into a system-on-a-chip (SoC). The contribution of this work includes:

* We investigate the stochasticity of the SRAM bitcell and propose the novel operating principle of the "pseudo-read" operation, based on which we offer a block-wise random number generation (RNG) circuit scheme for a fast random number (sample) generating speed of over \(10^{6}\) samples/s.
* We propose a novel multi-stage exclusive-OR gate (MSXOR) design method to correct a biased probability (e.g. 40%) to generate strictly 50% uniformly distributed random numbers. The error probability achieves as low as \(10^{-4}\%\). Further, we realize an efficient accept/reject check circuit module.
* We propose a novel in-memory copy circuit scheme to realize data copy between different bitcells, which can significantly reduce the use of R/W circuits for energy- and power-saving purposes.

To evaluate the proposed CIM design, we implement the proposed CIM-based MCMC design with a commercial process development kit (PDK) at the 28 nm technology node with 6-transistor (6T) SRAM foundry bitcells. The proposed CIM-based MCMC design achieves high-accuracy sampling with an energy efficiency of 0.53 pJ/sample and high throughput of up to 166.7M samples/s.

## 2. Preliminaries
### Markov Chain Monte Carlo Sampling Algorithm

The MCMC method is one of the fundamental building blocks of probabilistic inference, used to estimate posterior probabilities and generate stochastic samples from designated probability distributions (Grover and Leskovec, 2015), especially those whose probability density function (PDF) is hard to express analytically. It is an essential mathematical tool for solving integration and optimization problems in large dimensional spaces (Bishop, 2015). The core idea of MCMC is to define a Markov chain whose state transfer probabilities admit the desired distribution as their stationary distribution. In other words, if the chain runs for a long enough time, the distribution of states will be proportional to the desired distribution. Metropolis-Hastings (MH) MCMC sampling (Algorithm 1) is the most common approach in probabilistic computing.

### Compute-in-Memory Technology

With the support of the peripheral circuits, the bitcell sub-array can achieve the functions of storage and computation in independent working modes. By offloading computation from centralized processing units, CIM dramatically eliminates the frequent data transportation between memory and processing units and thus forms an energy-efficient computing system (Kumar et al., 2017). Even though CIM technology has been considered for generating random samples (Kumar et al., 2017), it has rarely been investigated for implementing full MCMC accelerator hardware.

## 3. Operating Principles

We propose to exploit the inherent stochastic characteristics of SRAM to develop novel peripheral circuits that realize MCMC hardware acceleration. Before diving into the detailed circuit design, this section provides the basic principles used in the proposed CIM-based MCMC design. This work leverages the inherent stochasticity of the SRAM bitcell as the source of randomness. Subsequently, based on this mechanism, the computing process of the CIM-based MCMC design is described.

### Inherent Stochastic Characteristics of SRAM Bitcell

Data stored in SRAM bitcells can be disturbed by thermal noise. This work leverages this effect as the source of controllable randomness. According to the SRAM theory in (Kumar et al., 2017), each SRAM bitcell (illustrated in Fig. 4(a)) has its own data retention voltage (DRV), which refers to the minimum supply voltage for an SRAM bitcell to preserve the stored datum bit against noise perturbation (Kumar et al., 2017). One common indicator of the stability of memory bitcells is the voltage transfer curve (VTC), usually called the "butterfly diagram", as shown in Fig. 4(b). To quantify VTCs, we simulate with a commercial 28 nm technology PDK and foundry 6T SRAM bitcells. The standard bitcell supply voltage CVDD is 0.8 V. As Fig. 4(b) shows, the edge length of the largest squares inscribed in the "eyes" of the butterfly curve is defined as the static noise margin (SNM) (Brock et al., 2017). SNM quantifies the voltage noise necessary to flip the stored datum in a bitcell. When the supply voltage CVDD decreases, the two sides of the VTC get close to each other, resulting in a sharp SNM decrease and implying that the bitcell's resilience to thermal noise turns weaker. Fig. 4(c) shows the bit flip rate (BFR) (Kumar et al., 2017; Li et al., 2017) throughout this process, depending on the bitcell supply voltage CVDD. As the SNM shrinks, the bitcell datum gains an increased chance to flip randomly under thermal noise.
However, DRV can be very small owing to the back-to-back connection of the two inverters in the SRAM bitcell. When CVDD is around DRV, small changes of CVDD may cause rapid BFR fluctuation due to the circuit nonlinearity. In order to gain better control of the SRAM bit flipping, we define a novel operation, called "pseudo-read". It is an operation similar to an SRAM read, but it destroys the original datum bit stored in the SRAM bitcell and generates a random "0" or "1". Fig. 4(d) shows the working condition and timing diagram for the bitcell pseudo-read. During pseudo-read, we apply 0.8 V to BL and BLB through the bitline conditioning transistors, with a lowered CVDD as the bitcell supply. Then WL is asserted and later deasserted to trigger the random bit flip phenomenon, e.g. at time \(t_{pr}\) in Fig. 4(d); the corresponding operating configuration is shown in Fig. 4(a). During the pseudo-read, BL and BLB are simultaneously kept high to accelerate the SNM shrinking. We observe that the BFR is around 45% when CVDD is lowered to 0.5 V with this proposed pseudo-read operation1. This is adequate as the source of randomness in the proposed MCMC sampling circuits. Footnote 1: In reality, the value of the lowered CVDD is chosen according to the measured bit flip rate.

### Computing Steps of CIM-Based MCMC Design

Fig. 5 shows the flow chart of generating stochastic samples with the proposed CIM-based MCMC design. The output of the MCMC is a set of samples \(\{x_{0},x_{1},\cdots,x_{m}\}\) with the required PDF \(p(x)\), which are placed in the memory block from address \(A_{start}\) to \(A_{end}\).

Figure 4. (a) "Pseudo-read" operation for an SRAM bitcell; (b) VTC of normal SRAM bitcells and the bitcells during "pseudo-read" operation; (c) bit flip rate of SRAM bitcells; (d) working condition and timing diagram for pseudo-read operation.

Figure 3. General circuit architecture of SRAM-based CIM.

The flow consists of three major steps: for each memory address, (a) generate a sample \(x\) with the transfer PDF \(q(x)\); (b) generate one sample \(u\) from an accurate \([0,1]\) uniform distribution; (c) check the accept/reject condition: if accepted, the sample should be copied to a new address for results storage. Here we elaborate on this process with an example of 4-bit samples. Each sample (number) is a value between 0 (0000) and 15 (1111). In this scale, the initial value \(x^{(0)}\) is written into 4 SRAM bitcells, each bitcell storing 1 bit. The sampled value \(x^{*}\) follows the random distribution \(x^{*}\sim q(x^{*}|x^{(i)})\), and this process is achievable with the pseudo-read operation explained in Sec. 3.1. For example, suppose the stored value of the 4 SRAM cells is "0" (0000). After the pseudo-read, according to the curve of BFR (Fig. 4(c)), each of these bitcells has a 45% chance of generating "1". We hereby obtain a sampled 4-bit random number \(x^{*}\). Then, the next step is to calculate the accept ratio \(\alpha\). Fig. 6 shows the entire transfer matrix of a 4-bit word. Given a certain CVDD for the pseudo-read operation, we can measure and plot the transfer matrix \(\mathbf{q}\), where we can look up the values \(q(x^{(i)}|x^{*})\) and \(q(x^{*}|x^{(i)})\). This matrix is symmetric, with \(q(i,j)\equiv q(j,i)\) for \(i,j\in[0,15]\). The transfer matrix \(\mathbf{q}\) is bound to the condition of pseudo-read, and it is thereby a constant when calculating the accept ratio \(\alpha=\frac{p(x^{*})\,q(x^{(i)}|x^{*})}{p(x^{(i)})\,q(x^{*}|x^{(i)})}\).
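To make these two properties concrete, the sketch below builds an idealized \(\mathbf{q}\) under the assumption that every bit flips independently with the same probability \(p_{BFR}\) during pseudo-read, checks its symmetry, and runs steps (a)-(c) as a toy software chain (the code and its names are ours for illustration; it models the dataflow, not the circuits, and the measured matrix in Fig. 6 may deviate from this idealization):

```python
import numpy as np

rng = np.random.default_rng(0)

def transfer_matrix(p_bfr: float, bits: int = 4) -> np.ndarray:
    """Idealized pseudo-read transfer PDF q(j|i): each of the `bits`
    stored bits flips independently with probability p_bfr."""
    n = 1 << bits
    q = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            flips = bin(i ^ j).count("1")            # Hamming distance i -> j
            q[i, j] = p_bfr**flips * (1 - p_bfr)**(bits - flips)
    return q

q = transfer_matrix(0.45)
assert np.allclose(q, q.T)                # q(i,j) == q(j,i): the matrix is symmetric
assert np.allclose(q.sum(axis=1), 1.0)    # every row is a valid PDF

p = np.arange(1, 17, dtype=float)         # toy unnormalized target p(x) over x = 0..15
x, chain = 0, []
for _ in range(100_000):
    x_star = int(rng.choice(16, p=q[x]))  # step (a): propose x* ~ q(.|x) via pseudo-read
    u = rng.random()                      # step (b): u ~ Uniform[0, 1]
    alpha = p[x_star] * q[x_star, x] / (p[x] * q[x, x_star])
    if u < alpha:                         # step (c): accept/reject check
        x = x_star
    chain.append(x)
# the histogram of `chain` converges to p / p.sum()
```

Because the two \(q\) factors cancel for a symmetric proposal, the hardware never needs to evaluate \(\mathbf{q}\) at run time, which is exactly the simplification stated next.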
Subsequently, \(\alpha\) is simplified to \(\frac{p(x^{*})}{p(x^{(i)})}\). Given the required distribution \(p(x)\), this value (the "accept ratio") can be easily calculated through peripheral digital logic circuits. With a \([0,1]\) uniformly distributed random number \(u\) generated, the comparison between \(u\) and the calculated accept ratio decides whether the sample \(x^{*}\) qualifies as an MCMC output sample. According to Step 3 in Algorithm 1, the generated \(x^{*}\), if accepted, will be copied to the next 4 bitcells for subsequent sampling operations. Otherwise, the bitcells that store the unaccepted value will be overwritten with the previous sample value, and then this value will be copied to the next bitcells. In this way, the CIM-based MCMC design fulfills the MCMC computing.

## 4. CIM-Based MCMC Hardware Design

As described in Sec. 3, CIM-based MCMC computation requires three major in-memory operators: _in situ_ in-memory RNG (the \(x^{*}\) generator), accurate \([0,1]\) RNG (the \(u\) generator), and in-memory copy. This section elaborates on the circuit schemes that realize these three operations and form a complete CIM-based MCMC design. The CIM-based MCMC design performs on-chip MCMC sampling and stores the generated samples. The entire MCMC processing happens locally inside the CIM-based MCMC macro circuits, which eliminates the need for the frequent data transfer between processing units and memories found in conventional software realizations on CPU/GPU platforms, and thus significantly reduces latency and energy consumption. Fig. 7(a) shows the overall architecture of the CIM-based MCMC design. It consists of an SRAM sub-array2, read/write (R/W) circuits, BL conditioning circuits, in-memory copy units, and an accurate \([0,1]\) RNG module. Note that the former 4 modules form a function macro. The accurate \([0,1]\) RNG module is a standalone module independent of the function macro. Footnote 2: The SRAM sub-array is defined to consist of SRAM bitcells, row/column decoders, and multiplexers. The latter two are not drawn here because of their conventional circuit structure.

Figure 5. Block diagram for the CIM-based MCMC method.

Figure 6. Probabilistic transfer matrix of 4-bit data.

This function macro can work in three modes:

* _Memory mode_ works the same as a traditional SRAM does. The macro can write (sense) data into (from) the bitcells. In this mode, the SRAM sub-array, BL conditioning circuits, and R/W circuits work; all the other circuits are turned off.
* _Block-wise RNG mode_ generates random numbers into blocks. In this mode, only the SRAM sub-array and BL conditioning circuits are turned on.
* _CIM copy mode_ realizes copying data from one part of the sub-array to another. In this mode, only the SRAM sub-array and the in-memory copy unit are turned on.

Next, we detail how the proposed CIM-based design performs MCMC computing.

### Block-Wise In-Memory Random Number Generation

Practical applications call for vast amounts of random samples before the accept/reject check. According to Fig. 5, a large number of random numbers should be generated and immediately buffered in the memory for further MCMC processing steps. In order to achieve this, we design the 6T SRAM bitcell sub-array and the block-wise in-memory RNG assisting circuit illustrated in Fig. 7(b). The entire function macro enters block-wise RNG mode. As one realistic setup, every 4 columns of bitcells share one BL conditioning circuit (the precharge unit).
The power supply of the BL conditioning circuits is connected to PVDD, and the bitcell sub-array connects to the supply voltage CVDD. In the block-wise RNG mode, pseudo-read is conducted for a block of the sub-array, as shown in Fig. 8(a). The precharge unit of the selected cells is activated and the supply voltage connected to it is raised to 0.8 V with PVDD = 0.8 V, maintaining a high voltage on both BL and BLB at the same time. CVDD is reduced to 0.5 V. Then, the WL of the selected cells is activated. As a result, the SNM of these bitcells is purposely reduced (as shown in Fig. 8(b) and (c)), and their values are destroyed. For the bitcells in the same column (Fig. 8(d) and (e)), since their WLs are not activated, their SNM remains high and the stored values are not destroyed [39]. For bitcells in the selected row (Fig. 8(f) and (g)), despite their corresponding WL being activated, the precharge units connected to them remain turned off. The separate precharge units make it possible for the macro to perform random number generation on specifically chosen cells, guaranteeing that the generation of new samples will not affect previous data stored in unselected blocks of the sub-array.

Figure 8. (a) Pseudo-read operation in a bitcell sub-array; (b) working condition and timing diagram of the selected bitcells; (c) VTC of the selected and unselected bitcells; (d) working condition and timing diagram of the unselected bitcells in the same column as the selected bitcells; (e) VTC of the unselected bitcells in the same column as the selected bitcells; (f) working condition and timing diagram of the unselected bitcells in the same row as the selected bitcells; (g) VTC of the unselected bitcells in the same row as the selected bitcells.

Figure 7. (a) Architecture of the CIM-based MCMC design; (b) sub-array and peripheral circuits; (c) standard 6T-SRAM bitcells; (d) select unit; (e) copy unit.

### Accurate \([0,1]\) Random Number Generation

After the candidate samples for the desired Markov chain are obtained from the block-wise RNG, whether they can be accepted as output samples depends on the accept/reject check, i.e., comparing a random number \(u\) against the expression \(\alpha(x^{(i)},x^{*})\) in Algorithm 1. This requires the random number \(u\) to be uniformly distributed in \([0,1]\). The challenge of realizing this accurate \([0,1]\) RNG is two-fold: (a) software (Zhou et al., 2017) or digital-logic-based pseudo-random number generation algorithms potentially cause large area and energy overhead (Frank et al., 2018); (b) since the SRAM bitcell BFR \(p_{BFR}\) under the pseudo-read operation is lower than 50%, as shown in Fig. 4(c), direct use of pseudo-read cannot guarantee that the generated random number \(u\) is uniformly distributed4, which potentially brings adverse effects on the quality of MCMC sampling. Therefore, we propose a novel low-cost multi-stage XOR gate circuit scheme (MSXOR) to construct the accurate \([0,1]\) RNG. Footnote 4: For a multi-bit integer \(u\), uniform distribution can be interpreted as each binary bit of \(u\) having a strict 50% probability of being "1" or "0".
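To quantify challenge (b): with a per-bit probability of \(\lambda=0.45\) of reading "1", the raw byte 0xFF would appear with probability \(0.45^{8}\approx 0.0017\) and 0x00 with probability \(0.55^{8}\approx 0.0084\), instead of the uniform \(1/256\approx 0.0039\). A quick check (an illustrative snippet of ours, not part of the design):

```python
lam = 0.45                    # per-bit probability of reading "1" after pseudo-read
print(lam**8, (1 - lam)**8)   # P(0xFF) ~ 0.00168, P(0x00) ~ 0.00837; uniform would be ~0.00391
```

The MSXOR scheme described next removes exactly this per-bit bias.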
Fig. 9(a) depicts the circuit schematic diagram of the proposed accurate \([0,1]\) RNG. The accurate \([0,1]\) RNG includes a small-scale 6T SRAM bitcell sub-array and a group of multi-stage XOR gates. This small-scale SRAM bitcell sub-array has a similar structure to the sub-array used for block-wise RNG in Sec. 4.1, yet the former has no R/W circuits. Besides, in an accurate \([0,1]\) RNG module, the voltage VDDP is split into two voltages, VDDPL and VDDPR, to control BL and BLB respectively. This scheme requires two steps to generate a multi-bit accurate \([0,1]\) random number:

_Step 1:_ Flush the employed SRAM bitcells to zeros ("reset"). This step can be performed through the voltage difference between VDDPL and VDDPR, as illustrated in Fig. 9(b). VDDPL and VDDPR are both controlled by the transistors MP0\(-\)MP127 with the same control signal PRE. In the reset operation, the supply voltage of the SRAM bitcells is first reduced to a relatively low value (0.5 V), then the supply pair (VDDPL, VDDPR) is set to (0 V, 0.8 V). Then MP0\(-\)MP63 and WL are turned off. Since VDDPR is higher than the bitcell supply voltage, the stored value of these bitcells is forced to "0".

_Step 2:_ Perform the pseudo-read RNG operation on these bitcells, similar to the step described in Sec. 4.1. Considering the separate VDDPL and VDDPR controlling BL/BLB, both VDDPL and VDDPR should be raised to 0.8 V in the pseudo-read step. The waveform of the control signals is depicted in Fig. 9(c).

Next, we show why this two-step scheme yields samples of uniform distribution. First, we define the probability of a single bit being "1" as \(\lambda\). The multiple stages of XOR gates drive the \(\lambda\) of the generated samples toward 0.5, and equivalently the aggregated multi-bit results approach a uniform distribution. In the circuit scheme, 64 bitcells are divided into 8 groups, i.e. each group represents an 8-bit "raw" random number \(R_{0}^{l}[7:0]\) (\(l\in[0,7]\)). Note that the probability of each bit in \(R_{0}^{l}[7:0]\) being "1" is \(\lambda_{0}=p_{BFR}\), because it comes right out of the reset and pseudo-read operations. The subscript of \(R_{0}^{l}\) and \(\lambda_{0}\) denotes the stage. The scheme then feeds \(R_{0}^{0},R_{0}^{1},\cdots,R_{0}^{7}\) into a first stage of 32 XOR gates. Each XOR gate converts two input bits into one output bit. Therefore, at this stage, the XOR gates output \(R_{1}^{0},R_{1}^{1},R_{1}^{2},R_{1}^{3}\). As illustrated in Fig. 9(d), both inputs at this stage have a probability of \(\lambda_{0}\) of being "1" and a probability of \(1-\lambda_{0}\) of being "0". The output bit of an XOR gate hereby has a probability of \(\lambda_{1}=2\lambda_{0}(1-\lambda_{0})\) of being "1" (and a probability of \(1-\lambda_{1}\) of being "0"). Then \(R_{1}^{0},R_{1}^{1},R_{1}^{2},R_{1}^{3}\) successively pass stage 2 (16 XOR gates, halving 32 bits to 16) and stage 3 (8 XOR gates, halving 16 bits to 8). Finally, this module yields one group of samples, \(R_{3}[7:0]\), as the final 8-bit output of the accurate \([0,1]\) RNG. Take \(p_{BFR}=0.4\) as an example: \(\lambda_{3}=0.49999872\). The detailed proof of \(\lim_{n\to\infty}\lambda_{n}=0.5\) can be found in Appendix A. Fig. 9(d) shows the quantitative analysis of the distance from \(\lambda=0.5\), i.e. \((0.5-\lambda_{n})\), given the value of \(p_{BFR}\) and the number of XOR gate stages \(n\).

Figure 9. (a) Proposed accurate \([0,1]\) RNG architecture; (b) working condition and timing diagram of the reset operation; (c) working condition and timing diagram of the RNG and read operations; (d) \(\lambda\) for different bit flip rates and numbers of XOR stages; (e) \(\lambda_{3}\) of SRAM bitcells under different corner simulations.
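The stage-by-stage recursion \(\lambda_{n+1}=2\lambda_{n}(1-\lambda_{n})\) is easy to verify numerically; a minimal sketch (the function name is ours):

```python
def xor_debias(lam: float, stages: int) -> float:
    """Per-bit probability of '1' after an n-stage XOR tree whose inputs
    are i.i.d. bits that equal '1' with probability lam."""
    for _ in range(stages):
        lam = 2.0 * lam * (1.0 - lam)  # XOR of two independent biased bits
    return lam

print(xor_debias(0.40, 3))  # 0.49999872, the lambda_3 value quoted above
print(xor_debias(0.45, 3))  # ~0.499999995, even closer to 0.5
```

Writing \(\lambda_{n}=0.5-\varepsilon_{n}\) gives \(\varepsilon_{n+1}=2\varepsilon_{n}^{2}\), i.e. the bias shrinks quadratically per stage, which is why a few stages already suffice for the \(p_{BFR}\) range of interest.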
Fig. 9(d) reveals that when \(p_{BFR}\geq 0.4\) (corresponding to CVDD being disturbed within 0.5 V to 0.6 V), 3-stage XOR gates (\(n=3\)) are adequate.

**Discussion on the impact of process variation**: as the \([0,1]\) RNG circuit is composed of a limited number of SRAM bitcells, process variation of these bitcells is an adverse factor that potentially affects the accuracy of the uniform distribution. To address this issue, this work carries out corner simulations for the accurate \([0,1]\) RNG circuit. The simulations use CVDD = 0.5 V in pseudo-read as the controlled variable. Fig. 9(e) summarizes the corresponding results. It reveals that \(p_{BFR}\) fluctuates across design corners as influenced by process variation. This fluctuation arises because the operating period may not be long enough for the SRAM bitcells to reach an intermediate voltage level during pseudo-read. Our simulation shows that extending the latency of the pseudo-read operation can ameliorate this problem. Fig. 9(e) shows that the proposed 3-stage XOR gates yield \(\lambda_{3}\geq 0.4999993981\), which is adequately accurate for the uniform-distribution RNG and thus passes the corner test.

After a candidate sample \(x^{*}\) (from the block-wise in-memory RNG) and a uniformly distributed random number \(u\) (from the accurate \([0,1]\) RNG) are prepared, the accept ratio calculation is performed with digital arithmetic units. First, convert the random number into the range \([0,1]\): since \(R_{3}\) is a uniformly distributed 8-bit number between 0 (0x00) and 255 (0xff), the desired random number can be calculated as \(u=R_{3}/256\). Then calculate the accept ratio \(\alpha=\frac{p(x^{*})}{p(x^{(i)})}\) and compare its value with \(u\). To reduce the complexity of the computation, we can change the algorithm from comparing fractions to comparing the products \(u\cdot p(x^{(i)})\) and \(p(x^{*})\): if \(p(x^{*})>u\cdot p(x^{(i)})\), the new sample is accepted. In this case, an in-memory copy operation is carried out to store the accepted sample.

### In-Memory Copy Scheme

After obtaining an accepted sample, it should be written to another nearby address for further use. A normal "read-and-write" operation through the SRAM R/W circuits would accomplish this, but it is time-consuming. Because the value to be written back to the array is exactly stored in the previous group of bitcells, a method of copying data between neighboring bitcells is a better solution for reducing read/write latency and power consumption. To achieve this, we propose an in-memory copy scheme, as shown in Fig. 10. Every 4 columns of bitcells in the sub-array form a group with two control signals \(A\) and \(B\). Note that the 4-column configuration follows Fig. 7(b). Every compartment of the macro shares 8 buses, corresponding to the BLs and BLBs of every group of bitcells. Every bus has one buffer5, whose supply voltage remains high (0.8 V). Since the buffer is unidirectional, every BL (BLB) is connected to both sides of the corresponding buffer, with signal \(A\) controlling the switch on the input port. To copy the value from the bitcells in columns \([a_{0},b_{0},c_{0},d_{0}]\) to columns \([a_{1},b_{1},c_{1},d_{1}]\) in the same row, the supply voltage of the sub-array is reduced to 0.5 V.
Then the WL of this row is activated, and the control signal \(A0\), corresponding to \([a_{0},b_{0},c_{0},d_{0}]\) on the buffer input ports, and \(B1\), corresponding to \([a_{1},b_{1},c_{1},d_{1}]\) on the buffer output ports, are raised. In this way, the values of the bitcells in \([a_{0},b_{0},c_{0},d_{0}]\) are transferred to the voltages on the BLs and BLBs and amplified to 0.8 V through the associated buffers. They are hereby written into the bitcells of \([a_{1},b_{1},c_{1},d_{1}]\), which have a relatively low supply voltage. After a short write latency (\(<1\) ns in our experimental configuration), the WL turns off and the supply voltage of the whole array is restored to 0.8 V. In this way, an efficient in-memory copy operation is realized. Footnote 5: Here the buffer refers to a circuit buffer, i.e. an even number of inverters, to increase the driving capability.

## 5. Enhancement for CIM-Based MCMC Design

The proposed CIM-based MCMC design is expandable in the precision of generated samples and scalable for high parallelism.

### Expandable Precision

So far, we have described the working principle of the proposed macro with the example of a word composed of 4 bits. The proposed CIM-based MCMC design also supports the higher precision required in some applications (Zhou et al., 2017) through a precision-expandable scheme. As shown in Fig. 11(a), the array described in this paper is divided into several compartments, with each compartment composed of 4096 SRAM cells (64 rows and 64 columns). Since in the most basic case these bitcells are divided into 16 groups and each group consists of 4 columns, we can combine 2 groups of bitcells into a new group by keeping their control signals in step with each other. When performing the sampling, these control signals are turned on simultaneously, enabling the output of an 8-bit sample at a time. For R/W operations and in-memory copy operations, we can apply the principle of separate transmission, operating on the first 4 columns and then on the next 4 columns. Through this method, we can conduct in-memory copy for samples of up to 64 bits.

Figure 10. In-memory copy scheme.

### Scalable Parallelism

High parallelism is always desirable in computing. The macro described so far can only perform one MCMC sampling operation at a time, failing to deliver adequate parallelism. Therefore, we improve the parallelism by dividing the whole sub-array into different compartments. Fig. 11(b) depicts this scheme. Each compartment is composed of 4096 SRAM bitcells (64 rows, 64 columns), a copy circuit for the in-memory copy scheme, an R/W circuit, and calculation circuits. When we perform sampling, every compartment is put into operation. The working timing diagram of a compartment-based macro is shown in Fig. 12. In every step ("random" means block-wise RNG; "read" means reading out the generated sample and performing accurate \([0,1]\) RNG simultaneously; "calculate" means the calculation of the accept/reject check; and "copy" means the in-memory copy operation), the WLs of all compartments and the corresponding control signals are turned on simultaneously so as to output one sample per compartment instead of one per entire macro. The acceptance probability calculation is performed at the same time for every compartment. Of course, some samples will be accepted and others will be rejected. Two independent groups of in-memory copy circuits solve this problem.
In one group of in-memory copy circuits, the compartments that obtain accepted samples do not need to modify themselves, while the remaining compartments copy the previous sample to the present address. This operation can also be performed simultaneously, with the WLs of the accepted compartments turned off and the remaining WLs turned on. In the other group of in-memory copy circuits, all compartments are involved, with the value of the bitcells at the present address copied to the bitcells at the next address. In this way, a group of newly sampled data is obtained and the Markov chain keeps evolving (Han et al., 2017).

## 6. Evaluation and Discussion

### Configuration for Verification & Circuit Implementation

The proposed CIM-based MCMC scheme is designed in the latest commercial 28 nm CMOS process development kit. The 6T SRAM bitcell is the foundry-developed bitcell. The capacity of this macro is 256 kb and the core area is 0.1967 mm\({}^{2}\). Fig. 13(a) lists the design parameters of this macro. The area is estimated based on the layout of a prior practical SRAM-based CIM design (Han et al., 2017). Fig. 13(b) presents the pie chart of the entire area breakdown. Generally, the R/W circuits dominate the area of this macro (34.136%). The SRAM sub-array with select and copy units is the second largest part of this macro (32.839%), slightly larger than the WL decoders (32.800%). Since one random number generated from the accurate \([0,1]\) RNG can be shared by all 64 compartments, the proposed accurate \([0,1]\) RNG is very compact, occupying only 0.225% of the entire area.

Figure 11. (a) Expandable precision scheme; (b) scalable parallelism scheme.

Figure 12. Operating timing diagram of this probabilistic macro.

Figure 13. (a) Design parameters of the proposed macro; (b) the total area breakdown of this macro.

### Function Verification

In this section, the function of the proposed macro is verified. Fig. 14 shows the simulated timing diagram of the SRAM bitcells as well as the corresponding output of the R/W circuits. Fig. 14(a1-a4) correspond to the SRAM bitcells in the first 4 columns, i.e. columns (0,1,2,3). Fig. 14(b1-b4) correspond to the second 4 columns, i.e. columns (4,5,6,7). Fig. 14(c1-c4) correspond to the output of the R/W circuit. "Q" refers to the positive terminal inside the 6T SRAM bitcell in Fig. 7(b).

* From 0 ns to 1 ns (x-axis), the proposed macro goes through the "write" process, writing an exemplary value "0101" into the 4 SRAM cells in columns (0,1,2,3).
* From 1 ns to 2 ns, block-wise RNG is performed (marked with "random" in Fig. 14). The originally stored data changes from "0101" to "1001". This new data "1001" serves as the first sample of the Markov chain.
* From 2 ns to 4 ns, the macro executes "in-memory copy". The data "1001" stored in columns (0,1,2,3) are copied to the bitcells of columns (4,5,6,7).
* From 4 ns to 5 ns, the macro executes block-wise RNG on the bitcells of columns (4,5,6,7), with their stored data switching from "1001" to "0100".
* From 5 ns to 6 ns, the new data "0100" is read through the output circuit for the accept/reject check.

In this way, the proposed macro obtains a group of samples stored in the SRAM array that follows the desired distribution under the MCMC method.

### Thermal Impact

The operation of the proposed macro strongly depends on the SRAM bitcells' stochastic behavior (Mohan et al., 2017). This stochasticity is strongly related to the ambient temperature, so different temperatures may affect the efficiency of random number generation.
Keeping the supply voltage at 0.5 V, we measure the bit flip rate \(p_{BFR}\) at different temperatures. As shown in Fig. 15, \(p_{BFR}\) slightly increases as the temperature rises. In the normal commercial-grade operating temperature range of \(0\,^{\circ}\)C to \(70\,^{\circ}\)C, \(p_{BFR}\) remains around 45%, implying that the influence of the ambient temperature is limited. For the industrial grade (\(-40\,^{\circ}\)C to \(85\,^{\circ}\)C), this macro can work over most of the range (\(-20\,^{\circ}\)C to \(85\,^{\circ}\)C) with a stable bit flip rate. When working at lower temperatures (\(-40\,^{\circ}\)C to \(-20\,^{\circ}\)C), \(p_{BFR}\) decreases as thermal noise reduces. According to the prior study in (Bartner et al., 2016), a decrease in bit flip rate only results in an extension of the burn-in time to generate random samples, without affecting the quality of the obtained samples. For the accurate \([0,1]\) RNG, according to Fig. 9(d) and (e), a slight decrease of \(p_{BFR}\) is tolerable for maintaining the correct RNG function.

### Energy and Efficiency

Fig. 16(a) shows the energy consumption of the proposed macro under different operations. When performing the block-wise RNG and in-memory copy operations, this macro consumes 79.1 fJ and 47.5 fJ per 4-bit sample, respectively. Generally, the operations involving R/W circuits consume more energy than those carried out completely inside the SRAM sub-array (343.1 fJ for the read operation and 372.6 fJ for the write operation). This again confirms the necessity and importance of the in-memory copy scheme. By avoiding excessive reuse of the R/W circuit in frequent data transportation, the proposed macro significantly reduces energy consumption in accelerating MCMC computation. The energy consumption of the \([0,1]\) RNG is 234.6 fJ per 8-bit sample. Taking everything into account, the energy cost of this entire macro is 0.5065 pJ/sample for a sampling operation whose result is a directly accepted sample. However, if the obtained sample is rejected by the calculation circuit, the energy cost of the operation deteriorates, as an extra in-memory copy operation must be conducted to overwrite the rejected sample with the value of the previous sample. In this case, the energy cost is 0.5547 pJ/sample. The sampling accept ratio typically remains between 30% and 40%, depending on the prior distribution. Generally, considering the overall acceptance, the energy efficiency of the proposed macro ranges from 0.5331 pJ/sample to 0.5402 pJ/sample.

Figure 14. Function realization of the proposed macro.

Figure 15. Bit flip rate under different temperatures.

### Throughput

Fig. 16(b) illustrates the throughput of the proposed macro with different sample precisions. As the number of bits in the sample increases, the overall throughput continuously decreases but still remains above \(10^{7}\) samples/s, proving that the proposed macro keeps a high sampling speed. It is notable that, as the number of bits doubles, the overall throughput of this macro does not decrease exactly by half; in fact, it decreases more slowly than that. This is because, when we perform sampling operations at different precisions, the in-memory copy and R/W operations are performed step by step in groups of 4 SRAM bitcells, so as the number of bits doubles, the time consumed by these operations also doubles. But the in-memory random operation can be performed simultaneously for any number of bits, as the selected SRAM cells are always in the same row and their WL is opened consistently.
By adjusting the precharge units of different columns, we can realize the generation of a complete sample in one sampling operation, eliminating the time consumed by proceeding in steps. Thus, it generally takes less than twice the time to generate a sample with double the number of bits, and the overall throughput accordingly decreases more slowly than by half.

### Comparison to MCMC Benchmarks on CPU & GPU

In this part, we compare the performance of the proposed macro with CPUs and GPUs on different sampling tasks. We designate two probability distributions commonly used in artificial intelligence systems for MCMC, as illustrated in Fig. 17(a) and (b). The first one is the Gaussian mixture model (GMM), a probabilistic model widely used for representing the distribution of data points in a dataset. It assumes that the dataset is generated from a mixture of several Gaussian distributions, each characterized by its mean, covariance, and weight. The mean represents the center of the distribution; the covariance captures its shape and orientation; and the weight indicates the contribution of each component to the overall mixture. Here we use a mixture of 4 Gaussian distributions for the GMM. The other one is the multivariate Gaussian distribution (MGD), also known as a multivariate normal distribution. It is a probability distribution that generalizes the concept of the Gaussian distribution to multiple dimensions. Fig. 17(b) shows the two-dimensional heat map of a bivariate MGD example; darker colors represent lower probabilities and lighter colors represent higher probabilities. This distribution is widely used in tasks such as pattern recognition (Sutskever et al., 2017) and anomaly detection (Beng et al., 2017). In the experiment, we used the CPU, the GPU, and simulation results of the proposed macro to test the sampling efficiency on these two distributions. For the GMM, we performed sampling on the CPU with the NumPy library and its adapted algorithms. Additionally, we employ the JAX library (Beng et al., 2017) with just-in-time compilation for faster speed on both CPU and GPU. The CPU hardware platform is an Intel(r) Xeon(r) Gold 6230 CPU @ 2.10 GHz with 220 GB main memory. The GPU hardware platform is an NVIDIA(r) GeForce(r) RTX 3090, and the samples drawn from our probabilistic macro are fixed at 32 bits. The experimental results are shown in Fig. 17(c) (GMM case) and Fig. 17(d) (MGD case). The x-axis is the target number of samples to generate using the MCMC algorithm, and the y-axis is the time cost of completing the corresponding MCMC tasks.

Figure 16. (a) Energy consumption and breakdown of different operations; (b) throughput of the proposed macro with different sample precisions.

Figure 17. (a) Mixture of 4 Gaussian distributions; (b) two-dimensional heat map of a bivariate MGD example; (c) experimental result of the GMM case; (d) experimental result of the MGD case.

For the software algorithms, two points need to be explained. The first is that the sampling speed of JAX on the CPU is faster than on the GPU when sampling the GMM. This is because the GMM is a relatively simple distribution and the number of samples is rather low. When performing large-scale, highly parallel computations, GPUs have an advantage over CPUs, but this advantage does not fully apply to sampling the GMM. In fact, when we perform sampling on the MGD, the speed advantage of GPUs over CPUs becomes evident as the number of samples increases.
Fig. 17(c) compares the sampling speed of this macro with the different algorithms on the CPU and GPU for the GMM. The sampling algorithm using the NumPy library on the CPU takes more than 1200 s (20 min) to generate \(10^{6}\) samples. The use of the JAX library significantly improves sampling efficiency (Beng et al., 2017), but the sampling time remains more than 10 s on both the CPU and the GPU. Our proposed macro gives a large boost in sampling throughput, with \(10^{6}\) samples generated within \(10^{-3}\) s, showing an acceleration of over \(200{,}000\times\). Fig. 17(d) shows the sampling speed on a bivariate MGD. The sampling speed shows a noticeable decrease compared to the GMM as the distribution grows more complex. As the number of samples increases, the advantage of the GPU over the CPU in data processing becomes evident. However, the GPU-based JAX algorithm still takes around 400 seconds to generate \(10^{6}\) samples. On this complex sampling task, the superiority of our chip is further demonstrated, as the proposed macro only takes around 2 ms to draw \(10^{6}\) samples. The power efficiency of the proposed macro's sampling is also significantly higher than that of the GPU. For the GMM, the average power consumption of the GPU is around 125 W, compared to 0.157 mW on our chip. For the MGD, the average power consumption of the GPU is around 170 W, compared to \(1.52\times 10^{-4}\) W on our chip. Taking sampling speed and power efficiency together, the overall energy cost of obtaining one sample on our chip is \(5.41\times 10^{11}\) to \(2.33\times 10^{12}\) times lower than that of the GPU.

## 7. Conclusion

This work proposes an SRAM-based compute-in-memory macro for Markov Chain Monte Carlo (MCMC) sampling. Featuring high sampling speed, expandable precision, and high parallelism, this CIM macro can realize the complete MCMC sampling process, including on-chip sampling, random number generation, accept-ratio calculation, and in-memory data copy. Implemented in a standard 28-nm CMOS process, this macro achieves a high energy efficiency of 0.53 pJ/sample and a high throughput of up to 166.7M samples/s. Overall, our chip has a power consumption that is 6 orders of magnitude lower than that of a GPU.

## Appendix A Appendix: Proof of \(\lim\limits_{n\rightarrow\infty}\lambda_{n}=0.5\)

The proposed multi-stage XOR gate scheme follows the recurrence \(\lambda_{i+1}=2\lambda_{i}(1-\lambda_{i})\). We abstract this recurrence as the function \(f(z)=2z(1-z)\), whose curve is shown in Fig. 18.

**Theorem 1**. \(\forall\,0<z<0.5\), \(z<f(z)<0.5\).

Proof. \(\forall\,0<z<0.5\), \(f(z)-z=2z(0.5-z)\). Since \(z>0\) and \(0.5-z>0\), we have \(f(z)>z\). Moreover, \(0.5-f(z)=2(z-0.5)^{2}>0\) for \(z\neq 0.5\), so \(f(z)<0.5\). \(\square\)

**Theorem 2**. \(\forall\,0<\lambda_{0}<0.5\), \(\lim\limits_{n\rightarrow\infty}\lambda_{n}=0.5\).

Proof. By Theorem 1, \(\forall\,0<\lambda_{0}<0.5\) we have \(\lambda_{0}<\lambda_{1}<0.5\). For the next iteration, since \(0<\lambda_{1}<0.5\), we obtain \(0<\lambda_{1}<\lambda_{2}<0.5\). By induction,

\[\forall n\in\mathbb{N},\,n\geq 2:\quad 0<\lambda_{n-1}<\lambda_{n}<0.5. \tag{1}\]

For each iteration, the increment is \(\Delta(\lambda_{n})=\lambda_{n}-\lambda_{n-1}=\lambda_{n-1}-2\lambda_{n-1}^{2}=\lambda_{n-1}(1-2\lambda_{n-1})\). Because \(0<\lambda_{n-1}<0.5\),

\[0<\Delta(\lambda_{n})\leq 0.125. \tag{2}\]

We can use proof by contradiction here. Suppose \(\lim\limits_{n\rightarrow\infty}\lambda_{n}=\Lambda\) with \(\Lambda\neq 0.5\); by Eq. (1) the sequence is increasing and bounded above by \(0.5\), so the limit exists, with \(0<\lambda_{1}\leq\Lambda<0.5\). Convergence requires \(\Delta(\lambda_{n})=\lambda_{n}-\lambda_{n-1}\to 0\). But \(\Delta(\lambda_{n})=\lambda_{n-1}(1-2\lambda_{n-1})\to\Lambda(1-2\Lambda)>0\) since \(0<\Lambda<0.5\), a contradiction. Therefore \(\lim\limits_{n\rightarrow\infty}\lambda_{n}=0.5\). \(\square\)
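As a quick numerical illustration of Theorem 2 (our sketch, with illustrative initial bit-flip rates), iterating the recurrence shows the rapid convergence of \(\lambda_{n}\) toward 0.5:

```python
def xor_cascade(lam0, stages=6):
    """Iterate lambda_{i+1} = 2*lambda_i*(1 - lambda_i) (Theorem 2)."""
    lams = [lam0]
    for _ in range(stages):
        lams.append(2 * lams[-1] * (1 - lams[-1]))
    return lams

for lam0 in (0.05, 0.25, 0.45):   # illustrative initial bit-flip rates
    print(lam0, [round(v, 4) for v in xor_cascade(lam0)])
```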
Note that the reset operation before the pseudo-read in the accurate \([0,1]\) RNG is to guarantee \(0<\lambda_{0}=p_{BFR}\leq 0.5\).

## Appendix B APPENDIX: List of all variables
2301.12327
Generalized Ordinal Nash Games: Variational Approach
It is known that the generalized Nash equilibrium problem can be reformulated as a quasivariational inequality. Our aim in this work is to introduce a variational approach to study the existence of solutions for generalized ordinal Nash games, that is, generalized games where the player preferences are binary relations that do not necessarily admit a utility representation.
Orestes Bueno, John Cotrina, Yboon García
2023-01-29T02:22:49Z
http://arxiv.org/abs/2301.12327v1
# Generalized Ordinal Nash Games: Variational Approach ###### Abstract It is known that the generalized Nash equilibrium problem can be reformulated as a quasivariational inequality. Our aim in this work is to introduce a variational approach to study the existence of solutions for generalized ordinal Nash games, that is, generalized games where the player preferences are binary relations that do not necessarily admit a utility representation. **Keywords: Preference relations; Ordinal Nash games; Quasivariational inequalities**

## 1 Introduction and Motivation

The notion of generalized Nash games was introduced by Debreu in his seminal article [1]. This is an extension of the classical Nash game in which both the preference and the strategy set of each player depend on the strategies of the rival players. Some economic problems can be seen as generalized Nash games, for instance, competitive economic equilibrium problems or exchange economy models. For more details on the history of generalized Nash games, see [2]. It is known that, when the preference relation is represented by a concave utility function, the generalized Nash game can be reformulated as a quasivariational inequality problem, see [3, 4, 5, 6, 7]. This reformulation was extended to the quasiconcave case, thanks to the concept of the normal cone, in [8, 6, 9]. On the other hand, recently, Milasi _et al._ [10] used the variational approach to guarantee the existence of solutions for a competitive economic equilibrium problem where the consumer preferences are given through a binary relation. Similarly, Donato and Villanaci [11] reformulated an exchange economy model as a quasivariational inequality. Thus, the purpose of this manuscript is to establish necessary and sufficient conditions to guarantee the existence of solutions for generalized ordinal Nash games using the variational approach. The paper is organized as follows. First, in Section 2, we introduce the notations used. Then, our necessary and sufficient conditions are established in Section 3. Moreover, we establish an existence result using a quasivariational inequality reformulation.

## 2 Preliminaries

Formally, a Nash game consists of a finite number of players, say \(I\). Each player \(\nu\) controls a variable \(x^{\nu}\) which belongs to a strategy space \(X_{\nu}\subset\mathbb{R}^{n_{\nu}}\). Moreover, for each player \(\nu\in I\), its rivals are denoted by \(-\nu\). Denote by \(X=\prod_{\nu\in I}X_{\nu}\) the Cartesian product of the strategy sets, and by \(X_{-\nu}=\prod_{j\neq\nu}X_{j}\) the Cartesian product of the strategy sets of the rivals of player \(\nu\). We assume that each player \(\nu\) possesses a preference relation \(\succeq_{\nu}\), defined on \(\mathbb{R}^{n}=\prod_{\nu\in I}\mathbb{R}^{n_{\nu}}\). This preference relation, together with a rival strategy \(w^{-\nu}\in X_{-\nu}\), induces a preference relation on \(\mathbb{R}^{n_{\nu}}\), defined as \[x^{\nu}\succeq_{\nu,w^{-\nu}}y^{\nu}\quad\iff\quad(x^{\nu},w^{-\nu})\succeq_{\nu}(y^{\nu},w^{-\nu}).\] A strategy \(\hat{x}\in X\) is a _Nash equilibrium_ if, for each \(\nu\in I\), there is no \(x^{\nu}\in X_{\nu}\) such that \[x^{\nu}\succ_{\nu,\hat{x}^{-\nu}}\hat{x}^{\nu},\] where \(\succ_{\nu,x^{-\nu}}\) stands for the asymmetric part of \(\succeq_{\nu,x^{-\nu}}\). In a generalized ordinal game, the strategy of each player \(\nu\) must belong to a set \(K_{\nu}(x^{-\nu})\subset X_{\nu}\) which depends on the rival players' strategies.
A strategy \(\hat{x}\in K(\hat{x})=\prod_{\nu\in I}K_{\nu}(\hat{x}^{-\nu})\) is a _generalized Nash equilibrium_ if, for each \(\nu\in I\), there is no \(x^{\nu}\in K_{\nu}(\hat{x}^{-\nu})\) such that \[x^{\nu}\succ_{\nu,\hat{x}^{-\nu}}\hat{x}^{\nu}.\] For each \(\nu\in I\), we consider the sets \(U^{s}_{\nu}(x)=\{y^{\nu}\in\mathbb{R}^{n_{\nu}}:\ y^{\nu}\succ_{\nu,x^{-\nu}}x^{\nu}\}\). It is clear that \(\hat{x}\in K(\hat{x})\) is a generalized Nash equilibrium if, and only if, \[U^{s}_{\nu}(\hat{x})\cap K_{\nu}(\hat{x}^{-\nu})=\emptyset,\text{ for all }\nu.\]

## 3 Variational Approach

We first recall some facts on quasivariational inequalities. Consider a subset \(X\) of \(\mathbb{R}^{n}\), and two set-valued maps \(T:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) and \(K:X\rightrightarrows X\). A vector \(\hat{x}\in X\) is said to be a solution of the _Stampacchia quasivariational inequality problem_ if \(\hat{x}\in K(\hat{x})\) and there exists \(\hat{x}^{*}\in T(\hat{x})\) such that \[\langle\hat{x}^{*},y-\hat{x}\rangle\geq 0,\,\forall\,y\in K(\hat{x}).\] The solution set of the Stampacchia quasivariational inequality problem will be denoted by \(\operatorname{SVIP}(T,K)\). Given a subset \(A\) of \(\mathbb{R}^{n}\) and \(x\in\mathbb{R}^{n}\), the "normal cone" of \(A\) at \(x\) is the set \[\mathscr{N}_{A}(x):=\begin{cases}\{x^{*}\in\mathbb{R}^{n}\,:\,\langle x^{*},y-x\rangle\leq 0,\,\forall y\in A\},&A\neq\emptyset,\\ \mathbb{R}^{n},&A=\emptyset.\end{cases}\] It is usual in the literature to consider the above definition whenever \(A\) is convex and closed, and \(x\in A\); however, we will not require such conditions in this work. For each \(\nu\in I\), we consider the _normal cone map_ \(N_{\nu}:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n_{\nu}}\) defined as \[N_{\nu}(x):=\mathscr{N}_{U^{s}_{\nu}(x)}(x^{\nu}).\] We note that if \(U^{s}_{\nu}(x)=\emptyset\) then \(N_{\nu}(x)=\mathbb{R}^{n_{\nu}}\). We divide this section into two parts. In the first part, we give links between quasivariational inequalities and generalized ordinal Nash games. In the second part, we establish an existence result.

### Sufficient and Necessary Conditions

The following result states that any solution of a certain Stampacchia quasivariational inequality problem is a generalized Nash equilibrium.

**Theorem 3.1**.: _For each player \(\nu\in I\), we assume that_
1. \(K_{\nu}\) _is non-empty valued;_
2. \(U_{\nu}^{s}(x)\) _is open, for all_ \(x\in\mathbb{R}^{n}\)_._

_If \(\hat{x}\in\mathrm{SVIP}(N_{0},K)\), where \(N_{0}\) is defined as \(N_{0}(x):=\prod_{\nu\in I}\left(N_{\nu}(x)\setminus\{0_{\nu}\}\right)\), then \(\hat{x}\) is a generalized Nash equilibrium._

Proof.: Assume that \(\hat{x}\) is not a generalized Nash equilibrium. Then there exist a player \(\nu_{0}\) and \(x^{\nu_{0}}\in K_{\nu_{0}}(\hat{x}^{-\nu_{0}})\) such that \(x^{\nu_{0}}\succ_{\nu_{0},\hat{x}^{-\nu_{0}}}\hat{x}^{\nu_{0}}\), that is, \(x^{\nu_{0}}\in U_{\nu_{0}}^{s}(\hat{x})\). Denote \(y=(x^{\nu_{0}},\hat{x}^{-\nu_{0}})\in K(\hat{x})\). Since \(\hat{x}\in\mathrm{SVIP}(N_{0},K)\), there exists \(\hat{x}^{*}\in N_{0}(\hat{x})\) such that \[0\leq\langle\hat{x}^{*},y-\hat{x}\rangle=\sum_{\nu\in I}\langle\hat{x}^{*,\nu},y^{\nu}-\hat{x}^{\nu}\rangle=\langle\hat{x}^{*,\nu_{0}},x^{\nu_{0}}-\hat{x}^{\nu_{0}}\rangle.\] On the other hand, note that \(\hat{x}^{*}\in N_{0}(\hat{x})\) implies \(0\neq\hat{x}^{*,\nu_{0}}\in N_{\nu_{0}}(\hat{x})\).
Since \(U_{\nu_{0}}^{s}(\hat{x})\) is open and \(x^{\nu_{0}}\in U_{\nu_{0}}^{s}(\hat{x})\), the nonzero normal vector \(\hat{x}^{*,\nu_{0}}\) must satisfy \[\langle\hat{x}^{*,\nu_{0}},x^{\nu_{0}}-\hat{x}^{\nu_{0}}\rangle<0,\] which contradicts the inequality above.

We now state the converse of the previous result.

**Theorem 3.2**.: _For each player \(\nu\in I\), we assume that_
1. \(K_{\nu}\) _is non-empty, closed and convex valued;_
2. \(U_{\nu}^{s}(x)\) _is convex, for all_ \(x\in\mathbb{R}^{n}\)_;_
3. \(x^{\nu}\in\mathrm{cl}(U_{\nu}^{s}(x))\)_, for all_ \(x\in\mathbb{R}^{n}\)_._

_If \(\hat{x}\) is a generalized Nash equilibrium, then it is a solution of \(\mathrm{SVIP}(N_{0},K)\), where \(N_{0}\) is defined in Theorem 3.1._

Proof.: First, notice that \(\hat{x}\) is a generalized Nash equilibrium if, and only if, \(\hat{x}\in K(\hat{x})\) and \(U_{\nu}^{s}(\hat{x})\cap K_{\nu}(\hat{x}^{-\nu})=\emptyset\), for all \(\nu\in I\). Thanks to assumption 3, the set \(U_{\nu}^{s}(\hat{x})\) is non-empty. Moreover, the sets \(U_{\nu}^{s}(\hat{x})\) and \(K_{\nu}(\hat{x}^{-\nu})\) are convex, due to assumptions 1 and 2. Now, by a separation theorem, there exists \(\hat{x}^{*,\nu}\in\mathbb{R}^{n_{\nu}}\setminus\{0\}\) such that \[\langle\hat{x}^{*,\nu},x^{\nu}-y^{\nu}\rangle\geq 0,\text{ for all }x^{\nu}\in K_{\nu}(\hat{x}^{-\nu})\text{ and all }y^{\nu}\in U_{\nu}^{s}(\hat{x}).\] This implies \(\hat{x}^{*,\nu}\in N_{\nu}(\hat{x})\). By assumption 3, \(\hat{x}^{\nu}\in\mathrm{cl}(U_{\nu}^{s}(\hat{x}))\), and we obtain that \[\langle\hat{x}^{*,\nu},x^{\nu}-\hat{x}^{\nu}\rangle\geq 0,\text{ for all }x^{\nu}\in K_{\nu}(\hat{x}^{-\nu}),\] which in turn implies \[\langle\hat{x}^{*},x-\hat{x}\rangle=\sum_{\nu\in I}\langle\hat{x}^{*,\nu},x^{\nu}-\hat{x}^{\nu}\rangle\geq 0,\text{ for all }x\in K(\hat{x}).\]

The following example shows that we cannot drop assumption 3 in Theorem 3.2.

**Example 3.3**.: For \(\nu\in\{1,2\}\), we consider the relation \(\succeq_{\nu}\) defined on \(\mathbb{R}^{2}\) by \[(a,b)\succeq_{\nu}(x,y)\text{ if, and only if, }(a,b)=(x,y)=(0,0).\] Moreover, consider the strategy maps \(K_{1},K_{2}:\mathbb{R}\rightrightarrows\mathbb{R}\) given by \[K_{1}(x)=K_{2}(x)=[-1,1].\] It is not difficult to show that \[U_{1}^{s}(x,y)=\{a\in\mathbb{R}:a\succ_{1,y}x\}=\emptyset\text{ and }U_{2}^{s}(x,y)=\{b\in\mathbb{R}:b\succ_{2,x}y\}=\emptyset.\] Hence, this two-player game satisfies condition 2 in Theorem 3.2, but not condition 3. On the other hand, \(\hat{x}=(0,0)\) is a Nash equilibrium, but it is not a solution of the quasivariational inequality associated with the map \(N_{0}\) defined in Theorem 3.1.

### An Existence Result

From now on, we will use the continuity definitions for set-valued maps given in [12, Chapter 17]. Motivated by the previous subsection, and similarly to [8, 9], we define the set-valued map \(T:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) as \[T(x):=\prod_{\nu\in I}\mathrm{conv}(N_{\nu}(x)\cap S_{\nu}(0,1)), \tag{1}\] where \(S_{\nu}(0,1)\) denotes the unit sphere of \(\mathbb{R}^{n_{\nu}}\). We will later use the map \(T\) to construct a quasivariational inequality necessary to our goals. We will now establish that the map \(T\) is upper hemicontinuous under suitable assumptions.

**Proposition 3.4**.: _Assume that \(U_{\nu}^{s}\) is lower hemicontinuous, for all \(\nu\in I\). Then the map \(T\), defined in (1), is upper hemicontinuous with convex and compact values. Moreover, if \(U_{\nu}^{s}(x)\) is convex, for each \(x\), then \(T\) is non-empty valued._

Proof.: It is clear that \(T(x)\) is convex and compact because it is the Cartesian product of convex and compact sets.
Since \(U_{\nu}^{s}\) is lower hemicontinuous, from Proposition 25 in [13] we obtain that \(N_{\nu}\) is closed. Thus, the map \(D_{\nu}:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n_{\nu}}\) defined by \(D_{\nu}(x)=N_{\nu}(x)\cap S_{\nu}(0,1)\) is closed, because \(\mathrm{graph}(D_{\nu})=\mathrm{graph}(N_{\nu})\cap(\mathbb{R}^{n}\times S_{\nu}(0,1))\). By the Closed Graph Theorem, \(D_{\nu}\) is upper hemicontinuous. Consequently, by Theorem 17.35 in [12], the map \(\mathrm{conv}(D_{\nu})\) is upper hemicontinuous. The result now follows from Theorem 17.28 in [12]. Finally, if \(U_{\nu}^{s}(x)=\emptyset\), then there is nothing to prove. Otherwise, since \(U_{\nu}^{s}(x)\) is convex and \(x^{\nu}\notin U_{\nu}^{s}(x)\), using a separation theorem, there exists \(x^{*,\nu}\in\mathbb{R}^{n_{\nu}}\setminus\{0\}\) such that \[\langle x^{*,\nu},y^{\nu}-x^{\nu}\rangle\leq 0\text{ for all }y^{\nu}\in U_{\nu}^{s}(x).\] That means \(\lambda x^{*,\nu}\in N_{\nu}(x)\), for all \(\lambda>0\), and this implies \(N_{\nu}(x)\cap S_{\nu}(0,1)\neq\emptyset\). The result follows.

_Remark 3.5_.: In Proposition 3.4, to guarantee the lower hemicontinuity of \(U_{\nu}^{s}\), it is enough that the asymmetric relation \(\succ_{\nu}\) is an open subset of \(\mathbb{R}^{n}\times\mathbb{R}^{n}\). In that sense, Bergstrom _et al._ [14] gave characterizations of this topological property. On the other hand, Shafer [15] showed that \(\succ_{\nu}\) is an open subset of \(\mathbb{R}^{n}\times\mathbb{R}^{n}\), provided that there exists a continuous function \(f_{\nu}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) satisfying \(f_{\nu}(x,y)>0\) if, and only if, \(x\succ_{\nu}y\). It is important to note that the openness of \(\succ_{\nu}\) is not necessary. For instance, consider the relation \(\succeq\) defined on \(\mathbb{R}^{2}\) as \[(a,b)\succeq(x,y)\text{ if, and only if, }a\geq 0\text{ and }b\geq y.\] It is not difficult to see that \(\succ\) is not an open subset of \(\mathbb{R}^{2}\times\mathbb{R}^{2}\). However, \[U^{s}(x,y)=\begin{cases}[0,+\infty[,&x<0,\\ \emptyset,&\text{otherwise},\end{cases}\] which is clearly lower hemicontinuous.

Finally, we are ready to state our existence result.

**Theorem 3.6**.: _For each player \(\nu\in I\), assume that_
1. \(X_{\nu}\) _is non-empty, convex and compact;_
2. \(K_{\nu}\) _is continuous with non-empty, closed and convex values;_
3. \(U^{s}_{\nu}(x)\) _is convex and open, for all_ \(x\in\mathbb{R}^{n}\)_;_
4. \(U^{s}_{\nu}\) _is lower hemicontinuous._

_Then, there exists at least one generalized Nash equilibrium._

Proof.: By Proposition 3.4, the map \(T\) is upper hemicontinuous with convex, compact and non-empty values. Due to Theorem 4 in [16], the Stampacchia quasivariational inequality problem associated with \(T\) and \(K\) admits a solution, say \(\hat{x}\). If \(T(\hat{x})\subset N_{0}(\hat{x})\), where \(N_{0}\) is defined in Theorem 3.1, the result follows from Proposition 3.4 and Theorem 3.1. If there exists \(\nu\in I\) such that \(0\in\operatorname{conv}(N_{\nu}(\hat{x})\cap S_{\nu}(0,1))\), then there exist \(x_{1}^{*,\nu},x_{2}^{*,\nu},\ldots,x_{m}^{*,\nu}\in N_{\nu}(\hat{x})\cap S_{\nu}(0,1)\) and \(t_{1},t_{2},\ldots,t_{m}\in[0,1]\) such that \[0=\sum_{i=1}^{m}t_{i}x_{i}^{*,\nu}\text{ and }\sum_{i=1}^{m}t_{i}=1.\] Let \(j\in\{1,2,\ldots,m\}\) be such that \(t_{j}\neq 0\). For such \(j\) we have \[-x_{j}^{*,\nu}=\sum_{i\neq j}\frac{t_{i}}{t_{j}}x_{i}^{*,\nu}.\] Thus, \(x_{j}^{*,\nu}\) and \(-x_{j}^{*,\nu}\) belong to \(N_{\nu}(\hat{x})\).
However, since \(U^{s}_{\nu}(\hat{x})\) is open, having both \(x_{j}^{*,\nu}\) and \(-x_{j}^{*,\nu}\) in \(N_{\nu}(\hat{x})\) forces \(\langle x_{j}^{*,\nu},y^{\nu}-\hat{x}^{\nu}\rangle=0\) for all \(y^{\nu}\) in the open set \(U^{s}_{\nu}(\hat{x})\), which implies \(x_{j}^{*,\nu}=0\). This contradicts \(x_{j}^{*,\nu}\in S_{\nu}(0,1)\). (If \(U^{s}_{\nu}(\hat{x})=\emptyset\), the equilibrium condition for player \(\nu\) holds trivially.)

The following example shows that our result cannot be deduced from Theorem 3 in [17].

**Example 3.7**.: Let \(I=\{1,2\}\) and, for each \(\nu\in I\), consider the relation \(\succeq_{\nu}\) on \(\mathbb{R}^{2}\) defined as \[x\succeq_{\nu}y\text{ if, and only if, }x^{\nu}\geq y^{\nu}.\] Clearly, \(\succeq_{\nu}\) satisfies all assumptions of Theorem 3.6. Now, for each \(\nu\in I\) consider the set \(X_{\nu}=[-1,1]\) and the maps \(K_{\nu}:X_{-\nu}\rightrightarrows X_{\nu}\) and \(Q_{\nu}:\mathbb{R}^{2}\rightrightarrows X_{\nu}\) defined respectively as \(K_{\nu}(x^{-\nu})=X_{\nu}\) and \[Q_{\nu}(x)=\{z^{\nu}\in X_{\nu}\::\:(z^{\nu},x^{-\nu})\succeq_{\nu}x\},\] with \(x=(x^{\nu},x^{-\nu})\). In this context, we obtain a generalized game \(H=\langle X_{\nu},K_{\nu},Q_{\nu}\rangle\), according to [17, SS4.3]. In this case, the existence of Nash equilibria is guaranteed by Theorem 3.6. However, as \(H\) is not _uniform quasi-concave_ [17, Definition 5], we cannot apply Theorem 3 in [17]. When the preference relation \(\succeq_{\nu}\) is defined via a utility function \(\theta_{\nu}\), Theorem 3.6 generalizes a classical result due to Arrow and Debreu [18, Lemma 2.5].

**Corollary 3.8**.: _Suppose that for each \(\nu\in I\), \(X_{\nu}\) is a compact, convex and non-empty subset of \(\mathbb{R}^{n_{\nu}}\), the objective function \(\theta_{\nu}\) is continuous and quasi-concave with respect to its own variable, and the constraint map \(K_{\nu}\) is continuous with convex, compact and non-empty values. Then, there exists at least one generalized Nash equilibrium._
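As an illustration of the equilibrium notion used above, Example 3.7 can be checked numerically by brute force. The sketch below is ours (the grid resolution is arbitrary); it searches a discretized strategy space for profiles admitting no strictly preferred unilateral deviation and recovers the unique equilibrium \(\hat{x}=(1,1)\):

```python
import numpy as np

# Brute-force check of Example 3.7: two players on X_nu = [-1, 1]
# (discretized), constant constraint maps K_nu = X_nu, and player nu
# strictly preferring any outcome with a larger own coordinate.
grid = np.linspace(-1.0, 1.0, 21)

def is_equilibrium(x1, x2):
    # No strictly preferred unilateral deviation for either player.
    can_improve_1 = any(z > x1 for z in grid)
    can_improve_2 = any(z > x2 for z in grid)
    return not can_improve_1 and not can_improve_2

equilibria = [(x1, x2) for x1 in grid for x2 in grid if is_equilibrium(x1, x2)]
print(equilibria)   # only (1.0, 1.0), consistent with Theorem 3.6
```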
2303.03453
Entangling Quantum Memories via Heralded Photonic Bell Measurement
A common way to entangle quantum memories is via photonic entanglement swaps. Each of two memories, connected by an optical channel, emits a photonic qubit entangled with itself, and the photonic qubits undergo an entanglement swap on a beamsplitter in the middle of the channel. We compare two choices of encoding of the photonic qubit: single rail and dual rail. At low channel loss the dual-rail scheme outperforms the single rail scheme. However, as expected, the high-loss rate asymptote for the dual rail scheme scales quadratically worse with loss compared with single rail. Considering the following non-idealities: imperfect mode matching at the swap, carrier-phase mismatch across the interfered photonic qubits, and detector excess noise, we evaluate the density operator of the heralded two-qubit entangled state. We calculate a lower bound on its distillable entanglement per copy, and its Fidelity (with the ideal Bell state). For both schemes, imperfect swap-visibility results in a constant-factor decrease in the rate, while excess noise results in a dropoff of distillable entanglement beyond a certain total channel loss threshold, to zero. Despite the single-rail scheme's better rate-loss scaling, it is more severely affected by excess noise. The single-rail scheme is adversely affected by stochastic carrier-phase mismatch, which does not affect the dual-rail scheme. We study entanglement distillation on the heralded noisy entangled states for both methods, and outline a suite of quantum networking studies that our work could incite.
Prajit Dhara, Dirk Englund, Saikat Guha
2023-03-06T19:23:25Z
http://arxiv.org/abs/2303.03453v2
# Entangling Quantum Memories via Heralded Photonic Bell Measurement ###### Abstract A common way to entangle a pair of quantum memories is via a photonic entanglement swap. Each of two memories, connected by an optical channel, emits a photonic qubit entangled with itself, and the photonic qubits undergo an entanglement swap on a beamsplitter in the middle of the channel. We compare two choices of encoding of the photonic qubit: the single rail and dual rail. In the regime of low channel loss, i.e., when the loss of the half-channel connecting one memory site to the swap site is less than \(\approx 6\) dB, the dual-rail scheme is seen to outperform the single rail scheme. However, as expected, the high-loss rate asymptote for the dual rail scheme is worse: it scales as \(O(\eta)\) as opposed to that of the single-rail scheme, which scales as \(O(\sqrt{\eta})\) ebits per transmitted photonic mode, where \(\sqrt{\eta}\) is the transmissivity of the half channel. Considering the following non-idealities: imperfect mode matching at the swap, carrier-phase mismatch across the interfered photonic qubits from the two sides, and detector excess noise, we evaluate the explicit density operator of the heralded two-qubit entangled state. We calculate a lower bound on its distillable entanglement per copy, and its fidelity (with the ideal Bell state). For both schemes, imperfect swap visibility results in a constant-factor decrease in the rate, while excess noise results in a sharp dropoff of distillable entanglement to zero beyond a certain total channel loss threshold. Despite the single-rail scheme's better rate-loss scaling, it is more severely affected by excess noise. The single-rail scheme is adversely affected by stochastic carrier-phase mismatch, which does not affect the dual-rail scheme. We study entanglement distillation on the heralded noisy entangled states for both methods. Our evaluation of the density operator of the entangled state will hopefully pave the way for more realistic performance evaluations of larger quantum networks and the development of advanced entanglement distillation schemes.

## I Introduction

The most common way to generate entanglement between a pair of 'matter' qubits--be they trapped-ion, solid-state defect center, or superconducting qubits--is by first generating photons entangled with each qubit, and performing a Bell State Measurement (BSM) on the photonic qubits via a beamsplitter and photon detectors, which succeeds with some probability. The probability of success drops with the overall loss the photons experience over their lifetimes, from the time they were generated (entangled with the matter qubits) to when they were detected. When the BSM fails, the matter qubits are re-initialized and re-used. But when the BSM succeeds, the two matter qubits are 'heralded' in a two-qubit entangled state, whose fidelity can be quite high, especially if the matter qubits can be held with negligible drop in fidelity up until the success/failure information about the BSM arrives back at the matter-memory sites. This method has been proposed and implemented for the generation of entanglement between solid-state spin qubits [1; 2; 3; 4], trapped-ion qubits [5; 6; 7; 8] and superconducting qubits [9; 10]. Compared to the _source in the middle_ [11; 12] method for entanglement generation, in the present approach, matter quantum memories themselves are the source of the entangling photons.
This eliminates the complexity of building a reliable, high-rate, high-fidelity entangled pair source and field-deploying it, at the potential cost of stricter network operational requirements. Previous works examining the linear-optics-based entanglement swap protocol have covered various crucial aspects of the topic. Entanglement swapping between nitrogen-vacancy color-center spin qubits in diamond is analyzed in [13], with specific focus on the novelties and challenges of solid-state spin qubits. An alternative approach to the quantum state analysis is covered in [14], with focus on the time dynamics of the optical entanglement swap for various photonic encoding choices. Most studies of entanglement distribution protocols and platforms utilize the quantum state fidelity (with ideal QM Bell pairs) as the decisive, quantitative 'metric' [15] of protocol performance. The difficulty arises from the fact that sub-unity fidelity can have very different implications for the utility of the distributed entangled state. The current article utilizes the hashing bound of the modeled state, which serves as a lower bound to the state's distillable entanglement. The article is organized as follows. The swap setup and associated definitions are covered in Sec. II. Section III analyzes the states generated by ideal entanglement swaps and presents the fundamental tradeoffs. A detailed analysis of entanglement swapping with non-idealities is covered in Sec. IV. In Sec. V, we analyze the effect of quantum state distillation, and highlight the link-level improvements in the heralded quantum state. Section VI concludes the analysis with a discussion of potential applications of the underlying models and proposals for improved swapping.

## II System considerations

The setup analyzed in this article is a generalization of entanglement swapping between a wide class of physical qubit implementations. We consider two parties, Alice (\(A\)) and Bob (\(B\)), equipped with _emissive quantum memories_ (QMs). For the entirety of this manuscript, emissive QMs denote systems that are able to generate an entangled state of the internal state of the QM and one or more photonic modes. The emitted optical modes are transmitted over a loss-prone transmission medium (e.g., optical fiber, dielectric waveguides, or free-space optical links) to a central node, Charlie (\(C\)), which performs a linear optical entanglement swap. The actual implementation of the swap depends on the encoding choice; however, the swap success outcome (where the swap heralds the generation of shared entanglement between \(A\) and \(B\)) is probabilistic. The optical channels between \(A-C\) and \(B-C\) are modeled as pure-loss bosonic channels, with associated transmissivities \(\eta_{A}\) and \(\eta_{B}\) respectively, where \(\eta_{k}\in[0,1]\), which may be expressed in dB as \(\eta_{\rm dB}=-10\log_{10}\eta\). Readers should note that any additional losses (say, in the memory-to-photon interface or in the detectors) can be lumped into the channel transmissivity parameters. The actual physical details of the quantum channel may be wide-ranging -- our analysis holds for communication over photonic waveguides networking solid-state memories on a chip (\(\eta_{\rm dB}\sim 0-3\) dB), optical fiber links at the metropolitan scale (\(\eta_{\rm dB}\sim 1-10\) dB), or free-space optical links for satellite-based communications (\(\eta_{\rm dB}\sim 10-10^{2}\) dB).
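For intuition about these numbers, transmissivity and dB loss convert as shown below; the snippet is purely illustrative, and the 0.2 dB/km fiber attenuation used for the length mapping is a typical assumed value, not one specified in this article:

```python
import numpy as np

def eta_to_db(eta):
    """eta_dB = -10 log10(eta)."""
    return -10 * np.log10(eta)

def fiber_eta(length_km, alpha_db_per_km=0.2):
    """Transmissivity of a fiber span; 0.2 dB/km is an assumed typical
    telecom-fiber attenuation, used here only for illustration."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

for L_km in (1, 50, 100):
    eta = fiber_eta(L_km)
    print(f"{L_km:>3} km: eta = {eta:.3g} ({eta_to_db(eta):.1f} dB)")
```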
The final degree of freedom in this system is the choice of photonic qubit encoding format. We limit our analysis and discussion to discrete-variable encoding formats that rely on the vacuum and single-photon states of a bosonic mode. The _single rail_ encoding is defined by the presence or absence of a photon in a single bosonic mode; the logical qubit states for the mode labeled by \(k\) are defined by \[\left|\bar{0}\right\rangle\equiv\left|0\right\rangle_{k};\left|\bar{1}\right\rangle\equiv\left|1\right\rangle_{k}. \tag{1}\] An alternative is the _dual rail_ encoding, in which the logical qubit states are represented by the presence of a single photon in one of two orthogonal bosonic modes labeled by \(k1\) and \(k2\) (which may be spatial, temporal, or spectral modes, or any combination thereof), as \[\begin{split}\left|\bar{0}\right\rangle\equiv\left|1\right\rangle_{k1}\left|0\right\rangle_{k2}=\left|1,0\right\rangle_{k};\\ \left|\bar{1}\right\rangle\equiv\left|0\right\rangle_{k1}\left|1\right\rangle_{k2}=\left|0,1\right\rangle_{k}.\end{split} \tag{2}\] Before we proceed with the analysis of the entanglement swapping circuitry, additional notation for the local entangled pair (namely, between the quantum memory and the photonic qubit) has to be established. We consider general entangled states of the form \[\left|\psi\right\rangle_{S}=\sqrt{\gamma}\left|\mathbf{1}\right\rangle_{S}\left|\bar{0}\right\rangle_{S,k}+\sqrt{1-\gamma}\left|\mathbf{0}\right\rangle_{S}\left|\bar{1}\right\rangle_{S,k}, \tag{3}\] where \(\left|\mathbf{0}\right\rangle\) and \(\left|\mathbf{1}\right\rangle\) represent the qubit levels of the QM and the subscript \(S=\{A,B\}\) denotes the party that possesses the corresponding memory. The generation of the state in Eq. (3) is again dependent on the qubit hardware and the photonic qubit encoding [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Fig. 1 depicts this abstractly, with purple 'matter' qubits and orange photonic qubits. The photonic qubits are transmitted over an optical channel or link before they meet at the entanglement swapping circuit (blue diamond). The details of the swap are encoding dependent and are discussed in Appendix C (see Fig. C.1). In the most general scenario, the photonic qubits are mixed on a balanced (50:50) beamsplitter (for erasing the which-source/path information) and detected by photon-number-resolving detectors. Detection of specific click patterns heralds the successful generation of an entangled state (say, \(\rho_{AB}\)) shared between the QMs, along with the entangled-state parity information. Since swaps are probabilistic, we denote the probability of successfully heralding the state by \(P_{\rm succ.}\). Among the most common metrics to evaluate the 'quality of entanglement', the _state fidelity_ evaluates the overlap of the final state \(\rho_{AB}\) with the ideal target state (usually a Bell state; here we choose \(\left|\Psi^{+}\right\rangle\)). The state fidelity is evaluated as \[F(\rho_{AB},|\Psi^{+}\rangle)\coloneqq\left\langle\Psi^{+}|\rho_{AB}|\Psi^{+}\right\rangle. \tag{4}\] Fidelity is a reliable and insightful state-quality indicator, but only in the regime where \(F(\cdot)\) is close to unity. In the context of shared entanglement generation, evaluating (or bounding) the _distillable entanglement_ of the final state is more insightful.
Distillable entanglement, represented by \(E_{D}(\rho_{AB})\), quantifies the number of perfect entangled pairs (Bell pairs) that can be distilled from \(\rho_{AB}\), assuming both parties have _ideal universal quantum computers_ (using an arbitrary, non-specified distillation circuit) and unlimited two-way classical communication.

Figure 1: Midpoint entanglement swap for photonic qubits (orange) entangled to matter qubits (purple). The entanglement swapping circuit (blue diamond) is photonic-qubit-encoding dependent. Appendix C covers detailed descriptions of the swaps.

For general states, \(E_{D}(\rho_{AB})\) is non-trivial to evaluate; for the present study we will use the _hashing bound_ \(I(\rho_{AB})\), which is a lower bound to the state's distillable entanglement. The hashing bound is \[I(\rho_{AB})=\max[S(\rho_{A})-S(\rho_{AB}),S(\rho_{B})-S(\rho_{AB})], \tag{5}\] where \(\rho_{A}=\operatorname{Tr}_{B}(\rho_{AB})\), \(\rho_{B}=\operatorname{Tr}_{A}(\rho_{AB})\), and \(S(\rho)\) is the von Neumann entropy of the state \(\rho\). The product of the hashing bound (units: ebits per state copy) and the probability of success (units: states per swap attempt) yields a lower bound to the distillable entanglement rate. Henceforth we will use the symbol \(\mathcal{R}(\rho)\) to denote this quantity, where \[\mathcal{R}(\rho)=I(\rho)\times P_{\text{succ.}}. \tag{6}\] For any protocol/encoding choice, the rates are upper bounded by the repeaterless bound on the channel capacity, given by \(D_{2}(\eta)=-\log_{2}(1-\sqrt{\eta})\approx 1.44\sqrt{\eta}\) for \(\eta\ll 1\) [16]. Thus, we expect \(I(\rho_{AB})\leq E_{D}(\rho_{AB})<D_{2}(\eta)\) for all values of \(\eta\).
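Eqs. (4)-(6) are straightforward to evaluate numerically for any two-qubit density operator. The following sketch is our illustrative code, not the authors'; it computes the hashing bound via partial traces and checks that a perfect Bell state yields 1 ebit per copy:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), via eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]               # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def hashing_bound(rho_ab):
    """Eq. (5) for a two-qubit state, indexed as |a b><a' b'|."""
    rho4 = rho_ab.reshape(2, 2, 2, 2)
    rho_a = np.trace(rho4, axis1=1, axis2=3)   # partial trace over B
    rho_b = np.trace(rho4, axis1=0, axis2=2)   # partial trace over A
    s_ab = von_neumann_entropy(rho_ab)
    return max(von_neumann_entropy(rho_a) - s_ab,
               von_neumann_entropy(rho_b) - s_ab)

# Sanity check: a perfect |Psi+> Bell state gives I(rho) = 1 ebit/copy.
psi_plus = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
print(hashing_bound(np.outer(psi_plus, psi_plus)))   # ~1.0
```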
## III Ideal entanglement swapping

With ideal hardware and no optical transmission loss (\(\eta_{A}=\eta_{B}=1\)), the linear optical entanglement swap succeeds with a maximal \(P_{\text{succ.}}=1/2\) [17], irrespective of the choice of encoding. However, in the presence of loss, the action of the swap is quite distinct between the two encoding formats. To simplify the analysis, we consider the specific case of symmetric channel losses, i.e., \(\eta_{A}=\eta_{B}=\sqrt{\eta}\), for a total channel transmissivity of \(\eta_{A}\eta_{B}=\eta\). When mode \(k\) of the state in Eq. (3) is dual rail encoded, the final state is heralded with probability of success \(P_{\text{succ.}}=2\gamma\left(1-\gamma\right)\eta\), and is of the form \[\rho_{\text{dual}}=\frac{1}{2}\left(\left|\mathbf{0}\right\rangle_{A}\left|\mathbf{1}\right\rangle_{B}\pm\left|\mathbf{1}\right\rangle_{A}\left|\mathbf{0}\right\rangle_{B}\right)\left(\left\langle\mathbf{0}\right|_{A}\left\langle\mathbf{1}\right|_{B}\pm\left\langle\mathbf{1}\right|_{A}\left\langle\mathbf{0}\right|_{B}\right). \tag{7}\] \(\rho_{\text{dual}}\) has unit fidelity with the corresponding QM Bell states \(\left|\Psi^{\pm}\right\rangle=\left(\left|\mathbf{0}\right\rangle_{A}\left|\mathbf{1}\right\rangle_{B}\pm\left|\mathbf{1}\right\rangle_{A}\left|\mathbf{0}\right\rangle_{B}\right)/\sqrt{2}\), implying that pure loss causes no detriment to the quality of the state. Additionally, the optimal value of \(\gamma\) that maximizes \(P_{\text{succ.}}\) at a given value of \(\eta\) is _always_ \(1/2\).

In contrast, when the single rail encoding is used, the final state is heralded with \(P_{\text{succ.}}=2\sqrt{\eta}\left(1-\gamma\right)(1-(1-\gamma)\sqrt{\eta})\) and with final state density operator \[\rho_{\text{single}}=\frac{\alpha_{1}}{2}\left(\left|\mathbf{0}\right\rangle_{A}\left|\mathbf{1}\right\rangle_{B}\pm\left|\mathbf{1}\right\rangle_{A}\left|\mathbf{0}\right\rangle_{B}\right)\left(\left\langle\mathbf{0}\right|_{A}\left\langle\mathbf{1}\right|_{B}\pm\left\langle\mathbf{1}\right|_{A}\left\langle\mathbf{0}\right|_{B}\right)+\alpha_{2}\left|\mathbf{0}\right\rangle\!\left\langle\mathbf{0}\right|_{A}\otimes\left|\mathbf{0}\right\rangle\!\left\langle\mathbf{0}\right|_{B}, \tag{8}\] whose coefficients \(\alpha_{k}\) are given in terms of the parameters as \[\alpha_{1}=\gamma\left(1-\gamma\right)\sqrt{\eta}/P_{\text{succ.}}, \tag{9a}\] \[\alpha_{2}=\sqrt{\eta}(1-\gamma)^{2}\left(1-\sqrt{\eta}\right)/P_{\text{succ.}}. \tag{9b}\] Readers may note that when \(\eta<1\), the state fidelity of \(\rho_{\text{single}}\) w.r.t. the ideal Bell pair \(\left|\Psi^{+}\right\rangle\equiv(\left|0,1\right\rangle+\left|1,0\right\rangle)/\sqrt{2}\) is always less than one, since \(\alpha_{2}\neq 0\;\forall\,\eta\in(0,1)\). The \(\alpha_{2}\) term only becomes zero for \(\eta=1\) (perfect transmission), \(\eta=0\) (no transmission), or \(\gamma=1\) (the initial state is no longer entangled). One can thus evaluate an optimal value of \(\gamma\) that maximizes \(P_{\text{succ.}}\) under a constraint on some other heralded-state metric (such as fidelity, or distillable entanglement).

## IV Tradeoffs for non-ideal swaps

### Problem Setup

We extend the analysis of the previous section by considering hardware and channel non-idealities relevant for any practical entanglement swapping link: (1) excess noise in the channel/detectors, (2) imperfect mode matching, and (3) carrier-level phase mismatch of the bosonic modes. The complete state descriptions with these detriments accounted for are elaborated in Appendix B. Our methodology for modeling these non-idealities is summarized below.

_Excess noise_ -- Excess noise in the system can arise from background photons in the channel, electronic Johnson-Nyquist noise in detectors, and detector dark clicks. All of these effects can be lumped into a single parameter, namely the excess photons per mode, which we denote by \(P_{d}\), with typical range \(P_{d}\in[0,1)\). For non-zero \(P_{d}\), the final quantum state becomes mixed and has additional terms whose proportion in the total state is \(\mathcal{O}(P_{d}^{k})\), where \(k\in\mathbb{Z}^{+}\).

_Imperfect mode matching_ -- Mode matching is crucial for the photonic entanglement swap to function properly, as any analysis of the modes post-interference must not reveal the path, or 'which memory', information. Imperfect mode matching leads to inherent distinguishability of the interacting photons, which is detrimental to entanglement swapping. The effect of mode mismatch is compactly described by a visibility parameter \(\mathcal{V}\in[0,1]\), where \(\mathcal{V}=1\) denotes perfect mode matching. This one number \(\mathcal{V}\) accounts for the total mismatch in the spatio-temporal-polarization modes of the photonic-qubit-bearing mode pairs emanating from the two memory sites and interfered on the swapping beamsplitter.
_Carrier Phase Mismatch_ -- In addition to mode matching, any imprecision in the optical carrier-phase match is an important parameter for entanglement swapping, especially for the single-rail scheme, as the memory-qubit-photonic-qubit entangled states from the two sides acquire a fast-oscillating carrier phase based on their total propagation length. This fast-oscillating carrier phase is much more difficult to lock than the spatio-temporal modes of the photonic modes arriving from the two memory sites. The carrier phase is imparted to the system by applying the unitary \(U(\theta)=\exp(i\theta\hat{n})\) to the bosonic mode(s), where \(\hat{n}\) is the modal number operator and \(\theta\) is a random phase drawn from a zero-mean normal distribution of variance \(\varepsilon\), i.e., \(\theta\sim\mathcal{N}(0,\varepsilon)\). Phase mismatch is treated by introducing it as a complex visibility parameter, which may be combined with the mode mismatch parameter to yield a single complex quantity \(\mathcal{V}=|\mathcal{V}|e^{i\theta}\), with \(|\mathcal{V}|\in[0,1]\) and \(\theta\sim\mathcal{N}(0,\varepsilon)\). The stochastic nature of \(\theta\) necessitates evaluating the ensemble-averaged quantum state of the two heralded memory qubits to examine the effect of phase mismatch.

### Tradeoffs

We examine the trade-off between the different encoding choices for our entanglement generation link by evaluating the hashing-bound rate \(\mathcal{R}(\rho_{AB})\) and the state fidelity \(F(\rho_{AB},|\Psi^{\pm}\rangle)\) in Fig. 2(a) and (b), respectively. We compare swaps employing the dual rail encoding (red for \(|\mathcal{V}|=1\), orange for \(|\mathcal{V}|=0.95\)) and the single rail encoding (blue for \(|\mathcal{V}|=1,\varepsilon=0\); purple for \(|\mathcal{V}|=0.95,\varepsilon=0\); cyan for \(|\mathcal{V}|=0.95,\varepsilon=0.1\)). For the single rail swap, the qubit parameter \(\gamma\) is chosen to maximize \(\mathcal{R}(\rho_{AB})\) at a given \(\eta\). The black dot-dashed curve in Fig. 2(a) is the repeaterless bound \(D_{2}(\eta)\) [16]. For all encodings, the line style depicts the excess noise in the channel: solid lines for \(P_{d}=0\), dashed lines for \(P_{d}=10^{-4}\), and dotted lines for \(P_{d}=10^{-2}\). We make the following observations:

* In terms of \(\mathcal{R}(\rho_{AB})\), the single rail swap outperforms the dual rail encoded swap (whose rate scales as \(\mathcal{O}(\eta)\)) in the high-loss limit (\(\eta\ll 1\)). However, in the low-loss limit (half-channel loss \(\sqrt{\eta}<8\) dB), the dual rail swap outperforms the single rail swap.
* Swapping with \(|\mathcal{V}|<1\) does not affect \(P_{\text{succ.}}\), but affects the state quality \(I(\rho_{AB})\), which is lower than that of the ideal unity-visibility state for all \(\eta\).
* Phase mismatch only affects the single rail state. The visibility parameter of the ensemble-averaged state is modified as \(\mathcal{V}\rightarrow\mathcal{V}\times\exp(-\varepsilon)\approx\mathcal{V}(1-\varepsilon)\) when \(\varepsilon\) is small. We see the corresponding effect of a lowered \(\mathcal{V}\) on \(\mathcal{R}(\rho_{AB})\) and \(F(\rho_{AB},|\Psi^{\pm}\rangle)\) (cyan curve).
* For \(P_{d}>0\), \(\mathcal{R}(\rho_{AB})\) crashes to zero at a finite value of \(\eta\), indicating a maximum range for entanglement swapping in the presence of excess noise. However, \(F(\rho_{AB},|\Psi^{\pm}\rangle)\) does not collapse to a value less than \(0.5\) for the corresponding states. Additionally, the single rail swapped state is more susceptible to excess noise, i.e., for the same \(P_{d}\), \(\mathcal{R}(\rho_{AB})\) crashes to zero at lower loss for the single rail swapped state.
* In the presence of excess noise and mode mismatch, states (heralded by either single or dual rail swaps) with lower \(|\mathcal{V}|\) are more susceptible to loss.
The difference in the rate scaling is quite evident from the expressions for \(P_{\text{succ.}}\) highlighted in Section III: for \(\eta\ll 1\), to leading order, the single rail swap rate is \(\mathcal{O}(\sqrt{\eta})\) and the dual rail swap rate is \(\mathcal{O}(\eta)\). Intuitively, this makes sense as well: the single rail swap requires the successful transmission of a single photon to the midpoint over half of the entire link (\(\propto\sqrt{\eta}\)), whereas the dual rail swap requires two photons (one each from A and B). The non-trivial behavior of the single rail swap rate scaling in the \(\eta\to 1\) regime arises as a consequence of the optimization of \(\gamma\) to maximize \(\mathcal{R}(\rho_{AB})\).

Figure 2: Comparison of the single rail swap (blue for \(|\mathcal{V}|=1,\varepsilon=0\), purple for \(\mathcal{V}=0.95,\varepsilon=0\), cyan for \(\mathcal{V}=0.95,\varepsilon=0.1\)) with the dual rail swap (red for \(|\mathcal{V}|=1\), orange for \(\mathcal{V}=0.95\)) using (a) the hashing bound rate \(\mathcal{R}(\rho_{AB})\), compared to the repeaterless bound (black), and (b) the state fidelity \(F(\rho_{AB},|\Psi^{\pm}\rangle)\). The effect of non-zero excess noise is highlighted by the line styles: solid for \(P_{d}=0\), dashed for \(P_{d}=10^{-4}\), and dotted for \(P_{d}=10^{-2}\).

Consider the ideal swap with loss, i.e., \(|\mathcal{V}|=1\) and \(P_{d}=0\), for which the expressions for the successful swap probability and the state hashing bound are, respectively,

\[P_{\text{succ.}}=2\sqrt{\eta}(1-\gamma)(1-(1-\gamma)\sqrt{\eta}); \tag{10a}\]
\[I(\rho)=h_{2}\bigg(\frac{\gamma/2}{1-\sqrt{\eta}(1-\gamma)}\bigg)-h_{2}\bigg(\frac{\gamma}{1-\sqrt{\eta}(1-\gamma)}\bigg), \tag{10b}\]

where \(h_{2}(x)=-x\log_{2}x-(1-x)\log_{2}(1-x)\) is the binary entropy function. Numerical maximization of \(\mathcal{R}(\rho_{AB})\) yields the optimal \(\gamma\) as a function of \(\eta\), as depicted in Fig. 3 (blue line). We plot the state fidelity (orange dashed) and distillable entanglement (orange solid) for the corresponding values of \(\eta\). For \(\eta=1\), \(I(\rho_{AB})\) attains the maximal value of \(1\) ebit per copy; in this instance, \(P_{\text{succ.}}\) becomes \(2\gamma(1-\gamma)\), which is maximized for \(\gamma=1/2\). In the \(\eta\ll 1\) regime, \(P_{\text{succ.}}\approx 2\sqrt{\eta}(1-\gamma)\) and correspondingly \(I(\rho)\approx h_{2}(\gamma/2)-h_{2}(\gamma)\). Here \(\mathcal{R}(\rho)\) has \(\sqrt{\eta}\) as a pure multiplicative factor; this means the optimal \(\gamma\) that maximizes it is independent of \(\eta\) and is a solution of the transcendental equation

\[\frac{\partial}{\partial\gamma}\left[(1-\gamma)(h_{2}(\gamma/2)-h_{2}(\gamma))\right]=0. \tag{11}\]
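Eq. (11) is easy to solve numerically. A minimal sketch (ours, assuming SciPy is available; the bracketing interval is our choice) maximizes the \(\eta\)-independent factor \((1-\gamma)(h_{2}(\gamma/2)-h_{2}(\gamma))\) and returns \(\gamma\approx 0.86\) (cf. the high-loss fidelity \(F\approx 0.858\) discussed later):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def h2(x):
    """Binary entropy function (bits)."""
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

# eta-independent factor of R(rho) in the eta << 1 limit, per Eq. (11);
# the factor is positive only for gamma > 2/3, hence the bracket below.
objective = lambda g: -(1 - g) * (h2(g / 2) - h2(g))

res = minimize_scalar(objective, bounds=(0.7, 0.9999), method="bounded")
print(f"optimal gamma = {res.x:.4f}, rate factor = {-res.fun:.4f}")
```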
The effect of imperfect mode matching manifests as a sub-unity swap visibility, i.e., \(|\mathcal{V}|<1\). This effect can be analyzed by expressing the final state in the Bell basis. As an example, the reader may consider the noiseless (\(P_{d}=0\)) dual rail swapped state, which may be expressed in the Bell basis as \[\rho_{AB}=\frac{(1+\mathcal{V}^{2})}{2}\left|\Psi^{+}\right\rangle\!\!\left\langle\Psi^{+}\right|+\frac{(1-\mathcal{V}^{2})}{2}\left|\Psi^{-}\right\rangle\!\!\left\langle\Psi^{-}\right|, \tag{12}\] whose hashing bound is \(I(\rho_{AB})=1-h_{2}((1-\mathcal{V}^{2})/2)\). This quantity increases monotonically with \(\mathcal{V}\); hence in Fig. 2 the \(\mathcal{R}(\rho_{AB})\) curve for \(\left|\mathcal{V}\right|=0.95\) is strictly below the curve for \(\left|\mathcal{V}\right|=1\). A similar analysis can be performed for the single rail state; however, the state formulation is not as straightforward.

The effect of phase mismatch is an important distinction between the two encodings. The detailed state descriptions and derivations (see Appendices B-C) highlight this difference in how phase mismatch affects the initial quantum state. The initial single rail memory-photon state has the form \[\left|\psi\right\rangle_{\text{mem,photon}}=\sqrt{\gamma}\left|\mathbf{1}\right\rangle_{S}\left|0\right\rangle_{S,k}+e^{i\theta_{S}}\sqrt{1-\gamma}\left|\mathbf{0}\right\rangle_{S}\left|1\right\rangle_{S,k}, \tag{13}\] whereas the optimal initial state for the dual rail swap is expressible as \[\left|\psi\right\rangle_{\text{mem,photon}}=\frac{e^{i\theta_{S}}}{\sqrt{2}}\left[\left|\mathbf{1}\right\rangle_{S}\left|1,0\right\rangle_{S,k}+\left|\mathbf{0}\right\rangle_{S}\left|0,1\right\rangle_{S,k}\right]. \tag{14}\] The phase mismatch hence introduces a relative phase for the former, whereas it manifests as a global phase for the latter. Consequently, upon entanglement swapping of the photonic qubits, the off-diagonal elements \(\langle\mathbf{0},\mathbf{1}|\rho_{\text{single}}|\mathbf{1},\mathbf{0}\rangle\) and \(\langle\mathbf{1},\mathbf{0}|\rho_{\text{single}}|\mathbf{0},\mathbf{1}\rangle\) of the single rail heralded state depend on a complex \(\mathcal{V}\) term, where \(\arg\mathcal{V}=\theta_{A}-\theta_{B}\). On the other hand, the dual rail state depends only on the magnitude, i.e., \(|\mathcal{V}|\). Readers should note that the carrier phase difference \(\arg\mathcal{V}\) is a stochastic parameter; to characterize the state performance, it is necessary to evaluate the ensemble-averaged state. Since we typically consider \(\theta_{A},\theta_{B}\sim\mathcal{N}(0,\varepsilon)\), this implies \(\arg\mathcal{V}\sim\mathcal{N}(0,2\varepsilon)\). Under these considerations, the ensemble-averaged state's visibility is modified as \(\mathcal{V}\rightarrow\mathcal{V}\exp(-\varepsilon)\approx\mathcal{V}(1-\varepsilon)\) for small \(\varepsilon\) (see derivation in the Appendix).
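The dephasing factor used here follows from the Gaussian characteristic function, \(\mathbb{E}[e^{i\theta}]=e^{-\sigma^{2}/2}\). A short Monte Carlo check (our sketch, with arbitrary seed and sample count) confirms \(\mathcal{V}\rightarrow\mathcal{V}e^{-\varepsilon}\) for \(\arg\mathcal{V}\sim\mathcal{N}(0,2\varepsilon)\):

```python
import numpy as np

# For theta ~ N(0, 2*eps), E[exp(i*theta)] = exp(-eps).
rng = np.random.default_rng(1)
for eps in (0.01, 0.1, 0.5):
    theta = rng.normal(0.0, np.sqrt(2 * eps), size=1_000_000)
    mc = np.mean(np.exp(1j * theta)).real
    print(f"eps = {eps}: Monte Carlo {mc:.4f} vs exp(-eps) {np.exp(-eps):.4f}")
```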
The effect of excess noise on the state is evident when we consider the detailed state description. As an example, consider the heralded dual rail state for the symmetric case (\(\eta_{A}=\eta_{B}=\sqrt{\eta}\), \(\eta_{d}=1\) and \(\mathcal{V}=1\); details in Appendix C). The complete state may be expressed in terms of the Bell states \(\{\left|\Psi^{\pm}\right\rangle,\left|\Phi^{\pm}\right\rangle\}\) as \[\rho_{AB}=\beta_{1}\left|\Psi^{+}\right\rangle\!\!\left\langle\Psi^{+}\right|+\beta_{2}\,\mathbb{I}_{4}, \tag{15}\] where \(\mathbb{I}_{4}\) is the 4-dimensional identity operator and \[\beta_{1}=\frac{(1-P_{d})^{4}\,\eta}{2\mathbf{N}_{d}}, \tag{16a}\] \[\beta_{2}=\frac{P_{d}(1-P_{d})^{2}}{\mathbf{N}_{d}}\bigg(\frac{1}{2}(1-P_{d})\sqrt{\eta}(1-\sqrt{\eta})+P_{d}(1-\sqrt{\eta})^{2}\bigg), \tag{16b}\] with \(\mathbf{N}_{d}\) ensuring \(\text{Tr}(\rho_{AB})=1\). It is clear that the component contributing 'usable' entanglement is the \(\left|\Psi^{+}\right\rangle\!\!\left\langle\Psi^{+}\right|\) component; the complementary term is the maximally mixed two-qubit state and yields no distillable entanglement. Hence, for all \(P_{d}>0\), the contribution of \(\beta_{2}\) surpasses that of \(\beta_{1}\) at some finite value of \(\eta\), which limits \(\mathcal{R}(\rho_{AB})\) by setting \(I(\rho_{AB})=0\). We call this the _maximum range_ of the protocol, since the state description indicates minimal usable entanglement beyond this value of \(\eta\). General formulae can be obtained by equating \(I(\rho_{AB})=0\); however, this yields little insight into the true maximum-range trend. In lieu of analytically expressible values, Fig. 4 plots the numerically extracted value of the maximum range for various values of \(P_{d}\).

Figure 3: State fidelity \(F(\rho_{AB},\left|\Psi^{\pm}\right\rangle)\) (orange solid) and hashing bound \(I(\rho_{AB})\) (orange dashed) plotted with the optimal qubit initialization parameter \(\gamma\) (blue) for different values of channel loss. Assume \(P_{d}=0,\mathcal{V}=1,\varepsilon=0\).

For the single rail swap, readers may also find it interesting to note that the optimal value of \(\gamma\) yields a state with fidelity \(F\approx 0.858\) in the high-loss regime. Related studies [13] aim to generate states at a target fidelity close to \(1\). In a similar vein, we may optimize \(\gamma\) to obtain \(F\geq F_{\text{target}}\) in lieu of choosing a \(\gamma\) that maximizes \(\mathcal{R}(\rho_{AB})\). Fig. 5 compares the optimal \(\mathcal{R}(\rho_{AB})\) (blue) with the cases where we set a target fidelity of \(1-10^{-2}\) (orange), \(1-10^{-3}\) (yellow), and \(1-10^{-4}\) (purple). All the rate curves are parallel, i.e., they maintain the \(\mathcal{O}(\sqrt{\eta})\) scaling for \(\eta\ll 1\); however, the actual rates are lower than the optimal rate curve. This also corresponds to a longer range (i.e., higher loss) at which the single rail swap starts outperforming the dual rail swap (note the crossover between the solid and red dashed lines). Readers interested in a detailed analysis of \(\mathcal{R}(\rho_{AB})\), \(P_{\text{succ}}\), and \(I(\rho_{AB})\) over the whole parameter space may refer to the detailed state analyses included in Appendices B-C of the manuscript.

## V Improvements using entanglement distillation

Hardware non-idealities limit the utility of entanglement generation using photonic BSMs, as discussed in Sec. IV. Entanglement distillation is one way to improve the utility, at the expense of additional quantum state processing at the end-user (\(A\) and \(B\)) sites. As an example, we consider the distillation protocol introduced in [18] and demonstrate its ability to improve state quality. Fig. 6 highlights the distillation process: Alice (Bob) generates a pair of entangled states and performs \(\pi/2\) (\(-\pi/2\)) rotations along the qubit \(x\)-axis (represented by \(\bar{R}_{x}(\pm\pi/2)\)) on both of their qubits. This is followed by bilateral CNOT gates and Pauli-\(Z\) basis measurements on the target qubits. The distillation process is successful if both measurement outcomes concur.

Figure 6: Distillation protocol introduced in [18] using local \(\bar{R}_{x}(\pm\pi/2)\) gates, bilateral CNOT gates, and Pauli-\(Z\) measurements. A 'success' is declared when both measurement outcomes are '0' or '1', followed by reconciliation (using classical communication) between the two parties.
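To see this recurrence in action, the circuit of Fig. 6 can be simulated directly at the density-matrix level. The sketch below is our illustrative implementation: the Werner-form input (mirroring the structure of Eq. (15)) and its parameters are assumptions for demonstration, not the actual heralded states. It prints the success probability and the monotonically improving fidelity over a few rounds:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
PZ = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]    # |0><0|, |1><1|

def kron(ops):
    return reduce(np.kron, ops)

def rx(theta):
    """Single-qubit rotation about the x-axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def cnot(ctrl, targ, n=4):
    """CNOT on an n-qubit register."""
    on0 = [PZ[0] if q == ctrl else I2 for q in range(n)]
    on1 = [PZ[1] if q == ctrl else (X if q == targ else I2) for q in range(n)]
    return kron(on0) + kron(on1)

def distill_round(rho):
    """One round of the circuit of Fig. 6 on two copies of a two-qubit
    state rho. Qubit order: A1, B1, A2, B2; pair 2 is measured out."""
    r = np.kron(rho, rho)
    U = kron([rx(np.pi / 2), rx(-np.pi / 2), rx(np.pi / 2), rx(-np.pi / 2)])
    C = cnot(0, 2) @ cnot(1, 3)                    # bilateral CNOTs
    r = C @ U @ r @ U.conj().T @ C.conj().T
    kept = np.zeros((4, 4), dtype=complex)
    for m in (0, 1):                               # coincident outcomes 00 / 11
        proj = kron([I2, I2, PZ[m], PZ[m]])
        sub = (proj @ r @ proj).reshape([2] * 8)
        kept += np.einsum('ijklmnkl->ijmn', sub).reshape(4, 4)  # trace pair 2
    p_succ = float(np.trace(kept).real)
    return kept / p_succ, p_succ

# Werner-form input mirroring Eq. (15): w*|Psi+><Psi+| + (1-w)*I/4.
psi_plus = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho = 0.7 * np.outer(psi_plus, psi_plus) + 0.3 * np.eye(4) / 4
for rnd in range(1, 4):
    rho, p = distill_round(rho)
    print(f"round {rnd}: P_succ = {p:.3f}, F = {(psi_plus @ rho @ psi_plus).real:.4f}")
```

For this illustrative input (initial fidelity 0.775), the first round succeeds with probability of about 0.75 and lifts the fidelity to roughly 0.81, with further rounds continuing the improvement, consistent with the behavior described below.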
Fig. 7 summarizes the improvements in the heralded state quality under multiple rounds of distillation. We consider swaps with \(P_{d}=10^{-2}\) excess photons per mode and no photon distinguishability, i.e., \(\mathcal{V}=1\). The panes labeled (A1) and (B1) highlight the improvement in the maximum range over the un-distilled state when we employ multiple rounds of distillation, at the cost of a poorer rate scaling for each subsequent round of distillation. Given the rate scalings of \(\mathcal{O}(\sqrt{\eta})\) and \(\mathcal{O}(\eta)\) for the single and dual rail swapping protocols respectively, with \(k\) rounds of distillation the rate scalings drop to \(\mathcal{O}(\eta^{2^{k-1}})\) and \(\mathcal{O}(\eta^{2^{k}})\), respectively.

Figure 4: Maximum range of the states heralded by dual (red) and single (blue dashed) rail swaps for specified values of excess noise \(P_{d}\). Assume \(\mathcal{V}=1,\varepsilon=0\).

Given the rapid deterioration in the rate scaling, we consider the state's \(I(\rho)\), as shown in the panes labeled (A2) and (B2). This supports the conclusions drawn from the distillable entanglement rate discussion above. We note that under continued rounds of distillation, the maximum range is limited to a finite value of loss, which we label \(\eta_{\rm lim}\). Fig. 8 highlights the limiting values for the single and dual rail cases, considering \(P_{d}=10^{-3};\mathcal{V}=1\).

Figure 8: Improvement in the state \(I(\rho)\) from the heralded original state (solid lines) under multiple rounds of distillation (dashed lines; up to 15 rounds of distillation) for the (a) single rail and (b) dual rail case. The limiting value of the maximum range \(\eta_{\rm lim}\) is labeled. The black arrow marks the direction of successive rounds of distillation. We consider \(P_{d}=10^{-3};\mathcal{V}=1;\varepsilon=0\) for both encoding choices.

It is important to note here that all of the analysis in this article has focused on _achievable_ distillable entanglement. In the regime where \(I(\rho)\to 0\), we cannot conclusively claim that the actual entanglement content of the heralded and/or distilled state is indeed zero. A complete characterization could be performed by calculating an upper bound to the heralded state's distillable entanglement. However, these quantities are hard to calculate for our generalized analytic state formulations; we leave this problem open for future studies.

In panes (A3) and (B3), we analyze the fidelity of the distilled states. As expected, the fidelity improves under distillation, but only up to a specific value of channel loss, where \(F(\rho_{AB},|\Psi^{+}\rangle)=0.5\).

Figure 7: State quality evaluation after rounds of distillation for heralded swaps with \(P_{d}=10^{-2}\) and \(\mathcal{V}=1\). We plot the (A1,B1) hashing bound rate \(\mathcal{R}(\rho)\), (A2,B2) hashing bound per copy \(I(\rho)\), and (A3,B3) fidelity of the distilled states \(F(\rho,|\Psi^{+}\rangle)\) (dashed lines), and compare them with the pre-distillation (blue solid) and ideally heralded (red dot-dashed) states for the single (top panes; A) and dual rail (bottom panes; B) encodings.
One may solve \(F(\rho_{AB},|\Psi^{+}\rangle)=0.5\) for \(\eta\) to evaluate the limiting range \(\eta_{\lim}\). The reason behind this is straightforward: at \(F(\rho_{AB},|\Psi^{+}\rangle)=0.5\), the initial joint state is the two-qubit maximally mixed state \(\rho_{AB}=\mathbb{I}/4\). This state has no distillable entanglement, i.e., \(E_{D}(\mathbb{I}/4)=0\); consequently, starting with this state, no distillation protocol will improve the state quality. For \(\eta<\eta_{\lim}\), the heralded state has \(F(\rho_{AB},|\Psi^{+}\rangle)>0.5\), meaning the state quality may be improved by distillation. Hence \(I(\rho)\) improves asymptotically (with additional rounds of distillation) to 1 for \(\eta<\eta_{\lim}\). We plot contours of \(\eta_{\lim}\) (labeled in dB) for various values of excess noise and state visibility in Fig. 9. We refer the reader to Appendix D for more details.

## VI Results and conclusions

The general study of the 'swap in the middle' entanglement distribution architecture carried out in this paper is relevant for various physical quantum memories, including color-center, trapped-ion, neutral-atom, and superconducting qubits. Our calculations highlight the key trade-offs vis-à-vis the photonic-qubit encoding that need to be accounted for in such entanglement distribution between two memories connected by an optical channel, mediated by a heralded photonic swap. The evaluation of the distillable entanglement rate, as compared to the entangled-state distribution rate (at a given target fidelity), serves as a useful metric for the fair comparison of the two photonic-qubit encoding choices considered here. There are pros and cons between the two encodings. Dual rail wins in the low-loss regime, but has an inferior high-loss rate scaling. However, dual rail is unaffected by carrier-phase mismatch, and is less affected than single rail by excess noise. Imperfect visibility in the swap appears to adversely affect both encodings similarly. Additionally, we find that using fidelity as the sole distributed-state quality metric can lead to misleading and sub-optimal conclusions. Readers should note that our claims do not preclude other encoding choices, i.e., there may be higher-dimensional encodings that close the 'gap' between the swap performance plots we show for the two encodings and the repeaterless bound, even for the midpoint swap architecture, especially in the low-loss regime, which is relevant for short-range quantum links. We demonstrate the varying effects posed by the two major sources of infidelity in the final entangled state between the memory qubits, namely mode mismatch and excess noise. We hope that the results of our study will serve as illuminating guidelines for experimental demonstrations, and for further research into advanced entanglement distillation protocols tailored to the actual form of the heralded entangled state, as well as future extensions of our work to more complex quantum network topologies.

## VII Acknowledgments

We thank Kevin C. Chen (MIT) for fruitful discussions. We would also like to thank Eneet Kaur (Univ. of Arizona) for suggesting the distillation protocol.
The authors acknowledge the Mega Qubit Router (MQR) project funded under federal support via a subcontract from the University of Arizona Applied Research Corporation (UA-ARC). Additionally, the authors acknowledge the National Science Foundation (NSF) Engineering Research Center for Quantum Networks (CQN), awarded under cooperative agreement number 1941583, for supporting this research. S.G. has outside interests in SensorQ Technologies Incorporated and Guha, LLC. D.E. holds shares in Quantum Network Technologies (QNT), a performer on the MQR award. These interests have been disclosed to UA and MIT respectively, and reviewed in accordance with their conflict of interest policies, with any conflicts of interest to be managed accordingly.

Figure 9: Contour plot for \(\eta_{\rm lim}\) (labels in dB) for varying values of excess noise (\(P_{d}\)) and visibility (\(\mathcal{V}\)) of the dual (solid) and single (dashed) rail swapped states.

## Appendix A Parameter Explanation

Throughout this section we shall use the following common symbols. Wherever the subscript \(k\) is used, \(k=A\) and \(k=B\) refer to Alice's and Bob's parameters respectively. We assume that all similar hardware components are identical, i.e., all detectors are alike, all beamsplitters have the same configuration, etc.

\begin{table} \begin{tabular}{c l} \hline \hline **Symbol** & **Definition** \\ \hline \(\eta_{k}\) & Individual channel transmissivity from source \(k\) to the Entanglement Swapping Station. \\ \hline \(\gamma_{k}\) & Qubit initialization parameter for source \(k\) (only for the single rail case). \\ \hline \(\eta_{d}\) & Detection efficiency of photon detectors. \\ \hline \(P_{d}\) & Total excess noise (photons per qubit slot) in photon detectors. \\ \hline \(\mathcal{V}=|\mathcal{V}|\exp(i\arg\mathcal{V})\) & Mode matching parameter for individual interacting photonic pulses, where \(|\mathcal{V}|\) quantifies the mode overlap and \(\arg\mathcal{V}\) the carrier-phase mismatch. This parameter is squared for the dual rail case. \\ \hline \(F(\rho,\ket{\Psi^{\pm}})\) & Fidelity of density operator \(\rho\) with target Bell states \(\ket{\Psi^{\pm}}\). \\ \hline \(P_{\text{succ.}}\) & Entanglement swapping rate (at a given fidelity target whenever applicable). \\ \hline \(I(\rho)\) & Distillable entanglement of the state \(\rho\). \\ \hline \(\mathcal{R}(\rho)\) & Lower bound to the ultimate entanglement swapping rate: \(\mathcal{R}(\rho)=I(\rho)\times P_{\text{succ.}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Symbols with their corresponding definitions.

## Appendix B Detailed Derivation of Entangled QM State

### Single Rail

_Initial State:_

\[\ket{\psi}_{S}=\sqrt{\gamma}\ket{\mathbf{1}}_{S}\ket{0}_{S,k}+e^{i\theta_{S}}\sqrt{1-\gamma}\ket{\mathbf{0}}_{S}\ket{1}_{S,k}, \tag{10}\]

_State After Loss:_ The general state of the QM and the single-rail-encoded photonic qubit after loss on the photonic mode (subscript \(k\)) is given as

\[\ket{\psi}_{S}=\sqrt{\gamma}\ket{\mathbf{1}}_{S}\ket{0}_{S,k}\otimes\ket{0}_{E}+e^{i\theta_{S}}\sqrt{1-\gamma}\sqrt{\eta}\ket{\mathbf{0}}_{S}\ket{1}_{S,k}\otimes\ket{0}_{E}+e^{i\theta_{S}}\sqrt{1-\gamma}\sqrt{1-\eta}\ket{\mathbf{0}}_{S}\ket{0}_{S,k}\otimes\ket{1}_{E} \tag{11}\]

_State after beamsplitter:_ For the beamsplitter interaction, we have to consider two copies of the entangled state in Eq. (11), with \(S=A,B\) respectively. Considering the standard balanced beamsplitter interaction, the output state has three main components: (1) no photons are detected by the detectors, (2) only one photon is detected, and (3) photon bunching is observed.

\[\begin{split}\ket{\varphi}_{s,0}=U_{BS}(\ket{\psi}_{A}\otimes\ket{\psi}_{B})^{0-\text{photon}}=&(\sqrt{\gamma_{A}}\ket{\mathbf{1}}_{A}\ket{0}_{A,k}\otimes\ket{0}_{E}+e^{i\theta_{A}}\sqrt{1-\gamma_{A}}\sqrt{1-\eta_{A}}\ket{\mathbf{0}}_{A}\ket{0}_{A,k}\otimes\ket{1}_{E})\\ &\otimes(\sqrt{\gamma_{B}}\ket{\mathbf{1}}_{B}\ket{0}_{B,k}\otimes\ket{0}_{E}+e^{i\theta_{B}}\sqrt{1-\gamma_{B}}\sqrt{1-\eta_{B}}\ket{\mathbf{0}}_{B}\ket{0}_{B,k}\otimes\ket{1}_{E})\end{split} \tag{12a}\]

\[\begin{split}\ket{\varphi}_{s,1}=&\,U_{BS}(\ket{\psi}_{A}\otimes\ket{\psi}_{B})^{1-\text{photon}}\\ =&\,\frac{1}{\sqrt{2}}e^{i\theta_{A}}\sqrt{1-\gamma_{A}}\sqrt{\eta_{A}}\times\sqrt{\gamma_{B}}\ket{\mathbf{0}}_{A}\ket{\mathbf{1}}_{B}\left[\ket{1}_{A,k}\ket{0}_{B,k}+\ket{0}_{A,k}\ket{1}_{B,k}\right]\otimes\ket{0,0}_{E}\\ &+\frac{1}{\sqrt{2}}e^{i\theta_{B}}\sqrt{1-\gamma_{B}}\sqrt{\eta_{B}}\times\sqrt{\gamma_{A}}\ket{\mathbf{1}}_{A}\ket{\mathbf{0}}_{B}\left[-\ket{1}_{A,k}\ket{0}_{B,k}+\ket{0}_{A,k}\ket{1}_{B,k}\right]\otimes\ket{0,0}_{E}\\ &+\frac{1}{\sqrt{2}}e^{i(\theta_{A}+\theta_{B})}\sqrt{1-\gamma_{A}}\sqrt{\eta_{A}}\sqrt{1-\gamma_{B}}\sqrt{1-\eta_{B}}\ket{\mathbf{0}}_{A}\ket{\mathbf{0}}_{B}\left[\ket{1}_{A,k}\ket{0}_{B,k}+\ket{0}_{A,k}\ket{1}_{B,k}\right]\otimes\ket{0,1}_{E}\\ &+\frac{1}{\sqrt{2}}e^{i(\theta_{A}+\theta_{B})}\sqrt{1-\gamma_{A}}\sqrt{1-\eta_{A}}\sqrt{1-\gamma_{B}}\sqrt{\eta_{B}}\ket{\mathbf{0}}_{A}\ket{\mathbf{0}}_{B}\left[-\ket{1}_{A,k}\ket{0}_{B,k}+\ket{0}_{A,k}\ket{1}_{B,k}\right]\otimes\ket{1,0}_{E}\end{split} \tag{12b}\]

\[\ket{\varphi}_{s,2}=U_{BS}(\ket{\psi}_{A}\otimes\ket{\psi}_{B})^{2-\text{photon}}=\frac{1}{\sqrt{2}}e^{i(\theta_{A}+\theta_{B})}\sqrt{(1-\gamma_{A})(1-\gamma_{B})}\times\sqrt{\eta_{A}\eta_{B}}\ket{\mathbf{0}}_{A}\ket{\mathbf{0}}_{B}\left[\ket{2}_{A,k}\ket{0}_{B,k}-\ket{0}_{A,k}\ket{2}_{B,k}\right]\otimes\ket{0,0}_{E} \tag{12c}\]

The final state is obtained by projecting the photonic modes \(A,k\) and \(B,k\) to the states \(\ket{0,1}\) or \(\ket{1,0}\). We then trace out the environment modes to obtain the final spin-spin state,

\[\rho_{\text{single}}=(1-P_{d})^{2}\operatorname{Tr}_{E}(\ket{\varphi}\!\!\bra{\varphi}_{s,1})+(1-P_{d})P_{d}\operatorname{Tr}_{E}(\ket{\varphi}\!\!\bra{\varphi}_{s,0}) \tag{12d}\]

### Dual Rail

_Initial State:_

\[\ket{\psi}_{S}=e^{i\theta_{S}}\left[\frac{1}{\sqrt{2}}\ket{\mathbf{1}}_{S}\ket{\vec{0}}_{S,k}+\frac{1}{\sqrt{2}}\ket{\mathbf{0}}_{S}\ket{\vec{1}}_{S,k}\right]=e^{i\theta_{S}}\left[\frac{1}{\sqrt{2}}\ket{\mathbf{1}}_{S}\ket{1,0}_{S,k}+\frac{1}{\sqrt{2}}\ket{\mathbf{0}}_{S}\ket{0,1}_{S,k}\right], \tag{13}\]

_State after loss:_

\[\begin{split}\ket{\psi}_{S}=&\,e^{i\theta_{S}}\frac{1}{\sqrt{2}}\sqrt{\eta}\ket{\mathbf{1}}_{S}\ket{1,0}_{S,k}\ket{0,0}_{E}+e^{i\theta_{S}}\frac{1}{\sqrt{2}}\sqrt{\eta}\ket{\mathbf{0}}_{S}\ket{0,1}_{S,k}\ket{0,0}_{E}\\ &+e^{i\theta_{S}}\frac{1}{\sqrt{2}}\sqrt{1-\eta}\ket{\mathbf{1}}_{S}\ket{0,0}_{S,k}\ket{1,0}_{E}+e^{i\theta_{S}}\frac{1}{\sqrt{2}}\sqrt{1-\eta}\ket{\mathbf{0}}_{S}\ket{0,0}_{S,k}\ket{0,1}_{E},\end{split} \tag{14}\]

_State after beamsplitter:_ For the beamsplitter interaction, we have to consider two copies of the entangled state in Eq. (14). Considering two standard balanced beamsplitters, the output state has four main components: (1) no photons are detected by the detectors, (2) only one photon is detected in any one of the detectors, (3) two detectors register a single photon each, and (4) photon bunching is observed.
\[\begin{split}\ket{\varphi}_{d,0}=&\,e^{i\theta_{A}}\left(\frac{1}{\sqrt{2}}\sqrt{1-\eta_{A}}\ket{\mathbf{1}}_{A}\ket{0,0}_{A,k}\ket{1,0}_{E}+\frac{1}{\sqrt{2}}\sqrt{1-\eta_{A}}\ket{\mathbf{0}}_{A}\ket{0,0}_{A,k}\ket{0,1}_{E}\right)\\ &\otimes e^{i\theta_{B}}\left(\frac{1}{\sqrt{2}}\sqrt{1-\eta_{B}}\ket{\mathbf{1}}_{B}\ket{0,0}_{B,k}\ket{1,0}_{E}+\frac{1}{\sqrt{2}}\sqrt{1-\eta_{B}}\ket{\mathbf{0}}_{B}\ket{0,0}_{B,k}\ket{0,1}_{E}\right)\end{split} \tag{15a}\]

\[\begin{split}\ket{\varphi}_{d,1}=\frac{e^{i(\theta_{A}+\theta_{B})}}{2\sqrt{2}}\bigg[&\sqrt{\eta_{A}(1-\eta_{B})}\ket{\mathbf{1}}_{A}\ket{\mathbf{1}}_{B}\left[\ket{1,0}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{1,0}_{B,k}\right]\ket{0,0,1,0}_{E}\\ &+\sqrt{\eta_{A}(1-\eta_{B})}\ket{\mathbf{1}}_{A}\ket{\mathbf{0}}_{B}\left[\ket{1,0}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{1,0}_{B,k}\right]\ket{0,0,0,1}_{E}\\ &+\sqrt{\eta_{A}(1-\eta_{B})}\ket{\mathbf{0}}_{A}\ket{\mathbf{1}}_{B}\left[\ket{0,1}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{0,1}_{B,k}\right]\ket{0,0,1,0}_{E}\\ &+\sqrt{\eta_{A}(1-\eta_{B})}\ket{\mathbf{0}}_{A}\ket{\mathbf{0}}_{B}\left[\ket{0,1}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{0,1}_{B,k}\right]\ket{0,0,0,1}_{E}\\ &+\sqrt{\eta_{B}(1-\eta_{A})}\ket{\mathbf{1}}_{A}\ket{\mathbf{1}}_{B}\left[-\ket{1,0}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{1,0}_{B,k}\right]\ket{1,0,0,0}_{E}\\ &+\sqrt{\eta_{B}(1-\eta_{A})}\ket{\mathbf{0}}_{A}\ket{\mathbf{1}}_{B}\left[-\ket{1,0}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{1,0}_{B,k}\right]\ket{0,1,0,0}_{E}\\ &+\sqrt{\eta_{B}(1-\eta_{A})}\ket{\mathbf{1}}_{A}\ket{\mathbf{0}}_{B}\left[-\ket{0,1}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{0,1}_{B,k}\right]\ket{1,0,0,0}_{E}\\ &+\sqrt{\eta_{B}(1-\eta_{A})}\ket{\mathbf{0}}_{A}\ket{\mathbf{0}}_{B}\left[-\ket{0,1}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{0,1}_{B,k}\right]\ket{0,1,0,0}_{E}\bigg]\end{split} \tag{15b}\]

\[\begin{split}\ket{\varphi}_{d,2}=\frac{e^{i(\theta_{A}+\theta_{B})}}{4}\sqrt{\eta_{A}\eta_{B}}\bigg[&\ket{\mathbf{1}}_{A}\ket{\mathbf{0}}_{B}\otimes\left[-\ket{1,0}_{A,k}\ket{0,1}_{B,k}+\ket{1,1}_{A,k}\ket{0,0}_{B,k}-\ket{0,0}_{A,k}\ket{1,1}_{B,k}+\ket{0,1}_{A,k}\ket{1,0}_{B,k}\right]\\ &+\ket{\mathbf{0}}_{A}\ket{\mathbf{1}}_{B}\otimes\left[\ket{1,0}_{A,k}\ket{0,1}_{B,k}+\ket{1,1}_{A,k}\ket{0,0}_{B,k}-\ket{0,0}_{A,k}\ket{1,1}_{B,k}-\ket{0,1}_{A,k}\ket{1,0}_{B,k}\right]\bigg]\otimes\ket{0,0,0,0}_{E}\end{split} \tag{15c}\]

\[\begin{split}\ket{\varphi}_{d,3}=\frac{e^{i(\theta_{A}+\theta_{B})}}{2}\sqrt{\eta_{A}\eta_{B}}\bigg[&\ket{\mathbf{1}}_{A}\ket{\mathbf{1}}_{B}\left[\ket{2,0}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{2,0}_{B,k}\right]\\ &+\ket{\mathbf{0}}_{A}\ket{\mathbf{0}}_{B}\left[\ket{0,2}_{A,k}\ket{0,0}_{B,k}+\ket{0,0}_{A,k}\ket{0,2}_{B,k}\right]\bigg]\otimes\ket{0,0,0,0}_{E}\end{split} \tag{15d}\]

The final state is obtained by projecting the photonic modes \(A,k\) and \(B,k\) to the states \(\ket{0,0,1,1}\), \(\ket{0,1,1,0}\), \(\ket{1,0,0,1}\) or \(\ket{1,1,0,0}\). We then trace out the environment modes to obtain the final spin-spin state,

\[\rho_{\text{dual}}=(1-P_{d})^{4}\operatorname{Tr}_{E}(\ket{\varphi}\!\!\bra{\varphi}_{d,2})+(1-P_{d})^{3}P_{d}\operatorname{Tr}_{E}(\ket{\varphi}\!\!\bra{\varphi}_{d,1})+(1-P_{d})^{2}P_{d}^{2}\operatorname{Tr}_{E}(\ket{\varphi}\!\!\bra{\varphi}_{d,0}) \tag{16}\]

## Appendix C General Description of Final Spin-Spin Entangled State

The generalized state description for both the single rail and dual rail encoded photonic entanglement swaps is covered in this section.
We state the density matrices of the final quantum memory (QM) entangled state with the two-qubit basis ordering \(\{\ket{\mathbf{1},\mathbf{1}},\ket{\mathbf{1},\mathbf{0}},\ket{\mathbf{0},\mathbf{1}},\ket{\mathbf{0},\mathbf{0}}\}\). For the single rail case, we start with the initial QM-photon state \(\sqrt{\gamma}\ket{\mathbf{1}}\ket{0}+\sqrt{1-\gamma}\ket{\mathbf{0}}\ket{1}\), whereas for the dual rail swap, we consider the initial state \((\ket{\mathbf{1}}\ket{1,0}+\ket{\mathbf{0}}\ket{0,1})/\sqrt{2}\).

### Single rail photonic qubit based swap

For the single rail case, the swapping circuit comprises a single balanced beamsplitter and a pair of photon-number-resolving detectors. A 'success' is defined as a \([0,1]\) or \([1,0]\) click pattern, and the final entangled state's density operator in the chosen basis is then given as

\[\rho_{\text{single}}=\frac{(1-P_{d})^{2}}{\mathbf{N}_{s}}\begin{pmatrix}0&0&0&0\\ 0&c_{1}^{(0)}&c_{3}^{(0)}&0\\ 0&c_{3}^{(0)*}&c_{2}^{(0)}&0\\ 0&0&0&c_{4}^{(0)}\end{pmatrix}+\frac{P_{d}(1-P_{d})}{\mathbf{N}_{s}}\begin{pmatrix}c_{5}^{(1)}&0&0&0\\ 0&c_{1}^{(1)}&0&0\\ 0&0&c_{2}^{(1)}&0\\ 0&0&0&c_{4}^{(1)}\end{pmatrix} \tag{17}\]

with \(\mathbf{N}_{s}=(1-P_{d})^{2}\left[c_{1}^{(0)}+c_{2}^{(0)}+c_{4}^{(0)}\right]+P_{d}(1-P_{d})\left[c_{1}^{(1)}+c_{2}^{(1)}+c_{4}^{(1)}+c_{5}^{(1)}\right]\), where

\[\begin{split} c_{1}^{(0)}&=\frac{1}{2}\,\gamma_{A}\left(1-\gamma_{B}\right)\eta_{B}\eta_{d},\quad c_{2}^{(0)}=\frac{1}{2}\left(1-\gamma_{A}\right)\gamma_{B}\,\eta_{A}\eta_{d}\\ c_{3}^{(0)}&=\frac{1}{2}\,(-1)^{m}\eta_{d}\sqrt{\gamma_{A}(1-\gamma_{A})}\sqrt{\gamma_{B}(1-\gamma_{B})}\sqrt{\eta_{A}\eta_{B}}\times\mathcal{V}\\ c_{4}^{(0)}&=\frac{1}{2}\eta_{d}(1-\gamma_{A})(1-\gamma_{B})\left(\eta_{A}+\eta_{B}-2\eta_{A}\eta_{B}\eta_{d}\right)\end{split} \tag{18}\]

where \(m=\{0,1\}\) is a single parity bit determined by the click pattern (\(m=0\) for \([0,1]\); \(m=1\) for \([1,0]\)), and the visibility \(\mathcal{V}=|\mathcal{V}|\exp(i(\theta_{A}-\theta_{B}))\), where \(|\mathcal{V}|\) is the mode overlap and \(\theta_{A},\theta_{B}\) are the individual carrier phases for each source. The additional terms are expressible as

\[\begin{split} c_{1}^{(1)}&=\gamma_{A}(1-\gamma_{B})(1-\eta_{B}\eta_{d});\quad c_{2}^{(1)}=(1-\gamma_{A})\gamma_{B}(1-\eta_{A}\eta_{d});\\ c_{4}^{(1)}&=(1-\gamma_{A})(1-\gamma_{B})(1-\eta_{A}\eta_{d})(1-\eta_{B}\eta_{d});\\ c_{5}^{(1)}&=\gamma_{A}\gamma_{B}.\end{split} \tag{19}\]

We express \(\mathcal{V}=|\mathcal{V}|\exp(i\theta^{\prime})\), where \(\theta^{\prime}=\theta_{A}-\theta_{B}\) with \(\theta_{A},\theta_{B}\sim\mathcal{N}(0,\varepsilon)\); this implies \(\theta^{\prime}\sim\mathcal{N}(0,2\varepsilon)\). To get an accurate description of the average state under the effect of phase mismatch, we have to consider the ensemble-averaged state. Let us re-express \(\rho_{\text{single}}\) from Eq.
(17) as \(\rho_{\text{single}}(\theta^{\prime})\); then we may calculate the ensemble-averaged state as

\[\rho_{\text{single}}^{\text{avg.}}=\int_{-\infty}^{\infty}\rho_{\text{single}}(\theta)\cdot P(\theta)d\theta \tag{20}\]

Since the only density operator term with \(\theta^{\prime}\) dependence is \(c_{3}^{(0)}\), the ensemble-averaged state \(\rho_{\text{single}}^{\text{avg.}}\) has the same form as \(\rho_{\text{single}}\) with

\[c_{3}^{(0)}=\frac{1}{2}\,(-1)^{m}\eta_{d}\sqrt{\gamma_{A}(1-\gamma_{A})}\sqrt{\gamma_{B}(1-\gamma_{B})}\sqrt{\eta_{A}\eta_{B}}\times|\mathcal{V}|\exp(-\varepsilon) \tag{21}\]

where we have applied

\[\int_{-\infty}^{\infty}e^{i\theta}\cdot\frac{\exp(-\theta^{2}/2\sigma^{2})}{\sigma\sqrt{2\pi}}d\theta=\exp(-\sigma^{2}/2) \tag{22}\]

to resolve the integral in Eq. (20).

### Dual rail photonic qubit based swap

For the dual rail case, the swapping circuit depends on the choice of the two orthogonal modes. For time-bin encoding, the swap comprises a single balanced beamsplitter and two detectors (with 4 detection slots). For polarization encoding, we require a single balanced beamsplitter, followed by two polarizing beamsplitters and four detectors. For spectral encoding, the polarizing beamsplitter is replaced by a diffractive or spectral de-multiplexing element. For spatial encoding, two balanced beamsplitters and four detectors are required. There are four click patterns (of 8) that herald a successful swap. The final entangled state's density operator in the chosen basis is then given as

\[\rho_{\text{dual}}=\frac{(1-P_{d})^{4}}{\mathbf{N}_{d}}\begin{pmatrix}0&0&0&0\\ 0&c_{1}^{(0)}&c_{3}^{(0)}&0\\ 0&c_{3}^{(0)*}&c_{2}^{(0)}&0\\ 0&0&0&0\end{pmatrix}+\frac{P_{d}(1-P_{d})^{2}}{\mathbf{N}_{d}}\begin{pmatrix}c_{1}^{(1)}&0&0&0\\ 0&c_{1}^{(1)}&0&0\\ 0&0&c_{1}^{(1)}&0\\ 0&0&0&c_{1}^{(1)}\end{pmatrix} \tag{23}\]

with \(\mathbf{N}_{d}=2(1-P_{d})^{4}\times c_{1}^{(0)}+4P_{d}(1-P_{d})^{2}c_{1}^{(1)}\), where

\[c_{1}^{(0)}=c_{2}^{(0)}=\frac{1}{4}\eta_{A}\eta_{B}\eta_{d}^{2};\quad c_{3}^{(0)}=\frac{1}{4}(-1)^{m}\eta_{A}\eta_{B}\eta_{d}^{2}\times|\mathcal{V}|^{2}, \tag{24}\]

where \(m=\{0,1\}\) is a single parity bit determined by the click pattern (\(m=0\) for \([0,1,1,0]\) or \([1,0,0,1]\); \(m=1\) for \([1,1,0,0]\) or \([0,0,1,1]\)), and

\[c_{1}^{(1)}=\frac{1}{2}(1-P_{d})\eta_{d}\,(\eta_{A}+\eta_{B}-2\eta_{A}\eta_{B}\eta_{d})+P_{d}(1-\eta_{A}\eta_{d})(1-\eta_{B}\eta_{d}). \tag{25}\]

### Evaluation of State Fidelity and Hashing Bound

Let us consider the generalized state description for both single and dual rail cases, which is a QM joint density matrix of the form

\[\sigma_{AB}=\frac{1}{\mathbf{N}}\begin{pmatrix}\mathbf{a}&0&0&0\\ 0&\mathbf{b}&\mathbf{c}&0\\ 0&\mathbf{c}^{*}&\mathbf{d}&0\\ 0&0&0&\mathbf{e}\end{pmatrix} \tag{26}\]

For this subsection alone we consider \(\mathbf{a},\mathbf{b},\mathbf{d},\mathbf{e}\in\mathbb{R}\) and \(\mathbf{c}\in\mathbb{C}\) with \(\mathbf{N}=\mathbf{a}+\mathbf{b}+\mathbf{d}+\mathbf{e}\).
The partial trace \(\operatorname{Tr}_{A}(\sigma_{AB})\) yields the density matrix

\[\sigma_{B}=\frac{1}{\mathbf{N}}\begin{pmatrix}\mathbf{a}+\mathbf{b}&0\\ 0&\mathbf{d}+\mathbf{e}\end{pmatrix} \tag{27}\]

The generalized expression for the state fidelity is given as

\[F(\sigma_{AB},\ket{\Psi^{\pm}})=\frac{\mathbf{b}+\mathbf{d}\pm 2|\mathbf{c}|}{2\mathbf{N}} \tag{28}\]

For the distillable entanglement, we calculate the eigenvalues of \(\sigma_{AB}\) and \(\sigma_{B}\),

\[\operatorname{eig}[\sigma_{AB}]=\left\{\frac{\mathbf{a}}{\mathbf{N}},\frac{\mathbf{e}}{\mathbf{N}},\frac{\mathbf{b}+\mathbf{d}\pm\sqrt{(\mathbf{b}-\mathbf{d})^{2}+4|\mathbf{c}|^{2}}}{2\mathbf{N}}\right\}; \tag{29a}\]
\[\operatorname{eig}[\sigma_{B}]=\left\{\frac{\mathbf{a}+\mathbf{b}}{\mathbf{N}},\frac{\mathbf{d}+\mathbf{e}}{\mathbf{N}}\right\} \tag{29b}\]

## Appendix D Analysis of Distillation Circuit

### Iterative Map for Analytical Proofs

We consider the use of distillation circuits for the improvement of the heralded state quality and entanglement distribution utility in the presence of network non-idealities. Specifically, we use the protocol proposed in [18], illustrated by Fig. 10(a). Starting with the general states (as shown in Appendix C), applying the distillation circuit is cumbersome to analyze analytically. However, we may consider the specific non-ideal case of the dual rail swap with no excess noise, \(P_{d}=0\), but \(\mathcal{V}<1\), and show the utility of distillation. As analyzed in [18], we start with a mixed entangled state which is Bell-diagonal and of the form

\[\hat{\rho}=A\,|\Psi^{+}\rangle\!\langle\Psi^{+}|+B\,|\Psi^{-}\rangle\!\langle\Psi^{-}|+C\,|\Phi^{+}\rangle\!\langle\Phi^{+}|+D\,|\Phi^{-}\rangle\!\langle\Phi^{-}|\,. \tag{30}\]

Such a state may be compactly represented as a vector \([A,B,C,D]\). The action of the Deutsch _et al._ distillation circuit also yields a Bell-diagonal state, represented in the vector format as \([A^{\prime},B^{\prime},C^{\prime},D^{\prime}]\), where the following transformation rules hold:

\[\begin{split} A^{\prime}&=(A^{2}+D^{2})/N\\ B^{\prime}&=2AD/N\\ C^{\prime}&=(B^{2}+C^{2})/N\\ D^{\prime}&=2BC/N,\end{split} \tag{31}\]

with \(N=(A+D)^{2}+(B+C)^{2}\).

Figure 10: Common 2-to-1 circuits used for entanglement distillation: (a) proposed by Deutsch _et al._ in [18] and (b) proposed by Bennett _et al._ in [19].

Figure 11: Various physical implementations of linear optical entanglement swaps based on the encoding choice. A single balanced beamsplitter and a pair of detectors is sufficient for the single rail swap (S1). A dual-rail spatial encoding requires two beamsplitters and four detectors (S2). A single beamsplitter followed by a diffractive element is sufficient for spectrally encoded qubits (S3), whereas for polarization encodings a pair of polarizing beamsplitters is necessary (S4), each employing four detectors. For temporally encoded qubits, a single beamsplitter and a pair of detectors are sufficient (S5).

Returning to our special case (\(P_{d}=0;\mathcal{V}<1\)), with \(\mathcal{V}^{\prime}=|\mathcal{V}|^{2}\) we may express the dual-rail heralded state as \(\hat{\rho}_{d}=((1+\mathcal{V}^{\prime})\,|\Psi^{+}\rangle\!\langle\Psi^{+}|+(1-\mathcal{V}^{\prime})\,|\Psi^{-}\rangle\!\langle\Psi^{-}|)/2\).
Hence, as per our vector notation,

\[\text{Original state:}\begin{cases}A=(1+\mathcal{V}^{\prime})/2\\ B=(1-\mathcal{V}^{\prime})/2\\ C=D=0.\end{cases} \tag{32}\]

A single round of distillation yields the state (in vector notation)

\[\text{One round:}\ \begin{cases}A^{\prime}=A^{2}/(A^{2}+B^{2})\\ C^{\prime}=B^{2}/(A^{2}+B^{2})\\ B^{\prime}=D^{\prime}=0.\end{cases} \tag{33}\]

Since \(B^{\prime}=D^{\prime}=0\), for subsequent rounds these components _always_ remain zero as per the mapping rule. Consequently, after \(k\) rounds of distillation, the \(A\) component (say represented by \(A^{(k)}\)) is given as \(A^{(k)}:=A^{2^{k}}/(A^{2^{k}}+B^{2^{k}})\). This dictates the fidelity of the distilled state w.r.t. the target \(|\Psi^{+}\rangle\) state. We may rearrange terms to represent \(A^{(k)}\) as

\[A^{(k)}=\frac{(1+\mathcal{V}^{\prime})^{K}}{(1+\mathcal{V}^{\prime})^{K}+(1-\mathcal{V}^{\prime})^{K}} \tag{34a}\]
\[=\left(1+\left(\frac{1-\mathcal{V}^{\prime}}{1+\mathcal{V}^{\prime}}\right)^{K}\right)^{-1}, \tag{34b}\]

where \(K=2^{k}\). Since \(\mathcal{V}^{\prime}\in[0,1]\), \((1-\mathcal{V}^{\prime})/(1+\mathcal{V}^{\prime})<1\), which implies that the denominator of Eq. (34b) decreases as \(k\) increases. Consequently \(A^{(k+1)}\geq A^{(k)}\), with equality holding for \(\mathcal{V}^{\prime}=1\); this implies that the fidelity of the state increases as \(k\) (i.e., the number of rounds of distillation) increases. One may equivalently analyze the hashing bound for this hierarchy of states to prove the viability of using the distillation circuit.

In contrast, applying the alternate distillation circuit (see Fig. 10(b)) introduced by Bennett _et al._ in [19] shows no advantage. Starting with a pre-distillation mixed state which is Bell-diagonal and representable in the vector format \([A,B,C,D]\), the state after distillation is also Bell-diagonal with the map

\[\begin{split} A^{\prime}&=(A^{2}+B^{2})/2\\ B^{\prime}&=AB\\ C^{\prime}&=(C^{2}+D^{2})/2\\ D^{\prime}&=CD\end{split} \tag{35}\]

Starting out with a heralded state with no excess noise as before,

\[\text{Original state:}\begin{cases}A=(1+\mathcal{V}^{2})/2\\ B=(1-\mathcal{V}^{2})/2\\ C=D=0,\end{cases} \tag{36}\]

which gives

\[\text{One round:}\begin{cases}A^{\prime}=(1+\mathcal{V}^{4})/2\\ B^{\prime}=(1-\mathcal{V}^{4})/2\\ C^{\prime}=D^{\prime}=0\end{cases} \tag{37}\]

Similarly, after \(k\) rounds of distillation the fidelity is given by \(A^{(k)}=(1+\mathcal{V}^{2^{k+1}})/2\). Since \(\mathcal{V}\in[0,1]\), this number decreases with \(k\), which means that additional rounds of this protocol are not beneficial for improving the state quality.

### Max Network Range

In the main text we have highlighted how the max range (i.e., the value of \(\eta\) at which \(I(\rho)\to 0\)) increases and saturates to a limiting value, which we labeled \(\eta_{\rm lim}\), i.e., the channel loss where \(F(\rho_{AB},|\Psi^{+}\rangle)=0.5\). The straightforward reason is that at \(F(\rho_{AB},|\Psi^{+}\rangle)=0.5\) the initial joint state is the two-qubit maximally mixed state \(\rho_{AB}=\mathbb{I}/4\) with no distillable entanglement, i.e., \(E_{D}(\mathbb{I}/4)=0\); hence no amount of distillation would improve the state quality. Hence for all \(\eta<\eta_{\rm lim}\), the heralded state has \(F(\rho_{AB},|\Psi^{+}\rangle)>0.5\), meaning the state quality may be improved by distillation, which is why \(I(\rho)\) improves asymptotically to \(1\). Since a complete analytical treatment for general \(P_{d}\) becomes intractable, we show some numerical results in Fig. D.2 that highlight the result.
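Before turning to those numerics, the iterative maps above are easy to check directly. The following minimal sketch (not part of the original analysis; the value of \(\mathcal{V}^{\prime}\) and all names are illustrative) iterates the Deutsch _et al._ map of Eq. (31) on the heralded dual-rail state of Eq. (32) and compares the result against the closed form of Eq. (34b); it also applies one round of the Bennett _et al._ map of Eq. (35), renormalized to a state, to reproduce the no-advantage observation.

```python
import numpy as np

def deutsch_round(v):
    """One round of the Deutsch et al. 2-to-1 map on a Bell-diagonal state
    represented as v = [A, B, C, D]; N is the success probability."""
    A, B, C, D = v
    N = (A + D) ** 2 + (B + C) ** 2
    return np.array([A ** 2 + D ** 2, 2 * A * D, B ** 2 + C ** 2, 2 * B * C]) / N

def bennett_round(v):
    """One round of the Bennett et al. map, renormalized to a state."""
    A, B, C, D = v
    out = np.array([(A ** 2 + B ** 2) / 2, A * B, (C ** 2 + D ** 2) / 2, C * D])
    return out / out.sum()

Vp = 0.9                                       # illustrative value of V' = |V|^2
v0 = np.array([(1 + Vp) / 2, (1 - Vp) / 2, 0.0, 0.0])  # heralded dual-rail state

v = v0.copy()
for k in range(1, 6):
    v = deutsch_round(v)
    closed_form = 1.0 / (1.0 + ((1 - Vp) / (1 + Vp)) ** (2 ** k))
    print(f"k={k}: iterated A = {v[0]:.6f}, closed form = {closed_form:.6f}")

# One Bennett round lowers the |Psi+> weight: (1 + Vp**2)/2 < (1 + Vp)/2.
print("one Bennett round:", bennett_round(v0)[0], "< initial", v0[0])
```

Running this shows the iterated \(A\) component matching the closed form to machine precision while growing monotonically toward 1, and the Bennett round lowering the \(|\Psi^{+}\rangle\) weight, as derived above.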
We solve \(F(\rho_{AB},|\Psi^{+}\rangle)=0.5\) to extract \(\eta_{\rm lim}\) for a range of \(P_{d}\in[10^{-4},10^{-1}]\) at \(\mathcal{V}=1\), and plot the result in Fig. D.2(a). Values of \(\eta_{\rm lim}\) for \(P_{d}=\{10^{-4},10^{-3},10^{-2}\}\) (corresponding symbols) are marked for the single rail (blue text) and dual rail (red text) cases. We then look at the max range for the corresponding values of \(P_{d}\) vs. rounds of distillation for the single rail (Fig. D.2(b1)) and dual rail (Fig. D.2(b2)) swaps. The calculated and numerically extracted \(\eta_{\rm lim}\) values (marked by lines) show close agreement.
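As an end-to-end illustration of how such numbers can be reproduced, the sketch below builds the single-rail state of Eqs. (17)-(19) for symmetric sources (\(\gamma_{A}=\gamma_{B}\), \(\eta_{A}=\eta_{B}\)) and the \(m=0\) click pattern, evaluates the fidelity of Eq. (28), evaluates the hashing bound as the coherent information \(S(\rho_{B})-S(\rho_{AB})\) from the eigenvalues of Eqs. (29a)-(29b) (an assumption on our part about the precise estimator used for \(I(\rho)\)), and bisects \(F(\eta)=0.5\) for \(\eta_{\rm lim}\). All function names are illustrative; this is not the authors' released code.

```python
import numpy as np

def single_rail_abcde(eta, gamma=0.5, eta_d=1.0, Pd=1e-3, V=1.0):
    """Unnormalized (a, b, c, d, e) entries of the heralded single-rail
    spin-spin state, assuming symmetric sources and the m = 0 click pattern."""
    c10 = 0.5 * gamma * (1 - gamma) * eta * eta_d
    c30 = 0.5 * eta_d * gamma * (1 - gamma) * eta * V
    c40 = 0.5 * eta_d * (1 - gamma) ** 2 * (2 * eta - 2 * eta ** 2 * eta_d)
    c11 = gamma * (1 - gamma) * (1 - eta * eta_d)
    c41 = (1 - gamma) ** 2 * (1 - eta * eta_d) ** 2
    c51 = gamma ** 2
    w0, w1 = (1 - Pd) ** 2, Pd * (1 - Pd)
    a = w1 * c51
    b = d = w0 * c10 + w1 * c11      # c2 terms equal c1 terms by symmetry
    c = w0 * c30
    e = w0 * c40 + w1 * c41
    return a, b, c, d, e

def fidelity_and_hashing(a, b, c, d, e):
    """F(rho, |Psi+>) and the coherent information S(rho_B) - S(rho_AB)."""
    N = a + b + d + e
    F = (b + d + 2 * abs(c)) / (2 * N)
    root = np.sqrt((b - d) ** 2 + 4 * abs(c) ** 2)
    lam_AB = [a / N, e / N, (b + d + root) / (2 * N), (b + d - root) / (2 * N)]
    lam_B = [(a + b) / N, (d + e) / N]
    S = lambda lam: -sum(p * np.log2(p) for p in lam if p > 1e-15)
    return F, S(lam_B) - S(lam_AB)

# Bisect F(eta) = 0.5 for eta_lim (F grows monotonically with eta here).
lo, hi = 1e-9, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    F, _ = fidelity_and_hashing(*single_rail_abcde(mid))
    if F > 0.5:
        hi = mid
    else:
        lo = mid
print(f"eta_lim ~ {hi:.3e}  ({-10 * np.log10(hi):.1f} dB of channel loss)")
```

For \(P_{d}=10^{-3}\), \(\gamma=0.5\), \(\eta_{d}=1\) and \(\mathcal{V}=1\), this yields an \(\eta_{\rm lim}\) of a few percent (roughly 13 dB), consistent in order of magnitude with the single-rail behavior discussed above.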
2304.07692
Topological properties of some classes of submodules
We study the topology of a class of proper submodules and some of its distinguished subclasses and call them structure spaces. We give several criteria for the quasi-compactness of these structure spaces. We study $T_0$ and $T_1$ separation properties and characterize structure spaces in which nonempty irreducible closed subsets have unique generic points. We provide a sufficient condition for the connectedness of structure spaces. We prove that the structure spaces of proper submodules are spectral, and moreover, we characterize the spectral structure spaces of Noetherian modules. Finally, we discuss continuous functions between these spaces.
Amartya Goswami
2023-04-16T04:49:23Z
http://arxiv.org/abs/2304.07692v1
# Topological properties of some classes of submodules

###### Abstract.

We study the topology of a class of proper submodules and some of its distinguished subclasses and call them structure spaces. We give several criteria for the quasi-compactness of these structure spaces. We study \(T_{0}\) and \(T_{1}\) separation properties and characterize structure spaces in which nonempty irreducible closed subsets have unique generic points. We provide a sufficient condition for the connectedness of structure spaces. We prove that the structure spaces of proper submodules are spectral, and moreover, we characterize the spectral structure spaces of Noetherian modules. Finally, we discuss continuous functions between these spaces.

Key words and phrases: module, extraordinary submodule, spectral space, quasi-compact

2020 Mathematics Subject Classification: 54F65

## 1. Introduction and Preliminaries

Since the introduction of prime submodules in [1], much attention has been drawn to studying this class (see [1, 2, 13]). Following the existence of various classes of ideals of rings, it is natural to introduce similar classes of submodules, and hence we now have notions like maximal, prime, semiprime, primary, strongly irreducible, cyclic, and finitely generated submodules. As far as studying topologies is concerned, the main choice of class is prime submodules or its subclass of minimal prime submodules. However, it has been immediately observed that, unlike for prime ideals of a (commutative) ring, one cannot endow the class of prime submodules of a module with a Zariski topology. To make closed sets behave well with respect to finite unions, we need an additional condition on the module, and hence what we obtain is called a top module (see [1]). An extensive study of various topologies on prime spectra (or their variations like minimal prime, coprime, fully prime) of submodules can be found in [1, 10].

**Definition 1.1**.: A proper submodule \(N\) of a module \(M\) is called:

* _semiprime_ [10] if \(N\) is an intersection of prime submodules of \(M\);
* _extraordinary_ [12] if \(L\) and \(K\) are semiprime submodules of \(M\) with \(L\cap K\subseteq N\), then either \(L\subseteq N\) or \(K\subseteq N\);
* _primary_ [13] if \(rm\in N\) with \(r\in R\) and \(m\in M\) imply that either \(r^{n}\in(N:M)\) for some \(n\in\mathds{N}\), or \(m\in N\);
* _radical_ if \[N=\sqrt{N}=\bigcap\{P\mid P\text{ is a prime submodule of }M\text{ and }N\subseteq P\};\]
* _strongly irreducible_ [14] if \(L\cap K\subseteq N\) implies either \(L\subseteq N\) or \(K\subseteq N\), for all submodules \(L,K\) of \(M\);
* _irreducible_ [15] if \(N\neq N_{1}\cap N_{2}\) for two submodules \(N_{1}\) and \(N_{2}\) of \(M\) with \(N_{i}\neq N\);
* _completely irreducible_ [16] if \(N\neq\bigcap_{\lambda\in\Lambda}N_{\lambda}\) for submodules \(\{N_{\lambda}\}_{\lambda\in\Lambda}\) of \(M\) with \(N_{\lambda}\neq N\) for all \(\lambda\in\Lambda\);
* _minimal_ [17] if \(N\neq 0\) and \(N\) contains no other nonzero submodules of \(M\);
* _minimal prime_ [18] if \(N\) is both a minimal and a prime submodule of \(M\);
* _cyclic_ if \(N\) is generated by a single element of \(M\);
* _finitely generated_ if \(N\) is generated by a finite subset of \(M\).

Before we discuss topologies on distinguished classes of submodules in the next section, we first need some notation. By \(\mathcal{S}(M)\), we shall denote the set of all submodules of a module \(M\).
To denote the class of proper submodules of \(M\) or any one of its distinguished subclasses listed in Definition 1.1, we shall use the notation \(\mathcal{D}(M)\). It is clear that \(M\notin\mathcal{D}(M)\) for all such classes of submodules.

## 2. Structure spaces

Our goal is to introduce a closure operation \(\mathcal{C}\) on a distinguished class of submodules of \(M\) in such a way that the closure operation \(\mathcal{C}\) induces a topology on that class of submodules. The closure operation \(\mathcal{C}\) is defined on \(\mathcal{D}(M)\) as follows:

\[\mathcal{C}(N)=\{L\in\mathcal{D}(M)\mid N\subseteq L\}\qquad(N\in\mathcal{S}(M)). \tag{2.1}\]

In the following lemma, we gather some elementary properties of \(\mathcal{C}\) needed in the sequel. Since the proofs of these results are straightforward, we skip them.

**Lemma 2.1**.: _Let \(M\) be a module. A closure operation \(\mathcal{C}\) on a \(\mathcal{D}(M)\) has the following properties._

1. _For any two_ \(N,N^{\prime}\in\mathcal{D}(M)\)_,_ \(N\subseteq N^{\prime}\) _implies that_ \(\mathcal{C}(N)\supseteq\mathcal{C}(N^{\prime})\)_;_
2. \(\mathcal{C}(0)=\mathcal{D}(M)\) _and_ \(\mathcal{C}(M)=\emptyset\)_;_
3. \(\bigcap_{\lambda\in\Lambda}\mathcal{C}(N_{\lambda})=\mathcal{C}\left(\sum_{\lambda\in\Lambda}N_{\lambda}\right)\) _for all_ \(N_{\lambda}\in\mathcal{S}(M)\) _and for all_ \(\lambda\in\Lambda\)_;_
4. \(\mathcal{C}(N)\cup\mathcal{C}(N^{\prime})\subseteq\mathcal{C}(N\cap N^{\prime})\) _for all_ \(N\)_,_ \(N^{\prime}\in\mathcal{S}(M)\)_;_
5. \(\mathcal{C}(N)\supseteq\mathcal{C}(\sqrt{N})\) _for all_ \(N\in\mathcal{S}(M)\)_._

In general, the sets \(\{\mathcal{C}(N)\}_{N\in\mathcal{S}(M)}\) are not closed under finite unions, and hence we may not have equality in Lemma 2.1(4). However, if for any two \(N,N^{\prime}\in\mathcal{S}(M)\), there exists an \(N^{\prime\prime}\in\mathcal{S}(M)\) such that \(\mathcal{C}(N)\cup\mathcal{C}(N^{\prime})=\mathcal{C}(N^{\prime\prime})\), then obviously we obtain an equality in Lemma 2.1(4). A module with this property is called a _top_ module (see [12, §2]). Therefore, in a top module, the collection \(\{\mathcal{C}(N)\}_{N\in\mathcal{S}(M)}\) of subsets of any class \(\mathcal{D}(M)\) of submodules induces the Zariski topology on \(\mathcal{D}(M)\). It is clear from this discussion that unless \(M\) is a top module, even for the class of prime submodules, the collection \(\{\mathcal{C}(N)\}_{N\in\mathcal{S}(M)}\) does not induce the Zariski topology on it. This is the major difference in behaviour between the topology endowed on ideals of a ring and that on submodules of a module. For an arbitrary module \(M\), using these subsets of a distinguished class \(\mathcal{D}(M)\) of submodules, we wish to construct a topology on \(\mathcal{D}(M)\). Note that from Lemma 2.1(2), it follows that

\[\bigcap_{N\in\mathcal{S}(M)}\mathcal{C}(N)=\emptyset,\]

and hence by [14, Theorem 15 A.13., p. 254], the collection \(\{\mathcal{C}(N)\}_{N\in\mathcal{S}(M)}\) as a closed subbasis generates a unique topology on \(\mathcal{D}(M)\). We denote this topology by \(\tau_{\mathcal{D}(M)}\). For the topological space \((\mathcal{D}(M),\tau_{\mathcal{D}(M)})\), we will write \(\mathcal{D}(M)\) and call this topological space a _structure space_.

### Compactness

Next, we are interested in studying the quasi-compactness of a structure space \(\mathcal{D}(M)\).
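Before doing so, a small computational illustration may help fix ideas. The following sketch (our illustration, not from the paper) takes \(M=\mathbb{Z}/12\mathbb{Z}\) as a \(\mathbb{Z}\)-module, chosen purely for concreteness, realizes the operator (2.1) on the class \(\mathcal{D}(M)\) of proper submodules, and checks a binary instance of Lemma 2.1(3) together with the inclusion of Lemma 2.1(4).

```python
from math import gcd

N_MOD = 12                                   # M = Z/12Z as a Z-module

def sub(d):
    """The submodule generated by d, i.e. gcd(d, 12) * (Z/12Z)."""
    return frozenset(range(0, N_MOD, gcd(d, N_MOD)))

S_M = {sub(d) for d in (1, 2, 3, 4, 6, 12)}  # S(M): all submodules
D_M = {L for L in S_M if L != sub(1)}        # D(M): proper submodules

def C(n):
    """The subbasis closed set C(N) = {L in D(M) : N is a subset of L}."""
    return frozenset(L for L in D_M if n <= L)

def add(L1, L2):
    """Sum of two submodules: the submodule generated by their union."""
    g1 = min(L1 - {0}, default=N_MOD)
    g2 = min(L2 - {0}, default=N_MOD)
    return sub(gcd(g1, g2))

N1, N2 = sub(4), sub(6)
assert C(N1) & C(N2) == C(add(N1, N2))       # Lemma 2.1(3), binary instance
assert C(N1) | C(N2) <= C(N1 & N2)           # Lemma 2.1(4)
print(len(C(N1) | C(N2)), "<", len(C(N1 & N2)))   # here the inclusion is strict
```

The strict inclusion printed in the last line shows that equality in Lemma 2.1(4) can fail, which is exactly the failure of the sets \(\{\mathcal{C}(N)\}\) to be closed under finite unions discussed above.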
From Lemma 2.1(2), we have that \(N=M\) implies \(\mathcal{C}(N)=\emptyset\), but the converse may not be true for all distinguished classes of submodules; it is this converse property that we want to use to prove the quasi-compactness of structure spaces. However, if \(M\) is finitely generated, then the converse holds for any \(\mathcal{D}(M)\). Since our topology is generated by subbasis closed sets, in the proof of the following theorem we shall rely on the Alexander Subbasis Theorem. Note that our result generalizes Theorem 2.13 of [13].

**Theorem 2.2**.: _If \(M\) is a finitely generated module, then every structure space is quasi-compact._

Proof.: To show quasi-compactness, thanks to the Alexander Subbasis Theorem, it suffices to verify the finite intersection property only for subbasis closed sets. Let \(\bigcap_{\lambda\in\Lambda}K_{\lambda}=\emptyset\) for a family \(\{K_{\lambda}\}_{\lambda\in\Lambda}\) of subbasis closed sets of a structure space \(\mathcal{D}(M)\). We claim that \(\bigcap_{i=1}^{n}K_{\lambda_{i}}=\emptyset\) for some finite subset \(\{\lambda_{1},\ldots,\lambda_{n}\}\) of \(\Lambda\). Let \(\{N_{\lambda}\}_{\lambda\in\Lambda}\subseteq\mathcal{S}(M)\) be such that \(K_{\lambda}=\mathcal{C}(N_{\lambda})\) for all \(\lambda\in\Lambda\). By Lemma 2.1(3), this implies that \(\mathcal{C}\left(\sum_{\lambda\in\Lambda}N_{\lambda}\right)=\emptyset\). Since \(M\) is finitely generated, \(\sum_{\lambda\in\Lambda}N_{\lambda}=M\), and there exists a finite subset \(\{\lambda_{1},\ldots,\lambda_{n}\}\) of \(\Lambda\) for which this equality already holds; hence \(\bigcap_{i=1}^{n}K_{\lambda_{i}}=\mathcal{C}\left(\sum_{i=1}^{n}N_{\lambda_{i}}\right)=\mathcal{C}(M)=\emptyset\), as claimed.

Another sufficient condition for the quasi-compactness of a structure space \(\mathcal{D}(M)\) is the inclusion of all maximal submodules of \(M\) in \(\mathcal{D}(M)\). However, a module may not have any maximal submodules, which can be guaranteed to exist if we consider a module over a perfect ring (see [1]). Note that the basic structure of the argument in the proof of the following theorem is similar to that of Theorem 2.2.

**Theorem 2.3**.: _If \(M\) is a module over a perfect ring and if a class \(\mathcal{D}(M)\) of submodules of \(M\) contains all maximal submodules of \(M\), then the structure space \(\mathcal{D}(M)\) is quasi-compact._

Proof.: Let \(\{\mathcal{K}_{\lambda}\}_{\lambda\in\Lambda}\) be a family of subbasis closed sets of \(\mathcal{D}(M)\) such that \(\bigcap_{\lambda\in\Lambda}\mathcal{K}_{\lambda}=\emptyset\). This implies that \(\mathcal{K}_{\lambda}=\mathcal{C}(N_{\lambda})\) for some submodules \(N_{\lambda}\) of \(M\), and

\[\bigcap_{\lambda\in\Lambda}\mathcal{C}(N_{\lambda})=\mathcal{C}\left(\sum_{\lambda\in\Lambda}N_{\lambda}\right)=\emptyset.\]

If \(\sum_{\lambda\in\Lambda}N_{\lambda}\neq M\), then we must have a maximal submodule \(L\) of \(M\) such that \(\sum_{\lambda\in\Lambda}N_{\lambda}\subseteq L\). Moreover,

\[N_{\lambda}\subseteq\sum_{\lambda\in\Lambda}N_{\lambda}\subseteq L,\]

for all \(\lambda\in\Lambda\). Therefore \(L\in\mathcal{C}(N_{\lambda})=\mathcal{K}_{\lambda}\) for all \(\lambda\in\Lambda\), contradicting our assumption. Hence \(\sum_{\lambda\in\Lambda}N_{\lambda}=M\), and the identity \(1\in\sum_{\lambda\in\Lambda}N_{\lambda}\). This implies the existence of a finite subset \(\{\lambda_{1},\ldots,\lambda_{n}\}\) of \(\Lambda\) such that \(1=\sum_{i=1}^{n}x_{\lambda_{i}}\) (where \(x_{\lambda_{i}}\in N_{\lambda_{i}}\)), and hence \(M=\sum_{i=1}^{n}N_{\lambda_{i}}\), which establishes the finite intersection property. Therefore, \(\mathcal{D}(M)\) is quasi-compact by the Alexander Subbasis Theorem.
**Corollary 2.4**.: _The structure spaces of maximal, prime, semiprime, extraordinary, strongly irreducible, primary, irreducible, completely irreducible, and radical submodules are quasi-compact._

Notice that in Theorem 2.3 the containment of all maximal submodules in a class \(\mathcal{D}(M)\) of submodules is a sufficient condition for quasi-compactness of the structure space \(\mathcal{D}(M)\). However, for the class of finitely generated submodules of a module, for instance, it is also a necessary condition.

**Proposition 2.5**.: _Let \(M\) be a module over a perfect ring. If the structure space \(\mathcal{D}(M)\) of finitely generated (proper) submodules is quasi-compact, then \(\mathcal{D}(M)\) contains all maximal submodules of \(M\)._

Proof.: Let \(L\) be a maximal submodule of the module \(M\) such that \(L\) is not finitely generated. Let us consider the collection of closed subspaces:

\[\mathcal{K}=\left\{\mathcal{C}(\langle x\rangle)\cap\mathcal{D}(M)\mid x\in L\right\}.\]

We claim that \(\cap\mathcal{K}=\emptyset\). If not, let \(T\in\cap\mathcal{K}\). Then \(T\) is finitely generated and \(L\subseteq T\). Since \(L\) is not a finitely generated submodule, we must have \(T\supsetneq L\), and that implies \(T=M\), which contradicts the fact that \(\mathcal{D}(M)\) consists of proper submodules. But clearly \(\mathcal{K}\) has the finite intersection property, and hence \(\mathcal{D}(M)\) is not quasi-compact.

Since, in general, a submodule of a finitely generated module need not be finitely generated, the following result provides another sufficient condition for the quasi-compactness of a structure space, and it is an extension of Proposition 3.5 of [10].

**Theorem 2.6**.: _If \(M\) is a Noetherian module then every structure space \(\mathcal{D}(M)\) is quasi-compact._

Proof.: Consider a collection \(\{\mathcal{D}(M)\cap\mathcal{C}(N_{\lambda})\}_{\lambda\in\Omega}\) of subbasis closed sets of \(\mathcal{D}(M)\) with the finite intersection property. Since \(M\) is Noetherian, the submodule \(T=\sum_{\lambda\in\Omega}N_{\lambda}\) is finitely generated, say \(T=(\alpha_{1},\ldots,\alpha_{n})\). For every \(1\leqslant j\leqslant n\), there exists a finite subset \(\Lambda_{j}\) of \(\Omega\) such that \(\alpha_{j}\in\sum_{\lambda\in\Lambda_{j}}N_{\lambda}\). Thus, if \(\Lambda:=\bigcup_{j=1}^{n}\Lambda_{j}\), it immediately follows that \(T=\sum_{\lambda\in\Lambda}N_{\lambda}\). Hence we have

\[\bigcap_{\lambda\in\Omega}\left(\mathcal{D}(M)\cap\mathcal{C}(N_{\lambda})\right)=\mathcal{D}(M)\cap\mathcal{C}(T)=\mathcal{D}(M)\cap\mathcal{C}\left(\sum_{\lambda\in\Lambda}N_{\lambda}\right)=\bigcap_{\lambda\in\Lambda}\left(\mathcal{D}(M)\cap\mathcal{C}(N_{\lambda})\right)\neq\emptyset,\]

since \(\Lambda\) is finite and \(\{\mathcal{D}(M)\cap\mathcal{C}(N_{\lambda})\}_{\lambda\in\Omega}\) has the finite intersection property. The conclusion then follows by the Alexander Subbasis Theorem.

**Corollary 2.7**.: _If \(M\) is a Noetherian module then every structure space \(\mathcal{D}(M)\) is Noetherian._

### Separation properties

We now focus on separation properties of structure spaces. Since any two distinct elements \(L_{1}\) and \(L_{2}\) of a distinguished class of submodules satisfy \(\mathcal{C}(L_{1})\neq\mathcal{C}(L_{2})\), we immediately have the following separation property, which generalizes Proposition 2.4 of [11].
**Proposition 2.8**.: _Every structure space is a \(T_{0}\)-space._

Before we embark on the existence of generic points of structure spaces, we recall two topological notions that we need. A closed subset \(S\) of a space \(X\) is called _irreducible_ if \(S\) is not the union of two properly smaller closed subsets of \(X\). An element \(s\) of \(S\) is called a _generic point_ of \(S\) if \(S=\overline{\{s\}}\), where \(\overline{\{s\}}\) is the closure of \(\{s\}\). If \(M\) is a top module and \(\mathcal{D}(M)\) is the class of prime submodules of \(M\) endowed with the Zariski topology, then it is shown in [13, Proposition 3, Corollary 1] that a nonempty closed subset \(X\) of \(\mathcal{D}(M)\) is irreducible if and only if the intersection of all prime submodules belonging to \(X\) is itself a prime submodule of \(M\). For an arbitrary structure space \(\mathcal{D}(M)\), we do not have a similar characterization. However, in Lemma 2.9 below, we shall identify a class of irreducible closed subsets of a structure space \(\mathcal{D}(M)\), which is in fact good enough to play key roles in

* characterizing \(T_{1}\) structure spaces (Theorem 2.11),
* proving that the structure space of proper submodules is spectral (Theorem 2.17), and
* studying connectedness of structure spaces (Theorem 2.22).

**Lemma 2.9**.: _The subsets \(\{\mathcal{C}(N)\mid N\in\mathcal{C}(N)\}\) of a structure space \(\mathcal{D}(M)\) are irreducible._

Proof.: Let \(\overline{N}\) denote the closure of \(\{N\}\) in \(\mathcal{D}(M)\). We actually show more, namely, \(\overline{N}=\mathcal{C}(N)\) for every \(N\in\mathcal{C}(N)\). It is evident that \(\overline{N}\subseteq\mathcal{C}(N)\). To obtain the other half of the inclusion, let us first consider the trivial case, that is, when \(\overline{N}=\mathcal{D}(M)\). Since

\[\mathcal{D}(M)=\overline{N}\subseteq\mathcal{C}(N)\subseteq\mathcal{D}(M),\]

we have the equality. Now, suppose \(\overline{N}\neq\mathcal{D}(M)\). Since \(\overline{N}\) is a closed set, it has the representation

\[\overline{N}=\bigcap_{\lambda\in\Lambda}\left(\bigcup_{i=1}^{n_{\lambda}}\mathcal{C}(N_{\lambda i})\right),\]

for some index set \(\Lambda\) and submodules \(N_{\lambda 1},\dots,N_{\lambda n_{\lambda}}\) of \(M\). Since \(N\in\overline{N}\), we have \(N\in\bigcup_{i=1}^{n_{\lambda}}\mathcal{C}(N_{\lambda i})\) for each \(\lambda\); in particular, for each \(\lambda\) there is an index \(i\) with \(N_{\lambda i}\subseteq N\), so that

\[\mathcal{C}(N)\subseteq\mathcal{C}(N_{\lambda i})\subseteq\bigcup_{i=1}^{n_{\lambda}}\mathcal{C}(N_{\lambda i}),\]

and hence \(\mathcal{C}(N)\subseteq\overline{N}\), as required.

**Corollary 2.10**.: _Every non-empty subbasis closed subset of the structure space of proper submodules is irreducible._

One immediate application of Lemma 2.9 is the following characterization of \(T_{1}\)-structure spaces. It is known (see [11, Theorem 3]) that in a top module, the class of prime submodules endowed with the Zariski topology is a \(T_{1}\)-space if and only if every prime submodule is maximal in the class of prime submodules. The following theorem generalizes this result.

**Theorem 2.11**.: _A structure space is a \(T_{1}\)-space if and only if it is contained in the class of maximal submodules._

Proof.: Let \(\mathcal{D}(M)\) be a \(T_{1}\)-space. If \(L\in\mathcal{D}(M)\), then by Lemma 2.9, \(\overline{L}=\mathcal{C}(L)\). Suppose \(N\) is a maximal submodule of \(M\) such that \(L\subseteq N\).
Then

\[N\in\mathcal{C}(L)=\overline{L}=\{L\},\]

which implies \(N=L\). Conversely, let \(\mathcal{D}(M)\) be a class of submodules of \(M\) contained in the class of maximal submodules of \(M\). Then for every maximal submodule \(N\) of \(M\), we have \(\mathcal{C}(N)=\{N\}\), which implies \(N\in\mathcal{C}(N)\). Hence, by Lemma 2.9, \(\overline{N}=\{N\}\). This proves that the structure space of the class of maximal submodules of \(M\) is a \(T_{1}\)-space, and so is \(\mathcal{D}(M)\).

As a consequence of the above theorem, we obtain a necessary condition for a Noetherian module to be Artinian. Recall that a module is called _Noetherian_ if every ascending chain of submodules of it is eventually stationary, whereas an _Artinian_ module is a module that satisfies the descending chain condition on its poset of submodules.

**Corollary 2.12**.: _If \(\mathcal{D}(M)\) is a discrete structure space of a Noetherian module \(M\), then \(M\) is Artinian._

Recall that a topological space is _sober_ if every nonempty irreducible closed subset of it has a unique generic point. Sobriety is one of the separation properties and also a component of the definition of a spectral space, which we shall discuss later. In [1, Theorem 3.3], it is proved that the class of prime submodules of a top module endowed with the Zariski topology is sober. We shall provide a characterization of structure spaces that have this property, and this characterization generalizes Corollary 1 of [11], Theorem 3.3 of [1], and Theorem 2.6(2)(3) of [10]. We first need a technical auxiliary lemma, whose proof is easy to verify.

**Lemma 2.13**.: _If \(N^{\omega}:=\bigcap\{L\mid L\in\mathcal{C}(N)\}\) for \(N\in\mathcal{S}(M)\), then \(N=N^{\omega}\), whenever \(N\in\mathcal{D}(M)\)._

**Theorem 2.14**.: _Every non-empty irreducible subbasis closed set \(\mathcal{C}(N)\) of a structure space \(\mathcal{D}(M)\) has a unique generic point if and only if \(\mathcal{C}(N)\) contains \(N^{\omega}\)._

Proof.: Let \(\mathcal{C}(N)\) be a nonempty irreducible subbasis closed subset of a structure space \(\mathcal{D}(M)\) such that \(\mathcal{C}(N)\) has a unique generic point \(L\). This implies that

\[\mathcal{C}(N)=\overline{L}=\mathcal{C}(L).\]

By Lemma 2.13, this implies that \(L=N^{\omega}\in\mathcal{D}(M)\). The proof of the converse is more involved. Suppose every nonempty irreducible subbasis closed set \(\mathcal{C}(N)\) contains \(N^{\omega}\), and let \(E\) be an irreducible closed subset of \(\mathcal{D}(M)\). Then

\[E=\bigcap_{i\in I}\left(\bigcup_{\alpha=1}^{n_{i}}\mathcal{C}(N_{i\alpha})\right),\]

for some submodules \(N_{i\alpha}\) of \(M\). Since \(E\) is irreducible, for every \(i\in I\), there exists a submodule \(N_{i}\) of \(M\) such that

\[E\subseteq\mathcal{C}(N_{i})\subseteq\bigcup_{\alpha=1}^{n_{i}}\mathcal{C}(N_{i\alpha}),\]

and thus

\[E=\bigcap_{i\in I}\mathcal{C}(N_{i})=\mathcal{C}\left(\sum_{i\in I}N_{i}\right)=\mathcal{C}\left(\left(\sum_{i\in I}N_{i}\right)^{\omega}\right).\]

Since \(\left(\sum_{i\in I}N_{i}\right)^{\omega}\in\mathcal{D}(M)\), we have \(E=\overline{\left(\sum_{i\in I}N_{i}\right)^{\omega}}\). The uniqueness part of the claim follows from Proposition 2.8, and this proves the theorem.

**Corollary 2.15**.: _Every nonempty irreducible closed subset of the structure spaces of proper, prime, and minimal prime submodules has a unique generic point._

To show the sobriety of the structure space \(\mathcal{D}(M)\) of strongly irreducible submodules of a module, our argument goes through multiplicative lattices.
Since \(\mathcal{D}(M)\) is a multiplicative lattice with product as intersection, every strongly irreducible submodule becomes a prime element of this lattice, and hence by [13, Lemma 2.6], every nonempty irreducible closed subset of \(\mathcal{D}(M)\) has a unique generic point, where once again the uniqueness of generic points follows from Proposition 2.8. The structure spaces of semiprime and extraordinary submodules are also sober for the same reason. The following proposition gathers all these conclusions.

**Proposition 2.16**.: _Every nonempty irreducible closed subset of the structure spaces of strongly irreducible, semiprime, and extraordinary submodules has a unique generic point._

Our next goal is to prove the spectrality of the structure space of a specific class of submodules of a module, and this is an extension of Theorem 2.9 of [10].

**Theorem 2.17**.: _The structure space of proper submodules of a module is spectral._

#### 2.2.1. Spectral spaces

Recall that a topological space is called _spectral_ (see [14]) if it is quasi-compact, admits a basis of quasi-compact open subspaces that is closed under finite intersections, and every nonempty irreducible closed subset of it has a unique generic point. Our proof of Theorem 2.17 is self-contained in the sense that it is independent of the constructible topology. Moreover, we have adapted a technique that avoids directly checking the existence of a basis of quasi-compact open subspaces closed under finite intersections; this rather follows from the following key lemma.

**Lemma 2.18**.: _Let \(X\) be a spectral space and let \(S\) be a subspace of \(X\) having the following properties:_

1. \(S\) _is quasi-compact._
2. \(S\) _is an open subspace of_ \(X\)_._
3. _Every nonempty irreducible closed subset of_ \(S\) _has a unique generic point._

_Then \(S\) is a spectral space._

Proof.: Since \(S\) is quasi-compact and sober, all we need is to prove that the set \(\mathcal{B}_{S}\) of quasi-compact open subsets of \(S\) forms a basis that is closed under finite intersections. It is easy to check that, since \(S\) is open in \(X\), a subset of \(S\) is open in \(S\) if and only if it is open in \(X\); therefore, a subset of \(S\) belongs to \(\mathcal{B}_{S}\) if and only if it belongs to \(\mathcal{B}_{X}\). Let \(O\) be an open subset of \(S\). Since \(O\) is also open in \(X\), we have \(O=\cup\mathcal{U}\) for some subset \(\mathcal{U}\) of \(\mathcal{B}_{X}\). But each element of \(\mathcal{U}\), being a subset of \(O\), is a subset of \(S\), and so it belongs to \(\mathcal{B}_{S}\). Therefore every open subset of \(S\) can be presented as a union of quasi-compact open subsets of \(S\). It remains to prove that \(\mathcal{B}_{S}\) is closed under finite intersections, but this immediately follows from the fact that \(\mathcal{B}_{X}\) is closed under finite intersections.

Proof of Theorem 2.17.: In order to apply Lemma 2.18 to the structure space \(\mathcal{D}(M)\) of proper submodules of a module, we need to establish the following:

1. The structure space \(\mathcal{S}(M)\) of all submodules of a module \(M\) is spectral.
2. \(\mathcal{D}(M)\) is quasi-compact.
3. Every nonempty irreducible closed subset of \(\mathcal{D}(M)\) has a unique generic point.
4. \(\mathcal{D}(M)\) is an open subspace of the structure space \(\mathcal{S}(M)\).
Now, (2) and (3) respectively follow from Theorem 2.2 and Theorem 2.14, whereas (1) follows from the application of Theorem 4.2 of [10] to the algebraic lattice \(\mathcal{S}(M)\). Therefore, what remains is to show (4). Since \(M\in\mathcal{S}(M)\), by Lemma 2.9, we have \(\{M\}=\mathcal{C}(M)=\overline{\{M\}}\), and therefore \(\{M\}\) is closed. This implies that \(\mathcal{D}(M)\) is open, and this completes the proof.

If our modules are Noetherian and a structure space is sober, then, thanks to Theorem 2.6, we obtain further examples of spectral spaces.

**Corollary 2.19**.: _Let \(M\) be a Noetherian module and let \(\mathcal{D}(M)\) be a structure space. Then \(\mathcal{D}(M)\) is spectral if and only if it is sober._

### Connectedness

Now we turn our attention to the connectedness of structure spaces. The usual notion of connectedness does not have a direct formulation in terms of subbasis closed sets. We therefore introduce a different kind of connectedness (in terms of subbasis closed sets) of structure spaces that is related to the conventional notion of connectedness. We say a closed subbasis \(\mathcal{S}\) of a topological space \(X\) _strongly disconnects_ \(X\) if there exist two non-empty elements \(A\) and \(B\) of \(\mathcal{S}\) such that \(X=A\cup B\) and \(A\cap B=\emptyset\). From the above definition, it is clear that if some closed subbasis strongly disconnects a space, then the space is disconnected. Moreover, if a space is disconnected, then some closed subbasis strongly disconnects it. But this does not imply that every closed subbasis strongly disconnects the space. However, we have the following result, analogous to Theorem 3.21 and Lemma 3.22 of [10].

**Lemma 2.20**.: _If a quasi-compact space is disconnected, then every closed basis that is closed under finite intersections strongly disconnects the space. Moreover, for any structure space \(\mathcal{D}(M)\), the closed basis is closed under binary intersections._

Applying Lemma 2.20, we immediately have the following result.

**Theorem 2.21**.: _Suppose \(\mathcal{D}(M)\) is a quasi-compact space. Then \(\mathcal{D}(M)\) is disconnected if and only if the closed basis strongly disconnects \(\mathcal{D}(M)\)._

A consequence of Lemma 2.9 is the following sufficient condition for a structure space to be connected.

**Theorem 2.22**.: _If the zero submodule is in \(\mathcal{D}(M)\), then the structure space \(\mathcal{D}(M)\) is connected._

Proof.: By Lemma 2.1(2), we have \(\mathcal{D}(M)=\mathcal{C}(0)\), and since irreducibility implies connectedness, the claim now follows from Lemma 2.9.

**Corollary 2.23**.: _The structure spaces of proper, finitely generated, and cyclic submodules are connected._

### Continuity

Once we have structure spaces of submodules, it is natural to study continuous functions between them. Let \(\phi\colon M\to M^{\prime}\) be a module homomorphism. As for prime spectra (endowed with Zariski topologies) of rings, it is natural to expect a continuous function \(\phi_{!}\colon\mathcal{D}(M^{\prime})\to\mathcal{D}(M)\) between "similar type" submodule spaces. It is indeed so, and we shall discuss this in the next proposition. However, in the case of rings, under a ring homomorphism, the inverse image of a prime ideal is a prime ideal, which fails to hold even for maximal ideals of rings.
Therefore, we can expect the same problem when we deal with various classes of submodules of a module. So, we need to consider only those module homomorphisms for which such preservation holds. We call this the _contraction property_ of a module homomorphism. To show that a function is continuous, it suffices to show that the inverse images of subbasis closed sets are subbasis closed sets. Because our topology is built from subbasis closed sets, we shall use this continuity criterion in our proofs.

**Proposition 2.24**.: _Let \(\mathcal{D}(M)\) be a structure space and \(\phi\colon M\to M^{\prime}\) be a module homomorphism having the contraction property with respect to \(\mathcal{D}(M)\). Define a function \(\phi_{!}\colon\mathcal{D}(M^{\prime})\to\mathcal{D}(M)\) by \(\phi_{!}(N^{\prime})=\phi^{-1}(N^{\prime})\) for \(N^{\prime}\in\mathcal{D}(M^{\prime})\). Then_

1. _the function_ \(\phi_{!}\) _is continuous;_
2. _if_ \(\phi\) _is surjective, then the structure space_ \(\mathcal{D}(M^{\prime})\) _is homeomorphic to the closed subspace_ \(\mathcal{C}(\ker\phi)\) _of the structure space_ \(\mathcal{D}(M)\)_;_
3. _the image_ \(\phi_{!}(\mathcal{D}(M^{\prime}))\) _is dense in_ \(\mathcal{D}(M)\) _if and only if_ \[\ker\phi\subseteq\bigcap_{L\in\mathcal{D}(M)}L.\]

Proof.: (1) Let \(\mathcal{C}(N)\) be a subbasis closed set of the structure space \(\mathcal{D}(M)\). Since

\[\phi_{!}^{-1}(\mathcal{C}(N))=\{N^{\prime}\in\mathcal{D}(M^{\prime})\mid\phi(N)\subseteq N^{\prime}\}=\mathcal{C}(\langle\phi(N)\rangle),\]

we have the desired continuity of the function \(\phi_{!}\).

(2) We observe that \(\phi_{!}(N^{\prime})\in\mathcal{C}(\ker\phi)\) for every \(N^{\prime}\in\mathcal{D}(M^{\prime})\), which gives \(\operatorname{im}\phi_{!}\subseteq\mathcal{C}(\ker\phi)\); surjectivity of \(\phi\) gives the reverse inclusion, so \(\operatorname{im}\phi_{!}=\mathcal{C}(\ker\phi)\). Since for all \(N^{\prime}\in\mathcal{D}(M^{\prime})\),

\[\phi(\phi_{!}(N^{\prime}))=N^{\prime}\cap\operatorname{im}\phi=N^{\prime},\]

the function \(\phi_{!}\) is injective. To show that \(\phi_{!}\) is a closed function, we first claim that the image under \(\phi_{!}\) of a subbasis closed set is subbasis closed. Indeed, for any subbasis closed set \(\mathcal{C}(N)\) of \(\mathcal{D}(M^{\prime})\), we have

\[\phi_{!}(\mathcal{C}(N))=\phi^{-1}\{L^{\prime}\in\mathcal{D}(M^{\prime})\mid N\subseteq L^{\prime}\}=\mathcal{C}(\phi^{-1}(N)).\]

If \(\mathcal{K}\) is a closed subset of \(\mathcal{D}(M^{\prime})\), then there exist an index set \(\Omega\) and submodules \(\{N_{i\alpha}\}_{1\leqslant i\leqslant n_{\alpha},\alpha\in\Omega}\) of \(M^{\prime}\) such that

\[\mathcal{K}=\bigcap_{\alpha\in\Omega}\left(\bigcup_{i=1}^{n_{\alpha}}\mathcal{C}(N_{i\alpha})\right).\]

Then, applying \(\phi_{!}\) to \(\mathcal{K}\), we have

\[\phi_{!}(\mathcal{K})=\phi^{-1}\left(\bigcap_{\alpha\in\Omega}\left(\bigcup_{i=1}^{n_{\alpha}}\mathcal{C}(N_{i\alpha})\right)\right)=\bigcap_{\alpha\in\Omega}\bigcup_{i=1}^{n_{\alpha}}\mathcal{C}(\phi^{-1}(N_{i\alpha})),\]

a closed subset of \(\mathcal{D}(M)\). Since \(\phi_{!}\) is continuous by (1), this completes the proof.

(3) We claim that \(\overline{\phi_{!}(\mathcal{C}(N^{\prime}))}=\mathcal{C}(\phi^{-1}(N^{\prime}))\) for all \(N^{\prime}\in\mathcal{S}(M^{\prime})\). To this end, let \(L\in\phi_{!}(\mathcal{C}(N^{\prime}))\), say \(L=\phi^{-1}(T)\) with \(T\in\mathcal{C}(N^{\prime})\); then \(N^{\prime}\subseteq T\) gives \(\phi^{-1}(N^{\prime})\subseteq L\), that is, \(L\in\mathcal{C}(\phi^{-1}(N^{\prime}))\). From the fact that \(\phi^{-1}(\mathcal{C}(N^{\prime}))=\mathcal{C}(\phi^{-1}(N^{\prime}))\), we then obtain the other half of the inclusion.
Since

\[\overline{\phi_{!}(\mathcal{D}(M^{\prime}))}=\mathcal{C}(\phi^{-1}(0))=\mathcal{C}(\ker\phi),\]

the closed subspace \(\mathcal{C}(\ker\phi)\) is equal to \(\mathcal{D}(M)\) if and only if \(\ker\phi\subseteq\bigcap_{L\in\mathcal{D}(M)}L\).

**Corollary 2.25**.: _The structure space \(\mathcal{D}(M/N)\) is homeomorphic to the closed subspace \(\mathcal{C}(N)\) of \(\mathcal{D}(M)\)._

**Concluding Remarks 2.26**.: We conclude with some programs for further study.

(A) The present work can be generalized further to study the topological properties of various classes of sub-semimodules of semimodules. With some routine changes of terminology and assumptions, one may extend some of the results obtained in [1] for the prime spectra of semimodules to distinguished classes of sub-semimodules, similar to those which we have considered here. In Table 1, we provide some examples of these generalizations.

(B) In Theorem 2.17 and Corollary 2.19, we have seen examples of spectral structure spaces. With some additional assumptions on modules, it would be worthwhile to obtain further examples of structure spaces that are spectral.

(C) Due to the lack of subtractivity in semirings, the theory of ideals of semirings is significantly different from that of rings. To minimize this gap, the notion of _\(k\)-ideals_ (also known as _subtractive_ ideals) was introduced in [10], and it has been further generalized to semimodules as _subtractive_ submodules (see [11]). To the best of the author's knowledge, no topological studies have been made on this special class of subsemimodules.
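As a small computational sanity check of Proposition 2.24(1) (our illustration, not part of the paper), the following sketch takes the quotient homomorphism \(\phi\colon\mathbb{Z}/12\mathbb{Z}\to\mathbb{Z}/6\mathbb{Z}\), an illustrative choice, verifies that it has the contraction property with respect to the classes of proper submodules, and confirms on the full subbasis that \(\phi_{!}^{-1}(\mathcal{C}(N))=\mathcal{C}(\langle\phi(N)\rangle)\).

```python
from math import gcd
from functools import reduce

def sub(n_mod, d):
    """The submodule of Z/(n_mod)Z generated by d."""
    return frozenset(range(0, n_mod, gcd(d, n_mod)))

D_M  = {sub(12, d) for d in (2, 3, 4, 6, 12)}   # proper submodules of M = Z/12Z
D_Mp = {sub(6, d) for d in (2, 3, 6)}           # proper submodules of M' = Z/6Z

phi = lambda x: x % 6                            # surjection Z/12Z -> Z/6Z
phi_pre = lambda Np: frozenset(x for x in range(12) if phi(x) in Np)

# Contraction property here: preimages of proper submodules are proper submodules.
assert all(phi_pre(Np) in D_M for Np in D_Mp)

def C(D, n):                                     # subbasis closed set within D
    return frozenset(L for L in D if n <= L)

def gen(n_mod, elems):                           # submodule generated by a subset
    return sub(n_mod, reduce(gcd, elems, n_mod))

for N in D_M:                                    # continuity on the subbasis:
    lhs = frozenset(Np for Np in D_Mp if phi_pre(Np) in C(D_M, N))
    rhs = C(D_Mp, gen(6, {phi(x) for x in N}))
    assert lhs == rhs                            # phi_!^{-1}(C(N)) = C(<phi(N)>)
print("phi_! pulls every subbasis closed set back to a subbasis closed set")
```

Since the subbasis closed sets generate the topology, these assertions are exactly the continuity criterion used in the proof of Proposition 2.24(1), here checked exhaustively on a finite example.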
2301.02134
Time-reversal invariant finite-size topology
We report finite-size topology in the quintessential time-reversal (TR) invariant systems, the quantum spin Hall insulator (QSHI) and the three-dimensional, strong topological insulator (STI): previously-identified helical or Dirac cone boundary states of these phases hybridize in wire or slab geometries with one open boundary condition for finite system size, and additional, topologically-protected, lower-dimensional boundary modes appear for open boundary conditions in two or more directions. For the quasi-one-dimensional (q(2-1)D) QSHI, we find topologically-protected, quasi-zero-dimensional (q(2-2)D) boundary states within the hybridization gap of the helical edge states, determined from q(2-1)D bulk topology characterized by topologically non-trivial Wilson loop spectra. We show this finite-size topology furthermore occurs in 1T'-WTe2 in ribbon geometries with sawtooth edges, based on analysis of a tight-binding model derived from density-functional theory calculations, motivating experimental investigation of our results. In addition, we find quasi-two-dimensional (q(3-1)D) finite-size topological phases occur for the STI, yielding helical boundary modes distinguished from those of the QSHI by a non-trivial magneto-electric polarizability linked to the original 3D bulk STI. Finite-size topological phases therefore exhibit signatures associated with the non-trivial topological invariant of a higher-dimensional bulk. Finally, we find the q(3-2)D STI also exhibits finite-size topological phases, finding the first signs of topologically-protected boundary modes of codimension greater than 1 due to finite-size topology. Finite-size topology of four or higher-dimensional systems is therefore possible in experimental settings without recourse to thermodynamically large synthetic dimensions.
R. Flores-Calderón, Roderich Moessner, Ashley M. Cook
2023-01-05T16:21:54Z
http://arxiv.org/abs/2301.02134v2
# Time-reversal invariant finite-size topology

###### Abstract

We report finite-size topology in the quintessential time-reversal (TR) invariant systems, the quantum spin Hall insulator (QSHI) and the three-dimensional, strong topological insulator (STI): previously-identified helical or Dirac cone boundary states of these phases hybridize in wire or slab geometries with one open boundary condition for finite system size, and _additional, topologically-protected, lower-dimensional_ boundary modes appear for open boundary conditions in _two or more_ directions. For the quasi-one-dimensional (q(2-1)D) QSHI, we find topologically-protected, quasi-zero-dimensional (q(2-2)D) boundary states within the hybridization gap of the helical edge states, determined from q(2-1)D bulk topology characterized by topologically non-trivial Wilson loop spectra. We show this finite-size topology furthermore occurs in \(1T^{\prime}\)-WTe\({}_{2}\) in ribbon geometries with sawtooth edges, based on analysis of a tight-binding model derived from density-functional theory calculations, motivating experimental investigation of our results. In addition, we find quasi-two-dimensional (q(3-1)D) finite-size topological phases occur for the STI, yielding helical boundary modes distinguished from those of the QSHI by a non-trivial magneto-electric polarizability linked to the original 3D bulk STI. Finite-size topological phases therefore exhibit signatures associated with the non-trivial topological invariant of a higher-dimensional bulk, clearly distinguishing them from previously-known topological phases. Finally, we find the q(3-2)D STI also exhibits finite-size topological phases, finding the first signs of topologically-protected boundary modes of codimension greater than \(1\) due to finite-size topology. Finite-size topology of four or higher-dimensional systems is therefore possible in experimental settings without recourse to thermodynamically large synthetic dimensions.

## I Introduction

The discovery of the first topological insulator (TI), the quantum spin Hall insulator (QSHI) in HgTe quantum wells [1; 2] heralded a paradigm shift in condensed matter physics towards broad study of topological phases of matter. Understanding and characterization of topology is now central to the field, with major applications ranging from fault-tolerant quantum computing [3; 4] to unconventional superconductivity [5]. Consequently, searching for novel, experimentally-accessible topological systems is a major theme of the last few decades [2; 6; 7; 8; 9; 10; 11; 12; 13]. These efforts usually target experimental confirmation of a hallmark of topological phases known as bulk-boundary correspondence: a non-trivial topological invariant of the system bulk is associated with topologically-robust, gapless boundary states. While it has long been understood that a \(D\)-dimensional bulk topology yields \((D-1)\)-dimensional gapless boundary states for most topological phases [14], the recent discovery of additional bulk-boundary correspondence even in the canonical phases, known as finite-size topology [15], shows this foundational aspect of topological physics is richer than previously thought. If a system is characterized by a topological invariant computed in the \(D\)-dimensional infinite bulk, but is finite in size and thin in one direction as illustrated in Fig. 
1 (for the QSHI \(D=2\), while for the 3D TI \(D=3\)), such that topologically-protected boundary states interfere with one another, this _quasi-\((D-1)\)- or \(q(D-1)\)-dimensional_ bulk is characterized by an additional topological invariant. When this additional invariant takes non-trivial values, open boundary conditions in a second direction yield an additional set of quasi-\((D-2)\)-dimensional, topologically-protected boundary states localized on this boundary of the quasi-\((D-1)\)-dimensional system. As these quasi-\((D-2)\)-dimensional states are localized on the boundary in correspondence with a non-trivial value for a topological invariant of the quasi-\((D-1)\)-dimensional bulk, and robust against local perturbations respecting the symmetries protecting the topological phase _in the \(D\)-dimensional infinite bulk_, they constitute previously-unidentified topological phases of matter. Following the previous thinning process, we end up with a q\((D-1)\)-dimensional bulk with topological edge states in one less dimension; the situation is then just like at the start of the program, but with \(D\) replaced by \(D-1\). Thus one may think of applying the thinning process once again, now thinning the \(x_{D-1}\) dimension and hybridizing the previous q\((D-2)\) edge states. We then arrive at a q\((D-2)\)-dimensional bulk with q\((D-2-1)\)-dimensional edge states, which again can be subjected to the same procedure. The general process is illustrated in Fig. 2, while Fig. 1 c) shows the specific case of the 3D TI q\((3-2)\) bulk. We note that this procedure could in principle be applied until there are no more dimensions to thin down. Although theoretical discovery of the Chern insulator [16] preceded theoretical prediction of the TR-invariant QSHI _derived from it_ [17; 18], experimental confirmation of the QSHI [2] occurred within one year of the prediction, while more than two decades passed for the Chern insulator [19]. This reflects a broader trend in the field, of TR-invariant topological insulators being confirmed experimentally more quickly and easily than TR-symmetry-broken topological insulators reliant on engineering particular magnetic orders [2; 20; 21]. Following this idea, in order to more rapidly observe finite-size topology in experiment, we study the time-reversal invariant finite-size topology of the QSHI and the strong TI (STI), by considering these systems in geometries as shown in Fig. 1. We also note that, due to the vast experimental studies in TI ultra-thin films [22; 23; 24], Van der Waals heterostructures [25; 26; 27; 28], and transition-metal dichalcogenides in particular given their large spin-orbit coupling [29; 30], there may already be signs of finite-size topology in previous experiments. Past work, for instance, indicates few-layer 1T\({}^{\prime}\)-MoTe\({}_{2}\) is semi-metallic [31], while the monolayer is predicted to be a quantum spin Hall insulator [32], suggesting the few-layer topology derives from the Weyl semimetal phase of the three-dimensional bulk, while the monolayer topological phase has a distinct origin due to a strictly two-dimensional bulk. 
Since finite-size topological phases occur for the Kitaev chain and Chern insulator [15], non-trivial finite-size topology is expected for TR-invariant systems of the QSHI and STI given concrete relationships between Hamiltonians for these topological phases: the Kitaev chain Hamiltonian may be used to construct the Chern insulator Hamiltonian, if many chains are coupled forming a 2D system [33], and a Chern insulator Hamiltonian and its time-reversed partner are the basis of Hamiltonians for the QSHI [17; 18]. We find that FST extends to these TRI topological phases. As Hamiltonians for TR-invariant topological phases are used to construct Hamiltonians for other topological phases, these results also reveal that a larger set of topological phases harbor FST: a Weyl semimetal phase [34] Hamiltonian may be constructed from magnetically-doped STI and trivial insulator thin films stacked alternatingly, while a stack of QSHIs corresponds directly to the weak 3D TI [35; 36]. Topological crystalline phases may furthermore be constructed, for instance, as Chern insulators within mirror subsectors or with the Chern insulator bulk confined to a mirror-invariant plane of a three-dimensional Brillouin zone [37]. More generally, topological crystalline phases are characterized by considering symmetry-protection by crystalline point group symmetries _in addition to_ the internal symmetries of the ten-fold way. On a technical level, this is accomplished by expressing the Hamiltonian in a block-diagonal form using the additional symmetry; in each sub-sector, internal symmetries are still present and thus can be analyzed by classification schemes obtained from the ten-fold way [38; 39]. In this manuscript, we first characterize finite-size topology in a QSHI wire (Section II). We start by considering a thin QSHI system with one open thin dimension and one infinite periodic dimension. The energy and Wilson loop spectra of this q(2-1)D system reveal that the non-trivial zones in phase space are a subset of the original 2D bulk topological regions. Furthermore, opening boundary conditions in the remaining periodic direction shows the presence of edge states localized on the q(2-2)D boundaries. We end this section by studying the response to on-site disorder and perturbations, where the robustness of the edge states indicates a link to the original 2D bulk gap. Afterwards, in Section III, we consider a more realistic and experimentally accessible system, \(1T^{\prime}\)-WTe\({}_{2}\) in the QSHI phase. We find, similarly, that this material realizes a finite-size topological phase with topologically-robust q(2-2)D edge states for a sawtooth ribbon geometry; the presence of these edge states is again predicted by a non-trivial Wilson loop spectrum. Extending our analysis to the 3D case, we consider in Section IV the STI in a q(3-1)D slab geometry. In this case, interference between the STI Dirac cone surface states yields q(3-2)D edge states. We show the Wilson loop spectrum of the q(3-1)D bulk displays topologically non-trivial signatures in correspondence with these boundary states, indicating Wilson loop spectra are a robust bulk diagnostic of finite-size topology. Additionally, we compute the magneto-electric polarizability, which would be trivially zero if the system were just a 2D QSHI; instead, we encounter the response expected for the infinite 3D TI bulk. 
This central result allows us to contemplate the idea of detecting topological signatures of higher-dimensional phases, say the 4D TI, in quasi-lower-dimensional systems. Finally, we study the case of an STI in a q(3-2)D wire geometry, where we once again use the Wilson loop indicator to find a novel bulk-boundary correspondence restricted to a subset of the original 3D topological phase diagram. In this final case, the number of edge states is seen to follow the number of \(\pm\pi\) phases, such that only even numbers of distinct edge states appear. In Section VI we summarize our results and present some concluding remarks.

## II QSHI wire

As a starting point of our analysis, we consider a QSHI described by the canonical Bernevig-Hughes-Zhang Hamiltonian for HgTe quantum wells [1], to which we also add a Rashba-type spin-orbit coupling. Thus the Hamiltonian in momentum space has the form [40]: \[h(k_{x},k_{y})= (u+2t(\cos k_{x}+\cos k_{y}))\sigma_{z}+\sin k_{y}\;\sigma_{y} \tag{1}\] \[+ \sin k_{x}\;s_{z}\sigma_{x}+c\;s_{x}\sigma_{y},\] where \(s_{i},\sigma_{i}\) are Pauli matrices in spin and orbital space respectively. For simplicity we omit the identity in spin space and denote the tensor product by placing two matrices next to each other. The real number \(u\) corresponds to a staggered potential, \(t\) to a hopping parameter, and \(c\) is the spin-orbit coupling that breaks \(s_{z}\) spin symmetry. The phase diagram for this Hamiltonian includes both a region in which the QSHI phase is realized and a region in which the Dirac semimetal (DSM) phase is realized, as discussed in Ref. [40] and plotted in Fig. 3 a) and b). In the following analysis, we first consider the QSHI regime, and then that of the DSM. Next we consider what happens if we open boundary conditions (OBC) in the \(x\) direction for a small number of lattice sites \(N\). Since the helical edge modes of the QSHI are not completely localized at the boundary, but instead decay exponentially into the bulk [41], these boundary states interfere in systems of finite width. The lattice second-quantized Hamiltonian with open boundary conditions in the \(\hat{x}\)-direction and periodic boundary conditions in the \(\hat{y}\)-direction is: \[\hat{H} =\sum_{k,n}\Psi^{\dagger}_{k,n}\left((u+2t\cos k)\sigma_{z}+\sin k \;\sigma_{y}+c\;s_{x}\sigma_{y}\right)\Psi_{k,n}\] \[+\Psi^{\dagger}_{k,n+1}\left(t\;\sigma_{z}+\frac{i}{2}s_{z} \sigma_{x}\right)\Psi_{k,n}+\text{h.c.}\;, \tag{2}\] where \(k\equiv k_{y}\) and \(n\) runs over the \(N\) sites of the open \(x\) direction. Here, \(\Psi_{k,n}\) are four-component spinor fermion operators acting on the spin and orbit degrees of freedom.

Figure 3: a) Direct gap heat map of the 2D bulk Hamiltonian Eq. (1) as a function of potential \(u\) and spin-orbit coupling constant \(c\). b) Topological phase diagram of the 2D bulk, showing the QSHI phase (yellow) and DSM gapless phase (blue) of the 2D bulk. c) Quasi-1D dispersion for PBC in \(y\) and OBC (PBC) in \(x\) with \(N=6\) sites. The parameters for the gap closing with OBC in \(x\) are \(u=0.76,c=0.8,t=1/2\). d) Spectrum for PBC in \(y\) as a function of the staggered potential \(u\) and \(c=0.8,t=1/2\).

We first examine the spectrum of Eq. (2) for small \(N\), on the order of a few lattice constants. We consider the spectrum for a particular point in phase space, as shown in Fig. 3 c), for a small system with \(N=6\) and a non-trivial 2D bulk invariant, as shown in Fig. 3 b). 
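To make this construction concrete, the following minimal numpy sketch (our own transcription of Eq. (2), not the authors' code; the parameter values are those quoted for Fig. 3 c)) assembles the q(2-1)D Hamiltonian for \(N\) open sites in \(\hat{x}\) and momentum \(k=k_{y}\), and diagonalizes it to obtain the quasi-1D dispersion:

```python
import numpy as np

# Pauli matrices; spin_orb(s, sig) puts s in spin space and sig in orbital
# space, mirroring the paper's s_i sigma_j shorthand.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_orb(s, sig):
    return np.kron(s, sig)

def h_quasi1d(k, N, u=0.76, c=0.8, t=0.5):
    """4N x 4N Bloch Hamiltonian of Eq. (2): OBC in x (N sites), momentum k = k_y."""
    onsite = ((u + 2 * t * np.cos(k)) * spin_orb(I2, Z)
              + np.sin(k) * spin_orb(I2, Y)
              + c * spin_orb(X, Y))
    # Hopping block of the Psi^dag_{k,n+1} (t sigma_z + (i/2) s_z sigma_x) Psi_{k,n} term.
    hop = t * spin_orb(I2, Z) + 0.5j * spin_orb(Z, X)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    for n in range(N):
        H[4*n:4*n+4, 4*n:4*n+4] = onsite
        if n + 1 < N:
            H[4*(n+1):4*(n+1)+4, 4*n:4*n+4] = hop
            H[4*n:4*n+4, 4*(n+1):4*(n+1)+4] = hop.conj().T
    return H

# Quasi-1D dispersion in the spirit of Fig. 3 c): bands vs k for N = 6 open layers.
ks = np.linspace(-np.pi, np.pi, 201)
bands = np.array([np.linalg.eigvalsh(h_quasi1d(k, N=6)) for k in ks])
```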
Comparing the dispersion for the system with periodic boundary conditions in each direction (black lines) to that of the system with open boundary conditions only in the \(\hat{x}\)-direction (red lines), we see the periodic system is gapped, while the system with open boundary conditions is instead gapless. While gapless boundary states are expected due to the non-trivial bulk topology, the gaplessness in this case is not topologically-robust: the gapless boundary modes interfere in finite-size systems to open a hybridization gap in general. Under certain conditions, however, the boundary modes interfere destructively, corresponding to a fine-tuned gapless state when hybridization matrix elements pass through zero. Examining the spectrum for the q(2-1)D bulk as a function of \(u\), as shown in Fig. 3 d), we see this more general pattern of finite interference gaps, with a discrete set of \(u\) corresponding to gap-closings and destructive interference between the helical boundary modes. We will show these gap-closings can correspond to topological phase transitions, and that some of these gapped regions host finite-size topological phases.

### A. Periodic system

To characterize the finite-size topological phases of this time-reversal invariant system, we now re-interpret the original model with OBC in the \(\hat{x}\)-direction as a q(2-1)D bulk, and characterize topology of this q(2-1)D bulk system similarly to characterization of a \(d\)-dimensional bulk. We therefore first compute a phase diagram for the minimum direct gap over the Brillouin zone of the q(2-1)D bulk as a function of \(u,c\) for fixed hopping \(t=1/2\), shown in Fig. 4 a), b) for \(N=6\) and \(N=7\) layers in the \(\hat{x}\)-direction, respectively. A dome forms in the phase diagram, consisting of a set of curved, stripe-like regions of finite minimum direct gap separated by lines along which the q(2-1)D minimum direct bulk gap is zero, with these lines intersecting to form a checkerboard-like pattern at larger values of \(c\). As the number of lattice sites in the \(\hat{x}\)-direction increases, the number of gap-closing lines increases while the regions of finite minimum direct gap decrease in size. This pattern is consistent with a picture of gap-closings due to interference between the helical boundary modes of the QSHI: the boundary modes in this q(2-1)D system possess a standing-wave character, and the gap-closing lines correspond to hybridization matrix elements passing through zero with tuning of system parameters. With increasing system size, this interference pattern becomes denser as the difference in wavelength between the oscillatory components of the helical boundary modes generically decreases. It is particularly interesting to compare these phase diagrams for the q(2-1)D bulk with the counterpart phase diagram of the 2D bulk shown in Fig. 3 a), b), which reveals that different kinds of topological phases of the 2D bulk (and correspondingly different gapless boundary states) yield different interference patterns as a function of \(u\) and \(c\). Notably, the checkerboard region of the phase diagram corresponds to the DSM phase region of the corresponding phase diagram for the 2D bulk, revealing that the DSM phase is generally gapped out in the q(2-1)D regime and exhibits a more complex interference pattern than does the QSHI. 
As the 2D minimum direct bulk gap remains finite over the region of the phase diagram where we observe this interference pattern between helical boundary modes of the QSHI, and the 2D minimum direct bulk gap remains closed due to topologically-protected band-touchings of the DSM, topological invariants of the 2D bulk do not change within these regions. However, as subsets of each of these regions possess a finite minimum direct gap in the q(2-1)D spectrum, it is possible to further characterize the topology of the q(2-1)D system if suitable topological invariant(s) are identified. To further characterize finite-size topology of this quasi-1D TRI system with Wilson loop spectra, we compute the Wilson loop eigenvalues [42], which distinguish between topologically-distinct phases of matter as they characterize holonomy in a system due to parallel transport through non-contractible loops in the BZ [43, 44, 45]. The Wilson loop spectra for the q(2-1)D system are computed by integrating the Berry connection over the remaining \(k\equiv k_{y}\) momentum coordinate, using the following expression [42]: \[\mathcal{W}=\mathcal{P}e^{-\int_{-\pi}^{\pi}dkA(k)}, \tag{3}\] where \(A(k)\) is the non-Abelian Berry connection over the occupied bands and \(\mathcal{P}\) is the path ordering operator. Since we compute the Wilson loop matrix for a tight-binding system, we discretize Eq. (3).

Figure 4: Quasi-(2-1)D minimum direct bulk gap for a) \(N=6\), b) \(N=7\) and \(t=1/2\). Plot of the number of \(\pm\pi\) phases \(N_{\pm\pi}\) (red is 2, black is 0) in the Wilson loop eigenvalues for c) \(N=6\), d) \(N=7\).

The set of Wilson loop eigenvalue phases is the Wannier charge center spectrum characterizing polarization. In a topologically non-trivial phase, Wannier charge center(s) are fixed to value(s) of \(\pm\pi\), so we compute the number of these non-trivial phases as \(N_{\pm\pi}\). The phase diagrams characterizing \(N_{\pm\pi}\) vs. \(u\) and \(c\) are shown in Fig. 4 c), d) for systems with \(N=6\) or \(N=7\) layers in the \(\hat{x}\)-direction, respectively. These \(N_{\pm\pi}\) vs. \(c\) and \(u\) phase diagrams shown in Fig. 4 c) and d) reveal alternating regions of \(N_{\pm\pi}=0\) and \(N_{\pm\pi}=2\), indicating the system undergoes a variety of topological phase transitions. We again observe stripe-like regions at smaller \(c\), which intersect to form checkerboard patterns at larger \(c\). These lines across which \(N_{\pm\pi}\) changes in value are in direct correspondence with the lines shown in Fig. 4 a) and b), respectively, along which the q(2-1)D minimum direct gap goes to zero. Taken together, these phase diagrams in Fig. 4 reveal that a topological phase transition occurs every time the q(2-1)D minimum direct bulk gap goes to zero. The phase diagrams for \(N=6\) layers differ dramatically from those for \(N=7\) layers, reflecting the dependence of this topology on finite-size effects. From the plots, one can see that, as the number of layers in the \(\hat{x}\)-direction increases, the number of topologically-distinct regions also increases, in agreement with the number of lines along which the q(2-1)D minimum direct bulk gap is zero. The topological phase diagram of the 2D bulk Hamiltonian (1) studied in Ref. [40] is therefore further divided into topologically-distinct regions in the q(2-1)D regime in a strongly \(N\)-dependent manner, revealing that topological phase transitions due to finite-size topology [15] may occur without the minimum direct gap of the 2D bulk going to zero. 
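A discretized form of Eq. (3) can be sketched as follows (building on the `h_quasi1d` helper above; we take the occupied subspace at half filling and use the standard overlap-matrix product, which is numerically gauge-invariant, in place of the path-ordered exponential):

```python
def wilson_phases(N=6, u=0.76, c=0.8, t=0.5, nk=200):
    """Wannier charge center phases from a discretized Eq. (3) (a sketch)."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    occ = []
    for k in ks:
        evals, evecs = np.linalg.eigh(h_quasi1d(k, N, u=u, c=c, t=t))
        occ.append(evecs[:, evals < 0])   # occupied (negative-energy) bands
    W = np.eye(occ[0].shape[1], dtype=complex)
    for j in range(nk):
        # Product of overlap matrices of occupied states around the closed loop.
        W = W @ (occ[j].conj().T @ occ[(j + 1) % nk])
    return np.angle(np.linalg.eigvals(W))

# N_{+/-pi}: how many Wannier centers are pinned to +/- pi (within tolerance).
phases = wilson_phases()
N_pm_pi = int(np.sum(np.pi - np.abs(phases) < 1e-2))
```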
### B. Bulk-boundary correspondence and disorder

Having characterized finite-size topology of the q(2-1)D bulk of Hamiltonian Eq. (1) with open boundary conditions in the \(\hat{x}\)-direction and periodic in \(\hat{y}\), we now explore the additional bulk-boundary correspondence of finite-size topological phases. We first study the spectral signatures of this bulk-boundary correspondence that appear for non-trivial Wilson loop spectra in accordance with the modern theory of polarization of Ref. [46]. \(N_{\pm\pi}\neq 0\) for the q1D bulk corresponds to topologically-protected, q0D bound states for open boundary conditions in the \(\hat{y}\)-direction in addition to open boundary conditions in the \(\hat{x}\)-direction. With system size in the \(\hat{y}\)-direction of \(L_{y}\), such a bulk-boundary correspondence characterized in the q1D bulk by \(N_{\pm\pi}\) is clear for \(L_{y}\gg L_{x}\), as shown in Fig. 5 a), b). In this case, one finds in-gap states close to zero energy within the q(2-1)D bulk gap of the energy spectrum. The separation in energy between these states decreases exponentially to zero as a function of \(L_{y}\), realizing a four-fold degenerate manifold of zero-energy states. Such states are not present for periodic boundary conditions in the \(\hat{y}\)-direction, further indicating they appear as a consequence of bulk-boundary correspondence for the finite-size topological phase. To further explore the extent to which these in-gap states are due to an additional bulk-boundary correspondence of finite-size topological phases, we compute the probability density for these in-gap states. We find these states are localized at the boundaries of the q1D system, as shown in Fig. 5 f). The probability density peaks at the corners of the system, as seen in Fig. 5 f), and decays to zero over fifteen to twenty unit cells for \(x\) approaching the q(2-1)D bulk. As the system is time-reversal invariant, we also compute the spin polarization for these q(2-2)D boundary modes. We find the q(2-2)D boundary modes are spin-polarized in the \(\pm\hat{z}\) direction, with a spin up/down pair localized at each end of the q(2-1)D system. We also study the robustness of the in-gap q(2-2)D boundary states against symmetry-breaking due to disorder. We introduce disorder in the q(2-1)D system for OBC as a uniform random potential with strength \(\kappa\). The disorder-averaged spectrum for \(200\) disorder realizations of the form \(\kappa_{n}s_{0}\sigma_{z}\) is shown in Fig. 5 c) as a function of spin-orbit coupling \(c\) for \(\kappa_{n}\in(-0.2u,0.2u)\).

Figure 5: a) Spectrum for OBC (red) in both directions, PBC in \(y\) (black) as a function of the staggered potential \(u\) and \(c=0.6,t=1/2,N=6\). b) Spectrum for OBC (red) in both directions, PBC in \(y\) (black) as a function of the staggered potential \(u\) and \(c=0.8,t=1/2,N=6\), with a particle-hole symmetry and sublattice symmetry breaking on-site potential \(0.1\cos(2\pi n/N)\sigma_{x}\). c) Disorder-averaged spectrum for \(N=6\) sites and \(u=1,t=1/2\) as a function of \(c\) for 200 uniformly distributed random particle-hole symmetric potentials of strength \(\kappa=0.5u\). d) Disorder-averaged spectrum with the same parameters but for 200 particle-hole symmetry-breaking disorder potentials of strength \(\kappa=0.2u\). e) 2D minimum direct bulk gap as a function of \(c\) for \(u=1\). f) Density profile of a q(2-2)D state for the same number of sites, \(c=0.8,u=1.0\) and \(L_{y}=300\).
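A schematic version of this disorder study (again our own sketch, reusing the `spin_orb` helpers above; the form of the disorder term and the averaging count follow the text, while the lattice sizes here are illustrative) could look like:

```python
def h_open_2d(N, Ly, u=1.0, c=0.8, t=0.5, kappa=0.0, rng=None):
    """Eq. (1) in real space, OBC in both x (N sites) and y (Ly sites),
    with particle-hole-symmetric on-site disorder kappa_n s_0 sigma_z."""
    if rng is None:
        rng = np.random.default_rng()
    hop_x = t * spin_orb(I2, Z) + 0.5j * spin_orb(Z, X)   # from sin(k_x) s_z sigma_x
    hop_y = t * spin_orb(I2, Z) + 0.5j * spin_orb(I2, Y)  # from sin(k_y) sigma_y
    H = np.zeros((4 * N * Ly, 4 * N * Ly), dtype=complex)
    blk = lambda nx, ny: slice(4 * (nx * Ly + ny), 4 * (nx * Ly + ny) + 4)
    for nx in range(N):
        for ny in range(Ly):
            kap = kappa * rng.uniform(-1, 1)   # kappa_n drawn uniformly in (-kappa, kappa)
            H[blk(nx, ny), blk(nx, ny)] = (u + kap) * spin_orb(I2, Z) + c * spin_orb(X, Y)
            if nx + 1 < N:
                H[blk(nx + 1, ny), blk(nx, ny)] = hop_x
                H[blk(nx, ny), blk(nx + 1, ny)] = hop_x.conj().T
            if ny + 1 < Ly:
                H[blk(nx, ny + 1), blk(nx, ny)] = hop_y
                H[blk(nx, ny), blk(nx, ny + 1)] = hop_y.conj().T
    return H

# Disorder-averaged spectrum in the spirit of Fig. 5 c) (reduce sizes for quick tests).
spectra = [np.linalg.eigvalsh(h_open_2d(6, 60, kappa=0.2)) for _ in range(200)]
avg_spectrum = np.mean(spectra, axis=0)
```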
The q(2-2)D states are present over a wide range in \(c\). Surprisingly, they survive even for disorder strengths \(\kappa_{n}\) greater than the q(2-1)D minimum direct bulk gap for the QSHI phase. The edge states may move away from zero energy if the perturbation breaks particle-hole symmetry; this effect is shown in Fig. 5 b) as a function of \(u\), where an onsite perturbation \(0.1\cos(2\pi n/N)s_{0}\sigma_{x}\) is present. For disorder effects, we considered a term of the form \(V_{i}s_{0}\sigma_{0}\); the resulting spectrum is shown in Fig. 5 d), where the edge modes deviate from zero energy, as in the case of an SSH chain with next-nearest neighbors studied in Ref. [47]. However, the q(2-2)D boundary modes persist and remain strongly localized, with their pair degeneracy preserved. Interestingly, the topological invariant remains fixed at non-trivial values even when a particle-hole breaking term is present in the Hamiltonian. This reflects the dependence of this non-trivial topology on the presence of the topologically-protected boundary states of the higher-dimensional phase, which require only time-reversal symmetry to remain robust up to the point where the minimum direct 2D bulk gap goes to zero. We further find \(N_{\pm\pi}\) still predicts the existence of edge states in the nontrivial region. The validity of the invariant and bulk-boundary correspondence in the absence of particle-hole symmetry will be further verified for the case of the \(1T^{\prime}\)-WTe\({}_{2}\) wire. The phase is therefore protected by time-reversal symmetry alone, while particle-hole symmetry is required only to anchor the in-gap states to zero energy. This analysis parallels the one in Ref. [47], where, analogously, the invariant and phase are protected by inversion symmetry alone, while both inversion and particle-hole symmetry secure the zero-energy value. For particle-hole-symmetric disorder, Fig. 5 c), the tolerated perturbation strength is set not by the q(2-1)D gap but rather by the 2D bulk gap of the system, which, for \(c=0.8,u=1.0\), is \(\Delta\approx 0.5\), as seen in Fig. 5 e). If the perturbation breaks time-reversal symmetry, however, the Kramers degeneracy of the q(2-2)D bound states is broken, as expected. We conclude from this analysis that the q(2-1)D wire with spinful time-reversal symmetry, realized for a system with a 2D bulk and open boundary conditions in one direction, exhibits finite-size topological phases for subsets of the topologically non-trivial regions of the 2D bulk topological phase diagram. In these subsets, bounded by lines along which the minimum direct q(2-1)D bulk gap, rather than the minimum direct 2D bulk gap, goes to zero, the wire harbors topologically-protected q(2-2)D boundary modes, appearing in Kramers pairs, for open boundary conditions in two directions. In the quasi-1D bulk, these subsets correspond to Wilson loop spectra with some Wilson loop eigenvalue phases fixed to \(\pm\pi\), protected by spinful time-reversal symmetry. These signatures of non-trivial topology are therefore associated with time-reversal invariant, q(2-1)D finite-size topological phases due to interference between topologically-protected gapless boundary modes resulting from 2D bulk topology, either of the quantum spin Hall insulator or the 2D Dirac semimetal. In contrast, the ten-fold way classification scheme of topological phases of matter determines that a 1D system in class AII [48] has trivial topological classification. 
These results are therefore evidence of topologically non-trivial phases of matter outside of the ten-fold way classification scheme.

## III \(1T^{\prime}\)-WTe\({}_{2}\) wire

Although the previous model is based on the celebrated HgTe quantum wells, which allowed for the discovery of the first QSHI, it may not be the most suitable for the experimental discovery of finite-size topology. The need for a small sample size in one direction may be easier to achieve for monolayers such as the \(1T^{\prime}\)-WTe\({}_{2}\) QSHI [49]. We thus consider the model studied in Ref. [50], which combines density-functional theory calculations, symmetry considerations and fitting to experimental data. Since the finite-size topology comes from the hybridization of the edge states, we consider only the lattice termination which results in a Dirac crossing near the Fermi energy. That is, we study the sawtooth \(y\) ribbon with open boundary conditions in the \(x\) direction. The \(1T^{\prime}\)-WTe\({}_{2}\) Hamiltonian is presented in the Supplementary Material. Under such circumstances we expect the Dirac crossing to gap out, in general on larger and larger energy scales as the system size in the \(x\) direction, \(N_{x}\), decreases. Such a gap opening does occur for the original material parameters for even \(N_{x}\) values ranging from \(4\) to \(20\). It is worth noting that the gap still exists even for larger \(N_{x}\), but its magnitude becomes very small compared to other material energy scales. Such a gapped system may then host topological q(2-2)D edge states if the Wilson loop spectrum is nontrivial. Specifically, we find that for \(N_{x}=8,10,12\) the gap has a \(\pm\pi\) phase in the Wilson loop spectrum. Even in the case where the spectrum is trivial, for example at \(N_{x}=6\), we may still tune the system so that a gap closing occurs and a nontrivial Wilson loop phase appears. To do so, we may apply an electric field perpendicular to the monolayer, corresponding to the addition of a symmetry-allowed Rashba spin-orbit coupling term to the Hamiltonian. Since the original model already includes such couplings, we further include a change of the in-plane SOC parameters by a quantity \(\Delta\lambda_{\text{SOC}}\). For the case of \(N_{x}=6\), we obtain a phase diagram for the number of Wilson loop eigenvalues with phase \(\pm\pi\), \(N_{\pm\pi}\), vs. spin-orbit coupling anisotropy, \(\Delta\lambda_{SOC}\), showing a change in \(N_{\pm\pi}\) from \(0\) to \(1\) with increasing \(\Delta\lambda_{SOC}\), as shown in Fig. 6 a). Based on our previous results for a canonical toy model of the QSHI, we expect q(2-2)D edge states to appear if we open boundary conditions in the \(\hat{y}\)-direction as well. Such edge states due to an additional bulk-boundary correspondence characterized by \(N_{\pm\pi}\) of the q(2-1)D bulk do exist, as shown in Fig. 6 b), corresponding to a line of four-fold degenerate, in-gap states as a function of \(\Delta\lambda_{SOC}\). The four-fold degeneracy corresponds to a two-fold Kramers degeneracy due to spinful time-reversal symmetry, and a two-fold degeneracy corresponding to boundary modes localized at the left and right edges, as in the previous simple model of Eq. (1). For a fixed Rashba spin-orbit coupling change of \(\Delta\lambda_{\text{SOC}}=0.35\,\text{eV}\), we find that the edge states localize on each end of the wire, as shown in Fig. 6 c).

## IV 3D TI slab

We now study the finite-size topology of the quintessential 3D TI in symmetry class AII [39; 48]. 
The classification in this case is dictated by four \(\mathbb{Z}_{2}\) topological invariants \((\nu_{0};\nu_{1},\nu_{2},\nu_{3})\) [35; 36], where the last three invariants are related to the translational invariance of the system, classifying the weak TI phase (WTI), while the \(\nu_{0}\) parameter classifies the strong TI phase (STI). In the following, we study the STI phase with trivial lower-dimensional invariants, corresponding to \((1;000)\), which is realized in the model Bloch Hamiltonian [51]: \[H(\mathbf{k})=-2\lambda\sum_{\mu}\sin k_{\mu}\sigma_{z}s_{\mu}+ \sigma_{x}s_{0}(M-t\sum_{\mu}\cos k_{\mu}), \tag{4}\] where the Pauli matrices \(\{\sigma_{i}\}\), \(\{s_{j}\}\) act on orbital and spin degrees of freedom respectively, \(\lambda\) is a spin-orbit coupling parameter that breaks spin conservation, \(M\) is an onsite staggered potential, and \(t\) is a nearest-neighbor hopping integral. In the following, we take all energies to be in units of \(t\) by setting \(t=1\), and consider the regime in which \(1<M<3\) and \(\lambda\) is positive, for which the model realizes the desired strong TI phase. First, we specialize to the case of a thin slab in the \(\hat{x}\)-direction of \(N\) layers. In that case, we expect, in analogy to the QSHI, that the Dirac cones from the upper and lower surfaces interfere due to the thin bulk and hybridize to open a gap even for open boundary conditions. The hybridization gap, just as in the previous 2D case, is expected to sometimes protect a non-trivial finite-size topological phase. To study this, we once again characterize topology using the Wilson loop spectrum, now as a function of \(k_{y}\) or \(k_{z}\) with OBC in \(x\), to characterize topology first of a q(3-1)D bulk, in regions of the phase diagram where the 3D bulk corresponds to the strong TI. As the isotropy of the Hamiltonian Eq. (4) suggests, there is no difference between the Wilson loop eigenvalues of \(W(k_{y})\) and \(W(k_{z})\); thus we consider only \(W(k_{z})\), where each Wilson loop matrix is now defined as: \[\mathcal{W}(k_{y}) =\mathcal{P}e^{-\int_{-\pi}^{\pi}dk_{z}A_{z}(k_{y},k_{z})}, \tag{5}\] \[\mathcal{W}(k_{z}) =\mathcal{P}e^{-\int_{-\pi}^{\pi}dk_{y}A_{y}(k_{y},k_{z})}. \tag{6}\] A typical Wannier charge center spectrum vs. the remaining momentum component, \(k_{z}\), is plotted in Fig. 7, which shows the spectral flow, panel a), characteristic of a TR-invariant topological insulator for some parameter values, and a trivial spectrum, panel b), for others within the \((1;000)\) 3D bulk phase classification. The non-trivial spectral flow only appears for certain parameter regimes: these regions can be distinguished in a systematic way by counting the number of fixed \(\pm\pi\) phases in the Wilson loop spectrum, as in the previous case. These regions of parameter space which have an energy gap are again the "bubbles" observed in the QSHI case. We plot the phase diagram as a function of the model parameters in Fig. 8 a), b) for \(N=5,6\) layers, respectively. The phase diagram changes dramatically for each value of \(N\), the number of layers, indicative of the finite-size topology [15]. The pattern of trivial and nontrivial regions is entirely contained within the non-trivial region of the 3D bulk topological phase diagram for Hamiltonian (4), indicating the 3D minimum direct bulk gap remains finite during these topological phase transitions of the q(3-1)D bulk. 
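For concreteness, a sketch of the q(3-1)D slab construction follows (our own Fourier transcription of Eq. (4) into inter-layer hopping blocks, using the `spin_orb` convention of the earlier QSHI sketches; the sign convention of the hopping block is ours and amounts to a choice of \(k_{x}\to-k_{x}\)). The Wilson loop \(\mathcal{W}(k_{z})\) of Eq. (6) then reuses the overlap-product construction from the quasi-1D case:

```python
def h_slab(ky, kz, N, M=1.8, lam=0.1, t=1.0):
    """4N x 4N Hamiltonian from Eq. (4): OBC in x (N layers), PBC in y and z."""
    onsite = (-2 * lam * (np.sin(ky) * spin_orb(Y, Z) + np.sin(kz) * spin_orb(Z, Z))
              + (M - t * (np.cos(ky) + np.cos(kz))) * spin_orb(I2, X))
    # Inter-layer block from -2*lam*sin(k_x) sigma_z s_x - t*cos(k_x) sigma_x s_0.
    hop = 1j * lam * spin_orb(X, Z) - 0.5 * t * spin_orb(I2, X)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    for n in range(N):
        H[4*n:4*n+4, 4*n:4*n+4] = onsite
        if n + 1 < N:
            H[4*(n+1):4*(n+1)+4, 4*n:4*n+4] = hop
            H[4*n:4*n+4, 4*(n+1):4*(n+1)+4] = hop.conj().T
    return H

def wilson_phases_slab(kz, N=6, M=2.0, lam=0.2, nk=120):
    """W(k_z) of Eq. (6): sweep k_y at fixed k_z, multiply occupied-band overlaps."""
    kys = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    occ = []
    for ky in kys:
        evals, evecs = np.linalg.eigh(h_slab(ky, kz, N, M=M, lam=lam))
        occ.append(evecs[:, evals < 0])
    W = np.eye(occ[0].shape[1], dtype=complex)
    for j in range(nk):
        W = W @ (occ[j].conj().T @ occ[(j + 1) % nk])
    return np.angle(np.linalg.eigvals(W))
```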
We remark that the sudden changes in color near \(\lambda=0\) seem to be just an artifact of the numerical precision when approaching the STI gap-closing in the 3D bulk. One of the consequences of the system being in a topologically non-trivial regime, according to the Wilson loop spectra of the q(3-1)D bulk, is an additional bulk-boundary correspondence: opening boundary conditions in a second direction in these regions of phase space, we find topologically-protected q(3-2)D states that are now localized at the edges of the slab, as shown in Fig. 8 c). These q(3-2)D states appear within the q(3-1)D bulk gap similarly to the case of q(2-2)D boundary states appearing in the q(2-1)D bulk gap of the QSHI. A topological phase diagram for the q(3-1)D system with open boundary conditions in the \(\hat{y}\)-direction as a function of \(M\) is also shown in Fig. 8 d), demonstrating the direct correspondence of the topologically-protected boundary modes with the topologically non-trivial regions of the q(3-1)D bulk in Fig. 8 a).

Figure 6: a) Energy spectrum (eV) as a function of the change in Rashba spin-orbit coupling \(\Delta\lambda_{\text{SOC}}\) for a \(N_{x}=6\) saw-tooth terminated \(1T^{\prime}\)-WTe\({}_{2}\) wire. b) Energy spectrum (eV) for the system with \(N_{x}=12\) as a function of \(\Delta\lambda_{\text{SOC}}\) showing edge states at zero field. c) Number of Wilson loop spectrum \(\pm\pi\) phases for \(N_{x}=6\) as a function of \(\Delta\lambda_{\text{SOC}}\). d) Wave function probability density for one edge state at \(\Delta\lambda_{\text{SOC}}=0.35\,\text{eV}\) and \(N_{x}=6,N_{y}=200\) for the same saw-tooth terminated \(1T^{\prime}\)-WTe\({}_{2}\) wire.

This indicates that \(N_{\pm\pi}\), the number of \(\pm\pi\) phases in the Wilson loop eigenvalue spectrum, characterizes finite-size topological phases resulting from interference of the topologically-protected Dirac cones of the 3D TI, in addition to characterizing finite-size topological phases due to interference between the helical boundary modes of the QSHI. While the finite-size topological phase in the q(3-1)D system exhibits helical boundary modes analogous to those of the QSHI, this topological phase is not just a QSHI. Notably, the 3D minimum direct gap remains finite in this region of the phase diagram where non-trivial finite-size topological phases occur, so the 3D bulk is still in the topological phase \((1;000)\). The finite-size topological phase therefore exhibits signatures associated with a non-trivial, intrinsically 3D topological invariant. This is demonstrated by adding perturbations to the system to probe its magneto-electric polarizability, which depends on the intrinsically 3D topological invariant, a connection identified in previous work by Essin _et al._ [52]. We then compare the result to a q(3-1)D stack of 2D QSHIs in the \(x\) direction with the same perturbation, to demonstrate that the finite-size topological phase of the q(3-1)D system is distinct from a QSHI. The type of perturbations we consider to determine the magnetoelectric polarizability in these two cases are TRS-breaking terms in the form of a weak Zeeman field in only the uppermost and lowest layers, i.e., \(V=\mathbf{\kappa}\cdot\mathbf{s}\) with \(|\mathbf{\kappa}|\approx 0.1\). This could correspond to ferromagnetically-ordered magnetic dopants in just these layers. This situation is illustrated schematically in Fig. 9 a). 
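A minimal sketch of this surface-only Zeeman probe (our illustration, applied to the `h_slab` builder above at fixed \((k_{y},k_{z})\); only the outermost layers \(n=0\) and \(n=N-1\) are perturbed):

```python
def add_surface_zeeman(H, N, kvec):
    """Add V = kappa . s to the top and bottom layers of a layered slab Hamiltonian."""
    V = (kvec[0] * spin_orb(X, I2) + kvec[1] * spin_orb(Y, I2)
         + kvec[2] * spin_orb(Z, I2))
    H = H.copy()
    H[:4, :4] += V                        # lowest layer, n = 0
    H[4*(N-1):4*N, 4*(N-1):4*N] += V      # uppermost layer, n = N - 1
    return H

# kappa_parallel lies in the yz-plane; kappa_perp points along x. |kappa| ~ 0.1.
H_par  = add_surface_zeeman(h_slab(0.3, 0.0, N=5), 5, (0.0, 0.1, 0.0))
H_perp = add_surface_zeeman(h_slab(0.3, 0.0, N=5), 5, (0.1, 0.0, 0.0))
```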
We first consider a Zeeman field oriented in the \(yz\)-plane and labeled \(\mathbf{\kappa}_{\parallel}\) as shown in Fig. 9 a), for both the \((1;000)\) system (q(3-1)D STI) and the \((0;001)\) system (q2D WTI, or stack of QSHIs). In this case, these systems react similarly, their spectra gapping out as shown in Fig. 9 b). This is expected, as the perturbation breaks TR symmetry. However, the responses of the two systems are strikingly distinct if we instead consider an applied Zeeman field oriented along the \(\hat{x}\)-axis, or field \(\mathbf{\kappa}_{\perp}\) as shown in Fig. 9 a). In the case of the \((1;000)\) system (q(3-1)D STI), two of the q1D boundary states gap out, leaving two gapless boundary modes remaining. They constitute chiral boundary modes of a QHE on each surface as shown in Fig. 9 c), corresponding to a layer-dependent Hall conductivity \(\sigma_{xy}\) as shown in Fig. 9 d). This is a known manifestation of the quantized magnetoelectric polarizability of the STI, resulting from the non-trivial value of the strong invariant; these results can be directly compared with past work for systems with a 3D bulk rather than a q(3-1)D bulk [52]. Notably, the layer-dependent Hall conductivity exhibits this non-trivial response only for topologically non-trivial finite-size topological phases as characterized by \(N_{\pm\pi}\). This agrees with the response theory of previously-studied finite-size topological phases, and reflects the fact that only occupied states of the non-trivial bubbles carry Berry phase contributions to the underlying 3D topological invariant. In contrast, there is no topological magnetoelectric effect and no QHE in the surface layers of the QSHI stack. Instead, the states maintain their degeneracy and helicity for small fields, which is consistent with the field being parallel to the edge surfaces hosting the helical boundary states of the QSHI layers. Therefore, although the spectrum of the q(3-1)D STI slab appears to be similar to the spectrum of a QSHI stack, its response to TRS-breaking perturbations reveals that it is a finite-size topological phase arising from the \((1;000)\) topology of the 3D bulk.

## V 3D TI wire

We finally consider a q(3-2)D wire geometry for the STI, demonstrating that finite-size topology arises from interference between topologically-protected boundary states of finite-size topological phases, as well as interference between the topologically-protected boundary states of topological phases in the ten-fold way.

Figure 7: a) Wilson loop nontrivial eigenvalue spectrum as a function of \(k_{z}\) for OBC in \(x\) and PBC in \(y,z\). The parameters are \(M=2.0,\lambda=0.2\) and \(N=6\) layers. b) Wilson loop trivial eigenvalue spectrum as a function of \(k_{z}\) for OBC in \(x\) and PBC in \(y,z\). The parameters are \(M=1.6\), \(\lambda=0.1\) and \(N=6\) layers.

Figure 8: Phase diagram from the Wilson loop spectrum for a slab with a) \(N=5\), b) \(N=6\); black: 0, yellow: 2 \(\pm\pi\) phases. c) Quasi-1D dispersion for OBC in \(x\), \(N=6\), PBC in \(z\) and OBC (PBC) in \(y\) as a function of \(k_{z}\). The parameters are \(M=1.8,\lambda=0.1\). d) Spectrum for OBC in \(x,y\) as a function of \(M\), for discretized values of \(k_{z}\) with \(N=5\) and \(100\) sites in the remaining directions, \(\lambda=0.1\).

We therefore consider the same Hamiltonian Eq. 
(4) as considered in the previous section, but now with open boundary conditions in the \(\hat{x}\)- and \(\hat{y}\)-directions and periodic boundary conditions in the \(\hat{z}\)-direction, with the number of lattice sites in the \(\hat{x}\)- and \(\hat{y}\)-directions, \(N_{x}\) and \(N_{y}\), respectively, each much less than \(N_{z}\), the number of lattice sites in the \(\hat{z}\)-direction (so \(N_{x},N_{y}\ll N_{z}\)). According to the ten-fold way classification scheme for topological phases of matter, this is an effectively 1D system in class AII, which is topologically-trivial [38, 39, 48]. Here, we show finite-size topological phases are nonetheless possible in this system, first characterizing finite-size topology of the q1D bulk through analysis of Wilson loop spectra, and then by demonstrating an additional bulk-boundary correspondence yielding q(3-3)D topologically-protected boundary modes in the q1D wire, ultimately resulting from interference between the topologically-protected Dirac cone surface states of the STI. For the q(3-2)D bulk of the STI with periodic boundary conditions only in the \(\hat{z}\)-direction, we compute Wilson loop spectra by integrating over this one good momentum component, similarly to the Wilson loop spectra calculations for the q(2-1)D bulk of the QSHI in Section II. The topological phase diagrams of the STI q(3-2)D bulk are then determined by computing the number of eigenvalues in the Wilson loop spectrum with \(\pm\pi\) phases, or \(N_{\pm\pi}\). Although in the previous cases the spectrum only had two \(\pm\pi\) phases or none, corresponding to nontrivial and trivial regions respectively, we find for the 3D TI wire that some regions of the phase diagram have four \(\pm\pi\)-phase eigenvalues, as shown in red in Fig. 10 a) for \(N_{x}=N_{y}=4\). In the case of \(N_{x}=N_{y}=6\), the phase diagram changes again and now there are not only two and four \(\pm\pi\) phases but also six \(\pm\pi\) phases, as shown in blue in Fig. 10 e). The change of invariant is consistent with a gap closing of the q(3-2)D bulk, as shown in Fig. 10 b), f), where the gap closings coincide with the change in the number of \(\pm\pi\) phases in the Wilson loop spectrum.

Figure 11: a) Spectrum for OBC in \(x,y,z\) as a function of \(M\), with \(N_{x}=N_{y}=4\) and \(N_{z}=50\) sites in the remaining direction, with \(\lambda=0.1\). b) Same plot but for \(N_{x}=4,N_{y}=5\) and \(N_{z}=100\).

Figure 9: a) Diagram of perturbations and edge states (pink) on a q(3-1)D STI slab, with parallel field (purple) and perpendicular field (black); we consider the perturbations for OBC in \(x,y\) and PBC in \(z\). b) Spectrum of STI slab with OBC in \(x\) and \(y\), \(N_{x}=5\), \(N_{y}=100\). A constant Zeeman field of strength \(\kappa=0.1\), parallel to the slab, is present in the top and bottom layers. The parameters are \(M=1.8,\lambda=0.1\). c) Same parameters as in b) but now with a perpendicular field; the system exhibits a QHE. d) Layer conductivity (Chern number) for \(N_{x}=6\), \(M=2.0\), \(\lambda=0.2\).

As in previous cases, we find the topological phase diagrams for the q(3-2)D bulk of the STI depend strongly on system size. In the case of \(N_{x}=4,N_{y}=5\), we find that the region with four \(\pm\pi\) phases occurs for smaller parameter regions and is furthermore shifted in \(M\) and skewed relative to the corresponding regions in the \(N_{x}=N_{y}=4\) case, reflecting the difference between the \(x\) and \(y\) directions in phase space. 
The skewness is also more prominent as we increase the number of sites, as seen for \(N_{x}=N_{y}=6\) in Fig. 10 e). In this case, even though the topologically non-trivial regions diminish in size, they are more strongly skewed. The deformation is such that, starting in a trivial region according to \(N_{\pm\pi}\) near \(M=1.75\), with almost zero \(\lambda\), we can drive the system into a topologically non-trivial regime as determined by \(N_{\pm\pi}\) by increasing the spin-orbit coupling to \(\lambda\approx 0.1\). A comparison of the phase diagrams in Fig. 10 indicates that, as the number of sites increases, the regions present originally in \(N_{x}=N_{y}=4\) remain for \(N_{x}=N_{y}=6\), although now shifted to greater \(M\) and reduced in size over phase space. We notice also that the new regions appear from the left. The topological phase diagram and corresponding q(3-2)D minimum direct bulk gap phase diagram for \(N_{x}=N_{y}=6\) are shown in Fig. 10 e), f), respectively. While there are similarities between results for this system size and the smaller ones, there is a topological region for which six Wilson loop eigenvalues have phases fixed to \(\pm\pi\). The regions of greater \(N_{\pm\pi}\) appear to be subsets of regions with lesser \(N_{\pm\pi}\): the blue region is contained within a red region, and red regions are contained within yellow regions. The states again localize at the boundaries of the wire and occur in Kramers pairs. These results indicate the number of \(\pm\pi\) phases in the Wilson loop spectrum, \(N_{\pm\pi}\), corresponds to half the number of zero-energy edge states \(N(E=0)\) in the q(3-2)D STI system with OBC in all directions. We can verify this relation appears to hold for the q(2-1)D QSHI wire and the q(3-1)D STI slab as well. As \(N_{\pm\pi}>2\) occurs for the q1D STI wire, this suggests an integer classification \(2\mathbb{Z}\) for the q(3-2)D STI finite-size topological phases, to be explored in greater detail in future work. Based on the topological phase diagrams for the q(3-2)D bulk, we now check for a finite-size topological bulk-boundary correspondence in this geometry by opening boundary conditions in the \(\hat{z}\)-direction, searching for topologically-protected boundary modes localized at the ends of the q(3-2)D wire. In analogy to the q(2-1)D QSHI wire, we study systems with a non-trivial number of \(\pm\pi\) eigenvalues in the Wilson loop spectrum, Fig. 11 a), b). The system with \(N_{\pm\pi}=4\) phases now has eight edge states within the q(3-2)D bulk gap. These states occur in Kramers pairs, with each state in a given Kramers pair localized at the same edge. There are, however, differences between these states observable in the probability density distributions. We show the probability densities for four of the in-gap states as functions of the layer index in the \(\hat{z}\)- and \(\hat{y}\)-directions in Figs. 12 a) and b), respectively. The corresponding probability densities as a function of layer in the \(\hat{z}\)-direction and \(\hat{y}\)-direction for the other four in-gap states are shown in Figs. 12 c) and d), respectively. We see that the second set of four are distinguished from the first four by their localization: the second set of four are pushed inwards from the edge in both the \(\hat{z}\)- and \(\hat{y}\)-directions relative to the first four in-gap states. 
We find similar physics for \(N_{x}=N_{y}=6\) in the \(N_{\pm\pi}=6\) phase: there are 12 q(3-3)D edge states at zero energy within the q(3-2)D bulk gap. This change in localization suggests that there is a distinction between edge states, which may give way to distinct phases not distinguished by the parity of the number of edge states alone.

## VI Concluding remarks

In this work, we have studied finite-size topology in time-reversal invariant systems, emerging from the hybridization of helical boundary modes in QSHIs and of Dirac cones in the strong TI. In the case of the QSHI, we find the helical boundary modes generically interfere to realize regions in phase space where the q(2-1)D bulk spectrum (periodic boundary conditions in one direction and open boundary conditions in the other) is gapped. These regions are separated from one another by critical points at which the q(2-1)D minimum direct bulk gap is zero. We characterize the topology of these gapped regions by computing Wilson loop spectra, finding topologically non-trivial gapped phases of the q(2-1)D bulk corresponding to a non-trivial number of Wilson loop eigenvalues with phase fixed to \(\pm\pi\). For open boundary conditions in each direction and a q(2-1)D wire geometry, these Wilson loop eigenvalues with phase \(\pm\pi\) correspond to topologically-protected q(2-2)D boundary modes localized at the ends of the wire. These q(2-2)D boundary modes occur in Kramers pairs and are robust against disorder respecting spinful time-reversal symmetry, maintaining a fourfold degeneracy for particle-hole symmetric disorder, and splitting into doubly-degenerate Kramers pairs for disorder breaking particle-hole symmetry.

Figure 12: a) Probability density plot for one q(3-3)D edge mode as a function of wire length \(z\) with \(M=1.25,\lambda=0.1\), \(N_{x}=N_{y}=4,N_{z}=80\). b) The same edge state as a function of site index \(y\). c) Probability density for another quasi-0D edge mode with the same model parameters as a function of wire length. d) The same as a function of site index \(y\).

In these cases, the in-gap, q(2-2)D modes are still topologically-robust in that they must correspond to time-reversal invariant charge transfer in an aperiodic Thouless pump from valence bands to conduction bands, and this connectivity between q(2-1)D bulk valence and conduction bands is observed in topological phase diagrams. We first observe this finite-size topology of the QSHI for a canonical Hamiltonian describing HgTe quantum wells, but also find the finite-size topological phase occurs in a tight-binding model for 1T'-WTe\({}_{2}\) ribbons with sawtooth edges derived from density functional theory calculations, thus potentially relevant to experiment. In the case of the strong topological insulator protected by time-reversal symmetry, we find finite-size topological phases both for q(3-1)D slab geometries and q(3-2)D wire geometries. Wilson loop spectra are used to characterize the topology of the q(3-1)D and q(3-2)D bulk: the winding of the Wilson loop eigenvalue phases characterizes the q(3-1)D topology, similarly to characterization of 2D topological phases in the bulk, while the q(3-2)D wire topology in this case is also characterized by the number of \(\pm\pi\) Wilson loop eigenvalue phases, as in the case of the q(2-1)D QSHI. 
For open boundary conditions in two directions, the q(3-1)D STI slab exhibits helical boundary modes in the finite-size topological phase, but also exhibits signatures of the magneto-electric polarizability of the STI, distinguishing this finite-size topological phase from the QSHI. In the case of the q(3-2)D wire, q(3-3)D boundary modes occur for open boundary conditions in all three directions, similarly to those of the q(2-1)D QSHI. However, results indicate that the topological classification for the q(3-2)D STI is integer rather than \(\mathbb{Z}_{2}\), with unusual localization of the quasi-0D topological boundary modes. Importantly, these results show that finite-size topology yields topologically-protected boundary modes of codimension greater than \(1\). We close by pointing out an intriguing possible extension of the work to a system with dimension \(D>3\). For example, a four-dimensional topological phase is expected in a q(4-1)D setting, as pictured schematically in Fig. 13, for a small system size in some fourth dimension. These extra non-spatial dimensions could come from physical degrees of freedom, typically considered for three-dimensional systems, such as a \(p\) orbital degree of freedom as considered in Fig. 13 a). One could now imagine an infinite 4D bulk defined by the \(\hat{x}\)-, \(\hat{y}\)-, \(\hat{z}\)-, and \(\hat{L}_{z}\) (angular momentum)-axes. For a non-trivial 4D bulk topological invariant, open boundary conditions in the \(\hat{L}_{z}\)-direction and a large system size in the \(\hat{L}_{z}\)-direction, three-dimensional topologically-protected boundary modes are realized. For the physical scenario of small system sizes in the \(\hat{L}_{z}\)-direction, as shown in Fig. 13 a), these three-dimensional boundary modes could interfere to realize a topologically nontrivial FST phase, such that additional topologically-protected boundary modes are realized when boundary conditions are additionally opened in a second real-space direction, such as the \(\hat{z}\)-direction as shown in Fig. 13 b). This raises the possibility of topologically-protected boundary states for systems in one to three dimensions reflecting higher-dimensional, \(D>3\) bulk topology.

Figure 13: a) Schematic diagram of a 4D topological phase consisting of some \(x,y,z\) real-space directions and an \(L_{z}\) orbital degree of freedom which gets thinned in the orbital direction. In this case the quasi-3D slab spectrum from the 4D bulk results from the hybridization of the original 3D boundary modes. b) The previous q(4-1)D slab possesses an additional bulk-boundary correspondence when additional open boundary conditions in the \(z\) direction are considered. This results in q(4-2)D boundary modes.
2310.14017
Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series
Contrastive representation learning is crucial in medical time series analysis as it alleviates dependency on labor-intensive, domain-specific, and scarce expert annotations. However, existing contrastive learning methods primarily focus on one single data level, which fails to fully exploit the intricate nature of medical time series. To address this issue, we present COMET, an innovative hierarchical framework that leverages data consistencies at all inherent levels in medical time series. Our meticulously designed model systematically captures data consistency from four potential levels: observation, sample, trial, and patient levels. By developing contrastive loss at multiple levels, we can learn effective representations that preserve comprehensive data consistency, maximizing information utilization in a self-supervised manner. We conduct experiments in the challenging patient-independent setting. We compare COMET against six baselines using three diverse datasets, which include ECG signals for myocardial infarction and EEG signals for Alzheimer's and Parkinson's diseases. The results demonstrate that COMET consistently outperforms all baselines, particularly in setup with 10% and 1% labeled data fractions across all datasets. These results underscore the significant impact of our framework in advancing contrastive representation learning techniques for medical time series. The source code is available at https://github.com/DL4mHealth/COMET.
Yihe Wang, Yu Han, Haishuai Wang, Xiang Zhang
2023-10-21T13:59:31Z
http://arxiv.org/abs/2310.14017v4
# Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series ###### Abstract Contrastive representation learning is crucial in medical time series analysis as it alleviates dependency on labor-intensive, domain-specific, and scarce expert annotations. However, existing contrastive learning methods primarily focus on a single data level, which fails to fully exploit the intricate nature of medical time series. To address this issue, we present COMET, an innovative hierarchical framework that leverages data consistencies at all inherent levels in medical time series. Our meticulously designed model systematically captures data consistency from four potential levels: observation, sample, trial, and patient levels. By developing contrastive loss at multiple levels, we can learn effective representations that preserve comprehensive data consistency, maximizing information utilization in a self-supervised manner. We conduct experiments in the challenging patient-independent setting. We compare COMET against six baselines using three diverse datasets, which include ECG signals for myocardial infarction and EEG signals for Alzheimer's and Parkinson's diseases. The results demonstrate that COMET consistently outperforms all baselines, particularly in setups with 10% and 1% labeled data fractions across all datasets. These results underscore the significant impact of our framework in advancing contrastive representation learning techniques for medical time series. The source code is available at [https://github.com/DL4mHealth/COMET](https://github.com/DL4mHealth/COMET). ## 1 Introduction Time series data is crucial in various real-world applications, ranging from finance [1; 2; 3] and engineering [4; 5] to healthcare [6; 7]. Unlike domains such as computer vision [8; 9] and natural language processing [10; 11], where human-recognizable features exist, time series data often lacks readily discernible patterns, making data labeling challenging. Consequently, the scarcity of labeled data poses a significant hurdle in effectively utilizing time series for analysis and classification tasks. To address the paucity of labeled data in time series analysis, self-supervised contrastive learning has emerged as a promising approach. For example, TimeCLR [12] proposes a DTW data augmentation for time series data; TS2vec [13] designs a cropping and masking mechanism to form positive pairs; ExpCLR [14] introduces a novel loss function to utilize continuous expert features. By leveraging the inherent consistency within unlabeled data, contrastive learning algorithms enable the extraction of effective representations without relying on explicit labels. This paradigm shift opens up possibilities for overcoming the data scarcity issue and enhancing the capabilities of time series analysis. Despite recent advancements in contrastive learning methods for time series, existing approaches fail to exploit the full potential of medical time series data, such as electroencephalogram (EEG) signals. Unlike conventional time series data, medical time series often exhibit more data levels (Figure 1), including patient, trial, sample, and observation levels. Current contrastive learning techniques employ only subsets of these levels (as illustrated in Table 1). Additionally, many of these methods are tailored to specific data types, which restricts their capacity to capture the rich complexity of medical time series.
For example, CLOCS [15] presents a contrastive learning method for ECG using sample and patient levels. Mixing-up [16] captures sample-level consistency through a mixing data augmentation scheme. TNC [17] exploits trial-level consistency by contrasting neighbor samples in the same trial as positive pairs. None of them leverages all the levels exhibited in medical time series. After reviewing existing contrastive learning methods within the time series domain, we consistently posed a pivotal question to ourselves: Can we design a straightforward yet applicable contrastive learning framework that can be adapted to all forms of medical time series data, akin to the classical model SimCLR [18] in the domain of contrastive learning? Our objective is to craft an innovative framework that utilizes all information within medical time series in the context of self-supervised contrastive learning. It enables us to harness patient and trial information to learn consistency across instances while leveraging the sample and observation levels' information to facilitate conventional instance discrimination. In this paper, we propose a hierarchical framework, COMET, that systematically leverages all four levels of medical time series, namely patient, trial, sample, and observation, to reduce the reliance on labeled data. By incorporating self-supervised contrastive learning, our method aims to bridge the gap between the limited availability of labeled data and the need for robust and generalizable models in medical time series analysis. We conduct extensive experiments with six baselines on three diverse datasets in a challenging patient-independent setting. COMET outperforms SOTA methods by 14% and 13% in F1 score with label fractions of 10% and 1%, respectively, on EEG-based Alzheimer's detection. Further, COMET outperforms SOTA methods by 0.17% and 2.66% in F1 score with label fractions of 10% and 1%, respectively, on detecting Myocardial infarction with ECG. Finally, COMET outperforms SOTA methods by 2% and 8% in F1 score with label fractions of 10% and 1%, respectively, in the EEG-based diagnosis of Parkinson's disease. The results of downstream tasks demonstrate the effectiveness and stability of our method. ## 2 Related Work **Medical time series.** Medical time series [19; 20; 21; 22] is a distinct type of time series data used for healthcare (_e.g._, disease diagnosis, monitoring, and rehabilitation). It can be collected in a low-cost, non-invasive manner [23] or a high-cost, invasive manner [24]. Unlike general time series, which typically consist of sample and observation levels, medical time series introduces two additional data levels: patient and trial. These extra data levels in medical time series enable the development of specialized methods tailored to address the unique characteristics and requirements of medical time series analysis. Various types of medical time series, including EEG [28; 29; 30], ECG [31; 32], EMG [33], and EOG [34], offer valuable insights into specific medical conditions and play a crucial role in advancing healthcare practices.
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Models** & **Patient** & **Trial** & **Sample** & **Observation** \\ \hline **SimCLR**[25] & & & ✓ & \\ **TF-C**[26] & & & ✓ & \\ **Mixing-up**[16] & & & ✓ & \\ **TNC**[17] & & ✓ & & \\ **TS2vec**[13] & & & ✓ & ✓ \\ **TS-TCC**[27] & & & ✓ & ✓ \\ **CLOCS**[15] & ✓ & & ✓ & \\ **COMET (Ours)** & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Existing methods only utilize partial levels. Figure 1: Structure of medical time series. Medical time series commonly have four levels (coarse to fine): patient, trial, sample, and observation. An observation is a single value in univariate time series and a vector in multivariate time series. **Contrastive learning for time series.** Contrastive learning has demonstrated its ability to learn effective representations in various domains, including image processing [35; 18; 36], graph analysis [37; 38], and time series [39]. The key idea behind contrastive representation learning is to mine data consistency by bringing similar data closer together and pushing dissimilar data further apart. Many existing works on contrastive learning focus on general time series, while some are designed specifically for medical data [40]. One classic framework, SimCLR, transforms a single sample into two augmented views and performs contrastive learning [25]. Other settings, such as using sub-series and overlapping-series, leverage sample-level consistency [41]. TF-C [26] contrasts representations learned from the time and frequency domains to exploit sample-level consistency. Mixing-up [16] learns sample-level consistency by utilizing a mixing component as a data augmentation scheme. TS-TCC and TS2vec [27; 13] apply data augmentation at the sample level and perform contrastive learning at the observation level. TNC [17] learns trial-level consistency by contrasting neighboring and non-neighboring samples. NCL [42] can also be used to learn trial-level consistency if we define samples from a trial as a neighborhood. CLOCS [15] learns patient-level consistency in cardiac signal features by contrasting different leads over time. Certain prior methods have implicitly utilized hierarchical structure [13; 15]. However, as shown in Table 1, none of these methods leverage all the levels present in the medical time series, potentially resulting in the loss of useful information during training. We explicitly present a hierarchical framework in the context of contrastive learning, which can be applied across diverse types of medical time series data. In our paper, we aim to leverage data consistency at all levels in medical time series. Our work plays a role in summarizing, inspiring, and guiding future works in self-supervised contrastive learning on medical time series. ## 3 Preliminaries and Problem Formulation ### Medical Time Series In this section, we clarify the key concepts of **observation** (or measurement), **sample** (or segment), **trial** (or recording), and **patient** (or subject) in the context of medical time series (Figure 1). For better understanding, we illustrate the concepts with an example of Electroencephalography (EEG) signals for Alzheimer's Disease diagnosis (details in Appendix A). **Definition 1: Observation.**_An observation \(\mathbf{x}_{i,t}\in\mathbb{R}^{F}\) in medical time series data represents a single data point or a vector captured at a specific timestamp \(t\)._ Here we use \(i\) to denote the sample index (see **Definition 2**) and \(t\) to denote the timestamp.
It may record physiological status, laboratory test results, vital signs, or other measurable health indicators. An observation is a single real value for a univariate time series and a vector for a multivariate time series, in which case \(F\) is the feature dimension. **Definition 2: Sample.**_A sample \(\mathbf{x}_{i}=\{\mathbf{x}_{i,t}|t=1,\cdots,T\}\) is a sequence of consecutive observations_, typically measured at regular intervals over a specified period (\(T\) timestamps). It can also be called a _segment_ or _window_. Here we use \(i\) to denote the sample index. In medical time series, a sample might consist of a sequence of heart rate measurements or blood pressure readings. **Definition 3: Trial.**_A trial \(\mathbf{r}_{i}\) is a collection of consecutive samples._ It can also be called a _record_. Here we use \(i\) to denote the trial ID. In medical time series, a trial is a continuous set of observations collected over an extended period (_e.g._, 30 minutes). Therefore, a trial is generally too long (_e.g._, hundreds of thousands of observations) to feed into deep learning models for representation learning directly and is usually split into shorter subsequences (_i.e._, samples/segments). To represent the aggregate of samples stemming from a particular trial \(\mathbf{r}_{i}\) with trial ID \(i\), we employ the notation \(\mathcal{R}_{i}\). **Definition 4: Patient.**_A patient \(\mathbf{p}_{i}\) represents a collection of multiple trials stemming from a single patient._ It can also be called a _subject_. Here we use \(i\) to denote the patient ID. It is important to note that trials for a given patient may exhibit variations due to differing data collection timeframes, sensor placements, patient conditions, and other contributing factors. As shown in **Definition 3**, a trial is typically divided into many samples for better representation learning. In practical scenarios, a patient, which constitutes a cluster of trials, is also divided into samples that may share identical or distinct trial IDs but maintain the same patient ID. To represent the aggregate of samples stemming from a particular patient \(\mathbf{p}_{i}\) with the corresponding patient ID \(i\), we employ the notation \(\mathcal{P}_{i}\). In this work, we propose a novel hierarchical contrastive framework to learn representative and generalizable embeddings by comprehensively exploring instance discrimination at observation and sample levels and harnessing consistency across instances at trial and patient levels. Although we elaborate on the proposed COMET in the context of medical time series, we note our model can possibly be extended to other time series beyond healthcare as long as extra information is available. For example, a climate dataset contains multiple meteorological satellites, each satellite contains multiple measuring units, each unit contains multiple sensors, and every sensor measures a specific observation at a certain timestamp. The key is to utilize all available information other than labels, such as patient IDs, for contrastive pre-training. To adapt our approach to other domains, researchers must consider a crucial question: Does the dataset have additional information beyond sample labels? If so, can this information be harnessed for contrastive learning? The satellite sensor example underscores the potential existence of supplementary information even in non-medical domains.
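To make Definitions 1-4 concrete, the sketch below lays a dataset out as parallel arrays, borrowing the AD dataset's dimensions from Section 5; the random ID assignments are illustrative placeholders, not the real AD metadata.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, F_dim = 5967, 256, 16                    # AD dataset sizes from Section 5
X = rng.standard_normal((n, T, F_dim))         # samples x_i; row t is observation x_{i,t}
trial_ids = rng.integers(0, 663, size=n)       # trial membership (Definition 3), random here
patient_ids = rng.integers(0, 23, size=n)      # patient membership (Definition 4), random here

# R_i and P_i: index sets of the samples stemming from trial i / patient i.
R = {i: np.flatnonzero(trial_ids == i) for i in np.unique(trial_ids)}
P = {i: np.flatnonzero(patient_ids == i) for i in np.unique(patient_ids)}
```

Carrying \(\mathcal{R}_{i}\) and \(\mathcal{P}_{i}\) as index sets keeps the hierarchy available to the contrastive blocks without ever touching class labels.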
### Problem Formulation **Problem (Self-Supervised Representation Learning For Medical Time Series).**_Let an unlabeled dataset \(\mathcal{D}\) consist of a set of patients, where each patient \(\mathbf{p}_{i}\) has multiple trials, each trial \(\mathbf{r}_{i}\) can be segmented into many samples, and each sample \(\mathbf{x}_{i}\) comprises a series of observations. We aim to pre-train an encoder \(G\) that exploits data consistency at all available levels in a self-supervised contrastive manner. For a given time series sample \(\mathbf{x}_{i}\in\mathbb{R}^{T\times F}\) with \(T\) timestamps and \(F\) feature dimensions, the encoder \(G\) learns a sample-level representation \(\mathbf{h}_{i}\in\mathbb{R}^{T\times K}\), where \(\mathbf{h}_{i,t}\in\mathbb{R}^{K}\) is the observation-level representation at timestamp \(t\) with \(K\) dimensions._ By exploiting hierarchical consistency at multiple data levels, we aim to learn a representation \(\mathbf{h}_{i}\) that is both representative (yielding good performance in downstream tasks) and generalizable (maintaining stability across different patients). Depending on the fine-tuning settings [18], a specific fraction of labels \(y_{i}\) corresponding to samples \(\mathbf{x}_{i}\) is necessary. ## 4 Method In this section, we first present our assumption of data consistency behind designing a hierarchical contrastive framework. Then, we describe the architecture of the proposed model COMET (Figure 2). Figure 2: **Overview of COMET approach. Our COMET model consists of four contrastive blocks, each illustrating the formulation of positive pairs and negative pairs at different data levels. In the observation-level contrastive block, an observation \(\mathbf{x}_{i,t}\) and its augmented view \(\widetilde{\mathbf{x}}_{i,t}\) serve as a positive pair. Similarly, in the sample-level contrastive block, a sample \(\mathbf{x}_{i}\) and its augmented view \(\widetilde{\mathbf{x}}_{i}\) form a positive pair. Moving to the trial-level contrastive block, two samples \(\mathbf{x}\) and \(\mathbf{x}^{+}\) from the same trial \(\mathbf{r}_{i}\) are considered to be a positive pair. The patient-level contrastive block follows a similar pattern, where two samples \(\mathbf{x}\) and \(\mathbf{x}^{+}\) from the same patient \(\mathbf{p}_{i}\) are regarded as a positive pair. Positive and corresponding negative pairs will be utilized to build contrastive loss in embedding space after being processed by encoder \(G\).** ### Hierarchical Data Consistency Capturing data consistency is crucial in the development of a contrastive learning framework [13]. Data consistency refers to the shared commonalities preserved within the data, which provide a supervisory signal to guide model optimization. Contrastive learning captures data consistency by contrasting positive and negative data pairs, where positive pairs share commonalities and negative pairs do not. We propose consistency across four data levels: observation, sample, trial, and patient, from fine-grained to coarse-grained, in medical time series. Although we present four levels here, our model can easily be adapted to accommodate specific datasets by adding or removing data levels. **Observation-level data consistency.** We assume a slightly augmented observation (_e.g._, channel masked) will carry similar information to the original observation [13]. We use \(\mathbf{x}_{i,t}\) as the anchor observation at timestamp \(t\), and \(\mathbf{x}_{i,t^{-}}\) as the observation at another timestamp \(t^{-}\) in the sample \(\mathbf{x}_{i}\).
We consider the anchor observation \(\mathbf{x}_{i,t}\) and an augmented observation \(\widetilde{\mathbf{x}}_{i,t}\) as a positive pair \((\mathbf{x}_{i,t},\widetilde{\mathbf{x}}_{i,t})\), whose embeddings should be close. Conversely, we consider the original observation \(\mathbf{x}_{i,t}\) and the observations \(\widetilde{\mathbf{x}}_{i,t^{-}}\) and \(\mathbf{x}_{i,t^{-}}\) at another timestamp \(t^{-}\) as negative pairs \((\mathbf{x}_{i,t},\widetilde{\mathbf{x}}_{i,t^{-}}),(\mathbf{x}_{i,t},\mathbf{x}_{i,t^{-}})\), whose embeddings should be distant. **Sample-level data consistency.** The sample-level consistency is based on our assumption that a slightly perturbed sample (_e.g._, temporally masked) should carry similar information to the original sample [18; 16; 26]. We consider the anchor sample \(\mathbf{x}_{i}\) and its augmented view \(\widetilde{\mathbf{x}}_{i}\) as a positive pair \((\mathbf{x}_{i},\widetilde{\mathbf{x}}_{i})\). We regard the anchor sample \(\mathbf{x}_{i}\) and a different sample \(\mathbf{x}_{j}\) and its augmented view \(\widetilde{\mathbf{x}}_{j}\) as negative pairs: \((\mathbf{x}_{i},\widetilde{\mathbf{x}}_{j})\) and \((\mathbf{x}_{i},\mathbf{x}_{j})\). **Trial-level data consistency.** We assume that samples sliced from the same trial should carry similar information compared to those obtained from different trials. For simplicity, we use \(\mathbf{x}\) to denote the anchor sample and \(\mathbf{x}^{+}\) to denote a sample from the same trial \(\mathbf{r}_{i}\) as the anchor sample, while \(\mathbf{x}^{-}\) denotes a sample from another trial \(\mathbf{r}_{j}\). In other words, we have \(\{\mathbf{x},\mathbf{x}^{+}\}\in\mathcal{R}_{i}\) and \(\mathbf{x}^{-}\in\mathcal{R}_{j}\). We treat sample \(\mathbf{x}\) and the sample \(\mathbf{x}^{+}\) from the same trial as a positive pair \((\mathbf{x},\mathbf{x}^{+})\). We regard sample \(\mathbf{x}\) and the sample \(\mathbf{x}^{-}\) from different trials as a negative pair \((\mathbf{x},\mathbf{x}^{-})\). **Patient-level data consistency.** We assume samples originating from the same patient are likely to contain similar information when compared to those from different patients. Here, we use \(\mathbf{x}\) to denote the anchor sample and \(\mathbf{x}^{+}\) to denote a sample from the same patient \(\mathbf{p}_{i}\), while \(\mathbf{x}^{-}\) denotes a sample from another patient \(\mathbf{p}_{j}\). In other words, we have \(\{\mathbf{x},\mathbf{x}^{+}\}\in\mathcal{P}_{i}\) and \(\mathbf{x}^{-}\in\mathcal{P}_{j}\). We have a positive pair \((\mathbf{x},\mathbf{x}^{+})\) whose samples come from the same patient and a negative pair \((\mathbf{x},\mathbf{x}^{-})\) whose samples come from different patients. **Disease-level data consistency.** For completeness, we introduce disease-level data consistency, which suggests that samples associated with the same type of disease should exhibit shared patterns, even when collected from different patients in different ways. However, capturing disease-level consistency requires ground truth labels, which are not available in a self-supervised approach. As a result, we do NOT employ disease-level consistency in this paper. Nevertheless, it can be adapted for semi-supervised or supervised contrastive learning and may prove beneficial in learning domain-adaptable representations for certain diseases across patients and even datasets.
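As a toy illustration of how these levels interact, the following sketch (with hypothetical IDs of our own choosing) builds the boolean positive-pair masks implied by the trial- and patient-level definitions; note how samples 0 and 2 form a patient-level positive but a trial-level negative, the label conflict revisited in Section 6.

```python
import numpy as np

# Six samples with hypothetical IDs: trials 0-1 belong to patient 0,
# trial 2 belongs to patient 1.
trial_ids = np.array([0, 0, 1, 1, 2, 2])
patient_ids = np.array([0, 0, 0, 0, 1, 1])
not_self = ~np.eye(6, dtype=bool)

trial_pos = (trial_ids[:, None] == trial_ids[None, :]) & not_self        # same trial
patient_pos = (patient_ids[:, None] == patient_ids[None, :]) & not_self  # same patient

# Samples 0 and 2: trial-level negative, yet patient-level positive.
assert not trial_pos[0, 2] and patient_pos[0, 2]
```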
A common principle underlying all definitions is that _the X-level data consistency refers to the positive pair belonging to the same X, where X could be observation, sample, trial, patient, or disease._ We assume that _each patient is associated with only one label_, such as suffering from a specific disease, which implies that all samples from the same patient essentially originate from the same distribution. However, in cases where data from a patient is derived from multiple distributions (_e.g._, a patient performing various daily activities, each associated with a different label), the assumptions of trial-level and patient-level consistency are not satisfied. Therefore, the user can keep only the observation-level and sample-level consistency switched on. Building upon the concepts of data consistency, we introduce four contrastive blocks corresponding to the four data levels. Our model is highly flexible, allowing users to enable or disable any of the blocks based on the requirements of a specific task or dataset. ### Observation-Level Contrastive Block For a given time series sample \(\mathbf{x}_{i}\), we apply data augmentation (such as masking) to generate an augmented sample \(\widetilde{\mathbf{x}}_{i}\)[25; 43]. We input the original sample \(\mathbf{x}_{i}\) and its augmented view \(\widetilde{\mathbf{x}}_{i}\) into the contrastive encoder \(G\) to obtain their respective representations \(\mathbf{h}_{i}=G(\mathbf{x}_{i})\) and \(\widetilde{\mathbf{h}}_{i}=G(\widetilde{\mathbf{x}}_{i})\). It is important to note that we apply data augmentation to the samples, which indirectly extends to augmenting the observations, simplifying the encoding process. To capture observation-level consistency, we assume that, after being processed by encoder \(G\), the representation of observation \(\mathbf{x}_{i,t}\) is close to the representation of the augmented observation \(\widetilde{\mathbf{x}}_{i,t}\). In contrast, it should be distant from the representations of observations \(\mathbf{x}_{i,t^{-}}\) and \(\widetilde{\mathbf{x}}_{i,t^{-}}\) originating from any other timestamp \(t^{-}\). Specifically, our positive pair is \((\mathbf{x}_{i,t},\widetilde{\mathbf{x}}_{i,t})\) and negative pairs are \((\mathbf{x}_{i,t},\mathbf{x}_{i,t^{-}})\) and \((\mathbf{x}_{i,t},\widetilde{\mathbf{x}}_{i,t^{-}})\). **Observation-level contrastive loss.** The observation-level contrastive loss \(\mathcal{L}_{\text{O}}\)[13] for the input sample \(\mathbf{x}_{i}\) is defined as: \[\mathcal{L}_{\text{O}}=\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{D}}\left[\mathbb{E}_{t\in\mathcal{T}}\left[-\log\frac{\exp(\mathbf{h}_{i,t}\cdot\widetilde{\mathbf{h}}_{i,t})}{\sum_{t^{-}\in\mathcal{T}}\left(\exp(\mathbf{h}_{i,t}\cdot\widetilde{\mathbf{h}}_{i,t^{-}})+\mathbb{1}_{[t\neq t^{-}]}\exp(\mathbf{h}_{i,t}\cdot\mathbf{h}_{i,t^{-}})\right)}\right]\right] \tag{1}\] where \(\mathcal{T}=\{1,\cdots,T\}\) is the set of all timestamps in sample \(\mathbf{x}_{i}\) and \(\cdot\) denotes the dot product. The \(\mathbb{1}_{[t\neq t^{-}]}\) is an indicator function that equals \(0\) when \(t=t^{-}\) and \(1\) otherwise. ### Sample-Level Contrastive Block For an input time series sample \(\mathbf{x}_{i}\) and its augmented view \(\widetilde{\mathbf{x}}_{i}\), we calculate their representations through \(\mathbf{h}_{i}=G(\mathbf{x}_{i})\) and \(\widetilde{\mathbf{h}}_{i}=G(\widetilde{\mathbf{x}}_{i})\). The augmentation applied here could be the same as or different from the augmentation used in Section 4.2.
We assume that after passing through the encoder \(G\), the representation of the sample \(\mathbf{x}_{i}\) is close to the representation of its augmented view \(\widetilde{\mathbf{x}}_{i}\), while far away from the representations of any other samples \(\mathbf{x}_{j}\) and \(\widetilde{\mathbf{x}}_{j}\). Specifically, our positive pair is \((\mathbf{x}_{i},\widetilde{\mathbf{x}}_{i})\), and negative pairs are \((\mathbf{x}_{i},\widetilde{\mathbf{x}}_{j})\) and \((\mathbf{x}_{i},\mathbf{x}_{j})\). **Sample-level contrastive loss.** The sample-level contrastive loss \(\mathcal{L}_{\text{S}}\)[18; 43] for the input sample \(\mathbf{x}_{i}\) is defined as: \[\mathcal{L}_{\text{S}}=\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{D}}\left[-\log\frac{\exp(\mathbf{h}_{i}\cdot\widetilde{\mathbf{h}}_{i})}{\sum_{j=1}^{|\mathcal{D}|}\left(\exp(\mathbf{h}_{i}\cdot\widetilde{\mathbf{h}}_{j})+\mathbb{1}_{[i\neq j]}\exp(\mathbf{h}_{i}\cdot\mathbf{h}_{j})\right)}\right] \tag{2}\] where \(|\mathcal{D}|\) represents the total number of samples in the dataset \(\mathcal{D}\) and \(\cdot\) denotes the dot product. The \(\mathbb{1}_{[i\neq j]}\) is an indicator function that equals \(0\) when \(i=j\) and \(1\) otherwise. ### Trial-Level Contrastive Block For an input sample \(\mathbf{x}\in\mathcal{R}_{i}\), where \(\mathcal{R}_{i}\) is a collection of all samples segmented from trial \(\mathbf{r}_{i}\), we feed it into the contrastive encoder \(G\) to generate a sample-level representation \(\mathbf{h}=G(\mathbf{x})\). To capture trial-level data consistency, we assume that the representation of the anchor sample \(\mathbf{x}\in\mathcal{R}_{i}\) is close to the representation of sample \(\mathbf{x}^{+}\) that also comes from the trial \(\mathbf{r}_{i}\). In contrast, the representation of the anchor sample \(\mathbf{x}\) is far away from the representation of sample \(\mathbf{x}^{-}\) that comes from a different trial \(\mathbf{r}_{j}\), where \(\mathbf{x}^{-}\in\mathcal{R}_{j}\). In other words, we have positive pair \((\mathbf{x},\mathbf{x}^{+})\) and negative pair \((\mathbf{x},\mathbf{x}^{-})\). **Trial-level contrastive loss.** The trial-level contrastive loss \(\mathcal{L}_{\text{R}}\)[15; 18] for the input sample \(\mathbf{x}\) is defined as: \[\mathcal{L}_{\text{R}}=\mathbb{E}_{\mathbf{x}\in\mathcal{D}}\left[\mathbb{E}_{\mathbf{x}^{+}\in\mathcal{R}_{i}}\left[-\log\frac{\exp(\text{sim}(\mathbf{h},G(\mathbf{x}^{+}))/\tau)}{\sum_{j=1}^{J}\sum_{\mathbf{x}^{-}\in\mathcal{R}_{j}}\exp(\text{sim}(\mathbf{h},G(\mathbf{x}^{-}))/\tau)}\right]\right] \tag{3}\] where \(J\) is the total number of trials in the dataset \(\mathcal{D}\). The function \(\text{sim}(\mathbf{u},\mathbf{v})=\mathbf{u}^{T}\mathbf{v}/(\left\|\mathbf{u}\right\|\left\|\mathbf{v}\right\|)\) denotes cosine similarity, and \(\tau\) is a temperature parameter to adjust the scale. The \(G(\mathbf{x}^{+})\) and \(G(\mathbf{x}^{-})\) are learned representations of samples \(\mathbf{x}^{+}\in\mathcal{R}_{i}\) and \(\mathbf{x}^{-}\in\mathcal{R}_{j}\), respectively. To measure the trial-level loss for sample \(\mathbf{x}\), we iterate over all the \(\mathbf{x}^{+}\) in \(\mathcal{R}_{i}\) and average across the \(|\mathcal{R}_{i}|-1\) positive pairs. In this block, we do NOT learn a trial-level embedding representing the entire trial. Instead, we learn a representation for each sample within the trial while considering trial-level data consistency. Similarly, we follow this protocol for the patient-level contrastive block.
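For concreteness, the following is a minimal in-batch PyTorch sketch of one plausible reading of Eqs. (1)-(3); it is an illustration rather than the released COMET implementation, and the trial-level function doubles as the patient-level loss of the next block when given patient IDs.

```python
import torch
import torch.nn.functional as F

def observation_level_loss(h, h_aug):
    """Eq. (1): contrast timestamps within each sample.
    h, h_aug: (B, T, K) observation-level representations of the samples
    and of their augmented views."""
    T = h.size(1)
    cross = torch.bmm(h, h_aug.transpose(1, 2))          # h_{i,t} . h~_{i,t^-}
    within = torch.bmm(h, h.transpose(1, 2))             # h_{i,t} . h_{i,t^-}
    eye = torch.eye(T, dtype=torch.bool, device=h.device)
    within = within.masked_fill(eye, float("-inf"))      # indicator 1_[t != t^-]
    denom = torch.logsumexp(torch.cat([cross, within], dim=2), dim=2)
    return (denom - cross.diagonal(dim1=1, dim2=2)).mean()  # mean of -log(pos/denom)

def sample_level_loss(h, h_aug):
    """Eq. (2), restricted to a mini-batch: contrast samples against
    each other. h, h_aug: (B, K) sample-level representations."""
    B = h.size(0)
    cross = h @ h_aug.t()                                # h_i . h~_j
    within = h @ h.t()                                   # h_i . h_j
    eye = torch.eye(B, dtype=torch.bool, device=h.device)
    within = within.masked_fill(eye, float("-inf"))      # indicator 1_[i != j]
    denom = torch.logsumexp(torch.cat([cross, within], dim=1), dim=1)
    return (denom - cross.diagonal()).mean()

def trial_level_loss(h, trial_ids, tau=0.07):
    """Eq. (3) in-batch; called with patient IDs it gives Eq. (4).
    Assumes each batch mixes several trials, which the Appendix C
    shuffle functions are designed to guarantee."""
    z = F.normalize(h, dim=1)                            # so z_i . z_j = sim(h_i, h_j)
    sim = z @ z.t() / tau
    same = trial_ids[:, None] == trial_ids[None, :]
    eye = torch.eye(h.size(0), dtype=torch.bool, device=h.device)
    pos = same & ~eye                                    # positives: same trial, not self
    # Denominator read as running over different-trial samples only,
    # per the definition of x^- above.
    denom = torch.logsumexp(sim.masked_fill(same, float("-inf")), dim=1)
    log_prob = sim - denom[:, None]
    n_pos, valid = pos.sum(1), pos.any(1)
    return (-(log_prob * pos).sum(1)[valid] / n_pos[valid]).mean()
```

In COMET the four resulting terms are then combined with the \(\lambda\) weights of Eq. (5) below.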
### Patient-Level Contrastive Block For an input sample \(\mathbf{x}\in\mathcal{P}_{i}\), where \(\mathcal{P}_{i}\) denotes all samples from patient \(\mathbf{p}_{i}\), we feed it into the contrastive encoder \(G\) to generate a sample-level representation \(\mathbf{h}=G(\mathbf{x})\). Similar to the above trial-level contrastive block, we have positive pair \((\mathbf{x},\mathbf{x}^{+})\) and negative pair \((\mathbf{x},\mathbf{x}^{-})\), in which \(\mathbf{x}^{+}\) comes from the same patient while \(\mathbf{x}^{-}\) comes from a different patient. **Patient-level contrastive loss.** The patient-level contrastive loss \(\mathcal{L}_{\text{P}}\)[15] for the input sample \(\mathbf{x}\) is defined as: \[\mathcal{L}_{\text{P}}=\mathbb{E}_{\mathbf{x}\in\mathcal{D}}\left[\mathbb{E}_{\mathbf{x}^{+}\in\mathcal{P}_{i}}\left[-\log\frac{\exp(\text{sim}(\mathbf{h},G(\mathbf{x}^{+}))/\tau)}{\sum_{j=1}^{M}\sum_{\mathbf{x}^{-}\in\mathcal{P}_{j}}\exp(\text{sim}(\mathbf{h},G(\mathbf{x}^{-}))/\tau)}\right]\right] \tag{4}\] where \(M\) is the total number of patients in the dataset \(\mathcal{D}\). In this block, the \(G(\mathbf{x}^{+})\) and \(G(\mathbf{x}^{-})\) are learned representations of samples \(\mathbf{x}^{+}\in\mathcal{P}_{i}\) and \(\mathbf{x}^{-}\in\mathcal{P}_{j}\), respectively. ### Overall Loss Function The overall loss function \(\mathcal{L}\) consists of four loss terms. The observation-level loss \(\mathcal{L}_{\text{O}}\) and sample-level loss \(\mathcal{L}_{\text{S}}\) encourage the encoder to learn robust representations that are invariant to perturbations. The trial-level loss \(\mathcal{L}_{\text{R}}\) and patient-level loss \(\mathcal{L}_{\text{P}}\) compel the encoder to learn cross-sample features within a trial or a patient. In summary, the overall loss function of the proposed COMET model is: \[\mathcal{L}=\lambda_{1}\mathcal{L}_{\text{O}}+\lambda_{2}\mathcal{L}_{\text{S}}+\lambda_{3}\mathcal{L}_{\text{R}}+\lambda_{4}\mathcal{L}_{\text{P}} \tag{5}\] where \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\in[0,1]\) are hyper-coefficients that control the relative importance and adjust the scales of each level's loss. Users can simply turn off specific data levels by setting the \(\lambda\) of those levels to 0. We set \(\lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{4}=1\). We calculate the total loss by taking the expectation of \(\mathcal{L}\) across all samples \(\mathbf{x}\in\mathcal{D}\). In practice, the contrastive losses are calculated within a mini-batch. ## 5 Experiments We compare the COMET model with six baselines on three datasets. Following the setup in SimCLR [18], we use unlabeled data to pre-train encoder \(G\) and evaluate it in two downstream settings (Appendix E): partial fine-tuning (P-FT; _i.e._, linear evaluation [18]) and full fine-tuning (F-FT). All datasets are split into training, validation, and test sets in a **patient-independent** setting (Figure 3), which is very challenging due to patient variations (detailed explanations in Appendix F.1). **Datasets.** (1) **AD**[44] has 23 patients, 663 trials, and 5967 multivariate EEG samples. There are 4329, 891, and 747 samples in training, validation, and test sets. The sampling rate (frequency) is 256Hz. Each sample is a one-second interval with 256 observations. A binary label based on whether the patient has Alzheimer's disease is assigned to each sample. (2) **PTB**[45] has 198 patients, 6237 trials, and 62370 multivariate ECG samples.
There are 53950, 3400, and 5020 samples in training, validation, and test sets. The sampling rate (frequency) is 250Hz. Each sample is a heartbeat with 300 observations. A binary label based on whether the patient has Myocardial infarction is assigned to each sample. (3) **TDBrain**[46] has 72 patients, 624 trials, and 11856 multivariate EEG samples. There are 8208, 1824, and 1824 samples in training, validation, and test sets. The sampling rate (frequency) is 256Hz. Each sample is a one-second interval with 256 observations. A binary label based on whether the patient has Parkinson's disease is assigned to each sample. See Appendix D for further details about data statistics, train-val-test split, and data preprocessing. **Baselines.** We compare with 6 state-of-the-art methods: TS2vec [13], Mixing-up [16], TS-TCC [27], SimCLR [43], CLOCS [15], and TF-C [26]. Since TF-C is designed for transfer learning, we implement its downstream tasks the same as ours for a fair comparison. The evaluation metrics are accuracy, precision (macro-averaged), recall (macro-averaged), F1 score (macro-averaged), AUROC (macro-averaged), and AUPRC (macro-averaged). Figure 3: **Patient-dependent/independent Setting. In the patient-dependent setting, samples from the same patient can appear in both the training and testing sets. In contrast, in the patient-independent setting, samples from the same patient are exclusively included in the training or testing set.** **Implementation.** We use a dilated CNN module with ten residual blocks as the backbone for the encoder \(G\). To preserve the magnitude of the time series, which is often an important feature, we utilize timestamp masking as proposed in [13] as an augmentation method. While many existing works apply data augmentation directly on raw time series, we use a fully connected projection layer to map the raw input into an embedding with a higher feature dimension. This strategy helps prevent situations where some parts of the raw data are zero, which could result in ineffective augmentation if timestamp masking were directly applied. Additionally, to ensure stability during training, we incorporate a hierarchical loss [13], which encapsulates the observation and sample-level losses. To guarantee there are samples with the same trial and patient IDs in a batch for trial- and patient-level contrasting, we design two specific shuffle functions. See Appendix C for further details. For datasets where trial information is absent or limited to a single trial per patient, we provide a solution in Appendix F.2. We conduct experiments with five seeds (41-45) based on the same data split to account for variance and report the mean and standard deviation. All experiments except baseline SimCLR run on an NVIDIA RTX 4090; baseline SimCLR runs on an NVIDIA A100 on Google Colab. See further implementation details in Appendix E. ### Results on Partial Fine-tuning **Setup.** Linear evaluation is a widely used P-FT setup to evaluate the quality of representations, where a linear classifier is trained on top of a frozen network for sample classification [18; 15; 47; 48]. This evaluation method allows for a quick assessment of representation quality. In linear evaluation, we add a logistic regression classifier \(L\) on top of the pre-trained encoder \(G\). The training process utilizes 100% labeled training data for sample classification. Notably, the parameters of the encoder \(G\) are frozen during training, ensuring that only the classifier \(L\) is fine-tuned.
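In code, the P-FT protocol reduces to freezing \(G\) and training a linear head; the sketch below is a self-contained illustration with stand-in components (the toy encoder, sizes, and mean-pooling of observation-level representations to the sample level are our assumptions, not the paper's exact setup).

```python
import torch
import torch.nn as nn

# Stand-ins for illustration: a toy "pre-trained" encoder G mapping
# (B, T, F) -> (B, T, K) and one random labeled batch; in the paper G is
# the dilated CNN described above.
T, F_dim, K, num_classes = 256, 16, 64, 2
G = nn.Sequential(nn.Linear(F_dim, K), nn.ReLU(), nn.Linear(K, K))
x, y = torch.randn(32, T, F_dim), torch.randint(0, num_classes, (32,))

for p in G.parameters():
    p.requires_grad = False                 # P-FT: the encoder stays frozen
G.eval()

clf = nn.Linear(K, num_classes)             # logistic-regression classifier L
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)

with torch.no_grad():
    h = G(x).mean(dim=1)                    # pool (B, T, K) -> (B, K); one simple choice
loss = nn.CrossEntropyLoss()(clf(h), y)     # only clf receives gradients
opt.zero_grad()
loss.backward()
opt.step()
```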
See further details in E.1. **Results.** We report the experimental results of the P-FT setup on AD in Table 2. On average, our COMET model claims a large margin of more than 7% over all baselines on the F1 score on AD. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Models** & **Accuracy** & **Precision** & **Recall** & **F1 score** & **AUROC** & **AUPRC** \\ \hline **TS2vec** & 66.48\(\pm\)5.33 & 67.72\(\pm\)5.09 & 67.40\(\pm\)5.20 & 66.32\(\pm\)5.46 & 74.12\(\pm\)6.88 & 72.96\(\pm\)7.21 \\ **TF-C** & 77.03\(\pm\)2.80 & 75.79\(\pm\)5.07 & 64.27\(\pm\)5.03 & 64.85\(\pm\)5.56 & 80.71\(\pm\)4.03 & 79.27\(\pm\)4.15 \\ **Mixing-up** & 46.16\(\pm\)1.38 & 52.62\(\pm\)4.90 & 50.81\(\pm\)1.32 & 37.37\(\pm\)1.98 & 64.42\(\pm\)4.69 & 62.85\(\pm\)6.07 \\ **TS-TCC** & 59.71\(\pm\)8.63 & 61.66\(\pm\)8.63 & 60.33\(\pm\)3.26 & 58.66\(\pm\)8.39 & 67.53\(\pm\)10.04 & 68.33\(\pm\)9.37 \\ **SimCLR** & 57.16\(\pm\)2.05 & 56.67\(\pm\)3.53 & 57.52\(\pm\)21.49 & 41.11\(\pm\)4.26 & 56.67\(\pm\)3.91 & 52.10\(\pm\)4.11 \\ **CLOCS** & 66.99\(\pm\)2.42 & 67.17\(\pm\)2.76 & 67.34\(\pm\)32.99 & 66.91\(\pm\)2.88 & 73.71\(\pm\)3.62 & 52.18\(\pm\)3.54 \\ **COMET (Ours)** & **76.09\(\pm\)4.21** & **77.36\(\pm\)3.97** & **74.68\(\pm\)4.62** & **74.80\(\pm\)4.33** & **81.30\(\pm\)4.97** & **80.50\(\pm\)5.31** \\ \hline \hline \end{tabular} \end{table} Table 2: **Partial fine-tuning results.** A logistic regression (LR) classifier \(L\) is added on top of a frozen encoder \(G\) on dataset AD. Only the classifier \(L\) is fine-tuned. ### Results on Full Fine-tuning **Setup.** In an F-FT setup, we add a two-layer fully connected network classifier \(P\) to the pre-trained encoder \(G\). The training process utilizes 100%, 10%, and 1% of the labeled training data for sample classification, respectively. Unlike the P-FT approach, both the encoder \(G\) and classifier \(P\) are now trainable, allowing for fine-tuning of the entire network structure. See further details in E.2. **Results.** The results for F-FT on the three datasets are presented in Table 3 and Table 4. In general, COMET demonstrates success in 46 out of 54 tests conducted across the three datasets, considering six different evaluation metrics. With 100% labeled data, the F1 score of COMET outperforms the best baseline, TS2vec, by 2% on the AD dataset and surpasses the best baseline, Mixing-up, by 4% on the TDBrain dataset. Furthermore, it achieves a result comparable to the best baseline, TF-C, on the PTB dataset. Notably, the F1 score of COMET surpasses the best baseline, TF-C, by 14% and 13% with label fractions of 10% and 1%, respectively, on the AD dataset. Additionally, on the TDBrain dataset, the F1 score of COMET outperforms the best baseline, Mixing-up, by 2% and 8% with label fractions of 10% and 1%, respectively. Similarly, the F1 score of COMET outperforms the best baseline, CLOCS, by 0.17% and 2.66% with label fractions of 10% and 1%, respectively, on the PTB dataset. It is interesting to observe that the F1 score of COMET with label fractions of 10% and 1% outperforms a label fraction of 100% on the AD and PTB datasets. This suggests a potential overfitting of COMET to the training data. A similar phenomenon is observed with SimCLR, TF-C, CLOCS, and TS2vec, where a higher fraction of labeled data did not necessarily lead to improved performance. As a result, COMET demonstrates its superiority and stability across all the datasets.
Furthermore, COMET outperforms SOTAs methods significantly with 10% and 1% label fractions, highlighting the effectiveness of our contrastive pre-training approach in reducing the reliance on labeled data. ### Ablation Study, Visualization, Additional Downstream Tasks, and Heavy Duty Baseline **Ablation study.** We verify the effectiveness of each contrastive block and other COMET variants. Besides, we study the impact of hyperparameter \(\lambda\). (Appendix G.1) **Visualization.** We visualize the embedding space and show our model can learn more distinctive and robust representations. (Appendix G.2) **Additional downstream tasks.** Apart from the classification tasks in Section 5.1-5.2, we show the proposed COMET outperforms baselines in a wide range of downstream tasks, including clustering and anomaly detection. (Appendix G.3) **Heavy duty baseline.** We show the superiority of our model does NOT result from the newly added contrastive blocks (_i.e._, increased model parameters). COMET outperforms the heavy-duty SimCLR and TS2Vec with four contrastive blocks and four times pre-training epochs. (Appendix G.4) \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **Datasets** & **Fraction** & **Models** & **Accuracy** & **Precision** & **Recall** & **F1 score** & **AUROC** & **AUPRC** \\ \hline \multirow{10}{*}{AD} & \multirow{4}{*}{**100\%**} & **TS2vec** & 81.26\({}_{2.12}\) & 81.21\({}_{2.14}\) & 81.34\({}_{2.04}\) & 81.12\({}_{2.26}\) & 89.20\({}_{1.76}\) & 88.94\({}_{1.85}\) \\ & & **TF-C** & 75.31\({}_{2.87}\) & 75.87\({}_{2.73}\) & 74.83\({}_{2.89}\) & 74.54\({}_{2.85}\) & 79.45\({}_{1.02}\) & 79.33\({}_{3.10}\) \\ & & **Mixing-up** & 65.68\({}_{2.79}\) & 72.61\({}_{2.41}\) & 68.25\({}_{2.69}\) & 63.98\({}_{1.92}\) & 84.63\({}_{2.50}\) & 83.46\({}_{2.54}\) \\ & & **TS-TCC** & 73.55\({}_{2.10}\) & 77.22\({}_{2.13}\) & 73.83\({}_{2.98}\) & 71.66\({}_{1.19}\) & 86.17\({}_{2.51}\) & 85.73\({}_{2.11}\) \\ & & **SimCLR** & 54.77\({}_{2.77}\) & 50.51\({}_{2.87}\) & 50.58\({}_{2.12}\) & 81.88\({}_{1.48}\) & 42.50\({}_{1.72}\) & 50.15\({}_{2.02}\) & 42.16\({}_{2.04}\) \\ & & **CLOCS** & 78.37\({}_{2.60}\) & 83.99\({}_{2.11}\) & 76.14\({}_{2.73}\) & 75.78\({}_{2.79}\) & 91.17\({}_{2.51}\) & 90.72\({}_{2.35}\) \\ & & **COMET (Ours)** & **84.50\({}_{**4.46}\)** & **88.31\({}_{**2.42}\)** & **82.95\({}_{**3.39}\)** & **83.33\({}_{**3.15}\)** & **94.44\({}_{2.37}\)** & **94.43\({}_{2.48}\)** \\ \hline \multirow{10}{*}{AD} & \multirow{4}{*}{**100\%**} & **TS2vec** & 73.28\({}_{2.43}\) & 74.14\({}_{2.43}\) & 73.52\({}_{2.37}\) & 73.70\({}_{2.04}\) & 81.06\({}_{2.50}\) & 81.85\({}_{2.51}\) \\ & & **TF-C** & 75.66\({}_{2.11}\) & 75.58\({}_{2.11}\) & 75.58\({}_{2.11}\) & 75.81\({}_{2.11}\) & 81.38\({}_{2.41}\) & 81.36\({}_{2.41}\) & 81.56\({}_{1.186}\) \\ & & **Mixing-up** & 59.38\({}_{2.33}\) & 64.85\({}_{2.38}\) & 61.94\({}_{2.42}\) & 58.17\({}_{2.34}\) & 75.02\({}_{2.64}\) & 73.44\({}_{2.52}\) \\ & & **TS-TCC** & 77.83\({}_{2.60}\) & 79.73\({}_{2.79}\) & 76.18\({}_{2.71}\) & 76.43\({}_{2.76}\) & 84.12\({}_{2.73}\) & 84.12\({}_{2.761}\) \\ & & **SimCLR** & 56.09\({}_{2.92}\) & 53.31\({}_{2.54}\) & 57.13\({}_{2.59}\) & 44.10\({}_{1.54}\) & 53.81\({}_{2.57}\) & 51.08\({}_{1.53}\) \\ & & **CLOCS** & 76.97\({}_{2.30}\) & 81.70\({}_{2.31}\) & 74.69\({}_{2.36}\) & 74.75\({}_{2.53}\) & 86.91\({}_{2.34}\) & 86.70\({}_{2.36}\) \\ & & **COMET (Ours)** & **91.43\({}_{**3.32}\)** & **92.52\({}_{**2.36}\)** & **90.71\({}_{**3.56}\)** & **91.14\({}_{**3.31}\)** & **96.44\({}_{2.54}\)** & 
**96.48\({}_{2.52}\)** \\ \hline \multirow{10}{*}{**100\%**} & **TS2vec** & 64.93\({}_{3.33}\) & 65.28\({}_{3.52}\) & 65.14\({}_{2.39}\) & 64.64\({}_{3.58}\) & 70.56\({}_{2.38}\) & 68.97\({}_{2.57}\) \\ & & **TF-C** & 75.66\({}_{2.16}\) & 75.26\({}_{2.61}\) & 74.77\({}_{2.84}\) & 74.33\({}_{2.48}\) & 78.96\({}_{2.61}\) & 81.89\({}_{2.10}\) \\ & & **Mixing-up** & 63.67\({}_{2.67}\) & 65.02\({}_{2.09}\) & 64.64\({}_{2.63}\) & 63.55\({}_{2.11}\) & 71.95\({}_{2.39}\) & 70.15\({}_{2.30}\) \\ & & **TS-TCC** & 53.04\({}_{2.80}\) & 52.39\({}_{1.73}\) & 52.00\({}_{2.00}\) & 40.69\({}_{2.12}\) & 48.89\({}_{1.90}\) & 51.41\({}_{2.71}\) \\ & & **SimCLR** & 55.42\({}_{2.42}\) & 52.18\({}_{2.85}\) & 53.17\({}_{2.76}\) & 45.02\({}_{2.47}\) & 52.18\({}_{2.55}\) & 50.87\({}_{1.45}\) \\ & & **CLOCS** & **64.50\({}_{2.46}\)** & 65.67\({}_{2.47}\) & 74.72\({}_{3.74}\) & 63.74\({}_{2.86}\) & 63.01\({}_{2.11}\) & 69.16\({}_{2.75}\) & 68.15\({}_{2.57}\) \\ & & **COMET (Ours)** & **88.22\({}_{2.36}\)** & **88.55\({}_{2.73}\)** & **88.56\({}_{2.34}\)** & **88.14\({}_{3.37}\)** & **86.05\({}_{2.18}\)** & **96.12\({}_{2.13}\)** \\ \hline \multirow{10}{*}{**TDBrain**} & \multirow{4}{*}{**100\%**} & **TS2vec** & 80.21\({}_{2.11}\) & 81 \\ \hline \hline \end{tabular} \end{table} Table 3: Full fine-tuning results of EEG datasets. A two-layer fully connected network classifier \(P\) is trained on top of the pre-trained encoder \(G\) on EEG datasets AD and TDBrain, fine-tuning the whole network with 100%, 10%, and 1% of labeled training data. ## 6 Conclusion This paper introduces COMET, a hierarchical contrastive representation learning framework tailored for medical time series. COMET leverages all data levels of medical time series, including patient, trial, sample, and observation levels, to capture the intricate complexities of the data. Through extensive experiments on three diverse datasets, we demonstrate that our method surpasses existing state-of-the-art methods in medical time series analysis. Our framework also shows its effectiveness in patient-independent downstream tasks, highlighting its potential to advance medical time series analysis and improve patient care and diagnosis. One limitation of our work is the presence of label conflicts between different levels. In patient-level consistency, we assume all samples belonging to the same patient are positive while others are negative. In trial-level consistency, we consider samples from the same trial as positive samples and others as negative. This means a positive pair at the patient level may be considered negative at the trial level, as they do not belong to the same trial. In future research, we aim to investigate the efficacy of our method across a wider range of datasets, with a particular focus on ECG datasets. Additionally, we intend to explore approaches to integrate disease-level consistency within our self-supervised contrastive framework. We discuss the **broader impacts** with potential negative social impacts in Appendix H. ## Acknowledgments and Disclosure of Funding This work is partially supported by the National Science Foundation under Grant No. 2245894. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funders.
\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline **Datasets** & **Fraction** & **Models** & **Accuracy** & **Precision** & **Recall** & **F1 score** & **AUROC** & **AUPRC** \\ \hline \multirow{7}{*}{**TPB**} & \multirow{7}{*}{**100\%**} & **TS2vec** & 85.14\(\pm\)1.66 & 87.82\(\pm\)2.21 & 76.84\(\pm\)3.99 & 79.66\(\pm\)3.63 & 90.50\(\pm\)1.59 & **90.07\(\pm\)1.73** \\ & & **TF-C** & 87.50\(\pm\)2.40 & 85.50\(\pm\)3.04 & **82.68\(\pm\)1.48** & **83.77\(\pm\)3.50** & 77.59\(\pm\)1.92 & 80.62\(\pm\)15.10 \\ & & **Mixing-up** & 87.61\(\pm\)1.48 & **89.56\(\pm\)2.08** & 79.30\(\pm\)1.67 & 82.56\(\pm\)2.00 & 89.29\(\pm\)1.26 & 88.94\(\pm\)1.04 \\ & & **TS-TCC** & 82.24\(\pm\)3.55 & 85.28\(\pm\)5.09 & 69.46\(\pm\)3.55 & 72.00\(\pm\)6.85 & 87.60\(\pm\)2.51 & 86.26\(\pm\)3.00 \\ & & **SimCLR** & 84.19\(\pm\)1.32 & 85.85\(\pm\)1.73 & 78.39\(\pm\)2.95 & 76.84\(\pm\)2.30 & 85.85\(\pm\)1.89 & 70.70\(\pm\)2.36 \\ & & **CLOCS** & 86.39\(\pm\)3.98 & 87.60\(\pm\)2.81 & 77.95\(\pm\)3.90 & 87.14\(\pm\)3.78 & 90.41\(\pm\)2.07 & 87.35\(\pm\)3.36 \\ & & **COMET (Ours)** & **87.84\(\pm\)1.98** & 87.67\(\pm\)1.72 & 81.14\(\pm\)3.88 & 83.45\(\pm\)3.22 & **92.95\(\pm\)1.56** & 87.47\(\pm\)2.82 \\ \hline \multirow{7}{*}{**TPB**} & \multirow{7}{*}{**100\%**} & **TS2vec** & 82.49\(\pm\)4.71 & 80.39\(\pm\)5.04 & **83.35\(\pm\)0.91** & 80.18\(\pm\)4.04 & 93.03\(\pm\)1.03 & **92.81\(\pm\)9.07** \\ & & **TF-C** & 85.37\(\pm\)1.28 & 82.80\(\pm\)2.35 & 79.94\(\pm\)7.81 & 80.91\(\pm\)1.84 & 81.57\(\pm\)1.56 & 84.57\(\pm\)1.12 \\ & & **Mixing-up** & 87.05\(\pm\)1.41 & 86.56\(\pm\)3.24 & 80.61\(\pm\)2.82 & 82.62\(\pm\)1.99 & 89.28\(\pm\)1.43 & 87.22\(\pm\)2.76 \\ & & **TS-TCC** & 83.38\(\pm\)1.53 & 85.11\(\pm\)2.11 & 72.24\(\pm\)2.45 & 72.27\(\pm\)2.64 & 86.06\(\pm\)1.76 & 84.34\(\pm\)2.08 \\ & & **SimCLR** & 83.84\(\pm\)2.58 & 87.19\(\pm\)1.34 & 72.51\(\pm\)4.63 & 75.44\(\pm\)4.77 & 87.19\(\pm\)1.34 & 69.99\(\pm\)3.34 \\ & & **CLOCS** & 88.25\(\pm\)2.48 & 88.64\(\pm\)2.12 & 81.04\(\pm\)4.64 & 83.84\(\pm\)4.03 & 91.91\(\pm\)2.02 & 89.76\(\pm\)3.34 \\ & & **COMET (Ours)** & **89.49\(\pm\)3.98** & **88.26\(\pm\)2.60** & 81.65\(\pm\)3.00 & **84.11\(\pm\)5.44** & **84.38\(\pm\)1.09** & 92.48\(\pm\)2.22 \\ \hline \multirow{7}{*}{**10\%**} & \multirow{7}{*}{**12\%**} & **TS2vec** & 81.95\(\pm\)1.85 & 78.78\(\pm\)0.98 & 83.83\(\pm\)0.36 & 79.77\(\pm\)1.44 & 90.99\(\pm\)1.34 & 88.69\(\pm\)1.44 \\ & & **TF-C** & 85.82\(\pm\)2.34 & 82.79\(\pm\)3.20 & 81.86\(\pm\)2.36 & 82.21\(\pm\)2.62 & 89.14\(\pm\)2.89 & 89.47\(\pm\)2.92 \\ \cline{1-1} & & **Mixing-up** & 84.71\(\pm\)4.11 & 82.33\(\pm\)6.60 & 78.17\(\pm\)4.99 & 79.81\(\pm\)5.33 & 89.04\(\pm\)3.57 & 84.67\(\pm\)5.16 \\ \cline{1-1} & & **TS-TCC** & 78.61\(\pm\)1.19 & 78.27\(\pm\)1.79 & 64.56\(\pm\)2.55 & 62.33\(\pm\)3.16 & 79.97\(\pm\)2.17 & 76.98\(\pm\)2.00 \\ \cline{1-1} & & **SimCLR** & 84.19\(\pm\)1.90 & 87.15\(\pm\)1.27 & 73.21\(\pm\)3.43 & 76.31\(\pm\)3.26 & 87.15\(\pm\)2.77 & 70.58\(\pm\)2.71 \\ \cline{1-1} & & **CLOCS** & 88.80\(\pm\)2.82 & 88.11\(\pm\)4.37 & 84.47\(\pm\)3.41 & 85.57\(\pm\)3.41 & 94.73\(\pm\)1.59 & **94.04\(\pm\)2.14** \\ \cline{1-1} & & **COMET (Ours)** & **90.52\(\pm\)1.90** & **85.85\(\pm\)2.93** & **88.23\(\pm\)1.98** & **88.23\(\pm\)2.10** & **95.08\(\pm\)1.50** & 93.78\(\pm\)1.98 \\ \hline \hline \end{tabular} \end{table} Table 4: Full fine-tuning results of ECG datasets. A two-layer fully connected networks classifier \(P\) is trained on top of pre-trained encoder \(G\) on ECG dataset PTB. 
Fine-tuning the whole network includes \(G\) and \(P\). We use 100%, 10%, and 1% of labeled training data in PTB. ## References * [1] Dawei Cheng, Fangzhou Yang, Sheng Xiang, and Jin Liu. Financial time series forecasting with multi-modality graph neural network. _Pattern Recognition_, 121:108218, 2022. * [2] Yajiao Tang, Zhenyu Song, Yulin Zhu, Huaiyu Yuan, Maozhang Hou, Junkai Ji, Cheng Tang, and Jianqiang Li. A survey on machine learning models for financial time series forecasting. _Neurocomputing_, 512:363-380, 2022. * [3] Xiaoyu He, Suixiang Shi, Xiulin Geng, and Lingyu Xu. Information-aware attention dynamic synergetic network for multivariate time series long-term forecasting. _Neurocomputing_, 500:143-154, 2022. * [4] Zhiwen Chen, Jiamin Xu, Tao Peng, and Chunhua Yang. Graph convolutional network-based method for fault diagnosis using a hybrid of measurement and prior knowledge. _IEEE transactions on cybernetics_, 52(9):9157-9169, 2021. * [5] Quan Qian, Yi Qin, Jun Luo, and Dengyu Xiao. Cross-machine transfer fault diagnosis by ensemble weighting subdomain adaptation network. _IEEE Transactions on Industrial Electronics_, 2023. * [6] Catherine Arsenault, Anna Gage, Min Kyung Kim, Neena R Kapoor, Patricia Akweongo, Freddie Amponsah, Amit Aryal, Daisuke Asai, John Koku Awoonor-Williams, Wondim Ayele, et al. Covid-19 and resilience of healthcare systems in ten countries. _Nature Medicine_, 28(6):1314-1324, 2022. * [7] Andrea Brizzi, Charles Whittaker, Luciana MS Servo, Iwona Hawryluk, Carlos A Prete Jr, William M de Souza, Renato S Aguiar, Leonardo JT Araujo, Leonardo S Bastos, Alexandra Blenkinsop, et al. Spatial and temporal fluctuations in covid-19 fatality rates in brazilian hospitals. _Nature medicine_, 28(7):1476-1485, 2022. * [8] Danyang Tu, Xiongkuo Min, Huiyu Duan, Guodong Guo, Guangtao Zhai, and Wei Shen. End-to-end human-gaze-target detection with transformers. In _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 2192-2200. IEEE, 2022. * [9] Jaime Spencer, Richard Bowden, and Simon Hadfield. Medusa: Universal feature learning via attentional multitasking. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3800-3809, 2022. * [10] Haris Bin Zia, Ignacio Castro, Arkaitz Zubiaga, and Gareth Tyson. Improving zero-shot cross-lingual hate speech detection with pseudo-label fine-tuning of transformer language models. In _Proceedings of the International AAAI Conference on Web and Social Media_, volume 16, pages 1435-1439, 2022. * [11] Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee. Lift: Language-interfaced fine-tuning for non-language machine learning tasks. _Advances in Neural Information Processing Systems_, 35:11763-11784, 2022. * [12] Xinyu Yang, Zhenguo Zhang, and Rongyi Cui. Timeclr: A self-supervised contrastive learning framework for univariate time series representation. _Knowledge-Based Systems_, 245:108606, 2022. * [13] Zhihan Yue, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yunhai Tong, and Bixiong Xu. Ts2vec: Towards universal representation of time series. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, pages 8980-8987, 2022. * [14] Manuel T Nonnenmacher, Lukas Oldenburg, Ingo Steinwart, and David Reeb. Utilizing expert features for contrastive learning of time-series representations. In _International Conference on Machine Learning_, pages 16969-16989. 
PMLR, 2022. * [15] Dani Kiyasseh, Tingting Zhu, and David A Clifton. Clocs: Contrastive learning of cardiac signals across space, time, and patients. In _International Conference on Machine Learning_, pages 5606-5615. PMLR, 2021. * [16] Kristoffer Wickstrom, Michael Kampffmeyer, Karl Oyvind Mikalsen, and Robert Jenssen. Mixing up contrastive learning: Self-supervised representation learning for time series. _Pattern Recognition Letters_, 155:54-61, 2022. * [17] Sana Tonekaboni, Danny Eytan, and Anna Goldenberg. Unsupervised representation learning for time series with temporal neighborhood coding. In ICLR, 2021. * [18] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In _International conference on machine learning_, pages 1597-1607. PMLR, 2020. * [19] Minji Lee, Leandro RD Sanz, Alice Barra, Audrey Wolff, Jaakko O Nieminen, Melanie Boly, Mario Rosanova, Silvia Casarotto, Olivier Bodart, Jitka Annen, et al. Quantifying arousal and awareness in altered states of consciousness using interpretable deep learning. _Nature communications_, 13(1):1064, 2022. * [20] Chenxi Sun, Hongyan Li, Moxian Song, and Shenda Hong. A ranking-based cross-entropy loss for early classification of time series. _IEEE Transactions on Neural Networks and Learning Systems_, 2023. * [21] Huizi Cui, Lingge Zhou, Yan Li, and Bingyi Kang. Belief entropy-of-entropy and its application in the cardiac interbeat interval time series analysis. _Chaos, Solitons & Fractals_, 155:111736, 2022. * [22] Zongxing Xie, Bing Zhou, Xi Cheng, Elinor Schoenfeld, and Fan Ye. Passive and context-aware in-home vital signs monitoring using co-located uwb-depth sensor fusion. _ACM Transactions on Computing for Healthcare_, 3(4):1-31, 2022. * [23] Martin Krause, Shannon K McWilliams, Kenneth J Bullard, Lena M Mayes, Leslie C Jameson, Susan K Mikulich-Gilbertson, Ana Fernandez-Bustamante, and Karsten Bartels. Neostigmine versus sugammadex for reversal of neuromuscular blockade and effects on re-intubation for respiratory failure or newly initiated non-invasive ventilation-an interrupted time series design. _Anesthesia and analgesia_, 131(1):141, 2020. * [24] Mohammad-Parsa Hosseini, Tuyen X Tran, Dario Pompili, Kost Elisevich, and Hamid Soltanian-Zadeh. Multimodal data analysis of epileptic eeg and rs-fmri via deep learning and edge computing. _Artificial Intelligence in Medicine_, 104:101813, 2020. * [25] Johannes Poppelbaum, Gavneet Singh Chadha, and Andreas Schwung. Contrastive learning based self-supervised time-series analysis. _Applied Soft Computing_, 117:108397, 2022. * [26] Xiang Zhang, Ziyuan Zhao, Theodoros Tsiligkaridis, and Marinka Zitnik. Self-supervised contrastive pre-training for time series via time-frequency consistency. In _NeurIPS_, 2022. * [27] Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, and Cuntai Guan. Time-series representation learning via temporal and contextual contrasting. In _IJCAI_, pages 2352-2359, 2021. * [28] Siyi Tang, Jared Dunnmon, Khaled Kamal Saab, Xuan Zhang, Qianying Huang, Florian Dubost, Daniel Rubin, and Christopher Lee-Messer. Self-supervised graph neural networks for improved electroencephalographic seizure analysis. In _International Conference on Learning Representations_, 2021. * [29] Chaoqi Yang, M Brandon Westover, and Jimeng Sun. Manydg: Many-domain generalization for healthcare applications. 
In _The Eleventh International Conference on Learning Representations_, 2023. * [30] Xiaodong Qu, Zepeng Hu, Zhaonan Li, and Timothy J Hickey. Ensemble methods and lstm outperformed other eight machine learning classifiers in an eeg-based bci experiment. In _International Conference on Learning Representations_, 2020. * [31] Uri Shaham, Jonathan Svirsky, Ori Katz, and Ronen Talmon. Discovery of single independent latent variable. _Advances in Neural Information Processing Systems_, 35:25251-25263, 2022. * [32] Xu Chen and Brett Wujek. A unified framework for automatic distributed active learning. _IEEE Transactions on Pattern Analysis & Machine Intelligence_, 44(12):9774-9786, 2022. * [33] Yuanchao Dai, Jing Wu, Yuanzhao Fan, Jin Wang, Jianwei Niu, Fei Gu, and Shigen Shen. Mseva: A musculoskeletal rehabilitation evaluation system based on emg signals. _ACM Transactions on Sensor Networks_, 19(1):1-23, 2022. * [34] Yingying Jiao, Yini Deng, Yun Luo, and Bao-Liang Lu. Driver sleepiness detection from eeg and eog signals using gan and lstm networks. _Neurocomputing_, 408:100-111, 2020. * [35] Jean-Bastien Grill, Florian Strub, Florent Altche, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. _Advances in neural information processing systems_, 33:21271-21284, 2020. * [36] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_, 2018. * [37] Chunhui Zhang, Hongfu Liu, Jundong Li, Yanfang Ye, and Chuxu Zhang. Contrastive graph few-shot learning. In _NeurIPS 2022 Workshop: New Frontiers in Graph Learning_, 2022. * [38] Zekun Tong, Yuxuan Liang, Henghui Ding, Yongxing Dai, Xinke Li, and Changhu Wang. Directed graph contrastive learning. _Advances in Neural Information Processing Systems_, 34:19580-19593, 2021. * [39] Aniruddh Raghu, Payal Chandak, Ridwan Alam, John Guttag, and Collin Stultz. Contrastive pre-training for multimodal medical time series. In _NeurIPS 2022 Workshop on Learning from Time Series for Health_, 2022. * [40] Ziyu Liu, Azadeh Alavi, Minyi Li, and Xiang Zhang. Self-supervised contrastive learning for medical time series: A systematic review. _Sensors_, 23(9):4221, 2023. * [41] Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. _Advances in neural information processing systems_, 32, 2019. * [42] Hugo Yeche, Gideon Dresdner, Francesco Locatello, Matthias Huser, and Gunnar Ratsch. Neighborhood contrastive learning applied to online patient monitoring. In _International Conference on Machine Learning_, pages 11964-11974. PMLR, 2021. * [43] Chi Ian Tang, Ignacio Perez-Pozuelo, Dimitris Spathis, and Cecilia Mascolo. Exploring contrastive learning in human activity recognition for healthcare. _Machine Learning for Mobile Health Workshop at NeurIPS_, 2020. * [44] J Escudero, Daniel Abasolo, Roberto Hornero, Pedro Espino, and Miguel Lopez. Analysis of electroencephalograms in alzheimer's disease patients with multiscale entropy. _Physiological measurement_, 27(11):1091, 2006. * [45] PhysioToolkit PhysioBank. Physionet: components of a new research resource for complex physiologic signals. _Circulation_, 101(23):e215-e220, 2000. * [46] Hanneke van Dijk, Guido van Wingen, Damiaan Denys, Sebastian Olbrich, Rosalinde van Ruth, and Martijn Arns. 
The two decades brainclinics research archive for insights in neurophysiology (tdbrain) database. _Scientific Data_, 9(1):333, 2022. * [47] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In _European Conference on Computer Vision_, pages 649-666. Springer, 2016. * [48] Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. _Advances in Neural Information Processing Systems_, 32, 2019. * [49] Xiang Lan, Dianwen Ng, Shenda Hong, and Mengling Feng. Intra-inter subject self-supervised learning for multivariate cardiac signals. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, pages 4532-4540, 2022. ## Appendix A A Medical Time Series Example Here we use EEG data of Alzheimer's disease as an example. An EEG dataset contains many patients with or without Alzheimer's (healthy controls). Data collectors take multiple trials on a specific patient. These trials could be collected continuously within a short time or across a long period, but they follow the same experimental protocol. Usually, the trials are too long for a deep learning pipeline to learn from directly. For example, a 5-minute trial with a sampling rate of 256Hz has 76800 timestamps. Researchers generally use data preprocessing to split a trial into many samples, such as 1-second and 3-second short samples, for further representation learning. Each observation denotes a scalar or a vector of real values at a specific timestamp. An experiment with a sampling rate of 256Hz has 256 observations in one second. ## Appendix B Data Augmentation Banks **Binomial masking**: Generate a mask following a binomial distribution that masks some timestamps of a sample, setting all channels at the masked timestamps to zero. **Channel binomial masking**: Generate a mask following a binomial distribution that masks some channels of some timestamps of a sample, setting only a subset of channels at the masked timestamps to zero. **Continuous masking**: Mask some continuous sequences of timestamps of a sample, setting all channels at the masked timestamps to zero. **Channel continuous masking**: Mask some continuous sequences of timestamps of a sample, setting only a random half of the channels at the masked timestamps to zero. **All true**: Do not apply any masking to the sample. Output the raw sample. ## Appendix C Shuffle Function Banks In real-world scenarios, the presence of samples from the same trial or patient within a training batch becomes increasingly unlikely as the number of patients grows. This situation can hinder learning meaningful representations at the trial and patient levels. To address this situation, we have designed two distinct shuffle functions that serve to rearrange samples while also upholding the requirement to include samples originating from the same trial and patient. These functions are called the "trial shuffle" and the "batch shuffle". **Trial shuffle**: This function shuffles samples originating from the same trial and subsequently shuffles the trial order. Initially, we arrange the samples by sorting them based on their trial IDs. Next, samples from the same trial are grouped into sets, and the order of samples within each trial set is shuffled. Finally, we shuffle the trial sets themselves while preserving the order of samples within each respective trial set. **Batch shuffle**: This function shuffles samples in a batch and subsequently shuffles the order of batches.
The logic of the trial shuffle and the batch shuffle is similar. Initially, we arrange the samples by sorting them based on their trial IDs. Next, we group samples into batch sets, and the order of samples within each batch set is shuffled. Finally, we shuffle the batch sets themselves while preserving the order of samples within each respective batch set. **Random shuffle**: Besides the two specifically designed shuffle functions, this random shuffle function shuffles all the samples in the dataset. All the shuffle functions mentioned here are designed to shuffle samples within the dataset before training. During training, it is also essential to shuffle samples each epoch to prevent the model from memorizing the dataset and to encourage it to learn more useful representations. To address this, we implemented a specially crafted BatchSampler class in PyTorch, following the "batch shuffle" approach. This BatchSampler shuffles the samples locally within each epoch, ensuring that the pre-shuffled sample order is not disrupted significantly. This approach guarantees that each batch contains samples from the same trial. It is worth noting that when a batch consists of samples from the same trial, it necessarily contains samples from the same patient. ## Appendix D Data Preprocessing ### AD Data Preprocessing The AD dataset [44] comprises EEG recordings from 12 patients with Alzheimer's disease and 11 healthy controls. Each patient has an average of 30.0 \(\pm\) 12.5 trials. Each trial corresponds to a 5-second interval with 1280 timestamps (sampled at 256Hz) and includes 16 channels. Prior to further processing, each trial is scaled using a standard scaler. To facilitate analysis, we segment each trial into nine half-overlapping samples, where each sample has a duration of 256 timestamps (equivalent to 1 second). Additionally, we assign a unique trial ID and patient ID to each sample based on the patient and trial it originates from. We split training, validation, and test sets in a patient-independent way. We use samples from patient IDs 17 and 18 as the validation set and samples from IDs 19 and 20 as the test set. The rest of the samples are all put into the training set. ### PTB Data Preprocessing The PTB dataset [45] consists of ECG recordings from 290 patients, with 15 channels sampled at 1000 Hz. There are a total of 8 types of heart diseases present in the dataset. For this paper, we focus on binary classification using a subset of the dataset that includes 198 patients with major disease labels, specifically myocardial infarction and healthy controls. To preprocess the ECG signals, we first resample them to a frequency of 250 Hz and then normalize them using a standard scaler. Because crucial information in ECG signals is concentrated around the peaks, a regular sliding-window segmentation approach may result in the loss of this information. To address this issue, a different segmentation strategy is employed. Instead of sliding windows, the raw trials are segmented into individual heartbeats, with each heartbeat treated as a sample. The segmentation proceeds as follows. (1) The first step involves determining the maximum duration. The median value of R-Peak intervals across all channels is calculated for each raw trial, and outliers are removed to obtain a reasonable maximum interval as the standard heartbeat duration. (2) The next step is to determine the position of the first R-Peak. The median value of the first R-Peak position is calculated and used for all channels.
(3) Next, the raw trials are split into individual heartbeat segments based on the median value of their respective R-Peak intervals. Each heartbeat is sampled starting from the R-Peak position, with the segment extending to both sides by half the length of the median interval. (4) To ensure the same length of the heartbeat samples, zero padding is applied to align their lengths with the maximum duration. (5) Finally, the samples are merged into trials, where 10 nearby samples are grouped together to form a pseudo-trial, similar to the neighborhood idea presented in [17]. We split training, validation, and test sets in a patient-independent way. We use samples from 28 patients (7 healthy and 21 positive) as the validation set and samples from another 28 patients (7 healthy and 21 positive) as the test set. The rest of the samples are all put into the training set. ### TDBrain Data Preprocessing TDBrain [46] is a large dataset that monitors the brain signals of 1274 patients with 33 channels (500 Hz) during EC (eyes closed) and EO (eyes open) tasks. The dataset covers 60 types of diseases, and it is possible for a patient to have multiple diseases simultaneously. This paper focuses on a subset of the dataset, specifically 25 patients with Parkinson's disease and 25 healthy controls. Only the EC task trials are used for representation learning. To process the raw EC trials, we employ a sliding window approach that continuously moves from the middle of the trial to both ends without any overlap. Each raw EC trial is divided into processed pseudo-trials with a length of 2560 timestamps (10 seconds) after resampling to 256 Hz. These processed pseudo-trials are then scaled using a standard scaler. Furthermore, each pseudo-trial is split into 19 half-overlapping samples, with each sample having a length of 256 timestamps (1 second). In addition to the binary label indicating Parkinson's disease or healthy, each sample is assigned a patient and trial ID based on the patient and processed trial it originates from. It is important to note that the trial ID refers to the ID of the processed pseudo-trial and not the raw EC trial. We split training, validation, and test sets in a patient-independent way. We use samples from 8 patients (4 healthy and 4 positive) as the validation set and samples from another 8 patients (4 healthy and 4 positive) as the test set. The rest of the samples are all put into the training set. ## Appendix E COMET and Baseline Implementation Details We implement the baselines following their corresponding papers, including TS2vec [13], Mixing-up [16], TS-TCC [27], SimCLR [43], CLOCS [15], and TF-C [26]. In our COMET framework and all baselines, we use the encoder \(G\) from the last epoch of contrastive pre-training for downstream tasks. For the P-FT and F-FT tasks, we save the best model in terms of F1 score on the validation set during training and load the saved model to evaluate the performance on the test set. **COMET (our model).** We use a two-layer, fully connected network as the projection head to map the input dimension to the output dimension. The input dimension corresponds to the feature dimension of the data, while the hidden dimension is set to 128 and the output dimension is set to 64. To augment the data, we apply time series masking on the output dimension using the masks [all_true, all_true, continuous, continuous] (see Appendix B) for the observation-, sample-, trial-, and patient-level contrastive blocks, respectively; a sketch of these masking augmentations is given below.
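As a rough illustration of the masking augmentations listed in Appendix B, a minimal NumPy sketch might look as follows. The function names, keep-probability `p`, and segment parameters are our own illustrative choices, not values taken from the COMET implementation:

```python
import numpy as np

def binomial_mask(x, p=0.9):
    # x: array of shape (timestamps, channels). Keep each timestamp with
    # probability p; masked timestamps are zeroed across all channels.
    keep = np.random.binomial(1, p, size=x.shape[0]).astype(bool)
    out = x.copy()
    out[~keep, :] = 0.0
    return out

def channel_binomial_mask(x, p=0.9):
    # Mask individual (timestamp, channel) entries independently, so only
    # a subset of channels is zeroed at any masked timestamp.
    keep = np.random.binomial(1, p, size=x.shape).astype(bool)
    return np.where(keep, x, 0.0)

def continuous_mask(x, n_segments=3, seg_len=8):
    # Zero out a few contiguous runs of timestamps across all channels.
    out = x.copy()
    for _ in range(n_segments):
        start = np.random.randint(0, max(1, x.shape[0] - seg_len))
        out[start:start + seg_len, :] = 0.0
    return out

def all_true(x):
    # No masking: return the raw sample unchanged.
    return x
```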
The augmented output dimension from the projection head is then passed to the encoder \(G\) for representation learning. For the encoder \(G\), we adopt a dilated CNN module. It consists of 10 hidden blocks, each following the order "GELU -> DilatedConv -> GELU -> DilatedConv." A residual connection is applied between the beginning and end of each block. The dilation factor of the convolution in the i-th block is set to \(2^{i}\). Each hidden dimension of the dilated convolution is set to 64, and the kernel size is set to 3. The output dimension of encoder \(G\) is fixed at 320. We utilize positive pair selection strategies specific to each contrastive block to build the contrastive loss in the embedding space (after encoder \(G\)). A max-pooling layer is employed before applying the representation to downstream tasks. During contrastive pre-training, we set the learning rate to 0.0001. The pre-training batch size is 256, and the total number of pre-training epochs is 100. We report the hyperparameters that achieved good results and stability across random seeds. The hyperparameters \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), \(\lambda_{4}\) are assigned values of (0.25, 0.25, 0.25, 0.25), (0.1, 0.7, 0.1, 0.1), and (0.25, 0.25, 0.25, 0.25) for the AD, PTB, and TDBrain datasets, respectively. **TS2vec**[13] introduces contextual consistency using overlapping subseries and a hierarchical loss function to capture data consistency at the observation and sample levels. To incorporate their methodology, we utilize the open-source code available at [https://github.com/yuezhihan/ts2vec](https://github.com/yuezhihan/ts2vec). However, their code does not include downstream tasks for F-FT (Full Fine-Tuning), so we implement these tasks using the same setup as our COMET. Specifically, we set the number of epochs for contrastive pre-training to 100, the learning rate to 0.0001, and the batch size to 256. To align with our model, we adjust the number of convolution blocks to 10, matching our configuration. We adopt the defaults provided by the TS2vec implementation for all other pre-training settings. **TS-TCC**[27] leverages temporal and contextual consistency by contrasting a strong and a weak augmentation. They employ a transformer-based autoregressive model as the encoder. They perform a cross-view prediction by using the temporal context of one view to predict the future of the other view. Then, they maximize the similarity of contexts generated by the encoder to leverage contextual consistency. We utilize the open-source code available at [https://github.com/emadeldeen24/TS-TCC](https://github.com/emadeldeen24/TS-TCC). We set the number of pre-training epochs to 100. We adopt the defaults provided by the TS-TCC implementation for all other pre-training settings. **Mixing-up**[16] proposes a mixing-up augmentation that mixes two time-series samples in a random proportion. This augmentation involves creating an augmented sample as the convex combination of two randomly selected time-series samples from the dataset. The mixing parameter follows a beta distribution, determining the proportion of the two samples in the augmentation process. We utilize the open-source code available at [https://github.com/Wickstrom/MixupContrastiveLearning](https://github.com/Wickstrom/MixupContrastiveLearning) to implement Mixing-up. Although they only provide downstream tasks for F-FT, we align their method with our setups for all downstream tasks, including P-FT and F-FT; a sketch of the mixing-up augmentation is given below.
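For concreteness, the mixing-up augmentation just described reduces to a beta-weighted convex combination of two samples. The following minimal sketch is our own illustration; the beta parameter `alpha` is an assumed value, not necessarily the one used in [16]:

```python
import numpy as np

def mixing_up(x1, x2, alpha=0.2):
    # Draw the mixing proportion from a Beta(alpha, alpha) distribution and
    # return the convex combination of the two time-series samples.
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2

def mixing_up_batch(batch, alpha=0.2):
    # Pair each sample in a batch with a randomly permuted partner.
    # batch: array of shape (batch_size, timestamps, channels).
    perm = np.random.permutation(batch.shape[0])
    lam = np.random.beta(alpha, alpha)
    return lam * batch + (1.0 - lam) * batch[perm]
```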
We set the number of pre-training epochs to 100, the learning rate to 0.0001, and the batch size to 256 during pre-training. **SimCLR**[18] is a classic contrastive learning framework that was first proposed in the computer vision domain. It applies data augmentation techniques to create augmented views of input samples and constructs a contrastive loss based on these views. While initially designed for images, SimCLR has also been successfully adapted to time series data, as demonstrated in previous work such as [43]. To implement SimCLR, we utilize their open-source code available at [https://github.com/iantangc/ContrastiveLearningHAR](https://github.com/iantangc/ContrastiveLearningHAR). We set the number of pre-training epochs to 100, the learning rate to 0.0001, and the batch size to 512, aligning with their recommended settings. We use the default values provided in the SimCLR implementation for other configuration parameters during pre-training. **CLOCS**[15] employs samples from the same patient as positive pairs to leverage the data invariance in ECG recordings. They make use of both temporal and spatial information for contrastive learning. To implement CLOCS, we utilize their open-source code, available at [https://github.com/danikiyasseh/CLOCS](https://github.com/danikiyasseh/CLOCS). We incorporate their contrastive loss function to implement the Contrastive Multi-segment Coding (CMSC) mechanism described in their paper. Additionally, we modify their backbone to use a TCN, which shares the same network structure as our COMET, including the configuration parameters. Specifically, we set the number of pre-training epochs to 100, the learning rate to 0.0001, and the batch size to 256. **TF-C**[26] leverages the consistency between the time domain and the frequency domain. They assume that the time-based and frequency-based representations of the same example exhibit proximity in the time-frequency space. We utilize their open-source code available at [https://github.com/mims-harvard/TFC-pretraining](https://github.com/mims-harvard/TFC-pretraining) to implement TF-C. While the original method is primarily designed for transfer learning, we extend it to incorporate downstream tasks such as P-FT and F-FT in our experiments. We set the number of pre-training epochs to 100, the learning rate to 0.0001, and the batch size to 256. We use the default values provided in the TF-C implementation for other configuration parameters during pre-training. ### Partial Fine-tuning In the P-FT (Partial Fine-Tuning) setup, we introduce a classifier \(L\) on top of the pre-trained encoder \(G\), while keeping the parameters of \(G\) fixed. Only the classifier \(L\) is fine-tuned in this setup. In the COMET, TS2vec, and Mixing-up approaches, we utilize logistic regression from the Sklearn library to implement the classifier \(L\). We use the default settings of Sklearn, except for setting the maximum number of iterations to 100,000. We employ a one-layer fully connected network as the classifier \(L\) for the TF-C method. The learning rates are specifically set to 8e-5, 3e-5, and 1e-4 for the AD, PTB, and TDBrain datasets, respectively. As for TS-TCC and SimCLR, we follow their respective default settings for the partial fine-tuning phase. ### Full Fine-tuning In the F-FT (Full Fine-Tuning) setup, we introduce a classifier \(P\) on top of the pre-trained encoder \(G\), where both the parameters of the encoder \(G\) and the classifier \(P\) are trainable. In this setup, we fine-tune both the classifier \(P\) and the encoder \(G\); the sketch below contrasts the two setups.
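To make the difference between P-FT and F-FT explicit, here is a minimal PyTorch-style sketch. It is our own illustration, with `encoder` and `classifier` standing in for \(G\) and the classifier described above:

```python
import torch

def build_optimizer(encoder, classifier, setup, lr=1e-4):
    # Return an optimizer for P-FT (frozen encoder) or F-FT (all trainable).
    if setup == "P-FT":
        # Freeze the pre-trained encoder G; only the classifier is updated.
        for p in encoder.parameters():
            p.requires_grad = False
        params = list(classifier.parameters())
    elif setup == "F-FT":
        # Both the encoder G and the classifier are fine-tuned.
        params = list(encoder.parameters()) + list(classifier.parameters())
    else:
        raise ValueError(f"unknown setup: {setup}")
    return torch.optim.Adam(params, lr=lr)
```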
We utilize fractions of 100%, 10%, and 1% of the labeled training data for fine-tuning. In the COMET, TS2vec, and Mixing-up approaches, we set the fine-tuning learning rate to 1e-4. We use a batch size of 128 and perform fine-tuning for 50, 100, and 100 epochs for the 100%, 10%, and 1% label fractions, respectively. The classifier \(P\) is implemented as a two-layer fully connected network with a hidden dimension of 128. For TF-C, we change the hidden dimension of this two-layer fully connected network to 64. The learning rates are specifically set to 8e-5, 3e-5, and 1e-4 for the AD, PTB, and TDBrain datasets, respectively. Regarding TS-TCC and SimCLR, we follow their respective default settings for the full fine-tuning phase. ## Appendix F Experimental Setting ### Patient-Independent Experimental Setting This paper adopts a patient-independent setting for the train-validation-test split. In medical time series classification tasks, two common approaches to splitting the data are patient-independent and patient-dependent settings [15, 1]. Figure 3 illustrates the differences between these two settings. In the patient-dependent setting, samples from the same patient can appear in both the training and testing sets, whereas in the patient-independent setting, samples from the same patient are exclusively included in either the training or testing set. Performing patient-independent representation learning poses challenges due to the unique characteristics and different data distributions exhibited by each patient [49]. Even if two patients share the same label, the patterns within their data may differ significantly due to individual noise characteristics, potentially overshadowing the common patterns observed across patients. However, in real-world scenarios, a model must be robust and general in order to perform patient-independent representation learning. The goal is to train a model on a subset of patients with known labels and utilize it to predict other patients with unknown labels. In contrast, patient-dependent classification is usually impractical since it requires knowledge of the labels for all patients. ### Pseudo-Trial Experimental Setting For many medical time series datasets, patient information is readily available, but trial information may be absent or limited to a single trial per patient. In such cases, the question arises of how to effectively employ trial-level contrastive learning. The solution is straightforward: rather than merely deactivating the trial-level block by setting its \(\lambda\) to 0, we can generate pseudo-trials. We can define groups of adjacent samples as pseudo-trials and assign them the same trial ID. In our paper, we employed ten neighboring samples as a pseudo-trial for the PTB and TDBrain datasets, and this approach yielded favorable results. This concept is akin to the approach taken by TNC [17], where close samples are defined as positive pairs. ## Appendix G Ablation Study, Visualization, Additional Downstream Tasks, and Heavy Duty Baseline ### Ablation Study **Ablation study of contrastive blocks.** We examine the effectiveness of each contrastive block by progressively incorporating blocks, from the observation level alone up to all four levels. Table 5 compares the full-level COMET model with its six variants on the AD, PTB, and TDBrain datasets. The variants are as follows: (1) **O**, **S**, **R**, **P**, which utilize only one level of the contrastive block.
We activate that specific level by setting its \(\lambda\) to 1 and deactivate the other three levels by setting their \(\lambda\) to 0; (2) **S+O**, which combines the sample and observation-level contrastive blocks. The \(\lambda\) values for the sample and observation levels are set equally to 0.5; (3) **R+S+O**, incorporating the trial, sample, and observation-level contrastive blocks. The \(\lambda\) values for these three levels are set equally to 0.33; and (4) **P+R+S+O**, representing our complete COMET method with all four levels of contrastive blocks. The \(\lambda\) values for the four levels are set equally to 0.25. The "Fraction" column indicates the fraction of labeled training samples used during fine-tuning. We observe that variants **R** and **P** exhibit strong performance, achieving results comparable to, or even better than, the full-level COMET model **P+R+S+O** on the PTB and TDBrain datasets across different fractions of labeled data (100%, 10%, and 1%). Notably, they also perform well with a 1% fraction on the AD dataset, although they exhibit significant instability in this case. This finding suggests that adding more contrastive blocks may not necessarily improve final results. Achieving a balance in weights among different levels through \(\lambda\) is crucial. Nevertheless, training models at individual levels can provide insights into the importance of each level in contrastive representation learning. Furthermore, it is noteworthy that the full-level COMET model **P+R+S+O** consistently demonstrates comparable or superior results across all datasets and fractions. Importantly, we did not perform any parameter tuning here, opting to set all the \(\lambda\) values equally, which highlights the stability of the full-level COMET **P+R+S+O** compared to other variants. **Ablation study of hyperparameter \(\lambda\).** We conduct an analysis to assess the impact of the hyperparameter \(\lambda\) on the AD, PTB, and TDBrain datasets in Table 6. The values of \(\lambda_{1},\lambda_{2},\lambda_{3}\), and \(\lambda_{4}\), from left to right, correspond to the patient, trial, sample, and observation levels, respectively. In this analysis, all four data levels are incorporated, representing the full-level COMET model **P+R+S+O**. We apply a significant weight to one data level by setting its corresponding \(\lambda\) to 0.7 while assigning lower weights of 0.1 to the other levels. Furthermore, we explore scenarios with increased weights on the patient and trial levels or on the sample and observation levels. The "Fraction" column indicates the fraction of labeled training samples used during fine-tuning. We observe that the results exhibit greater stability than in the ablation study of contrastive blocks. In the contrastive block ablation study, there were at times significant discrepancies between different COMET variants. For instance, the **S** and **R** variants on the TDBrain dataset exhibited substantial differences across various fraction setups. However, in the full-level COMET model, the differences across these setups are notably reduced, even when a heavy weight is applied to one data level. For instance, the differences between the \(\lambda\) setups (0.1, 0.1, 0.7, 0.1) and (0.1, 0.7, 0.1, 0.1) are not as significant on the TDBrain dataset. This observation again underscores the stability and robustness of the full-level COMET model. A sketch of how the four losses are combined through \(\lambda\) is given below.
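For reference, the four contrastive blocks are combined into a single scalarized objective through the weights \(\lambda_{1},\ldots,\lambda_{4}\). The following sketch is our own illustration; the dictionary keys and function signature are assumptions, not the COMET implementation:

```python
def comet_loss(losses, lambdas):
    # losses: per-batch loss values for the four contrastive blocks.
    # lambdas: (patient, trial, sample, observation) weights. Setting a
    # weight to 0 deactivates that block, reproducing variants such as O.
    keys = ("patient", "trial", "sample", "observation")
    return sum(lam * losses[k] for lam, k in zip(lambdas, keys))

# Full-level COMET (P+R+S+O) with equal weights:
#   total = comet_loss(losses, (0.25, 0.25, 0.25, 0.25))
# Observation-only variant O:
#   total = comet_loss(losses, (0.0, 0.0, 0.0, 1.0))
```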
### Visualization To visualize the effectiveness of COMET, we depict the learned representation \(h_{i}\) using the F-FT setup on the AD dataset as a case study. It is important to note that the learned representation \(h_{i}\) consists of 320 dimensions after pooling. To visualize the representations more interpretably, we employ UMAP, a dimensionality reduction technique, with 20 neighbors and a minimum distance of 0.2. To establish a benchmark for comparison, we utilize TS2vec, which has shown the best performance among all baselines in the F-FT setup on the AD dataset. Additionally, we compute the average pairwise Euclidean distance between the negative (healthy) and positive (Alzheimer) classes, offering a quantitative measure of separability between the two classes. ### Performance on Downstream Tasks **Clustering.** We assess the clustering performance of COMET using the AD dataset as a case study. Instead of employing a classifier model on top of the encoder, we apply K-means clustering (K=2) to the representations produced by the encoder \(G\). We utilize three widely-used evaluation metrics: Silhouette score, Adjusted Rand Index (ARI), and Normalized Mutual Information (NMI). To establish a benchmark for comparison, we consider TS2vec, which has shown the best performance among all baselines in the F-FT setup on the AD dataset. Table 7 illustrates that COMET surpasses TS2vec with an improvement of 0.0586 in Silhouette score, 0.935 in ARI, and 0.881 in NMI. **Anomaly detection.** We evaluate the anomaly detection performance of COMET using the AD dataset as a case study. While some previous works focus on identifying outlier observations within a sample [S2, S3], we concentrate on sample-level anomaly detection. We construct a highly unbalanced AD test set comprising 90% negative (healthy) samples and 10% positive (Alzheimer's) samples. The negative samples are considered normal, while the positive samples are treated as outliers. The test set is prepared accordingly, while the remaining aspects of the experiment follow the F-FT setup. Specifically, we utilize the saved trained models from the F-FT setup to evaluate the new test set, and for comparison, we still employ TS2vec as a benchmark. The "Fraction" column indicates the fraction of labeled training samples used during fine-tuning. The experimental results are shown in Table 8. COMET outperforms TS2vec by 5.25%, 15.3%, and 11.4% in F1 score with label fractions of 100%, 10%, and 1%, respectively. Figure 4: **Visualizing the learned representation (a) Visualization for TS2vec. (b) Visualization for COMET (Ours). The visualized representation is trained in the F-FT setup on the AD dataset. Dark blue and light blue denote the negative class (healthy) and the positive class (Alzheimer), respectively. We calculate the mean Euclidean distance between pairs of samples from the two classes to evaluate class separability.
As the figure shows, our method exhibits superior separation between the two classes, resulting in larger pairwise distances.** \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **Datasets** & **Fraction** & **Blocks** & **Accuracy** & **Precision** & **Recall** & **F1 score** & **AUROC** & **AUPRC** \\ \hline \hline \multirow{9}{*}{**AD**} & **O** & 86.35\(\pm\)0.25 & 87.09\(\pm\)0.04 & 85.55\(\pm\)0.92 & 85.70\(\pm\)10.10 & 92.62\(\pm\)8.99 & 92.77\(\pm\)8.72 \\ & **S** & 73.03\(\pm\)5.63 & 78.45\(\pm\)5.76 & 76.26\(\pm\)3.91 & 76.36\(\pm\)3.92 & 85.43\(\pm\)3.61 & 84.63\(\pm\)3.76 \\ & **R** & 77.83\(\pm\)1.99 & 83.41\(\pm\)1.35 & 75.41\(\pm\)1.92 & 72.08\(\pm\)2.45 & 85.11\(\pm\)6.90 & 83.27\(\pm\)10.00 \\ & **P** & 71.41\(\pm\)1.53 & 76.91\(\pm\)1.55 & 68.70\(\pm\)1.66 & 66.54\(\pm\)1.74 & 76.11\(\pm\)1.69 & 76.06\(\pm\)16.19 \\ & **S+O** & 82.73\(\pm\)2.05 & 84.33\(\pm\)2.71 & 81.51\(\pm\)2.07 & 81.92\(\pm\)2.11 & 90.02\(\pm\)2.47 & 90.02\(\pm\)2.40 \\ & **R+S+O** & 91.19\(\pm\)3.14 & 91.74\(\pm\)3.01 & 90.70\(\pm\)3.34 & 90.89\(\pm\)3.25 & 95.86\(\pm\)2.63 & 95.86\(\pm\)2.67 \\ & **P+R+S+O** & **91.43\(\pm\)3.12** & **92.52\(\pm\)2.36** & **90.71\(\pm\)3.36** & **91.14\(\pm\)3.31** & **96.44\(\pm\)1.84** & **96.48\(\pm\)2.82** \\ \hline \multirow{9}{*}{**100\%**} & **O** & 69.56\(\pm\)7.54 & 71.43\(\pm\)8.04 & 68.19\(\pm\)7.16 & 67.83\(\pm\)7.41 & 77.32\(\pm\)6.09 & 77.32\(\pm\)8.45 \\ & **S** & 59.09\(\pm\)3.50 & 61.08\(\pm\)4.86 & 59.83\(\pm\)4.00 & 58.12\(\pm\)3.57 & 63.99\(\pm\)6.62 & 63.33\(\pm\)6.09 \\ & **R** & **90.15\(\pm\)13.23** & **93.83\(\pm\)7.28** & **89.03\(\pm\)1.48** & **88.17\(\pm\)1.78** & **96.22\(\pm\)5.97** & 96.02\(\pm\)6.28 \\ & **P** & 85.57\(\pm\)13.45 & 90.56\(\pm\)7.16 & 83.98\(\pm\)1.53 & 82.72\(\pm\)1.80 & 94.23\(\pm\)5.82 & 94.10\(\pm\)5.65 \\ & **S+O** & 63.24\(\pm\)3.42 & 63.52\(\pm\)4.15 & 63.04\(\pm\)4.26 & 62.72\(\pm\)4.14 & 67.98\(\pm\)1.57 & 67.25\(\pm\)4.97 \\ & **R+S+O** & 82.65\(\pm\)4.23 & 83.21\(\pm\)4.06 & 82.97\(\pm\)4.58 & 82.46\(\pm\)4.43 & 90.07\(\pm\)6.77 & 90.24\(\pm\)6.56 \\ & **P+R+S+O** & 88.22\(\pm\)3.36 & 88.55\(\pm\)2.73 & 88.56\(\pm\)3.14 & 88.14\(\pm\)3.37 & 96.05\(\pm\)1.36 & **96.12\(\pm\)1.31** \\ \hline \multirow{9}{*}{**100\%**} & **O** & 85.27\(\pm\)1.73 & 84.21\(\pm\)1.25 & 77.44\(\pm\)4.01 & 79.78\(\pm\)3.37 & 88.13\(\pm\)2.33 & 84.76\(\pm\)2.14 \\ & **S** & 84.30\(\pm\)3.26 & 84.00\(\pm\)2.16 & 75.08\(\pm\)5.69 & 77.74\(\pm\)5.31 & 88.66\(\pm\)2.05 & 84.80\(\pm\)2.15 \\ & **R** & 88.63\(\pm\)1.43 & 88.24\(\pm\)1.45 & 82.46\(\pm\)2.35 & 84.72\(\pm\)2.12 & 89.29\(\pm\)3.96 & 86.21\(\pm\)4.96 \\ & **P** & **88.85\(\pm\)3.22** & **88.74\(\pm\)2.30** & **82.78\(\pm\)4.61** & **84.75\(\pm\)5.38** & **94.32\(\pm\)1.81** & **90.61\(\pm\)2.81** \\ & **S+O** & 84.38\(\pm\)2.34 & 84.33\(\pm\)1.78 & 75.49\(\pm\)5.89 & 77.69\(\pm\)4.96 & 70.37\(\pm\)2.99 & 86.57\(\pm\)3.42 \\ & **R+S+O** & 85.32\(\pm\)1.39 & 88.82\(\pm\)1.78 & 77.54\(\pm\)5.48 & 79.64\(\pm\)3.94 & 92.34\(\pm\)1.85 & 88.75\(\pm\)19.89 \\ & **P+R+S+O** & 86.36\(\pm\)1.43 & 87.18\(\pm\)1.28 & 77.87\(\pm\)2.58 & 70.79\(\pm\)2.44 & 93.41\(\pm\)2.35 & 89.09\(\pm\)1.64 \\ \hline \multirow{9}{*}{**PTB**} & **O** & 86.75\(\pm\)1.44 & 85.42\(\pm\)2.10 & 80.88\(\pm\)3.27 & 82.45\(\pm\)2.99 & 90.71\(\pm\)3.16 & 88.84\(\pm\)3.39 \\ & **S** & 86.34\(\pm\)0.73 & 84.75\(\pm\)1.49 & 80.21\(\pm\)1.78 & 81.90\(\pm\)1.25 & 88.36\(\pm\)3.01 & 85.20\(\pm\)1.17 \\ & **R** & 88.12\(\pm\)1.75 & 86.36\(\pm\)2.28 & 84.16\(\pm\)3.85 & 84.80\(\pm\)2.99 & 91.80\(\pm\)2.90 & 
90.45\(\pm\)1.90 \\ & **P** & **90.38\(\pm\)2.23** & **93.32\(\pm\)2.72** & 85.69\(\pm\)4.36 & 87.27\(\pm\)2.45 & **93.06\(\pm\)3.36** & **91.83\(\pm\)2.93** \\ & **S+O** & 85.84\(\pm\)1.74 & 84.14\(\pm\)0.76 & 80.07\(\pm\)5.18 & 81.41\(\pm\)3.82 & 90.28\(\pm\)2.75 & 87.57\(\pm\)3.42 \\ & **R+S+O** & 85.88\(\pm\)3.08 & 83.60\(\pm\)3.61 & 84.25\(\pm\)1.94 & 82.48\(\pm\)2.71 & 90.79\(\pm\)1.79 & 88.68\(\pm\)1.91 \\ & **P+R+S+O** & 90.32\(\pm\)1.61 & 89.67\(\pm\)1.94 & **85.85\(\pm\)2.99** & **87.35\(\pm\)2.36** & 92.78\(\pm\)1.11 & 90.46\(\pm\)3.09 \\ \hline \multirow{9}{*}{**100\%**} & **O** & 85.73\(\pm\)1.39 & 83.79\(\ \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **Datasets** & **Fraction** & \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\) & **Accuracy** & **Precision** & **Recall** & **F1 score** & **AUROC** & **AUPRC** \\ \hline \hline \multirow{9}{*}{**AD**} & & (0.1,0.1,0.1,0.7) & **87.82\(\pm\)**7.44** & **91.08\(\pm\)**4.46** & **86.62\(\pm\)**8.58** & **86.77\(\pm\)**8.64** & **97.95\(\pm\)**1.28** & **97.94\(\pm\)**1.28** \\ & & (0.1,0.1,0.7,0.1) & 85.03\(\pm\)5.64 & 89.09\(\pm\)3.22 & 83.32\(\pm\)3.47 & 83.78\(\pm\)6.59 & 95.33\(\pm\)1.81 & 95.30\(\pm\)1.90 \\ & & (0.1,0.7,0.1,0.1) & 82.76\(\pm\)8.67 & 88.00\(\pm\)4.65 & 80.98\(\pm\)10.02 & 80.72\(\pm\)10.30 & 95.32\(\pm\)2.72 & 95.26\(\pm\)2.71 \\ & **100\%** & (0.7,0.1,0.1,0.1) & 82.22\(\pm\)3.81 & 87.27\(\pm\)4.41 & 80.42\(\pm\)9.62 & 80.17\(\pm\)10.00 & 94.84\(\pm\)2.64 & 94.80\(\pm\)2.65 \\ & & (0.2,0.2,0.3,0.3) & 86.88\(\pm\)5.78 & 89.96\(\pm\)4.00 & 85.46\(\pm\)5.82 & 85.97\(\pm\)6.36 & 94.03\(\pm\)36.96 & 94.07\(\pm\)3.54 \\ & & (0.3,0.3,0.2) & 86.72\(\pm\)5.97 & 89.76\(\pm\)3.12 & 85.41\(\pm\)6.92 & 85.73\(\pm\)36.96 & 95.14\(\pm\)1.95 & 59.09\(\pm\)2.04 \\ & & (0.3,0.3,0.5,0.5) & 82.84\(\pm\)6.96 & 88.09\(\pm\)3.99 & 80.87\(\pm\)2.93 & 81.04\(\pm\)8.06 & 94.73\(\pm\)2.98 & 94.64\(\pm\)3.12 \\ \hline \multirow{9}{*}{**AD**} & & (0.1,0.1,0.7,0.7) & 89.56\(\pm\)8.67 & 91.37\(\pm\)6.37 & 88.64\(\pm\)9.65 & 88.82\(\pm\)5.91 & 96.55\(\pm\)4.41 & 96.47\(\pm\)6.63 \\ & & (0.1,0.1,0.7,0.1) & 87.84\(\pm\)6.09 & 88.72\(\pm\)5.16 & 87.50\(\pm\)5.67 & 87.42\(\pm\)6.66 & 94.58\(\pm\)5.32 & 34.85\(\pm\)5.67 \\ & & (0.1,0.7,0.1,0.1) & 79.89\(\pm\)4.06 & 85.79\(\pm\)7.95 & 70.79\(\pm\)1.605 & 75.54\(\pm\)1.98 & 28.99\(\pm\)6.45 & 92.67\(\pm\)6.64 \\ & **10\%** & (0.7,0.1,0.1,0.1) & 78.26\(\pm\)13.58 & 84.99\(\pm\)6.68 & 76.38\(\pm\)15.30 & 73.44\(\pm\)13.94 & 92.47\(\pm\)7.64 & 92.57\(\pm\)7.93 \\ & & (0.2,0.2,0.3,0.3) & **93.25\(\pm\)1.73** & **93.86\(\pm\)1.24** & **92.77\(\pm\)2.09** & **93.09\(\pm\)1.89** & **97.68\(\pm\)6.87** & **97.75\(\pm\)8.82** \\ & & (0.3,0.3,0.2,0.2) & 91.43\(\pm\)3.39 & 92.71\(\pm\)2.39 & 90.68\(\pm\)2.45 & 91.90\(\pm\)6.00 & 96.81\(\pm\)1.62 & 96.88\(\pm\)1.64 \\ & & (0.3,0.3,0.5,0.5) & 91.54\(\pm\)3.63 & 92.93\(\pm\)2.92 & 90.77\(\pm\)4.25 & 91.91\(\pm\)3.91 & 92.67\(\pm\)2.63 & 97.29\(\pm\)1.67 \\ \hline \multirow{9}{*}{**100\%**} & & (0.1,0.1,0.7,0.7) & 95.88\(\pm\)2.29 & 96.21\(\pm\)1.92 & 95.64\(\pm\)2.55 & 95.79\(\pm\)2.36 & 92.72\(\pm\)3.94 & 93.24\(\pm\)8.74 \\ & & (0.1,0.1,0.7,0.1) & 85.03\(\pm\)9.80 & 87.16\(\pm\)5.75 & 85.65\(\pm\)5.84 & 84.62\(\pm\)10.42 & 94.71\(\pm\)2.73 & 94.73\(\pm\)2.89 \\ & & (0.1,0.7,0.1,0.1) & **97.19\(\pm\)1.93** & **97.32\(\pm\)1.11** & **97.81\(\pm\)8.02** & **97.15\(\pm\)1.31** & **99.09\(\pm\)1.26** & **98.98\(\pm\)1.11** \\ & & (0.7,0.1,0.1,0.1) & 97.00\(\pm\)1.00 & 97.10\(\pm\)1.06 & 96.92\(\pm\)11.14 & 96.96\(\pm\)1.12 & 98.93\(\pm\)1.02 & 
98.99\(\pm\)1.26 \\ & & (0.2,0.2,0.3,0.3) & 84.55\(\pm\)3.79 & 84.47\(\pm\)7.51 & 84.75\(\pm\)7.65 & 84.44\(\pm\)4.49 & 89.95\(\pm\)0.83 & 89.78\(\pm\)11.27 \\ & & (0.3,0.3,0.2,0.2) & 86.96\(\pm\)5.32 & 87.16\(\pm\)5.66 & 86.87\(\pm\)3.31 & 86.72\(\pm\)19.9 & 92.68\(\pm\)7.36 & 92.93\(\pm\)7.43 \\ & & (0.3,0.3,0.5,0.25) & 92.40\(\pm\)5.22 & 92.75\(\pm\)2.41 & 91.26\(\pm\)2.44 & 92.27\(\pm\)2.58 & 97.76\(\pm\)1.11 & 97.66\(\pm\)1.17 \\ \hline \multirow{9}{*}{**PTB**} & & (0.1,0.1,0.1,0.7) & 85.51\(\pm\)2.46 & 85.52\(\pm\)2.76 & 96.96\(\pm\)3.20 & 79.60\(\pm\)3.91 & 91.30\(\pm\)2.82 & 87.11\(\pm\)3.34 \\ & & (0.1,0.1,0.7,0.1,0.1) & 85.94\(\pm\)3.40 & 87.29\(\pm\)4.03 & 76.96\(\pm\)5.61 & 97.64\(\pm\)5.94 & 93.78\(\pm\)2.51 & **90.78\(\pm\)4.38** \\ & & (0.1,0.7,0.1,0.1) & **87.84\(\pm\)1.98** & 87.67\(\pm\)1.72 & **81.14\(\pm\)3.38** & **83.45\(\pm\)3.22** & 92.95\(\pm\)1.56 & 87.47\(\pm\)2.82 \\ & **100\%** & (0.7,0.1,0.1,0.1) & 87.76\(\pm\)2.75 & **88.02\(\pm\)1.40** & 80.65\(\pm\)5.84 & 82.96\(\pm\)5.07 & 91.82\(\pm\)3.66 & 88.70\(\pm\)3. ### Heavy Duty Baselines In COMET, we incorporate four contrastive blocks to leverage four levels of data consistency, allowing the data to pass through the model four times within one epoch during contrastive pre-training. To ensure that our superior performance is not due to increased data passing, we conduct experiments on the AD dataset with two baselines: SimCLR and TS2vec. SimCLR utilizes only one contrastive block during training. We employ two strategies to match the data passing number with COMET: (1) Run SimCLR with one contrastive block for four times the original number of epochs in pre-training (400 epochs instead of 100). (2) Duplicate the contrastive blocks, resulting in four SimCLR contrastive blocks. The notation **4E** signifies running the model for four times the original number of epochs, while **4B** indicates the use of four times the number of contrastive blocks compared to the original SimCLR. TS2vec incorporates two contrastive blocks during training, leading to data passing twice within one epoch. Similarly, we adopt two strategies to align the data passing number with COMET: (1) Run TS2vec for two times the original number of epochs in pre-training (200 epochs instead of 100). (2) Duplicate the contrastive blocks, resulting in four TS2vec contrastive blocks. The notation **2E** denotes running the model for four times the original number of epochs, while **2B** indicates the use of four times the number of contrastive blocks compared to the original TS2vec. The results are presented in Table 9. We observe that simply increasing the number of epochs or contrastive blocks does not improve performance but rather leads to a decrease in most cases. We speculate that this decrease is caused by overfitting. ## Appendix H Broader Impacts Our approach for self-supervised contrastive learning improves classification performance on target datasets in patient-independent medical diagnosis scenarios. Leveraging different data consistency levels in medical time series is crucial to enable effective and accurate contrastive learning without sufficient labels. Our work will encourage the research community to discover universal frameworks for other practical applications based on time series representation learning. We also hope our work can attract more researchers to the more general problem of hierarchical consistency from other related fields. 
From the societal perspective, our work and this line of contrastive learning research can promote more efficient use of medical time series when labels are scarce. Specifically, our model has the potential to identify patterns and anomalies that may not be immediately apparent to human experts. This could lead to earlier and more accurate diagnoses, improving patient outcomes and reducing healthcare costs. However, practitioners need to be aware of the limitations of the model. \begin{table} \begin{tabular}{l l l l} \hline \hline **Method** & **Silhouette** & **ARI** & **NMI** \\ \hline **Random Init.** & 0.1184\(\pm\) 0.0082 & 0.1189\(\pm\)0.0664 & 0.1258\(\pm\)0.0660 \\ \hline **TS2vec** & 0.0795\(\pm\) 0.0032 & 0.0013\(\pm\) 0.0026 & 0.0018\(\pm\)0.0016 \\ **COMET (Ours)** & 0.1381\(\pm\) 0.0139 & 0.9358\(\pm\)0.0264 & 0.8827\(\pm\)0.0414 \\ \hline \hline \end{tabular} \end{table} Table 7: **Performance on downstream clustering.** The clustering performance is evaluated on the AD dataset. We compare against the baseline TS2vec, which performs best among the baselines in the F-FT setup. \begin{table} \begin{tabular}{c l l l l l l l} \hline \hline **Fraction** & **Models** & **Accuracy** & **Precision** & **Recall** & **F1 score** & **AUROC** & **AUPRC** \\ \hline **100\%** & **TS2vec** & 82.11\(\pm\)3.30 & 66.05\(\pm\)2.45 & 82.13\(\pm\)3.21 & 68.70\(\pm\)3.37 & 91.27\(\pm\)1.47 & 76.80\(\pm\)3.08 \\ & **COMET (Ours)** & 83.03\(\pm\)11.65 & 71.76\(\pm\)7.60 & 90.33\(\pm\)6.31 & 73.95\(\pm\)11.97 & 97.99\(\pm\)1.37 & 91.52\(\pm\)5.52 \\ \hline **10\%** & **TS2vec** & 76.05\(\pm\)6.35 & 62.48\(\pm\)2.90 & 77.33\(\pm\)4.74 & 62.67\(\pm\)5.80 & 86.76\(\pm\)3.95 & 72.21\(\pm\)5.09 \\ & **COMET (Ours)** & 88.22\(\pm\)2.88 & 73.22\(\pm\)3.31 & 92.49\(\pm\)1.73 & 77.97\(\pm\)3.87 & 97.91\(\pm\)1.14 & 92.21\(\pm\)4.43 \\ \hline **1\%** & **TS2vec** & 67.24\(\pm\)8.05 & 57.34\(\pm\)3.12 & 67.87\(\pm\)6.45 & 54.35\(\pm\)6.21 & 72.75\(\pm\)6.82 & 61.32\(\pm\)5.25 \\ & **COMET (Ours)** & 77.57\(\pm\)4.21 & 64.72\(\pm\)1.95 & 84.41\(\pm\)3.38 & 65.74\(\pm\)3.67 & 93.13\(\pm\)2.82 & 79.65\(\pm\)7.65 \\ \hline \hline \end{tabular} \end{table} Table 8: **Performance on anomaly detection**. Sample-level anomaly detection on a highly unbalanced AD test set comprising 90% negative (healthy) samples and 10% positive (Alzheimer) samples. All datasets in this paper are publicly available and are not associated with privacy or security concerns. Furthermore, we have followed guidelines on responsible use specified by the primary authors of the datasets used in the current work.
\begin{table} \begin{tabular}{c l l l l l l l} \hline \hline **Fraction** & **Models** & **Accuracy** & **Precision** & **Recall** & **F1 score** & **AUROC** & **AUPRC** \\ \hline \multirow{5}{*}{**100\%**} & **4E-SimCLR** & 56.87\(\pm\)2.51 & 57.67\(\pm\)4.02 & 53.31\(\pm\)2.30 & 48.28\(\pm\)4.63 & 57.67\(\pm\)4.02 & 51.97\(\pm\)1.44 \\ & **4B-SimCLR** & 53.92\(\pm\)3.81 & 51.88\(\pm\)3.25 & 51.26\(\pm\)3.05 & 46.92\(\pm\)5.07 & 51.88\(\pm\)3.25 & 50.75\(\pm\)1.69 \\ & SimCLR & 54.77\(\pm\)1.97 & 50.15\(\pm\)7.02 & 50.58\(\pm\)1.92 & 43.18\(\pm\)4.27 & 50.15\(\pm\)7.02 & 50.42\(\pm\)1.06 \\ & **2E-TS2vec** & 76.49\(\pm\)4.16 & 78.97\(\pm\)3.54 & 77.69\(\pm\)3.66 & 76.21\(\pm\)4.65 & 88.44\(\pm\)4.20 & 88.12\(\pm\)2.53 \\ & **2B-TS2vec** & 81.61\(\pm\)1.65 & 81.47\(\pm\)1.75 & 81.53\(\pm\)1.68 & 81.43\(\pm\)1.65 & 89.50\(\pm\)1.60 & 89.22\(\pm\)1.75 \\ & **TS2vec** & 81.26\(\pm\)2.08 & 81.21\(\pm\)2.14 & 81.34\(\pm\)2.04 & 81.12\(\pm\)2.06 & 89.20\(\pm\)1.76 & 88.94\(\pm\)1.85 \\ & **COMET (Ours)** & **84.50\(\pm\)4.46** & **88.31\(\pm\)2.42** & **82.95\(\pm\)3.39** & **83.33\(\pm\)5.15** & **94.44\(\pm\)2.37** & **94.43\(\pm\)2.48** \\ \hline \multirow{5}{*}{**10\%**} & **4E-SimCLR** & 57.97\(\pm\)1.74 & 58.41\(\pm\)8.31 & 53.69\(\pm\)2.25 & 46.47\(\pm\)5.71 & 58.41\(\pm\)8.31 & 52.33\(\pm\)1.38 \\ & **4B-SimCLR** & 53.57\(\pm\)6.29 & 53.89\(\pm\)5.05 & 52.97\(\pm\)4.22 & 50.60\(\pm\)5.46 & 53.89\(\pm\)5.05 & 51.80\(\pm\)2.34 \\ & SimCLR & 56.90\(\pm\)2.25 & 53.81\(\pm\)5.74 & 51.73\(\pm\)2.99 & 44.10\(\pm\)4.84 & 53.81\(\pm\)5.74 & 51.08\(\pm\)1.53 \\ & **2E-TS2vec** & 66.29\(\pm\)4.86 & 69.92\(\pm\)5.13 & 67.79\(\pm\)4.64 & 56.36\(\pm\)8.55 & 78.54\(\pm\)4.98 & 77.05\(\pm\)5.29 \\ & **2B-TS2vec** & 72.61\(\pm\)4.46 & 73.86\(\pm\)3.46 & 72.98\(\pm\)3.66 & 72.30\(\pm\)4.24 & 81.91\(\pm\)4.83 & 81.74\(\pm\)4.85 \\ & **TS2vec** & 73.28\(\pm\)4.34 & 74.14\(\pm\)4.33 & 73.52\(\pm\)3.77 & 73.00\(\pm\)4.18 & 81.66\(\pm\)5.20 & 81.58\(\pm\)5.11 \\ & **COMET (Ours)** & **91.43\(\pm\)3.12** & **92.52\(\pm\)3.36** & **90.71\(\pm\)3.36** & **91.14\(\pm\)3.31** & **96.44\(\pm\)2.84** & **96.48\(\pm\)2.82** \\ \hline \multirow{5}{*}{**1\%**} & **4E-SimCLR** & 58.07\(\pm\)1.93 & 57.72\(\pm\)3.50 & 54.92\(\pm\)1.99 & 51.93\(\pm\)3.11 & 57.72\(\pm\)3.50 & 52.91\(\pm\)1.28 \\ & **4B-SimCLR** & 54.67\(\pm\)5.43 & 54.86\(\pm\)4.94 & 54.48\(\pm\)4.64 & 53.68\(\pm\)4.89 & 54.86\(\pm\)4.94 & 52.67\(\pm\)2.68 \\ & **SimCLR** & 55.42\(\pm\)2.43 & 52.18\(\pm\)5.55 & 51.37\(\pm\)2.76 & 45.02\(\pm\)4.79 & 52.18\(\pm\)5.55 & 50.87\(\pm\)1.45 \\ & **2E-TS2vec** & 63.56\(\pm\)4.62 & 64.97\(\pm\)3.53 & 64.49\(\pm\)3.90 & 63.28\(\pm\)4.69 & 70.26\(\pm\)3.55 & 68.77\(\pm\)3.59 \\ & **2B-TS2vec** & 64.18\(\pm\)3.53 & 64.26\(\pm\)4.80 & 64.26\(\pm\)4.80 & 63.93\(\pm\)4.61 & 70.07\(\pm\)5.97 & 68.62\(\pm\)6.25 \\ & **TS2vec** & 64.93\(\pm\)3.53 & 62.85\(\pm\)3.52 & 65.14\(\pm\)3.59 & 64.64\(\pm\)3.58 & 70.56\(\pm\)5.38 & 68.97\(\pm\)5.75 \\ & **COMET (Ours)** & **88.22\(\pm\)3.36** & **88.55\(\pm\)2.73** & **88.56\(\pm\)3.14** & **88.14\(\pm\)3.37** & **96.05\(\pm\)1.36** & **96.12\(\pm\)1.31** \\ \hline \hline \end{tabular} \end{table} Table 9: Heavy Duty Baselines. Run more epochs or add more contrastive blocks to SimCLR and TS2vec on the AD dataset.
2306.12380
On the Validation of Gibbs Algorithms: Training Datasets, Test Datasets and their Aggregation
The dependence on training data of the Gibbs algorithm (GA) is analytically characterized. By adopting the expected empirical risk as the performance metric, the sensitivity of the GA is obtained in closed form. In this case, sensitivity is the performance difference with respect to an arbitrary alternative algorithm. This description enables the development of explicit expressions involving the training errors and test errors of GAs trained with different datasets. Using these tools, dataset aggregation is studied and different figures of merit to evaluate the generalization capabilities of GAs are introduced. For particular sizes of such datasets and parameters of the GAs, a connection between Jeffrey's divergence, training and test errors is established.
Samir M. Perlaza, Iñaki Esnaola, Gaetan Bisson, H. Vincent Poor
2023-06-21T16:51:50Z
http://arxiv.org/abs/2306.12380v1
# On the Validation of Gibbs Algorithms: Training Datasets, Test Datasets and their Aggregation ###### Abstract The dependence on training data of the Gibbs algorithm (GA) is analytically characterized. By adopting the expected empirical risk as the performance metric, the sensitivity of the GA is obtained in closed form. In this case, sensitivity is the performance difference with respect to an arbitrary alternative algorithm. This description enables the development of explicit expressions involving the training errors and test errors of GAs trained with different datasets. Using these tools, dataset aggregation is studied and different figures of merit to evaluate the generalization capabilities of GAs are introduced. For particular sizes of such datasets and parameters of the GAs, a connection between Jeffrey's divergence, training and test errors is established. ## I Introduction The Gibbs algorithm (GA) randomly selects a model by sampling the Gibbs probability measure, which is the unique solution to the empirical risk minimization (ERM) problem with relative entropy regularization (ERM-RER) [1]. The input of the GA is twofold. It requires a number of labeled patterns (datasets) and a prior on the set of models in the form of a \(\sigma\)-measure, e.g., the Lebesgue measure, the counting measure, or a probability measure. One of the main features of the GA is that it does not require an assumption on the statistical properties of the datasets [2, 3, 4]. Nonetheless, the generalization capabilities of the Gibbs algorithm are often characterized by the generalization error, for which statistical assumptions on the datasets must be considered, e.g., that training and unseen datasets are identically distributed. When the prior on the set of models is a probability measure, a closed-form expression for the generalization error is presented in [5], while upper bounds have been derived in [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27], and references therein. In a more general setting, when the prior on the set of models is a \(\sigma\)-measure, the generalization capabilities of the GA have been studied in [1, 28], and [29], using the sensitivity of the empirical risk to deviations from the Gibbs probability measure towards another probability measure. This method does not require any statistical assumptions on the datasets and is chosen as the workhorse of the present analysis. The main motivation of this work is to break away from the implicit assumption in existing literature that all training datasets are drawn from the same probability measure and thus can be aggregated to improve the generalization capabilities of a given GA. In practical settings, training data might be acquired from multiple sources that are subject to different impairments during data acquisition, data storage, and data transmission. For instance, consider a GA trained upon a particular dataset and assume that a new dataset from a different source is made available. Hence, the following questions arise concerning the generalization capabilities of such a GA: Would such a GA generalize over the new dataset? Should the new dataset be aggregated with the previous dataset to build a new GA with the aim of improving generalization? How does the GA trained upon the existing dataset compare in terms of generalization with respect to a new GA trained upon the new dataset? The answers to such questions are far from trivial.
One of the main challenges in answering such questions stems from the fact that the probability measures generating each of those datasets are unknown and potentially different due to a variety of impairments. This paper introduces a closed-form expression for the difference between the expected empirical risk on a given dataset induced by a GA trained upon this dataset and that induced by an alternative algorithm (another probability measure). This quantity was coined the _sensitivity of the GA_ in [28] and is shown to be central to tackling the questions above. This is in part due to the fact that it allows studying the generalization capabilities of GAs based on actual datasets, which disengages from the assumption that both training and unseen data follow the same probability distribution. More specifically, by studying the sensitivity, closed-form expressions for the difference between training error and test error can be obtained. These expressions lead to a clearer understanding of the roles of the sizes of the datasets chosen for training and testing, as well as the parameters of the GAs. As a byproduct, the difference between the expected empirical risk on the aggregation of two datasets induced by two GAs trained upon the constituent datasets is characterized. Similarly, the difference between the expected empirical risk on one of the constituent datasets induced by two GAs trained upon the aggregated dataset and the constituent dataset is also characterized. These explicit expressions allow comparing two GAs trained upon different datasets, which is relevant under learning paradigms such as federated learning [30]. ## II Problem Formulation Let \(\mathcal{M}\), \(\mathcal{X}\) and \(\mathcal{Y}\), with \(\mathcal{M}\subseteq\mathds{R}^{d}\) and \(d\in\mathds{N}\), be sets of _models_, _patterns_, and _labels_, respectively. A pair \((x,y)\in\mathcal{X}\times\mathcal{Y}\) is referred to as a _labeled pattern_ or as a _data point_. Given \(n\) data points, with \(n\in\mathds{N}\), denoted by \((x_{1},y_{1})\), \((x_{2},y_{2})\), \(\ldots\), \((x_{n},y_{n})\), a dataset is represented by the tuple \(\big{(}(x_{1},y_{1})\), \((x_{2},y_{2})\), \(\ldots\), \((x_{n},y_{n})\big{)}\in\big{(}\mathcal{X}\times\mathcal{Y}\big{)}^{n}\). Let the function \(f:\mathcal{M}\times\mathcal{X}\rightarrow\mathcal{Y}\) be such that the label \(y\) assigned to the pattern \(x\) according to the model \(\boldsymbol{\theta}\in\mathcal{M}\) is \[y=f(\boldsymbol{\theta},x). \tag{1}\] Let also the function \[\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow[0,+\infty] \tag{2}\] be such that given a data point \((x,y)\in\mathcal{X}\times\mathcal{Y}\), the risk induced by a model \(\boldsymbol{\theta}\in\mathcal{M}\) is \(\ell(f(\boldsymbol{\theta},x),y)\). In the following, the risk function \(\ell\) is assumed to be nonnegative and for all \(y\in\mathcal{Y}\), \(\ell(y,y)=0\). Given a dataset \[\boldsymbol{z}=\big{(}(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{n},y_{n})\big{)} \in\big{(}\mathcal{X}\times\mathcal{Y}\big{)}^{n}, \tag{3}\] the _empirical risk_ induced by the model \(\boldsymbol{\theta}\), with respect to the dataset \(\boldsymbol{z}\) in (3), is determined by the function \(\mathsf{L}_{\boldsymbol{z}}:\mathcal{M}\rightarrow[0,+\infty]\), which satisfies \[\mathsf{L}_{\boldsymbol{z}}(\boldsymbol{\theta})=\frac{1}{n}\sum_{i=1}^{n}\ell(f(\boldsymbol{\theta},x_{i}),y_{i}).
\tag{4}\] Using this notation, the ERM problem consists of the following optimization problem: \[\min_{\boldsymbol{\theta}\in\mathcal{M}}\mathsf{L}_{\boldsymbol{z}}( \boldsymbol{\theta}). \tag{5}\] Let the set of solutions to the ERM problem in (5) be denoted by \(\mathcal{T}(\boldsymbol{z})\triangleq\arg\min_{\boldsymbol{\theta}\in \mathcal{M}}\mathsf{L}_{\boldsymbol{z}}(\boldsymbol{\theta})\). Note that if the set \(\mathcal{M}\) is finite, the ERM problem in (5) always possesses a solution, and thus, \(|\mathcal{T}(\boldsymbol{z})|>0\). Nonetheless, in general, the ERM problem might not necessarily possess a solution. Hence, for some cases, it might be observed that \(|\mathcal{T}(\boldsymbol{z})|=0\). ### _Notation_ The _relative entropy_ is defined below as the extension to \(\sigma\)-finite measures of the relative entropy usually defined for probability measures. **Definition 1** (Relative Entropy): _Given two \(\sigma\)-finite measures \(P\) and \(Q\) on the same measurable space, such that \(Q\) is absolutely continuous with respect to \(P\), the relative entropy of \(Q\) with respect to \(P\) is_ \[\text{D}(Q\|P)\triangleq\int\frac{\mathrm{d}Q}{\mathrm{d}P}(x)\log\!\left( \frac{\mathrm{d}Q}{\mathrm{d}P}(x)\right)\!\mathrm{d}P(x), \tag{6}\] _where the function \(\frac{\mathrm{d}Q}{\mathrm{d}P}\) is the Radon-Nikodym derivative of \(Q\) with respect to \(P\)._ Given a measurable space \((\Omega,\mathscr{F})\), the set of all \(\sigma\)-finite measures on \((\Omega,\mathscr{F})\) is denoted by \(\triangle(\Omega,\mathscr{F})\). Given a \(\sigma\)-measure \(Q\in\triangle(\Omega,\mathscr{F})\), the subset of \(\triangle(\Omega,\mathscr{F})\) including all \(\sigma\)-finite measures absolutely continuous with \(Q\) is denoted by \(\triangle_{Q}(\Omega,\mathscr{F})\). Given a subset \(\mathcal{A}\) of \(\mathds{R}^{d}\), the Borel \(\sigma\)-field on \(\mathcal{A}\) is denoted by \(\mathscr{B}(\mathcal{A})\). ### _The ERM-RER Problem_ The _expected empirical risk_ is defined as follows. **Definition 2** (Expected Empirical Risk): _Let \(P\) be a probability measure in \(\Delta(\mathcal{M},\mathscr{B}(\mathcal{M}))\). The expected empirical risk with respect to the dataset \(\boldsymbol{z}\) in (3) induced by the measure \(P\) is_ \[\mathsf{R}_{\boldsymbol{z}}(P)=\int\mathsf{L}_{\boldsymbol{z}}(\boldsymbol{ \theta})\mathrm{d}P(\boldsymbol{\theta}), \tag{7}\] _where the function \(\mathsf{L}_{\boldsymbol{z}}\) is in (4)._ The following lemma follows immediately from the properties of the Lebesgue integral. **Lemma 1**: _Given a dataset \(\boldsymbol{z}\in\left(\mathcal{X}\times\mathcal{Y}\right)^{n}\) and two probability measures \(P_{1}\) and \(P_{2}\) over the measurable space \((\mathcal{M},\mathscr{B}(\mathcal{M}))\), for all \(\alpha\in[0,1]\), the function \(\mathsf{R}_{\boldsymbol{z}}\) in (7) satisfies_ \[\mathsf{R}_{\boldsymbol{z}}(\alpha P_{1}+(1-\alpha)P_{2})\!\!=\!\!\alpha \mathsf{R}_{\boldsymbol{z}}(P_{1})+(1-\alpha)\mathsf{R}_{\boldsymbol{z}}(P_{2}). \tag{8}\] _The ERM-RER problem is parametrized by a \(\sigma\)-finite measure on \((\mathcal{M},\mathscr{B}(\mathcal{M}))\) and a positive real, which are referred to as the reference measure and the regularization factor, respectively. Let \(Q\) be a \(\sigma\)-finite measure on \((\mathcal{M},\mathscr{B}(\mathcal{M}))\) and let \(\lambda>0\) be a positive real. 
The ERM-RER problem, with parameters \(Q\) and \(\lambda\), consists of the following optimization problem:_ \[\min_{P\in\triangle_{Q}(\mathcal{M},\mathscr{B}(\mathcal{M}))}\mathsf{R}_{\boldsymbol{z}}(P)+\lambda D(P\|Q), \tag{9}\] _where the dataset \(\boldsymbol{z}\) is in (3), and the function \(\mathsf{R}_{\boldsymbol{z}}\) is defined in (7). For ease of presentation, the parameters of the ERM-RER problem in (9) are chosen such that_ \[Q(\{\boldsymbol{\theta}\in\mathcal{M}:\mathsf{L}_{\boldsymbol{z}}(\boldsymbol{\theta})=+\infty\})=0. \tag{10}\] _The case in which the regularization is \(\text{D}(Q\|P)\) (instead of \(\text{D}(P\|Q)\)) in (9) is beyond the scope of this work. The interested reader is referred to [31]._ ### _The Solution to the ERM-RER Problem_ The solution to the ERM-RER problem in (9) is given by the following lemma. **Lemma 2** (Theorem \(2.1\) in [28]): _Given a \(\sigma\)-finite measure \(Q\) and a dataset \(\boldsymbol{z}\in\left(\mathcal{X}\times\mathcal{Y}\right)^{n}\), let the function \(K_{Q,\boldsymbol{z}}:\mathds{R}\rightarrow\mathds{R}\cup\{+\infty\}\) be such that for all \(t\in\mathds{R}\),_ \[K_{Q,\boldsymbol{z}}(t)\!=\!\!\log\!\bigg{(}\int\exp(t\;\mathsf{L}_{\boldsymbol{z}}(\boldsymbol{\theta}))\mathrm{d}Q(\boldsymbol{\theta})\bigg{)}, \tag{11}\] where the function \(\mathsf{L}_{\mathbf{z}}\) is defined in (4). Let also the set \(\mathcal{K}_{Q,\mathbf{z}}\subset(0,+\infty)\) be \[\mathcal{K}_{Q,\mathbf{z}}{\triangleq}\Big{\{}s>0:\ K_{Q,\mathbf{z}}\!\left(-\frac{1}{s}\right)<+\infty\Big{\}}. \tag{12}\] Then, for all \(\lambda\in\mathcal{K}_{Q,\mathbf{z}}\), the solution to the ERM-RER problem in (9) is a unique measure on \((\mathcal{M},\mathscr{B}(\mathcal{M}))\), denoted by \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\), whose Radon-Nikodym derivative with respect to \(Q\) satisfies, for all \(\mathbf{\theta}\in\operatorname{supp}Q\), \[\frac{\mathrm{d}P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}}{\mathrm{d}Q}(\mathbf{\theta}){=}\mathrm{exp}\!\left(-K_{Q,\mathbf{z}}\!\left(-\frac{1}{\lambda}\right)-\frac{1}{\lambda}\mathsf{L}_{\mathbf{z}}(\mathbf{\theta})\right)\!. \tag{13}\] Among the numerous properties of the solution to the ERM-RER problem in (9), the following property is particularly useful in the remainder of this work. **Lemma 3**: _Given a \(\sigma\)-finite measure \(Q\) over the measurable space \((\mathcal{M},\mathscr{B}(\mathcal{M}))\), and given a dataset \(\mathbf{z}\in(\mathcal{X}\times\mathcal{Y})^{n}\), for all \(\lambda\in\mathcal{K}_{Q,\mathbf{z}}\), with \(\mathcal{K}_{Q,\mathbf{z}}\) in (12), the following holds:_ \[\mathsf{R}_{\mathbf{z}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\right)+\lambda D\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\|Q\right)\!\!=\!\!-\lambda K_{Q,\mathbf{z}}\!\left(-\frac{1}{\lambda}\right)\!, \tag{14}\] _where the function \(\mathsf{R}_{\mathbf{z}}\) is defined in (7); the function \(K_{Q,\mathbf{z}}\) is defined in (11); and the probability measure \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\) is the solution to the ERM-RER problem in (9)._ _Proof:_ The proof is presented in [29]. ## III Sensitivity of the ERM-RER Solution The sensitivity of the expected empirical risk \(\mathsf{R}_{\mathbf{z}}\) to deviations from the probability measure \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\) towards an alternative probability measure \(P\) is defined as follows.
**Definition 3** (Sensitivity [28]): _Given a \(\sigma\)-finite measure \(Q\) and a positive real \(\lambda>0\), let \(\mathsf{S}_{Q,\lambda}:(\mathcal{X}\times\mathcal{Y})^{n}\times\triangle_{Q} (\mathcal{M},\mathscr{B}(\mathcal{M}))\rightarrow(-\infty,+\infty]\) be a function such that_ \[\mathsf{S}_{Q,\lambda}(\mathbf{z},P)=\left\{\begin{array}{ll}\mathsf{R}_{\mathbf{z}} (P)-\mathsf{R}_{\mathbf{z}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)} \right)&\text{ if }\lambda\in\mathcal{K}_{Q,\mathbf{z}}\\ +\infty&\text{ otherwise,}\end{array}\right. \tag{15}\] _where the function \(\mathsf{R}_{\mathbf{z}}\) is defined in (7) and the measure \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\) is the solution to the ERM-RER problem in (9). The sensitivity of the expected empirical risk \(\mathsf{R}_{\mathbf{z}}\) when the measure changes from \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\) to \(P\) is \(\mathsf{S}_{Q,\lambda}(\mathbf{z},P)\)._ The following theorem introduces an exact expression for the sensitivity in Definition 3. **Theorem 1**: _Given a \(\sigma\)-finite measure \(Q\) over the measurable space \((\mathcal{M},\mathscr{B}(\mathcal{M}))\) and a probability measure \(P\in\triangle_{Q}(\mathcal{M},\mathscr{B}(\mathcal{M}))\), it holds that for all datasets \(\mathbf{z}\in\left(\mathcal{X}\times\mathcal{Y}\right)^{n}\) and for all \(\lambda\in\mathcal{K}_{Q,\mathbf{z}}\), with \(\mathcal{K}_{Q,\mathbf{z}}\) in (12),_ \[\mathsf{S}_{Q,\lambda}(\mathbf{z},P){=}\lambda\Big{(}D\!\left(P_{\mathbf{\Theta}|\mathbf{Z }=\mathbf{z}}^{(Q,\lambda)}\|Q\right)+D\!\left(P\!\|P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}} ^{(Q,\lambda)}\right)-D\!\left(P\!\|Q\right)\!\Big{)},\] _where the probability measure \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\) is the solution to the ERM-RER problem in (9)._ _Proof:_ The proof uses the fact that, under the assumption in (10), the probability measure \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\) in (13) is mutually absolutely continuous with respect to the \(\sigma\)-finite measure \(Q\); see for instance [1]. Hence, the probability measure \(P\) is absolutely continuous with respect to \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\), as a consequence of the assumption that \(P\) is absolutely continuous with respect to \(Q\). The proof follows by noticing that for all \(\mathbf{\theta}\in\mathcal{M}\), \[\log\!\left(\frac{\mathrm{d}P}{\mathrm{d}P_{\mathbf{\Theta}|\mathbf{Z}= \mathbf{z}}^{(Q,\lambda)}}(\mathbf{\theta})\right)=\log\!\left(\frac{\mathrm{d}Q}{ \mathrm{d}P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}}(\mathbf{\theta})\frac{ \mathrm{d}P}{\mathrm{d}Q}(\mathbf{\theta})\right) \tag{16}\] \[=\!\!-\log\!\left(\frac{\mathrm{d}P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}} ^{(Q,\lambda)}}{\mathrm{d}Q}(\mathbf{\theta})\right)+\log\!\left(\frac{\mathrm{d}P}{ \mathrm{d}Q}(\mathbf{\theta})\right)\] (17) \[=\!\!K_{Q,\mathbf{z}}\!\left(-\frac{1}{\lambda}\right)+\frac{1}{ \lambda}\mathsf{L}_{\mathbf{z}}(\mathbf{\theta})+\log\!\left(\frac{\mathrm{d}P}{ \mathrm{d}Q}(\mathbf{\theta})\right)\!, \tag{18}\] where the functions \(\mathsf{L}_{\mathbf{z}}\) and \(K_{Q,\mathbf{z}}\) are defined in (4) and in (11), respectively; and the equality in (18) follows from Lemma 2. 
Hence, the relative entropy \(D\!\left(P\|P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\right)\) satisfies \[D\!\left(P\|P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\right)=\int\log\!\left(\frac{\mathrm{d}P}{\mathrm{d}P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}}(\mathbf{\theta})\right)\!\mathrm{d}P(\mathbf{\theta})\] \[=K_{Q,\mathbf{z}}\!\left(-\frac{1}{\lambda}\right)+\int\!\left(\frac{1}{\lambda}\mathsf{L}_{\mathbf{z}}(\mathbf{\theta})+\log\!\left(\frac{\mathrm{d}P}{\mathrm{d}Q}(\mathbf{\theta})\right)\right)\!\mathrm{d}P(\mathbf{\theta}) \tag{19}\] \[=K_{Q,\mathbf{z}}\!\left(-\frac{1}{\lambda}\right)+\frac{1}{\lambda}\mathsf{R}_{\mathbf{z}}(P)+D\!\left(P\|Q\right) \tag{20}\] \[=-D\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\|Q\right)+\frac{1}{\lambda}\Big(\mathsf{R}_{\mathbf{z}}(P)-\mathsf{R}_{\mathbf{z}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}}^{(Q,\lambda)}\right)\Big)+D\!\left(P\|Q\right)\!, \tag{21}\] where the function \(\mathsf{R}_{\mathbf{z}}\) is defined in (7), the equality in (19) follows from (18), and the equality in (21) follows from Lemma 3. Finally, the proof is completed by re-arranging the terms in (21). ## IV Validation of Gibbs Algorithms Consider the dataset \(\mathbf{z}_{0}\in\left(\mathcal{X}\times\mathcal{Y}\right)^{n_{0}}\) that aggregates dataset \(\mathbf{z}_{1}\in\left(\mathcal{X}\times\mathcal{Y}\right)^{n_{1}}\) and dataset \(\mathbf{z}_{2}\in\left(\mathcal{X}\times\mathcal{Y}\right)^{n_{2}}\) as constituents. That is, \(\mathbf{z}_{0}=(\mathbf{z}_{1},\mathbf{z}_{2})\), with \(n_{0}=n_{1}+n_{2}\). Datasets \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\) are referred to as _constituent datasets_, whereas the dataset \(\mathbf{z}_{0}\) is referred to as the _aggregated dataset_. In this case, the function \(\mathsf{L}_{\mathbf{z}_{0}}\) in (4) satisfies, for all \(\mathbf{\theta}\in\mathcal{M}\), \(\mathsf{L}_{\mathbf{z}_{0}}(\mathbf{\theta})=\frac{n_{1}}{n_{0}}\mathsf{L}_{\mathbf{z}_{1}}(\mathbf{\theta})+\frac{n_{2}}{n_{0}}\mathsf{L}_{\mathbf{z}_{2}}(\mathbf{\theta})\). _Proof:_ The proof is presented in [29]. \(\blacksquare\) For all \(i\in\{0,1,2\}\), let \(Q_{i}\in\triangle(\mathcal{M},\mathscr{B}(\mathcal{M}))\) and \(\lambda_{i}\in\mathcal{K}_{Q_{i},\mathbf{z}_{i}}\), with \(\mathcal{K}_{Q_{i},\mathbf{z}_{i}}\) in (12), be the \(\sigma\)-finite measure and the positive real acting as the reference measure and the regularization factor for the learning task with dataset \(i\), respectively. Each dataset induces a different ERM-RER problem formulation of the form \[\min_{P\in\triangle_{Q_{i}}(\mathcal{M},\mathscr{B}(\mathcal{M}))}\mathsf{R}_{\mathbf{z}_{i}}(P)+\lambda_{i}D(P\|Q_{i}), \tag{24}\] where \(\mathsf{R}_{\mathbf{z}_{i}}\) is the expected empirical risk defined in (7). For all \(i\in\{0,1,2\}\), the solution to the ERM-RER problem in (24) is the probability measure denoted by \(P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\). In particular, from Lemma 2, it holds that the probability measure \(P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\) satisfies for all \(\mathbf{\theta}\in\operatorname{supp}Q_{i}\), \[\frac{\mathrm{d}P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}}{\mathrm{d}Q_{i}}(\mathbf{\theta})=\exp\!\left(-K_{Q_{i},\mathbf{z}_{i}}\!\left(-\frac{1}{\lambda_{i}}\right)-\frac{1}{\lambda_{i}}\mathsf{L}_{\mathbf{z}_{i}}(\mathbf{\theta})\right). \tag{25}\] For all \(i\in\{0,1,2\}\), the probability measure \(P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\) in (25) represents a Gibbs algorithm (GA) trained upon the dataset \(\mathbf{z}_{i}\) with parameters \((Q_{i},\lambda_{i})\).
In the following, such an algorithm is denoted by \(\mathsf{GA}_{i}\) and the dataset \(\mathbf{z}_{i}\) is often referred to as the _training dataset_ of \(\mathsf{GA}_{i}\). The dataset \(\mathbf{z}_{j}\), with \(j\in\{0,1,2\}\setminus\{i\}\), which might contain datapoints that are not in \(\mathbf{z}_{i}\), is referred to as the _test dataset_ for \(\mathsf{GA}_{i}\). ### _Gibbs Algorithms Trained on Constituent Datasets_ The expected empirical risk induced by \(\mathsf{GA}_{i}\) on the training dataset \(\mathbf{z}_{i}\) is the _training expected empirical risk_, which is denoted by \(\mathsf{R}_{\mathbf{z}_{i}}\!\left(P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\right)\) and often referred to as the _training error_ [32]. Alternatively, the expected empirical risk induced by \(\mathsf{GA}_{i}\) on the test dataset \(\mathbf{z}_{j}\) is the _test expected empirical risk_, which is denoted by \(\mathsf{R}_{\mathbf{z}_{j}}\!\left(P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\right)\) and often referred to as the _test error_ [32]. The following theorem provides explicit expressions involving the training and test errors of \(\mathsf{GA}_{1}\) and \(\mathsf{GA}_{2}\). **Theorem 2**: _Assume that the \(\sigma\)-finite measures \(Q_{1}\) and \(Q_{2}\) in (24) are mutually absolutely continuous. Then, for all \(i\in\{1,2\}\) and \(j\in\{1,2\}\setminus\{i\}\),_ \[\mathsf{R}_{\mathbf{z}_{i}}\!\left(P^{(Q_{j},\lambda_{j})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{j}}\right)-\mathsf{R}_{\mathbf{z}_{i}}\!\left(P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\right)=\lambda_{i}\Big(D\!\left(P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\|Q_{i}\right)+D\!\left(P^{(Q_{j},\lambda_{j})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{j}}\|P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\right)-D\!\left(P^{(Q_{j},\lambda_{j})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{j}}\|Q_{i}\right)\Big), \tag{26}\] _where the function \(\mathsf{R}_{\mathbf{z}_{i}}\) is defined in (7) and the measures \(P^{(Q_{i},\lambda_{i})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}\) and \(P^{(Q_{j},\lambda_{j})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{j}}\) satisfy (25)._ The equality in (26) follows from Theorem 1 with the choice \((Q,\lambda,\mathbf{z},P)=\big(Q_{i},\lambda_{i},\mathbf{z}_{i},P^{(Q_{j},\lambda_{j})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{j}}\big)\). **Theorem 4**: _Assume that the \(\sigma\)-finite measures \(Q_{0}\), \(Q_{1}\) and \(Q_{2}\) in (24) are pair-wise mutually absolutely continuous. Then, for all \(\alpha\in[0,1]\),_ \[\mathsf{R}_{\mathbf{z}_{0}}\!\left(\alpha P^{(Q_{1},\lambda_{1})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}+(1-\alpha)P^{(Q_{2},\lambda_{2})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}\right)-\mathsf{R}_{\mathbf{z}_{0}}\!\left(P^{(Q_{0},\lambda_{0})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}\right)=\alpha\lambda_{0}\Big(D\!\left(P^{(Q_{0},\lambda_{0})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}\|Q_{0}\right)+D\!\left(P^{(Q_{1},\lambda_{1})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}\|P^{(Q_{0},\lambda_{0})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}\right)-D\!\left(P^{(Q_{1},\lambda_{1})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}\|Q_{0}\right)\Big)+(1-\alpha)\lambda_{0}\Big(D\!\left(P^{(Q_{0},\lambda_{0})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}\|Q_{0}\right)+D\!\left(P^{(Q_{2},\lambda_{2})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}\|P^{(Q_{0},\lambda_{0})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}\right)-D\!\left(P^{(Q_{2},\lambda_{2})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}\|Q_{0}\right)\Big). \tag{30}\] _Proof:_ The proof uses the following argument: \[\mathsf{R}_{\mathbf{z}_{0}}\!\left(\alpha P^{(Q_{1},\lambda_{1})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}+(1-\alpha)P^{(Q_{2},\lambda_{2})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}\right)-\mathsf{R}_{\mathbf{z}_{0}}\!\left(P^{(Q_{0},\lambda_{0})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}\right)\] \[=\alpha\mathsf{R}_{\mathbf{z}_{0}}\!\left(P^{(Q_{1},\lambda_{1})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}\right)+(1-\alpha)\mathsf{R}_{\mathbf{z}_{0}}\!\left(P^{(Q_{2},\lambda_{2})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}\right)-\alpha\mathsf{R}_{\mathbf{z}_{0}}\!\left(P^{(Q_{0},\lambda_{0})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}\right)-(1-\alpha)\mathsf{R}_{\mathbf{z}_{0}}\!\left(P^{(Q_{0},\lambda_{0})}_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}\right)
\tag{31}\] \[= \alpha\Big{(}\mathsf{R}_{\mathsf{z}_{0}}\!\left(P_{\mathbf{\Theta}| \mathbf{Z}=\mathbf{z}_{1}}^{(Q_{1},\lambda_{1})}\right)-\mathsf{R}_{\mathsf{z}_{0}} \!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})}\right)\Big{)}\] \[+(1-\alpha)\Big{(}\mathsf{R}_{\mathsf{z}_{0}}\!\left(P_{\mathbf{ \Theta}|\mathbf{Z}=\mathbf{z}_{2}}^{(Q_{2},\lambda_{2})}\right)-\mathsf{R}_{\mathsf{z }_{0}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})}\right) \Big{)}\] (32) \[= \alpha\mathsf{S}_{Q_{0},\lambda_{0}}\Big{(}\mathsf{z}_{0},P_{\mathbf{ \Theta}|\mathbf{Z}=\mathbf{z}_{1}}^{(Q_{1},\lambda_{1})}\Big{)}\!+\!(1\!-\!\alpha) \mathsf{S}_{Q_{0},\lambda_{0}}\Big{(}\mathsf{z}_{0},P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{ z}_{2}}^{(Q_{2},\lambda_{2})}\Big{)}, \tag{33}\] where the equality in (31) follows from Lemma 1, and the equality in (33) follows from Definition 3. The proof is completed by Theorem 1. The following corollary of Theorem 4 is obtained by subtracting the equality in (30) with \(\alpha=1\) from the equality in (30) with \(\alpha=0\). **Corollary 1**: _Assume that the \(\sigma\)-finite measures \(Q_{0}\), \(Q_{1}\) and \(Q_{2}\) in (24) are pair-wise mutually absolutely continuous. Then, for all \(i\in\{0,1,2\}\), the probability measure \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}^{(Q_{i},\lambda_{i})}\) in (25) satisfies_ \[\mathsf{R}_{\mathsf{z}_{0}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{ 2}}^{(Q_{2},\lambda_{2})}\right)-\mathsf{R}_{\mathsf{z}_{0}}\!\left(P_{\mathbf{ \Theta}|\mathbf{Z}=\mathbf{z}_{1}}^{(Q_{1},\lambda_{1})}\right)\] \[= \lambda_{0}\Bigg{(}D\!\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}^ {(Q_{2},\lambda_{2})}\|P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})} \right)-D\!\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}^{(Q_{2},\lambda_{2})}\|Q _{0}\right)\Bigg{)}\] \[-\lambda_{0}\Bigg{(}D\!\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}^ {(Q_{1},\lambda_{1})}\|P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})} \right)-D\!\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}^{(Q_{1},\lambda_{1})}\|Q _{0}\right)\Bigg{)}, \tag{34}\] _where the function \(\mathsf{R}_{\mathsf{z}_{0}}\) is defined in (7)._ Corollary 1 is an alternative to Theorem 3 involving the GA trained upon the aggregated dataset, i.e., \(\mathsf{GA}_{0}\). ### _Gibbs Algorithms Trained on Aggregated Datasets_ Training a GA upon the aggregation of datasets does not necessarily imply lower expected empirical risk on the constituent datasets. As argued before, datasets might be obtained up to different levels of fidelity. Hence, a validation method for \(\mathsf{GA}_{0}\) is based on the expected empirical risk induced by \(\mathsf{GA}_{0}\) on a constituent dataset \(\mathbf{z}_{i}\), with \(i\in\{1,2\}\), which is denoted by \(\mathsf{R}_{\mathsf{z}_{i}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0}, \lambda_{0})}\right)\). A pertinent figure of merit is the difference \(\mathsf{R}_{\mathbf{z}_{i}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0}, \lambda_{0})}\right)-\mathsf{R}_{\mathbf{z}_{i}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z }_{i}}^{(Q_{i},\lambda_{i})}\right)\). The following theorem provides an explicit expression for such quantity. **Theorem 5**: _Assume that the \(\sigma\)-finite measures \(Q_{0}\), \(Q_{1}\) and \(Q_{2}\) in (24) are pair-wise mutually absolutely continuous. 
Then, for all \(i\in\{1,2\}\),_ \[\mathsf{R}_{\mathbf{z}_{i}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})}\right)-\mathsf{R}_{\mathbf{z}_{i}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}^{(Q_{i},\lambda_{i})}\right)=\lambda_{i}\Big(D\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}^{(Q_{i},\lambda_{i})}\|Q_{i}\right)+D\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})}\|P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}^{(Q_{i},\lambda_{i})}\right)-D\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})}\|Q_{i}\right)\Big), \tag{35}\] _where the function \(\mathsf{R}_{\mathbf{z}_{i}}\) is defined in (7) and the measure \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}^{(Q_{i},\lambda_{i})}\) satisfies (25)._ _Proof:_ The proof is immediate from Theorem 1 by noticing that for all \(i\in\{1,2\}\), the difference \(\mathsf{R}_{\mathbf{z}_{i}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})}\right)-\mathsf{R}_{\mathbf{z}_{i}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{i}}^{(Q_{i},\lambda_{i})}\right)\) can be written in terms of the sensitivity \(\mathsf{S}_{Q_{i},\lambda_{i}}\!\left(\mathbf{z}_{i},P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{0}}^{(Q_{0},\lambda_{0})}\right)\). \(\blacksquare\) ### _Special Cases_ Consider a given \(\sigma\)-finite measure \(Q\) and assume that for all \(i\in\{0,1,2\}\) and for all \(\mathcal{A}\in\mathscr{B}(\mathcal{M})\), \(Q(\mathcal{A})=Q_{i}(\mathcal{A})\). Assume also that the parameters \(\lambda_{0}\), \(\lambda_{1}\), and \(\lambda_{2}\) in (24) satisfy \(\lambda_{1}=\frac{n_{0}}{n_{1}}\lambda_{0}\) and \(\lambda_{2}=\frac{n_{0}}{n_{2}}\lambda_{0}\). These assumptions are referred to as the case of _homogeneous priors_ with measure \(Q\), and the case of _proportional regularization_, respectively. The term "proportional" stems from the fact that the regularization factor in (24) is scaled in inverse proportion to the size of the corresponding dataset. Under these assumptions, the following corollary of Theorem 2 unveils an interesting connection with Jeffrey's divergence [33]. **Corollary 2**: _Consider the case of homogeneous priors with a \(\sigma\)-finite measure \(Q\) and proportional regularization with parameter \(\lambda_{0}\). Then, the probability measures \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}^{(Q,\lambda_{1})}\) and \(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}^{(Q,\lambda_{2})}\) in (25) satisfy_ \[\frac{n_{1}}{n_{0}}\Big(\mathsf{R}_{\mathbf{z}_{1}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}^{(Q,\lambda_{2})}\right)-\mathsf{R}_{\mathbf{z}_{1}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}^{(Q,\lambda_{1})}\right)\Big)+\frac{n_{2}}{n_{0}}\Big(\mathsf{R}_{\mathbf{z}_{2}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}^{(Q,\lambda_{1})}\right)-\mathsf{R}_{\mathbf{z}_{2}}\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}^{(Q,\lambda_{2})}\right)\Big)=\lambda_{0}\Big(D\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}^{(Q,\lambda_{1})}\|P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}^{(Q,\lambda_{2})}\right)+D\!\left(P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{2}}^{(Q,\lambda_{2})}\|P_{\mathbf{\Theta}|\mathbf{Z}=\mathbf{z}_{1}}^{(Q,\lambda_{1})}\right)\Big),\] _where the right-hand side is \(\lambda_{0}\) times Jeffrey's divergence between the two Gibbs measures; the identity follows from weighting the equality in (26) by \(\frac{n_{i}}{n_{0}}\) and summing over \(i\in\{1,2\}\)._
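Although the statements above are measure-theoretic, they are straightforward to check numerically when the model set \(\mathcal{M}\) is finite. The following minimal sketch (ours, not part of the original material; all numerical values are arbitrary) instantiates the Gibbs measure in (13) under a uniform reference measure and verifies Lemma 3 and Theorem 1:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8                                  # size of the finite model set M
L = rng.uniform(0.0, 2.0, size=m)      # empirical risks L_z(theta), eq. (4)
Q = np.full(m, 1.0 / m)                # reference measure (uniform probability)
lam = 0.5                              # regularization factor lambda

# Gibbs measure of eq. (13): dP/dQ = exp(-K_{Q,z}(-1/lam) - L_z/lam)
w = Q * np.exp(-L / lam)
K = np.log(w.sum())                    # K_{Q,z}(-1/lam), eq. (11)
P_gibbs = w / w.sum()

def D(a, b):                           # relative entropy, discrete case of (6)
    return float(np.sum(a * np.log(a / b)))

def R(p):                              # expected empirical risk, eq. (7)
    return float(p @ L)

# Lemma 3: R(P*) + lam * D(P*||Q) = -lam * K_{Q,z}(-1/lam)
assert np.isclose(R(P_gibbs) + lam * D(P_gibbs, Q), -lam * K)

# Theorem 1: R(P) - R(P*) = lam * (D(P*||Q) + D(P||P*) - D(P||Q))
P = rng.dirichlet(np.ones(m))          # arbitrary measure, abs. cont. w.r.t. Q
assert np.isclose(R(P) - R(P_gibbs),
                  lam * (D(P_gibbs, Q) + D(P, P_gibbs) - D(P, Q)))
```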
2306.02193
LDEB -- Label Digitization with Emotion Binarization and Machine Learning for Emotion Recognition in Conversational Dialogues
Emotion recognition in conversations (ERC) is vital to the advancements of conversational AI and its applications. Therefore, the development of an automated ERC model using the concepts of machine learning (ML) would be beneficial. However, the conversational dialogues present a unique problem where each dialogue depicts nested emotions that entangle the association between the emotional feature descriptors and emotion type (or label). This entanglement that can be multiplied with the presence of data paucity is an obstacle for a ML model. To overcome this problem, we proposed a novel approach called Label Digitization with Emotion Binarization (LDEB) that disentangles the twists by utilizing the text normalization and 7-bit digital encoding techniques and constructs a meaningful feature space for a ML model to be trained. We also utilized the publicly available dataset called the FETA-DailyDialog dataset for feature learning and developed a hierarchical ERC model using random forest (RF) and artificial neural network (ANN) classifiers. Simulations showed that the ANN-based ERC model was able to predict emotion with the best accuracy and precision scores of about 74% and 76%, respectively. Simulations also showed that the ANN-model could reach a training accuracy score of about 98% with 60 epochs. On the other hand, the RF-based ERC model was able to predict emotions with the best accuracy and precision scores of about 78% and 75%, respectively.
Amitabha Dey, Shan Suthaharan
2023-06-03T20:37:46Z
http://arxiv.org/abs/2306.02193v1
# LDEB: Label Digitization with Emotion Binarization and Machine Learning for Emotion Recognition in Conversational Dialogues ###### Abstract Emotion recognition in conversations (ERC) is vital to the advancements of conversational AI and its applications. Therefore, the development of an automated ERC model using the concepts of machine learning (ML) would be beneficial. However, the conversational dialogues present a unique problem where each dialogue depicts nested emotions that entangle the association between the emotional feature descriptors and emotion type (or label). This entanglement, which can be multiplied by the presence of data paucity, is an obstacle for an ML model. To overcome this problem, we proposed a novel approach-called Label Digitization with Emotion Binarization (LDEB)-that disentangles the twists by utilizing the text normalization and 7-bit digital encoding techniques and constructs a meaningful feature space for an ML model to be trained. We also utilized the publicly available dataset-called the FETA-DailyDialog dataset-for feature learning and developed a hierarchical ERC model using random forest (RF) and artificial neural network (ANN) classifiers. Simulations showed that the ANN-based ERC model was able to predict emotion with the best accuracy and precision scores of about 74% and 76%, respectively. Simulations also showed that the ANN model could reach a training accuracy score of about 98% with 60 epochs. On the other hand, the RF-based ERC model was able to predict emotions with the best accuracy and precision scores of about 78% and 75%, respectively. ## 1 Introduction The development of an automated system for emotion recognition in conversations (ERC) is beneficial to many conversational AI applications [17, 14]. The recent language model ChatGPT in the domain of conversational AI has shown the usefulness of an automated system for ERC [23, 19]. Such a system can help advance research in many disciplines that include computational linguistics, neuroscience, and psychology [18, 2]. There has been a significant effort to understand the emotions in conversations and develop efficient computational techniques and machine learning classifiers for ERC using the information in conversational dialogues [17, 16]. For example, [17]-assuming that the textual information in a dialogue does not deliver sufficient information-proposed an approach to supply emotion information _a priori_ at training. Subsequently, [17] have also utilized the Long Short Term Memory networks (LSTM) architecture hierarchically-as an iterative model-to capture contextual emotional features so that the model can predict the emotions in textual dialogues. Machine learning (ML) is a technique that can help us develop such an automated system to recognize emotions in a conversational dialogue by performing the classification of emotions. For example, [14] have adapted emotion theories, based on Ekman's model and the OCC (Ortony/Clore/Collins) model, and developed a support vector machine (SVM) classifier for emotion recognition in web blog data. A brief discussion on the Ekman model and the OCC model can be found in (Zad et al., 2021). Similarly, deep learning techniques have also been studied to detect emotions in textual dialogues by extracting sentiment and semantic feature descriptors (Chatterjee et al., 2019). They also utilized an LSTM model to extract sentiment and semantic feature descriptors and build their deep-learning models.
As reported in (Acheampong et al., 2020), we also observed that the convolutional neural network (CNN) and recurrent neural network (RNN) have also been used in this research domain for developing ERC models. While these approaches provide solutions to emotion detection from text, we observed that they did not address the _emotion entanglement_ issues that were inherent in many conversational dialogues. Here, it is important to differentiate the usage of the term _entanglement_ in the NLP domain and understand the context in which it is used in our research. In (Huang et al., 2021), the authors used the term to describe the interlaced relationship between semantic and syntactic information. In (Webson et al., 2020), the authors used the term to describe how pre-trained models encode denotation and connotations as one intertwined representation. Furthermore, we see the usage of the term _word entanglement_ in (Matthews, 2018), by which the authors described the relation between the content and style of sentences when considering the transferability of a sequence of tokens. However, in our research, we use the term to explain how two dialogues may be made up of the same set of emotions, but in a different order of appearance and/or frequency. A conversational dialogue is generally composed of several utterances, or sub-dialogues, that express different emotions; hence, the true message of the dialogue could be incorrectly interpreted by the listener. The individual utterance, or sub-dialogue, annotation in isolation cannot be utilized accurately unless it is considered in relation to its preceding and following sub-dialogues, collectively, because the same sub-dialogue in another context might be annotated with a different emotion label. When a human language expert manually annotates the dialogues, they annotate them based on the contextual meaning. But this intra-relationship between the sub-dialogues could be lost if they were not captured together. A conversational dialogue with such a state of confusion can cause problems for the successful development of conversational AI models from such data. We called this problem an _emotion entanglement_, because of the nested nature of the emotions and the twisted association between the feature descriptors and the emotion types. For example, consider a dialogue \(d_{1}\) that has six sub-dialogues \((u_{0},u_{1},u_{2},u_{3},u_{4},u_{5})\) that express 4 emotions (0,1,2,4) out of 7 emotions (0,1,2,3,4,5,6), and it is labeled as follows: (4, 2, 0, 1, 0, 1). Let's also consider another dialogue \(d_{2}\) that has six sub-dialogues \((v_{0},v_{1},v_{2},v_{3},v_{4},v_{5})\) and is labeled as (0, 2, 0, 0, 4, 1); it also expresses the same four emotions (0,1,2,4). Therefore, in this case, we say that "the 4 emotions (0,1,2,4) are entangled" in the dialogues \(d_{1}\) and \(d_{2}\). This entanglement can get more complicated with respect to the increase in the number of emotions and sub-dialogues present in the dialogue. In other words, the emotion entanglement leads to the labeling of two dialogues-that express the same emotion set-with distinct values. In our study, we have defined emotion entanglement as the measure of the significance of the order of emotions in a dialogue and its association with the feature descriptors via a computational model.
Hence, the proposed approach alleviates this problem by digitizing the labels based on the binarization of emotions in an ordered sequence \((e_{0},e_{1},e_{2},e_{3},e_{4},e_{5},e_{6})\), where the presence or absence of each emotion \(e_{i}\) is marked at its position \(i\), for \(i=0,\ldots,6\). For the above example, the dialogue \(d_{1}\) is labeled as (1, 1, 1, 0, 1, 0, 0) = 116, and dialogue \(d_{2}\) is also labeled as (1, 1, 1, 0, 1, 0, 0) = 116. Hence, the tokens in the dialogues \(d_{1}\) and \(d_{2}\) can be mathematically associated with the same label that is represented by a single integer between 0 and 127 in the proposed approach. In addition, ML techniques require reliable labeled datasets to develop trustworthy ML models for an ERC system. Hence, our proposed work addressed these issues. Fortunately, a dialogue dataset, called DailyDialog, has been recently developed and distributed for research use (Li et al., 2017). Using this dataset, the developers of this dataset evaluated some of the existing techniques by dividing them into three groups, namely embedding-based, feature-based, and neural network-based similarity-response retrieval approaches. Most importantly, this dataset is manually labeled; hence, it can be used to train machine learning models that can predict emotions in a conversational dialogue. Subsequently, the authors of (Albalak et al., 2022) utilized this dataset and developed a dataset-called the FEw-sample TAsk transfer (FETA) dataset-with the hope of efficiently training the large language models (LLMs) that are capable of performing self-supervised learning on large unlabeled data (Chen et al., 2020). However, it is still a question whether this dataset has sufficient information to train an ML model under a data paucity problem and develop an ERC system. In this paper-to address the problems and challenges caused by the emotion entanglement and data paucity-we proposed an approach to generate a feature space that consists of feature vectors with disentangled emotions and feature descriptors. We called this approach the "Label Digitization with Emotion Binarization (LDEB)" and utilized the text normalization and 7-bit digital encoding techniques to disentangle the twist of the association between the feature descriptors and emotion types. We also used the FETA-DailyDialog dataset to study our proposed approach. While the proposed LDEB approach offers a solution to address this problem, it can also introduce a new imbalanced data problem. Hence, it restricts the direct application of an ML technique (Suthaharan, 2016a). To alleviate this problem, we proposed a hierarchical ML solution that will be discussed in the subsequent sections. Our proposed work also offered a refined dataset-we called it the LDEB-DailyDialog dataset-that is a derivative of the DailyDialog [Li et al., 2017] and FETA-DailyDialog datasets [Albalak et al., 2022]. The refined LDEB-DailyDialog dataset is a systematically organized feature space that enables its direct utilization for training conversational AI models for emotion recognition. ## 2 Proposed Methodology The proposed methodology consists of four modules. The first module integrates the proposed concept of Label Digitization with Emotion Binarization, which generates a set of meaningful aggregated emotions for a dialogue. The second module performs a feature learning technique that generates feature vectors for conversational dialogues and maps the feature vectors to the aggregated emotions.
The third module presents a hierarchical data modeling that splits a given dataset into balanced subsets (we called them Split-Sets) of data for the training of ML classifiers in a hierarchically ordered sequence. The fourth module assumes RF and ANN classifiers for the models used in the hierarchical data structure and performs simulations to show the feasibility and performance efficiency of the proposed LDEB machine learning model. ### Proposed LDEB Concept The FETA-DailyDialog dataset consists of information that allows a mapping between _Dialogues_Text_ and _Dialogues_Emotion_ that is useful for our goal of developing ML models for emotion recognition in conversations. The information that is useful for our study includes the seven types of emotions-anger, disgust, fear, happiness, sadness, surprise, and no emotion-and the number of instances (or sub-dialogues) associated with these classes. The number of sub-dialogues is 1,022, 353, 74, 12,885, 1,150, 1,823, and 85,572, respectively. However, there are 13,118 dialogues that are formed by these sub-dialogues. This label distribution shows the imbalanced nature of the DailyDialog dataset that can create problems and challenges in developing ML models to classify emotions using this dataset. One of the goals of the proposed approach is to alleviate the imbalanced nature of the data in the training set. In the DailyDialog dataset, the _Dialogues_Text_ column contained 13,118 dialogues, as stated earlier in this paper. In the _Dialogues_Emotion_ column, each sub-dialogue of a dialogue was annotated with an emotion class representing the dominant emotion exhibited by the sub-dialogue. There were a total of 7 emotion classes. The emotion labels were as follows: 0: no emotion, 1: anger, 2: disgust, 3: fear, 4: happiness, 5: sadness, and 6: surprise. Each sentence of the dialogue is annotated with one of the 7 aforementioned emotion classes. Therefore, a dialogue, which is composed of multiple sub-dialogues, is annotated by several emotion classes. This environment presented a multi-class challenge since each dialogue has multiple corresponding emotions. Table 1 shows an example of how every sub-dialogue has been annotated with one of the following seven emotions (0: no emotion, 1: anger, 2: disgust, 3: fear, 4: happiness, 5: sadness, and 6: surprise) in the FETA-DailyDialog dataset. Therefore, we have multi-class emotion labels for every dialogue. Table 2 shows an example of generating a label by our LDEB approach. It shows that Emo_Sum 68 represents no emotion and happiness, where Emo_Sum is the variable that we defined to represent the newly defined combinations of emotions. This table also illustrates how Emo_Sum values for every dialogue are computed using our LDEB approach. There are a total of 7 emotion types, so we represented them in a 7-bit binary encoding system. Every sub-dialogue has been annotated with one of these 7 emotion classes. We marked each of the seven bit positions with 1 if the corresponding emotion class is present in the dialogue and 0 otherwise, and summed the resulting powers of two to calculate the Emo_Sum value. This value captures the emotional relationship between the sub-dialogues and is the prediction value for our LDEB-DailyDialog dataset. Table 3 lists some of the Emo_Sum representations of the possible combinations of emotions with the counts of dialogues for each Emo_Sum.
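For concreteness, the Emo_Sum computation reduces to a few lines of code. The sketch below is our illustration rather than the authors' released implementation (the function name is hypothetical); it reproduces the values discussed in Section 1 and Table 2:

```python
def emo_sum(sub_dialogue_emotions, n_classes=7):
    """LDEB label of a dialogue: bit i marks the presence of emotion
    class i, with class 0 (no emotion) as the most significant bit."""
    present = set(sub_dialogue_emotions)
    return sum(2 ** (n_classes - 1 - i) for i in present)

# Dialogues d1 and d2 from Section 1: same emotion set, same label.
assert emo_sum([4, 2, 0, 1, 0, 1]) == 116
assert emo_sum([0, 2, 0, 0, 4, 1]) == 116
# Table 2: No Emotion + Happiness = 64 + 4.
assert emo_sum([0, 4]) == 68
```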
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Emo 0 & Emo 1 & Emo 2 & Emo 3 & Emo 4 & Emo 5 & Emo 6 \\ (No Emotion) & (Anger) & (Disgust) & (Fear) & (Happiness) & (Sadness) & (Surprise) \\ \hline 64 & 32 & 16 & 8 & 4 & 2 & 1 \\ \hline 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ \hline \multicolumn{7}{|c|}{Emo\_Sum = \(2^{6}+2^{2}=64+4=68\)} \\ \end{tabular} \end{table} Table 2: For each dialogue, Emo_Sum is calculated using the LDEB approach. An Emo_Sum of 68 means that the dialogue has the following emotion classes present - _No Emotion_ and _Happiness_. A lookup table (Table 3) is further constructed for the 128 Emo_Sum values and their corresponding combination of emotion classes. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Emo\_Sum** & **Binary Numbers** & **Description** & **Count** \\ \hline 4 & 0000100 & Happiness & 437 \\ \hline 64 & 1000000 & No Emotion & 6247 \\ \hline 65 & 1000001 & No Emotion + Surprise & 685 \\ \hline 66 & 1000010 & No Emotion + Sadness & 391 \\ \hline 68 & 1000100 & No Emotion + Happiness & 3708 \\ \hline 69 & 1000101 & No Emotion + Happiness + Surprise & 494 \\ \hline 80 & 1010000 & No Emotion + Disgust & 131 \\ \hline 96 & 1100000 & No Emotion + Anger & 278 \\ \hline \end{tabular} \end{table} Table 3: It illustrates the binary representations of Emo_Sum values, descriptions, and counts. Only the Emo_Sum values in the 5 groups generated by the hierarchical data modeling (as presented in Figure 1) are presented in the table. There is a total of 128 possible Emo_Sum representations as per Table 2. ### Feature Learning The proposed feature learning involved the extraction of emotional descriptors (features) and the association of features with Emo_Sum values. It was performed by adapting the standard concept of text normalization that is used in natural language processing (NLP). We know that every dialogue is made up of sub-dialogues, and each sub-dialogue is composed of words. Hence, we utilized the bag-of-words (BoW) representation model to characterize these dialogues. We used every unique word as a feature and determined the occurrence frequency of each word in the respective dialogue. Initially, we tokenized all of the sentences in the 13,117 dialogues and then removed punctuation from the tokens. This process resulted in 1,200,389 words. We decided to keep the _stop words_ since the elimination of these words may result in the loss of important features that might be useful for machine learning. We also developed a word counter to address the word repetition problem. We created an empty list and added a new word to the list if it wasn't already present. The automation of this process identified 26,372 unique words in the 13,118 dialogues. Hence, our feature learning constructed a feature vector of dimension 26,372, where each feature is one of the unique words. By iterating through the dialogues, we examined the occurrence of these unique words in every dialogue and accordingly aggregated the tally to define the magnitude of the feature vectors for each dialogue, which is considered an observation of the feature space that we learned from the DailyDialog dataset.
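To make the feature-learning step concrete, here is a toy bag-of-words sketch (our illustration with made-up dialogues; the actual pipeline runs over all dialogues and yields the 26,372-word vocabulary reported above):

```python
import string
from collections import Counter

def tokenize(dialogue):
    """Split on whitespace, strip punctuation, lowercase; stop words kept."""
    strip = str.maketrans("", "", string.punctuation)
    return [t for t in (w.translate(strip).lower() for w in dialogue.split()) if t]

dialogues = ["Hi ! How are you ?", "I am fine , thanks . How about you ?"]
tokens = [tokenize(d) for d in dialogues]
vocab = sorted({w for ts in tokens for w in ts})   # unique words = features

# One occurrence-count vector of dimension |vocab| per dialogue.
X = [[Counter(ts)[w] for w in vocab] for ts in tokens]
```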
### Hierarchical Data Modeling After constructing the feature space for _LDEB-DailyDialog_, we found the dataset of 13,117 dialogues was still imbalanced, as shown in Table 3. As we can see, there are 6,247 dialogues with an Emo_Sum value of 64 (Absence of Emotion) and 6,870 dialogues with other Emo_Sum values (Presence of Emotion). Note that if a dialogue has "no emotion" class and "an emotion" (e.g., happiness or surprise) class, then the dialogue reflects an emotional dialogue (i.e., presence of emotion). To address the imbalanced data problem, we adapted the concept of _Hierarchical Machine Learning_. In this approach, we hierarchically built 4 machine-learning models by dividing (splitting) and grouping dialogues with respect to their LDEB labels. Figure 1 illustrates how the 4 subsets are split based on aggregating labels together to create more balanced sets. Figure 1: The proposed hierarchical data modeling framework to improve the balanced nature of the data. For model M1, we labeled all dialogues with an emotion sum (ES) value of 64 with Label 0. Dialogues with an ES value of 64 are dialogues that only have the no emotion (NE) class annotated to all of their sub-dialogues. There are 6,247 such dialogues. On the other hand, all dialogues that have ES values other than 64 are dialogues that have at least one sub-dialogue with an emotion class other than NE annotated (\(\neg NE\)). In other words, these dialogues are such that there is a presence of emotion. We labeled all of such dialogues as Label 1. There are 6,870 such dialogues, as can be seen in Figure 1. For building model M2, we discarded all the dialogues with an ES value of 64 that were previously considered for model M1. We labeled all dialogues with an ES value of 68 with Label 0. Dialogues with an ES value of 68 are dialogues that have both no emotion (NE) and happiness (HP) emotion classes annotated to their sub-dialogues. It is to be noted that these dialogues could have any number of sub-dialogues that are annotated with NE or HP. But by applying our LDEB approach, we are simply determining whether an emotion class is present in a dialogue or not. So the number of NE and HP annotations can be ignored. We have 3,708 dialogues with an ES value of 68. On the other hand, all dialogues that have ES values other than 68 are labeled as Label 1. There are 3,162 such dialogues. For model M3, we discarded all the dialogues with an ES value of 68 that were previously considered in model M2. We labeled all dialogues with an ES value of 65, 69, or 4 with Label 0. These are collections of dialogues that have one of the following characteristics: (a) sub-dialogues only annotated with emotion classes NE and surprise (SP), (b) sub-dialogues only annotated with emotion classes NE, happiness (HP), and SP, or (c) sub-dialogues only annotated with emotion class HP. There is no other motive for the selection of these emotion classes other than to create more balanced Split-Sets for model building. There are 1,616 such dialogues. On the other hand, all dialogues that have an ES value other than 65, 69, or 4 are labeled as Label 1. There are 1,546 such dialogues. Finally, for model M4, we discarded all the dialogues with an ES value of 65, 69, or 4 that were previously considered in model M3. We label all dialogues with an ES value of 66, 96, or 80 with Label 0. These are collections of dialogues that have one of the following characteristics: (a) sub-dialogues only annotated with NE and sadness (SD), (b) NE and anger (AG), or (c) NE and disgust (DG). There are 800 such dialogues. On the other hand, all dialogues that have an ES value other than 66, 96, or 80 are labeled as Label 1. There are 746 such dialogues. These interpretations of ES values can be further looked at in Table 3.
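The four hierarchical splits can be expressed as a single pass over the labeled corpus. The following is a minimal sketch (ours, not the authors' code; the Emo_Sum target sets are taken from Figure 1 and Table 3):

```python
SPLITS = [                       # (model, Emo_Sum values assigned Label 0)
    ("M1", {64}),                # No Emotion only
    ("M2", {68}),                # No Emotion + Happiness
    ("M3", {65, 69, 4}),         # NE+SP, NE+HP+SP, HP only
    ("M4", {66, 96, 80}),        # NE+SD, NE+AG, NE+DG
]

def hierarchical_splits(dialogues):
    """dialogues: iterable of (feature_vector, emo_sum) pairs.
    Yields one binary-labelled Split-Set per model; Label-0
    dialogues are discarded before building the next level."""
    remaining = list(dialogues)
    for model, zeros in SPLITS:
        X = [x for x, _ in remaining]
        y = [0 if es in zeros else 1 for _, es in remaining]
        yield model, X, y
        remaining = [(x, es) for x, es in remaining if es not in zeros]
```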
As we have seen above, this hierarchical data modeling framework allows the ML models (M1, M2, M3, and M4) to be trained on the hierarchical split-sets and predict the 5 aggregated groups-from top to bottom in the tree-that appear as the leaves of the binary tree, as presented in Figure 1. We can observe that these five groups of combined emotions frequently occur in conversational dialogues. Hence, these 5 classes are used in the simulation section to train the RF and ANN models. ## 3 LDEB-DailyDialog The _Emo_Sum_ in our derived dataset is the prediction variable. By the LDEB technique explained above, we were able to aggregate the annotated emotion classes into one value which signifies the emotion(s) present in the conversational dialogue. Along with our feature space constructed from the occurrences of all the unique words in the dialogues, and by aggregating the multi-class labels into Emo_Sum values using the LDEB technique, we created the LDEB-DailyDialog dataset to train machine learning models. We believe that this dataset will be helpful to train not only language models but also large language models by following the approach systematically on emerging datasets. Figure 2: Label distribution of 4 models in the dataset before aggregation (BA) and after aggregation (AA). ## 4 Simulations We extracted four Split-Sets (or independent subdomains) of the LDEB-DailyDialog dataset using our hierarchical machine learning model, presented in Figure 1. We named these four subsets-associating them with the four hierarchical models M1, M2, M3, and M4-as Split-Set 1, Split-Set 2, Split-Set 3, and Split-Set 4. The label distribution of these Split-Sets is presented in Figure 1. The actual data domain (rows, columns) of the feature (sub) spaces of the Split-Sets, where "rows" represents the number of dialogues and "columns" represents the number of features (or tokens), are as follows: * **Split-Set 1**: (13117, 26373)-as illustrated in Figure 2(b). * **Split-Set 2**: (6870, 26373)-as illustrated in Figure 2(d). * **Split-Set 3**: (3162, 26373)-as illustrated in Figure 2(f). * **Split-Set 4**: (1546, 26373)-as illustrated in Figure 2(h). There are 26,373 features that make the dimensionality of the feature space (and the subspaces). After the aggregation of the labels, the Split-Sets formed are as follows: (i) Split-Set 1: Emo_Sum: 64 (label 0 for Split-Set 1) vs remaining Emo_Sum values (label 1 for Split-Set 1), (ii) Split-Set 2: Emo_Sum: 68 (label 0 for Split-Set 2) vs remaining Emo_Sum values (label 1 for Split-Set 2), (iii) Split-Set 3: Emo_Sum: 65, 69, 4 (label 0 for Split-Set 3) vs remaining Emo_Sum values (label 1 for Split-Set 3), and (iv) Split-Set 4: Emo_Sum: 66, 96, 80 (label 0 for Split-Set 4) vs remaining Emo_Sum values (label 1 for Split-Set 4). The splitting of the subsets is previously shown in Figure 1. As shown in Figure 2, the datasets are now nearly balanced-after aggregation (AA) compared to before aggregation (BA)-to the extent that they alleviate the problem of imbalanced data. The numbers of dialogues of the two major classes are as follows: * **Split-Set 1**: Label 0 (47.6%) vs Label 1 (52.4%). * **Split-Set 2**: Label 0 (54.0%) vs Label 1 (46.0%). * **Split-Set 3**: Label 0 (51.1%) vs Label 1 (48.9%). * **Split-Set 4**: Label 0 (51.7%) vs Label 1 (48.3%).
These percentages indicate that our hierarchical data model splits the given LDEB-DailyDialog data into nearly balanced Split-Sets, under the constraint that the models M1, M2, M3, and M4 are applied in a hierarchically ordered sequence, as in Figure 1. ### Confusion Matrix The confusion matrix in Figure 3 shows the predictive performance measures and misclassification errors of RF and ANN when they are used with our proposed hierarchical data modeling. From the data collected from our hierarchical data modeling framework, 80% of the data has been used as the training data for the ML models and 20% of the data has been used as test data. The configurations of the ML models are as follows: (a) For RF, we used the default parameters from the sklearn package with _n_estimators_ set to 100 trees. For _criterion_, we selected _gini_ - the Gini impurity measure - for its simplicity and computational efficiency and since it tends to be less sensitive to class imbalance. For _max_features_, we opted for _sqrt_, which randomly selects the square root of the number of features for each tree. Finally, we set the _bootstrap_ parameter to _True_, which offers robustness to overfitting and provides improved generalization so that RF can better capture the underlying patterns and relationships. (b) For the ANN architecture, we used 891, 828, and 734 neurons, derived using a simple optimization technique, in the first three dense layers, where we also used the _uniform_ kernel initializer and the _relu_ activation function. In the last dense layer, we used 2 neurons. We compiled the ANN model with the stochastic gradient descent optimizer, the mean squared error loss, and accuracy as the evaluation measure, along with a batch size of 20 and 80 epochs. For example, Figure 3(a) shows that the RF-M1 model predicts 67% of label 0s as 0s, and 33% of 0s as 1s, while the ANN-M1 model (Figure 3(e)) predicts 73% of label 0s as 0s, and 27% of 0s as 1s. On the other hand, Figure 3(a) shows that the RF-M1 model predicts 80% of label 1s as 1s, and 20% of 1s as 0s, while the ANN-M1 model (Figure 3(e)) predicts 71% of label 1s as 1s, and 29% of 1s as 0s. We can also observe from Figure 3(b) and Figure 3(f) that both RF-M2 and ANN-M2 perform equally well, with 81% and 80% predictive performance of the positive class at that node of the hierarchy of the data model. The darker shading of the diagonal cells of the confusion matrices, and the higher number of such cells for RF, indicate that RF benefits from the hierarchical data model and performs better than the ANN model. ### Performance Scores The results presented in Table 4 show the comparison of the accuracy, precision, and sensitivity scores between the RF and ANN classifiers after applying them to the 4 Split-Sets hierarchically [Suthaharan, 2016b]. We ran the experiments on the 4 models several times to obtain the reported best results. From our performance analysis, we found that RF performs better than ANN for all the models, based on the test accuracy scores of 0.735, 0.756, 0.781, and 0.703 for the models M1, M2, M3, and M4, respectively. On the other hand, ANN achieved slightly lower test accuracy scores of 0.720, 0.738, 0.739, and 0.660 for the 4 models, respectively.
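For concreteness, here is a sketch of the two classifier configurations described in the confusion-matrix subsection. This is our reconstruction, not the authors' code: the paper names sklearn, but the deep-learning library is not stated (TensorFlow/Keras is assumed), the output-layer activation is not reported (softmax is assumed), and Keras' "random_uniform" initializer stands in for the reported _uniform_ initializer.

```python
from sklearn.ensemble import RandomForestClassifier
import tensorflow as tf

# RF configuration as reported: 100 trees, Gini criterion, sqrt features,
# bootstrap sampling.
rf = RandomForestClassifier(n_estimators=100, criterion="gini",
                            max_features="sqrt", bootstrap=True)

# ANN: three dense layers (891, 828, 734) with uniform initialization and
# ReLU, a 2-neuron output layer, SGD optimizer, and MSE loss.
ann = tf.keras.Sequential([
    tf.keras.layers.Dense(891, kernel_initializer="random_uniform", activation="relu"),
    tf.keras.layers.Dense(828, kernel_initializer="random_uniform", activation="relu"),
    tf.keras.layers.Dense(734, kernel_initializer="random_uniform", activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # activation assumed
])
ann.compile(optimizer="sgd", loss="mse", metrics=["accuracy"])
# Training as described: ann.fit(X_train, y_train, batch_size=20, epochs=80)
```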
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & & model M1 & model M2 & model M3 & model M4 \\ \hline \multirow{3}{*}{RF} & A & 0.735 & 0.756 & 0.781 & 0.703 \\ \cline{2-6} & P & 0.719 & 0.753 & 0.743 & 0.744 \\ \cline{2-6} & S & 0.799 & 0.692 & 0.790 & 0.670 \\ \hline \multirow{3}{*}{ANN} & A & 0.720 & 0.738 & 0.739 & 0.660 \\ \cline{2-6} & P & 0.735 & 0.758 & 0.761 & 0.617 \\ \cline{2-6} & S & 0.708 & 0.671 & 0.690 & 0.657 \\ \hline \end{tabular} \end{table} Table 4: It presents the test performance scores for the Random Forest classifiers and the Artificial Neural Network classifiers that are associated with the 4 models, where the symbol A represents the accuracy, P represents the precision, and S represents the sensitivity. Figure 3: Confusion Matrix of Random Forest (RF) and Artificial Neural Network (ANN) on 4 Models. While the accuracy scores are acceptable to explain the models' performance in our simulations - since we nearly balanced the dataset hierarchically - the precision score is a preferred measure together with the interpretation of the sensitivity score [Suthaharan, 2016b]. For ANN-M1, the precision score of 0.735 (\(\sim\)0.74) suggests that the TP is nearly 3 times (i.e., 2.85 times) the FP. At the same time, the sensitivity score of 0.708 (\(\sim\)0.71) suggests that the TP is nearly two-and-a-half times (2.45 times) the FN. It indicates - as far as the positive class is concerned - that the ANN-M1 can be considered precise and reasonably less sensitive to the negative class. On the other hand, for RF-M1, the precision score of 0.719 (\(\sim\)0.72) and the sensitivity score of 0.799 (\(\sim\)0.80) suggest that this model is less precise and strongly less sensitive to the negative class than the ANN-M1 model. For ANN-M2, the precision score of 0.758 (\(\sim\)0.76) suggests that the TP is more than 3 times (i.e., 3.2 times) the FP. At the same time, the sensitivity score of 0.671 (\(\sim\)0.67) suggests that the TP is more than 2 times the FN. It indicates - as far as the positive class is concerned - that the ANN-M2 can be considered precise and less sensitive to the negative class. On the other hand, for RF-M2, the precision score of 0.753 (\(\sim\)0.75) and the sensitivity score of 0.692 (\(\sim\)0.69) suggest that this model is less precise and less sensitive to the effect of the negative class than the ANN-M2 model. For ANN-M3, the precision score of 0.761 (\(\sim\)0.76) also suggests that the TP is more than 3 times (i.e., 3.2 times) the FP. At the same time, the sensitivity score of 0.69 suggests that the TP is more than twice (2.22 times) the FN. It indicates - as far as the positive class is concerned - that the ANN-M3 can be considered precise and less sensitive to the negative class. On the other hand, for RF-M3, the precision score of 0.74 and the sensitivity score of 0.79 suggest that this model is less precise and significantly less sensitive to the negative class than the ANN-M3 model. In contrast, ANN-M4's performance is significantly lower compared to the RF-M4 model, as we can see from the much lower precision and sensitivity scores of ANN-M4. This is understandable since the ANN models require sufficient training samples to boost their performance, but we can see the presence of data paucity for model M4 in the hierarchical model in Figure 1. ## 5 Discussion The simulations also showed very high training accuracy for the ANN classifier.
For example, the ANN classifier delivered training accuracy scores of 0.9438 (40 epochs) for model M1, 0.9758 (60 epochs) for model M2, 0.9478 (60 epochs) for model M3, and 0.9515 (100 epochs) for model M4. However, the test accuracy dropped significantly compared to the training accuracy. It indicates there is a significant distribution drift between the training and test sets. In other words, it indicates that the DailyDialog dataset has a limited capacity to supply sufficient information (or feature vectors) for the proposed approach to perform better. ## 6 Conclusion We were able to define an emotion entanglement problem in conversational dialogues and develop techniques to extract a feature space from the conversational dialogues in a meaningful way to disentangle the twisted association between feature descriptors and the emotion types and predict the emotions in conversations. We were also able to create a derivative of the DailyDialog dataset - that we called the LDEB-DailyDialog dataset - by implementing label-digitization and emotion-binarization techniques. We hope that the LDEB-DailyDialog dataset will be a great resource for developing improved machine-learning techniques for emotion detection in conversational dialogues. We were also able to detect the missing combinations of emotions in the DailyDialog dataset, and this limitation will be addressed in our future research by incorporating additional information into our feature vectors, including positional encoding and word-sense disambiguation. We will also add more conversations to alleviate some of the shortcomings of the DailyDialog dataset. We also learned that the techniques proposed in the paper can be scaled to accommodate more emotion classes and that the hierarchical framework can be revised to balance conversational dialogue datasets. ## Limitations The presence of data paucity in the DailyDialog dataset is one of the limitations. The poor performance of ANN, compared to RF, may be considered a measure of data paucity. In general, neural network models require a large number of data points (i.e., dialogues for this study) in order to learn sufficiently so that they can provide good predictive performance. Another limitation is that the DailyDialog dataset does not have all the possible combinations of emotions that are defined by the LDEB approach; hence, the proposed research is limited to the available emotions. However, its flexible nature can allow us to add more emotions to the training as they become available. The limitations also include the restrictions on the combinations of emotions that are imposed by the proposed hierarchical data modeling based on the information available in the DailyDialog dataset.
2301.10530
CFD simulation for Hydraulic Short-Circuit feasibility analysis
Hydraulic Short Circuit (HSC) application allows the simultaneous pumping and generating operations on different units of the same pumped hydro energy storage (PHES) plants for the extension of the power consumption range. This article proposes a methodology based on time-history statistics and CFD numerical simulations to investigate the feasibility of integrating such a kind of operation in pre-existing hydropower plants. The evaluation of the potential power output and pressure drop analysis in the new HSC operation are presented. The penstock trifurcation of a selected PHES plant is also analysed by CFD simulations, by evaluating the loss-head coefficients and comparing the flow conditions in different PHES operations.
A. Morabito, C. Wu, S. Sigali, E. Vagnoni
2023-01-25T11:34:56Z
http://arxiv.org/abs/2301.10530v1
# CFD simulations for hydraulic short-circuit feasibility analysis ###### Abstract Hydraulic Short Circuit (HSC) application allows the simultaneous pumping and generating operations on different units of the same pumped hydro energy storage (PHES) plants for the extension of the power consumption range. This article proposes a methodology based on time-history statistics and CFD numerical simulations to investigate the feasibility of integrating such a kind of operation in pre-existing hydropower plants. The evaluation of the potential power output and pressure drop analysis in the new HSC operation are presented. The penstock trifurcation of a selected PHES plant is also analysed by CFD simulations, by evaluating the loss-head coefficients and comparing the flow conditions in different PHES operations. ## 1 Introduction Pumped hydro energy storage (PHES) plants have been adopted in the last decades to mitigate the installation boost of intermittent energy sources, such as wind and photovoltaic power systems [1]. At the same time, recent economics and policy trends pinpoint the challenge of high flexibility in both power generation and storage modes. In PHES, such operating control, which could be awarded by ancillary service remunerations depending on the local energy market regulations, is partially achieved in generation mode with the power adjustment provided by the Francis turbine guide vanes or by the Pelton turbine spear valve and deflectors. However, in pumping mode there is no possibility to manage the power consumption because the guide vanes in the pump-turbine usually stay fully open [2, 3]. Exceptionally, variable-speed operation and, more rarely, geometry regulation allow PHES plants to extend their operating flexibility. Nevertheless, for already existing PHES, the economic justification of adding variable-speed capacity might be somewhat poor, especially for small-hydro projects [4]. The hydraulic short circuit (HSC) configuration, corresponding to the simultaneous operation of the pumps and turbines, enhances the power consumption flexibility of PHES. The main advantage of this practice is regulating the net energy absorbed by the PHES plant with a power regulation range equal to that of the turbine operation [2]. If the hydropower system is equipped with a ternary configuration (or with multiple reversible units), HSC can be implemented to supply primary and secondary frequency regulation services to the transmission system operators (TSOs) within a larger marketable capacity. The power regulation range provided by reversible units in HSC is inherently smaller than in ternary solutions. The hydraulic pump-turbine cannot operate simultaneously in both modes, and it has a smaller range of operability than turbines, especially Pelton turbines. Today, the HSC principle counts very few active applications in the world, but academic and industrial communities have increasingly devoted fervent interest to the profitability of this emerging operation with optimal market participation and reserve scheduling [5, 6, 7, 8]. Upgrading the existing hydropower plant infrastructures for HSC is not an easy task, and a comprehensive feasibility study is required. Regarding the resiliency and reliability of HSC applications, Nicolet [9] analyzed the contribution of a PHES to compensating the consumer load fluctuations and wind power output variation, together with a thermal power plant, in a mixed islanded power network. Also, Perez-Diaz et al.
[10] confirmed the added value of HSC-PHES to the load-frequency regulation of an isolated system from the perspective of both the TSO and the power plant owner. In PHES operating in HSC modes, the flow division and the power losses in the penstock connecting the upper reservoir and more than one unit are complex, and the accurate evaluation of the power compensation is crucial in assisting the power plant operation. Kong [11] proposed a method to calculate the head loss in a shared tunnel for a PSHP with variable speed pumps, but it doubly overestimates the loss, whereas the HSC scheme, in fact, reduces the power loss in the shared tunnel. Skjelbred [12] proposed a mathematical formulation to calculate the actual loss in the shared tunnel when a PSHP operates in HSC mode, and the presented method is effective in offsetting the overrated loss and obtaining the correct result. Computational fluid dynamics (CFD) simulations are carried out to obtain more details of the water flow structures during HSC within the hydraulic circuit. Huber et al. [13] conducted CFD simulations to investigate the flow characteristics and head losses of a T-junction planned in the Kopswerk II under turbine, pumping and three different HSC operating conditions. An alternative T-junction was simulated to demonstrate a design with better flow properties and lower head losses. Decaix et al. [14] focus on one of the junctions of the Grand-Maison power plant that was recently upgraded to run in HSC mode: CFD studies have been carried out for various ratios of the flow discharge between the pumps and the turbines to determine the pressure losses in the HSC modes. This paper contributes to the definition of the methodology for HSC feasibility analysis and relates the effect of such operations to a PHES trifurcation. First, the potential of HSC integration in a selected PHES is discussed, assessing the limits of the power regulation (Section 3). In Section 4, the CFD analysis approach is presented and applied to two different scenarios of HSC operations involving two turbomachine units. Section 5 provides the numerical results: head-loss coefficients defining the pressure drop relative to the branches towards the turbine and the upper reservoir. Moreover, this paper goes beyond other works in the literature by presenting the effect of the inlet velocity profile on the trifurcation pressure drop. Finally, the conclusions are given in Section 6. ## 2 Research objectives This research article aims to investigate the feasibility of HSC integration by CFD analysis of the hydropower plant trifurcation. HSC operation was not foreseen when the selected PHES was designed. The pressure losses through the trifurcation must be estimated and considered for defining the net power output. Flow patterns and pressure distributions in the penstock determine the actual head losses. Fluid-dynamic behaviour analysis is needed to diagnose the health state and behaviour of all the essential components of the hydro-mechanical system. Penstock elements include transitions, bends, tees, elbows, and reducers. They are especially susceptible to excessive vibrations, aging, and lining loss [3]. Trifurcations can be found in the waterways between the headwater or tailwater stretches and the hydropower unit. Their task is dividing or unifying water flow with minimum losses. In HSC operations, the variable discharge per branch has a significant impact on the head losses in such connections.
Depending on their design, location and specific operating condition, complex flow patterns may occur. To learn as much as possible about these circumstances and HSC feasibility in existing PHES plants, this work analyses the hydraulic performance and the stability of the penstock trifurcation of the selected case study for a safe operation under such unusual conditions. To support the HSC feasibility assessment, depicting limits and operational configurations, CFD numerical simulations are performed to study the flow behaviour of the PHES waterways in the critical sections of the energy storage plant. ## 3 HSC potential in the selected case study This research quantifies the HSC potential for an existing PHES plant commissioned in the early '70s, featuring three identical Francis pump-turbines (Fig. 1). Concerning power management and dispatching strategy, three operations are available in a modern PHES power plant: generating mode, pumping mode and HSC. Figure 2 illustrates the different operating cases with their relative flexibility for the selected PHES system under head (H) variation. In generating mode, the turbines can operate under variable discharge (Q) coming from the upper reservoir to the lower reservoir (Q>0) and inject power into the grid (P>0). Turbines benefit from the variable guide vane openings to regulate the inlet flow to the runner and, thus, the angular momentum at the shaft. When the unit operates in pumping mode, water is delivered to the upper reservoir through the fixed-speed pump (Q<0). Pumps generally work at a single operating condition defined by a fixed Q-H characteristic curve under constant rotational speed, because inlet guide vanes are not common in pumps. Lastly, the HSC operating region can be obtained on the same reference chart of Fig. 2 by translating the turbine characteristic origin to the pump duty point [Hp, Pp]. Figure 1: A schematic view of the investigated PHES plant and the position of the three units. On the right, two pumps (P) and a single turbine (T) are operating in HSC configuration. The turbine gross head is provided by the pump units and a fraction, or the totality, of the pump discharge is delivered to the turbines while the rest is pumped to the headwater. The split of this flow is set according to the target PHES power consumption, serving the power consumption flexibility. Provisioning water both from the pumping units and the upper reservoir is conceivable when necessary. A preliminary study of the time-history statistics of the studied PHES shows that, in HSC mode, the power consumption range extends within the bounds of 5-70% of the nominal pumping consumption when a single unit is running as a pump and one as a turbine. In particular, HSC extends the operating range to an additional continuous power consumption from 92% to 173% of one pump's nominal power by engaging two machines in pumping mode and one in generating mode (HSC with three units running). Fig. 2: Power regulation chart of the PHES for generating, pumping and HSC modes. ## 4 Numerical simulation strategy ### Overview In the following sections the flow simulation setup is presented. The scope of the numerical simulation analysis is to compare the steady-state regime of a single unit in generating mode with the HSC operation of one unit in pumping and another in generating mode. ### Geometric model description The trifurcation geometry has been obtained by an on-site 3D scan and reproduced in CATIA V5. 
The three branches toward the hydraulic units have a final diameter of 2 m and the penstock diameter is 5 m. The two external branches bend at a 50\({}^{\circ}\) angle to meet at the trifurcation with an 8 m inner radius of curvature. At the crossing sections, two stiffeners up to 0.6 m tall surround the connection of the central branch (Fig. 3). ### Grid generation methodology The generated geometry is then imported into Ansys Fluent Mesh 2021R2 for meshing, and polyhedral meshes are generated for discretizing the computational fluid domain. Grid independency tests are performed to exclude effects of the mesh size on the numerical solutions. Finer local sizing is implemented in the proximity of critical areas, such as around the stiffeners. In addition, an inflation mesh technique is used for the fluid volume near the pipe wall, which generates multiple thin layers of elements within the boundary layer region to better capture the near-wall flow behaviour. The size of the first cell at the wall is always checked to agree with the adopted turbulence model. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Case & \# unit 1 & \# unit 2 & \# unit 3 & Discharge & Power \\ \hline Case-1 & Pumping mode & Off & Generating mode & \((0.5-1.1)\,Q_{p}\) & \(-P_{0}+P_{1}\) \\ \hline Case-2 & Off & Pumping mode & Generating mode & \((0.5-1.1)\,Q_{p}\) & \(-P_{0}+P_{1}\) \\ \hline \end{tabular} \end{table} Table 1: Simulated operation of the trifurcation hydropower plant Figure 3: View of the wet surfaces of the trifurcation at the PHES plant. A grid independency study is carried out monitoring the variation of the averaged pressure in control sections for converged solutions while increasing the number of grid nodes. To ease the convergence of the numerical calculations, the outlets are extended by a length of ten times the diameter, avoiding possible recirculation and convergence difficulties. ### Numerical methods All simulations are performed with the CFD software Ansys Fluent 2021R2 and the _realizable_ \(k-\varepsilon\) turbulence model. The y+ values are consistent with the model adopted, lying in the range of 30-300 for all the simulated discharge ranges in the involved domain regions. In enhanced wall treatments, each wall-adjacent cell's centroid should be located within the log-law layer; 30 \(<\) y+ \(<\) 300 requires a greater first-cell height and, thus, a reduced total number of nodes compared with a standard model (y+ = 1) [4]. The well-known SIMPLEC scheme is chosen to deal with the pressure-velocity coupling and a second-order upwind spatial discretization scheme is employed for momentum, turbulent kinetic energy and turbulent dissipation rate. The inlet boundary is set as a velocity inlet while the outlet boundaries to the upper reservoir and the turbine are set as a mass-flow outlet and a pressure outlet, respectively. A no-slip boundary condition is adopted for all the physical walls. Cases under various HSC operating conditions can be obtained by changing the discharge of the outlet connected with the upper reservoir. In those cases where the solution appears to fluctuate, under-relaxation is activated for the pressure and turbulence factors. Convergence is not judged solely by examining the residuals, but also from the imbalance of mass and momentum and the stability of the monitored outputs [4]. 
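As a pre-processing aid for the wall treatment just described, the first-cell height corresponding to a target y+ can be estimated before meshing. The following minimal sketch (in Python) uses the Schlichting flat-plate skin-friction correlation as a first guess for the wall shear stress; the flow figures (discharge, water properties) are illustrative assumptions, not values from this study, and the resulting height must in any case be verified on the converged solution.

```python
import math

def first_cell_height(U, L, nu, y_plus_target=50.0, rho=998.0):
    """Estimate the wall-adjacent cell height for a target y+.

    A pre-meshing estimate only: the Schlichting flat-plate correlation
    approximates the skin friction, from which the friction velocity and
    the first-cell height follow.
    """
    re = U * L / nu                                 # Reynolds number on length L
    cf = (2.0 * math.log10(re) - 0.65) ** -2.3      # Schlichting, valid Re < 1e9
    tau_w = 0.5 * cf * rho * U ** 2                 # wall shear stress [Pa]
    u_tau = math.sqrt(tau_w / rho)                  # friction velocity [m/s]
    return y_plus_target * nu / u_tau               # first-cell height [m]

# Penstock-like figures: D = 5 m is from the geometry above; the discharge
# of 50 m^3/s and water at 20 C (nu = 1e-6 m^2/s) are assumptions.
D = 5.0
U = 50.0 / (math.pi * D ** 2 / 4.0)                 # bulk velocity [m/s]
h = first_cell_height(U, D, nu=1.0e-6, y_plus_target=50.0)
print(f"bulk velocity {U:.2f} m/s, first-cell height ~{h * 1e3:.2f} mm")
```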
## 5 Results and discussions ### Pressure losses analysis The pressure losses in the trifurcation are calculated over fixed sections of the penstock to monitor the progressive total pressure drop. The loss coefficient (or resistance coefficient) is defined as follows [5]: \[K_{L}=h_{L}/\left(V^{2}/2g\right)\] where \(h_{L}\) is the pressure drop in meters and V is the averaged flow speed at the rated section, as in V [m\(\cdot\)s\({}^{-1}\)] = Q [m\({}^{3}\cdot\)s\({}^{-1}\)] / A [m\({}^{2}\)]. Fig. 4 illustrates the head-loss coefficients in the trifurcation for the flow delivered to the upper reservoir (\(\Delta\)P\({}_{u}\)) and to the turbine branch (\(\Delta\)P\({}_{t}\)) during the HSC operations. The results are shown over the discharge ratio between the two branches: at Qt / Qp = 0 the whole pump discharge is delivered to the upper reservoir; at Qt / Qp = 1 the whole pump discharge feeds the turbine. Fig. 4-a represents case-1, namely the HSC configuration with units 1 and 3 running. The loss coefficient Kt presents a downward trend when Qt/Qp ranges from 0.1 to 0.3 and an upward trend when Qt/Qp is greater than 0.3. Total flow losses in the penstock are mainly attributable to viscous dissipation of internal secondary flow and to wall friction. Under a lower discharge rate (Qt/Qp = [0.1 - 0.3]), the flow velocity in the turbine branch is low with respect to the nominal condition. The increase of the discharge leads to a decrease of secondary flow losses, which causes a reduction of the loss coefficient. By further increasing the discharge ratio, wall friction losses increase and cause the sharp increase of losses in the turbine branch. On the upper reservoir side, wall friction does not significantly affect the totality of the losses because the flow velocity remains low. The penstock is designed to accept the discharge from all the units simultaneously without overburdening the PHES hydraulic losses. Fig. 4-b illustrates the pressure drop of the trifurcation for case-2, where unit 2 is running as a pump and unit 3 as a turbine. The pressure losses in the turbine branch show an upward trend with the increase of the discharge ratio because of the increase of wall friction losses caused by the increasing flow velocity. For the upper reservoir branch, viscous dissipation of secondary flow accounts for most of the flow losses, given the lower flow velocity in this branch, which explains why Ku shows an upward trend with the turbine discharge. Compared with Kt, Ku is lower, resulting from lower secondary-flow and wall-friction losses due to the lower flow velocity and smaller flow deflection in the upper reservoir branch. As shown in Fig. 4, Kt and Ku of case-1 are higher than in case-2, due to additional flow losses related to the different geometry of the branch connected to the pump. Interestingly, the resulting pressure and velocity fields of the trifurcation are affected in the HSC operations when the inlet boundary conditions are not imposed as a uniform velocity profile but extracted from the 3D numerical simulations of the pump-turbine unit running in pumping mode. The three velocity components at the outlet of the spiral case are imported into the solver for the trifurcation analysis as the new inlet boundary condition. Fig. 5 illustrates a detail of the computed pump geometry and the velocity profile contours at the pump outlet. The CFD simulations of the pump-turbine are not discussed here, as they are out of the scope of this article. 
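As a concrete illustration of the loss-coefficient definition used throughout this section, a minimal sketch (Python) computes \(K_{L}\) from a monitored head drop; the numerical values below are illustrative assumptions, not results of the simulations.

```python
import math

def loss_coefficient(h_L, Q, A, g=9.81):
    """K_L = h_L / (V^2 / 2g), with V = Q / A at the rated section.

    h_L : head drop between the two monitoring sections [m]
    Q   : discharge through the rated section [m^3/s]
    A   : rated cross-sectional area [m^2]
    """
    V = Q / A                                  # section-averaged velocity [m/s]
    return h_L / (V ** 2 / (2.0 * g))

# Turbine branch, D = 2 m (from the geometry); the discharge of 10 m^3/s
# and the 0.35 m total-pressure drop are assumed, illustrative numbers.
A_t = math.pi * 2.0 ** 2 / 4.0
print(f"K_t = {loss_coefficient(0.35, 10.0, A_t):.3f}")
```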
To define the effect of the computed velocity profile at the inlet of the domain, the evolution of the total pressure is traced over several sections and grouped in three portions of the trifurcation: the pumping branch \(\Delta\)P1 (sections S6-S4), the intersection \(\Delta\)P2 (sections S4-S3) and the branch to the turbine \(\Delta\)P3 (sections S3-S0). Fig. 4: (a) head-loss coefficient for case-1; (b) head-loss coefficient for case-2. The head-loss portions and their total, \(\Delta\)Ptot = \(\Delta\)P1+\(\Delta\)P2+\(\Delta\)P3, are normalized to the maximum recorded value of each case. The analysed operating conditions span from the minimum discharge required by the turbine, Qt,min, up to the maximum, Qt,max. The latter operating point requires additional discharge from the upper reservoir to fulfil the required turbine discharge. #### 5.1.1 HSC case 1 The change of inlet velocity profile does not considerably impact the pressure drop in the \(\Delta\)P1 and \(\Delta\)P3 portions (pump and turbine branches, respectively; Fig. 6), and both reach a similar maximum value. The discharge is invariant in the pump branch while the turbine discharge spans from its minimum to its maximum, generating a constant relative pressure drop \(\Delta\)P1 of 20% of the maximum value recorded for this operating condition. On the other side, \(\Delta\)P3 rises with the discharge engaging the turbine branch, finally reaching a value similar to \(\Delta\)P1. The highest local pressure drop is located at the crossing point of the trifurcation, owing to the section expansion and curvature. \(\Delta\)P2 has a steeper evolution for the uniform velocity inlet scenario, and the last simulated point, namely Qt = Qt,max = 1.1 Qp, differs. In that operating condition, the inlet flow from the upper reservoir eases the flow momentum pointing to the turbine branch, but only in the scenario with uniform velocity inlet. With a developed turbulent velocity profile coming from the pump, the mixing point with the discharge from the upper reservoir preserves higher pressure drops. Figure 5: Illustration of the pump-turbine and the CFD velocity contours at the outlet of the pump spiral casing. #### 5.1.2 HSC case-2 The solution of case-2 with the imported pump velocity profile globally presents a lower pressure loss, apart from the initial branch \(\Delta\)P1, due to the higher initial turbulence (Fig. 7). The pressure losses \(\Delta\)P3 depend on the turbine discharge and present an upward trend with the increase of the velocity in this branch. Fig. 8 shows the comparison of the surface streamlines and velocity magnitude for sections S3-S0 for the two different inlet velocity profiles. In the case with uniform velocity inlet, the flow structures in the turbine branch are more complex, presenting two developed vortices entering the branch, which justifies the higher pressure losses for \(\Delta\)P3. \(\Delta\)P2 accounts for most of the total flow losses and it increases with the discharge of the turbine. However, with a uniform velocity inlet, it records a favourable flow path with an incoming flow from the upper reservoir at Qt = Qt,max. For a better insight into the flow field in case-2, Fig. 9 presents the surface streamlines at the middle plane, which illustrates that the sudden decrease of \(\Delta\)P2 from Qt = Qp to Qt = 1.1 Qp results from the abrupt change of the flow structure in the branch. 
Fig. 8: 2D streamlines of case-2 at Q\({}_{t}\) = Q\({}_{p}\) for sections S3 to S0. ## 6 Conclusions This paper provides a methodology to numerically investigate HSC feasibility, with a view to assessing the off-design operating conditions and their potential. The study refers to an existing PHES plant equipped with three Francis pump-turbines that was not designed for HSC application. The outcomes of this research propose a model expression for the power compensation of the head loss due to the new operation, based on 3D CFD numerical simulations evaluating the flow conditions in the PHES plant trifurcation. Under the HSC operating conditions discussed in this paper, flow losses in the penstock rise with the turbine discharge rate and reach a maximum when all the pumped flow feeds the turbine. Meanwhile, CFD results show that losses in the penstock are also affected by the level of inflow turbulence. The computation of the flow structure in the hydraulic system presents the correlation between total pressure losses and flow patterns, which indicates the loss mechanism of the penstock under HSC conditions. Furthermore, the effect of the inlet velocity profile at the trifurcation proves to be non-negligible for complex mixing cross-sections, especially when studying different turbine discharge contributions at the trifurcation. From here, unsteady simulations will be needed to validate the results and to further investigate transient regimes that potentially could affect the system in HSC operations.
2305.03905
Generalized hybrid natural inflation
Although the hybrid natural inflation is a successful inflation model, the symmetry breaking scale in the model should be large for a negative value of the running of the scalar spectral index. This study proposed a generalized hybrid natural inflation model that can realize a low-scale inflation with a negative value of the running of the scalar spectral index.
Teruyuki Kitabayashi
2023-05-06T02:48:42Z
http://arxiv.org/abs/2305.03905v2
# Generalized hybrid natural inflation ###### Abstract Although the hybrid natural inflation is a successful inflation model, the symmetry breaking scale in the model should be large for a negative value of the running of the scalar spectral index. This study proposed a generalized hybrid natural inflation model that can realize a low-scale inflation with a negative value of the running of the scalar spectral index. ## I Introduction The inflationary paradigm [1; 2; 3; 4] constitutes a major portion of standard cosmology, and several inflation models have been developed, which are consistent with observations [5]. In particular, the natural inflation (NI) model [6; 7; 8; 9] is attractive because the origin of the inflaton potential is well-motivated. The inflaton potential in the NI model is expressed as \[V(\phi)=\Lambda^{4}\left[1+\cos\left(\frac{\phi}{f}\right)\right], \tag{1}\] where \(\phi\) denotes the inflaton (a pseudo-Goldstone boson), \(\Lambda\) indicates the inflation scale, and \(f\) represents the symmetry breaking scale associated with a spontaneous breaking of a global symmetry of the inflaton. Unfortunately, the NI model is inconsistent with the recent measurement of the scalar spectral index and tensor-to-scalar ratio [10; 11]. Moreover, apart from this inconsistency, a large symmetry breaking scale \(f\gtrsim 5.4M\) is required in the NI model, where \(M=1/\sqrt{8\pi G}\simeq 2.435\times 10^{18}\) GeV is the reduced Planck mass (\(G\) denotes the gravitational constant). The large symmetry breaking scale is theoretically undesired because it may cause large gravitational corrections to the potential. Therefore, new inflation models have been proposed based on the NI model. The hybrid natural inflation (HNI) model [12; 13; 14; 15; 16; 17; 18; 19; 20; 21] is one of the extended versions of the NI model, and its inflaton potential is expressed as \[V(\phi)=\Lambda^{4}\left[1+a\cos\left(\frac{\phi}{f}\right)\right], \tag{2}\] where \(a\) represents the model parameter (\(0<a<1\)). The HNI is classified as a hybrid model in which a second field is responsible for terminating the inflation. Utilizing this hybrid feature, a low-scale inflation at \(f\simeq M\) can be realized if the value of the running of the scalar spectral index is not restricted. However, to ensure a negative value of the running of the scalar spectral index, the symmetry breaking scale should be \(f\geq 3.83M\) in the HNI model. Although a positive value of the running of the scalar spectral index is permissible in the observations, its negative value is favored. In this paper, a new extended NI model, the generalized hybrid natural inflation (GHNI) model, is proposed with an inflaton potential of \[V(\phi)=\Lambda^{4}\left[1+a\cos\left(\frac{\phi}{f}\right)\right]^{n}, \tag{3}\] where \(n\) denotes the model parameter. Although the theoretical origin of \(n\) is a critical consideration, this study disregards it, as is the norm in generalized model building [22; 23]. The GHNI model aims to realize a low-scale inflation at \(f\simeq 0.05M\), which is consistent with the observed scalar spectral index and tensor-to-scalar ratio as well as the negative value of the running of the scalar spectral index. The rest of this paper is organized as follows. In Section II, we present a review of slow-roll parameters and the observables related to inflation. In Section III, the new inflation model, GHNI, is proposed. 
Herein, we establish the requirement of a large symmetry breaking scale in the HNI model to achieve a negative value of the running of the scalar spectral index. Interestingly, this negative value of the running of the scalar spectral index can be achieved even with a small symmetry breaking scale using the GHNI model. Finally, the present findings are summarized in Section IV. ## II Slow-roll parameters and observables Herein, we perform the slow-roll approximation of the inflation potential. The consistency of inflation models can be estimated by the slow-roll parameters, which are relevant for this study and are defined as [24]: \[\epsilon = \frac{M^{2}}{2}\left(\frac{V^{\prime}(\phi)}{V(\phi)}\right)^{2}, \tag{4}\] \[\eta = M^{2}\frac{V^{\prime\prime}(\phi)}{V(\phi)}, \tag{5}\] \[\xi_{2} = M^{4}\frac{V^{\prime}(\phi)V^{\prime\prime\prime}(\phi)}{V^{2}(\phi)}, \tag{6}\] \[\xi_{3} = M^{6}\frac{V^{\prime 2}(\phi)V^{\prime\prime\prime\prime}(\phi)}{V^{3}(\phi)}, \tag{7}\] where the prime sign denotes the derivative with respect to \(\phi\). The observables are related to the slow-roll parameters as follows: \[n_{\rm s} = 1-6\epsilon+2\eta, \tag{8}\] \[n_{\rm sk} = \frac{dn_{\rm s}}{d\ln k}=16\epsilon\eta-24\epsilon^{2}-2\xi_{2}, \tag{9}\] \[n_{\rm skk} = \frac{d^{2}n_{\rm s}}{d\ln k^{2}} = -192\epsilon^{3}+192\epsilon^{2}\eta-32\epsilon\eta^{2}-24\epsilon\xi_{2}+2\eta\xi_{2}+2\xi_{3}, \tag{10}\] \[A_{\rm s} = \frac{2V}{3\pi^{2}M^{4}r}, \tag{11}\] and \[n_{\rm t} = -2\epsilon, \tag{12}\] \[n_{\rm tk} = \frac{dn_{\rm t}}{d\ln k}=4\epsilon(\eta-2\epsilon), \tag{13}\] \[r = 16\epsilon, \tag{14}\] where \(n_{\rm s}\) denotes the scalar spectral index, \(n_{\rm sk}\) indicates the running of the scalar spectral index (\(k\) denotes the wave number), \(n_{\rm skk}\) represents the running of running of the scalar spectral index, \(A_{\rm s}\) denotes the scalar power spectrum amplitude, \(n_{\rm t}\) indicates the tensor spectral index, \(n_{\rm tk}\) represents the running of the tensor spectral index, and \(r\) indicates the tensor-to-scalar ratio. Accordingly, the parameter \[\delta_{\rm ns}=1-n_{\rm s} \tag{15}\] is defined. In principle, correct inflation models should be consistent with the observations. The scalar spectral index, the running of the scalar spectral index, the scalar power spectrum amplitude, and the tensor-to-scalar ratio are constrained by the observations as \(n_{\rm s}=0.9658\pm 0.0040\), \(n_{\rm sk}=-0.0066\pm 0.0070\), \(A_{\rm s}\simeq e^{3.08}\times 10^{-10}\simeq 2.2\times 10^{-9}\), and \(r<0.068\) with Planck TT, TE, EE+lowE+lensing+BK15+BAO [10]. In addition, the tensor-to-scalar ratio may be constrained to \(r<0.035\)[11]. For the proposed inflation model, the following constraints on \(n_{\rm s}\), \(n_{\rm sk}\), and \(r\) are required: \[n_{\rm s}=0.966\quad(\delta_{\rm ns}=0.034),\] \[-0.0136\leq n_{\rm sk}<0.0004,\] \[r\leq 0.06. \tag{16}\] Although a positive \(n_{\rm sk}\) is still permissible in the Planck data, a negative \(n_{\rm sk}\) is favored. Thus, we estimate the allowed model parameters for \(n_{\rm sk}\leq 0.0004\) as well as \(n_{\rm sk}\leq 0\). 
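The mapping from slow-roll parameters to observables in Eqs. (8)-(14) is straightforward to evaluate numerically. The following minimal sketch (Python) computes the quantities used throughout this paper; the input values are illustrative, with \(\epsilon\) fixed by \(r=0.06\) and \(\eta\) chosen to reproduce \(n_{\rm s}=0.966\), while \(\xi_{2}\) is an arbitrary small number.

```python
def observables(eps, eta, xi2):
    """First-order slow-roll expressions, Eqs. (8), (9), (12)-(14)."""
    n_s  = 1.0 - 6.0 * eps + 2.0 * eta                    # scalar spectral index
    n_sk = 16.0 * eps * eta - 24.0 * eps**2 - 2.0 * xi2   # its running
    n_t  = -2.0 * eps                                     # tensor spectral index
    n_tk = 4.0 * eps * (eta - 2.0 * eps)                  # its running
    r    = 16.0 * eps                                     # tensor-to-scalar ratio
    return dict(n_s=n_s, n_sk=n_sk, n_t=n_t, n_tk=n_tk, r=r)

# Illustrative inputs: eps from r = 0.06, eta from n_s = 0.966, small xi2.
eps = 0.06 / 16.0
eta = (0.966 - 1.0 + 6.0 * eps) / 2.0
print(observables(eps, eta, xi2=2.0e-4))
```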
## III Generalized hybrid natural inflation ### Slow-roll parameters The slow-roll parameters for the GHNI model defined by the potential in Eq. (3) are stated as follows: \[\epsilon = \frac{1}{2}\left(\frac{M}{f}\right)^{2}\frac{a^{2}n^{2}\left(1-c_{\phi}^{2}\right)}{\left(1+ac_{\phi}\right)^{2}}, \tag{17}\] \[\eta = -\left(\frac{M}{f}\right)^{2}\frac{an\left[a+c_{\phi}+an(c_{\phi}^{2}-1)\right]}{\left(1+ac_{\phi}\right)^{2}}, \tag{18}\] and \[\xi_{2} = -2\left(\frac{M}{f}\right)^{2}\epsilon\times\frac{1-2a^{2}+a^{2}n(3-n)+a(3n-1)c_{\phi}+a^{2}n^{2}c_{\phi}^{2}}{\left(1+ac_{\phi}\right)^{2}}, \tag{19}\] \[\xi_{3} = -2\left(\frac{M}{f}\right)^{2}\epsilon\eta\frac{c_{\phi}+a(a^{2}x_{1}+ac_{\phi}x_{2}+x_{3})}{(1+ac_{\phi})^{2}\left[a+c_{\phi}+an(c_{\phi}^{2}-1)\right]}, \tag{20}\] where \[c_{\phi}=\cos\left(\frac{\phi}{f}\right), \tag{21}\] and \[x_{1} = (n-3)(n-2)(n-1)+c_{\phi}^{4}n^{3}-2c_{\phi}^{2}(n-1)(n(n-2)+2),\] \[x_{2} = 6(c_{\phi}^{2}-1)n^{2}+2(5-2c_{\phi}^{2})n+c_{\phi}^{2}-4,\] \[x_{3} = 4-4n+c_{\phi}^{2}(7n-4). \tag{22}\] From Eqs. (14) and (17), \(c_{\phi}\) can be derived as \[c_{\phi}=\frac{-f^{2}r\pm 8M^{2}an^{2}\sqrt{1-\frac{(1-a^{2})f^{2}r}{8M^{2}a^{2}n^{2}}}}{a\left(f^{2}r+8M^{2}n^{2}\right)}, \tag{23}\] and from Eqs. (18) and (23), we obtained \[\eta = \begin{cases}-\frac{1}{2}\left(\frac{M}{f}\right)^{2}n+\frac{1}{16}\left(2-\frac{1}{n}\right)r&(a=1)\\ \left(\frac{M}{f}\right)^{2}\frac{a^{2}n}{1-a^{2}}A&(0<a<1)\end{cases} \tag{24}\] where \[A=1+\frac{(1-a^{2})f^{2}(n-1)r}{8M^{2}a^{2}n^{2}}\pm\frac{1}{a}\sqrt{1-\frac{(1-a^{2})f^{2}r}{8M^{2}a^{2}n^{2}}}. \tag{25}\] The symmetry breaking scale is expressed as \[f=\begin{cases}\frac{2Mn}{\sqrt{4n\delta_{\rm ns}-\frac{(n+1)r}{2}}}&(a=1)\\ \frac{4Man}{\sqrt{r-a^{2}(n+2)r+8a^{2}n\delta_{\rm ns}+\sqrt{B}}}&(0<a<1)\end{cases}, \tag{26}\] from Eqs. (8), (14), (15), and (24), where \[B=(1+a^{2}n(n+2))r^{2}-16a^{2}n(n+1)r\delta_{\rm ns}+64a^{2}n^{2}\delta_{\rm ns}^{2}. \tag{27}\] Note that, for \(n=1\), the equations of the GHNI model reduce to those of the HNI model. Thus, the HNI model is a specific case of the GHNI model (both are hybrid inflation models). We comment on the generalized natural inflation (GNI) model [22; 23] with the inflaton potential of \[V(\phi)=2^{1-n}\Lambda^{4}\left[1+\cos\left(\frac{\phi}{f}\right)\right]^{n}. \tag{28}\] The equations of the GHNI model obtained with \(a=1\) are identical to those of the GNI model; however, similar to the NI model, the GNI model is regarded as a single-field inflation model in which the inflation and the end of inflation are controlled by the same inflaton. In contrast, the GHNI model is a hybrid model in which a second field is responsible for the end of inflation. Thus, the GNI (single-field model) is not a specific case of the GHNI (hybrid inflation model). Moreover, the equations of the GHNI model obtained with \(n=1\) and \(a=1\) are identical to those of the NI model if the positive sign '+' is selected for the second term in the numerator of Eq. (23) and the negative sign '\(-\)' for the third term in Eq. (25). However, the NI (single-field model) is not a specific case of the GHNI (hybrid inflation model). ### Lower limit of the symmetry breaking scale We show that the lower limit of the symmetry breaking scale in the HNI model should be \(f\simeq 3.83M\) for a negative \(n_{\rm sk}\). In contrast, a smaller scale \(f\simeq 0.05M\) is enabled with a negative \(n_{\rm sk}\) in the GHNI model. 
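Equation (26) is easy to evaluate numerically. A minimal sketch (Python) is given below; the parameter values \(a=0.4\), \(r=0.06\), \(\delta_{\rm ns}=0.034\) are illustrative and simply display the trend, discussed next, that the required symmetry breaking scale falls as \(n\) decreases.

```python
import math

def f_over_M(a, n, r, delta_ns):
    """Symmetry breaking scale f/M from Eq. (26), with B from Eq. (27)."""
    if a == 1.0:
        return 2.0 * n / math.sqrt(4.0 * n * delta_ns - (n + 1.0) * r / 2.0)
    B = ((1.0 + a**2 * n * (n + 2.0)) * r**2
         - 16.0 * a**2 * n * (n + 1.0) * r * delta_ns
         + 64.0 * a**2 * n**2 * delta_ns**2)                       # Eq. (27)
    inner = (r - a**2 * (n + 2.0) * r
             + 8.0 * a**2 * n * delta_ns + math.sqrt(B))
    return 4.0 * a * n / math.sqrt(inner)

# HNI (n = 1) versus a GHNI example (n = 0.1); a, r, delta_ns are assumed.
for n in (1.0, 0.1):
    print(f"n = {n}: f = {f_over_M(0.4, n, 0.06, 0.034):.2f} M")
```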
The running of the scalar spectral index \(n_{\rm sk}\) (solid curves) and the symmetry breaking scale \(f\) (dashed curves) are displayed in Figure 1 as a function of parameter \(a\) for \(r=0.01,0.03,0.06\). In each panel, the upper horizontal dotted line denotes the observed upper limit \(n_{\rm sk}=0.0004\), the lower horizontal dotted line denotes \(n_{\rm sk}=0\), and the middle panel (\(n=1\)) corresponds to the HNI model. The top and bottom panels display examples of the GHNI model for \(n=1.5\) and \(n=0.5\), respectively. The lower limit of the symmetry breaking scale can be estimated from Fig. 1. For instance, the red curve (\(r=0.06\)) in the top panel (\(n=1.5\)) intersected the dotted \(n_{\rm sk}=0.0004\) line at \(a=0.27\). Thus, the symmetry breaking scale should be \(f\geq 3.99M\) for \(n_{\rm sk}<0.0004\). Similarly, the red curve crossed the dotted \(n_{\rm sk}=0\) line at \(a=0.40\) and the symmetry breaking scale should be \(f\geq 5.25M\), if \(n_{\rm sk}<0\) is required rather than \(n_{\rm sk}<0.0004\). Figure 1: Running of the scalar spectral index \(n_{\rm sk}\) (solid curves) and symmetry breaking scale \(f\) (dashed curves) as a function of parameter \(a\) for \(r=0.01,0.03,0.06\). Upper and lower horizontal dotted lines in each panel correspond to \(n_{\rm sk}=0.0004\) and \(n_{\rm sk}=0\), respectively; the middle panel (\(n=1\)) corresponds to the HNI model; top and bottom panels exhibit examples of the GHNI model for \(n=1.5\) and \(n=0.5\), respectively. The lower limit of the symmetry breaking scale \(f\) is exhibited in Fig. 2 as a function of the tensor-to-scalar ratio \(r\) for \(n=1,1.3,\cdots,1.9\) (top panel), \(n=0.1,0.2,\cdots,1\) (middle panel), and for extremely small \(n\) and \(r\) (bottom panel). The three panels on the left (right) display the minimum values of the symmetry breaking scale, \(f_{\rm min}\), for \(n_{\rm sk}\leq 0.0004\) (\(n_{\rm sk}\leq 0\)). Notably, the symmetry breaking scale \(f\) decreased with the parameter \(n\) and the tensor-to-scalar ratio \(r\). Although a small breaking scale \(f\simeq M\) was enabled in the HNI model (\(n=1\)) for \(n_{\rm sk}\leq 0.0004\) (left panels in Fig. 2), we require the condition of \(n_{\rm sk}\leq 0\) (right panels), and thus, the lower limit of the symmetry breaking scale in the HNI model should be \(f\simeq 3.83M\). In contrast, small symmetry breaking scales, such as \(f\simeq 1.38M\) for \(n=0.1\) and \(f\simeq 0.051M\) for \(n=0.0001\), were enabled with negative \(n_{\rm sk}\) in the GHNI model. If the positive \(n_{\rm sk}\) is excluded in future observations, the validity of the GHNI model would increase. Figure 2: Lower limit of the symmetry breaking scale \(f\) as a function of the tensor-to-scalar ratio \(r\) for \(n=1,1.3,\cdots,1.9\) (top panel), \(n=0.1,0.2,\cdots,1\) (middle panel), and for extremely small \(n\) and \(r\) (bottom panel). The three panels on the left (right) display the minimum value of the symmetry breaking scale, \(f_{\rm min}\), for \(n_{\rm sk}\leq 0.0004\) (\(n_{\rm sk}\leq 0\)). Herein, we discuss the over-production of the primordial black holes in the early universe. As reported earlier, a large positive value of the running of the scalar spectral index \(n_{\rm sk}\) may possibly cause overproduction of primordial black holes at the end of inflation [25; 26]. The scalar power spectrum at the first order in the slow-roll parameters is expressed as \[{\cal P}_{\rm s} = \frac{1}{24\pi^{2}M^{4}}\frac{V}{\epsilon} = A_{\rm s}\left(\frac{k}{k_{\rm H}}\right)^{(n_{\rm s}-1)+\frac{1}{2}n_{\rm sk}\ln(k/k_{\rm H})+\cdots}, \tag{29}\] in terms of wave number \(k\), where \(k_{\rm H}\) denotes the wave number at the horizon crossing. 
The Taylor expansion of the power spectrum around its value at the horizon crossing is constrained by \[\ln\left[\frac{{\cal P}_{\rm s}(0)}{{\cal P}_{\rm s}(N_{\rm H})}\right]=(n_{\rm s}-1)N_{\rm H}+\frac{1}{2}n_{\rm sk}N_{\rm H}^{2}\leq 14, \tag{30}\] where \(N_{\rm H}\simeq 50-60\) denotes the e-folding number. The undesired amplitude for perturbations at which the primordial black holes could be overproduced was \({\cal P}_{\rm s}(0)\simeq 10^{-3}\). This amplitude \({\cal P}_{\rm s}(0)\) evolved from the initial value \({\cal P}_{\rm s}(N_{\rm H})\simeq 10^{-9}\), thereby yielding the upper bound \(n_{\rm sk}<10^{-2}\). Accordingly, we require the condition of \(n_{\rm sk}\leq 0.0004\) (and \(n_{\rm sk}\leq 0\)), and thus this study is free from the issue related to the over-production of the primordial black holes. ### \(n_{\rm skk}\), \(n_{\rm tk}\) and \(\Lambda\) Although the running of running of the scalar spectral index \(n_{\rm skk}\), the running of the tensor spectral index \(n_{\rm tk}\), and the energy scale of inflation \(\Lambda\) are not useful for constraining the parameters in the GHNI model, these quantities were estimated for the completeness of this study. The running of running of the scalar spectral index \(n_{\rm skk}\) (solid curves) and the symmetry breaking scale \(f\) (dashed curves) are depicted in Fig. 3 as a function of parameter \(a\) for \(r=0.01,0.03,0.06\), wherein the horizontal dotted lines indicate \(n_{\rm skk}=0\) as a reference. The middle panel (\(n=1\)) corresponds to the HNI model; the top and bottom panels display examples of the GHNI model for \(n=1.5\) and \(n=0.5\), respectively. Figure 3: Running of running of the scalar spectral index \(n_{\rm skk}\) (solid curves) and symmetry breaking scale \(f\) (dashed curves) as a function of parameter \(a\) for \(r=0.01,0.03,0.06\). For reference, horizontal dotted lines denote \(n_{\rm skk}=0\); the middle panel (\(n=1\)) corresponds to the HNI model; top and bottom panels display examples of the GHNI model for \(n=1.5\) and \(n=0.5\), respectively. The running of the tensor spectral index was estimated as \(n_{\rm tk}=r(r-8\delta_{\rm ns})/64\). For example, we obtained \(n_{\rm tk}=-1.99\times 10^{-4}\) for \(r=0.06\) and \(\delta_{\rm ns}=0.034\). The energy scale of inflation \(\Lambda\) can be investigated using the relation between the scalar power spectrum amplitude \(A_{\rm s}\) and the inflation potential in Eq. (11), with the GHNI potential in Eq. (3), expressed as \[\Lambda=3.89\times 10^{16}\left[2^{n-2}r(1+ac_{\phi})^{-n}\right]^{1/4}\quad{\rm GeV}. \tag{31}\] For instance, we derived \[\frac{\Lambda}{10^{16}~{}{\rm GeV}}=\begin{cases}1.55-1.71&(n=1)\\ 1.45-1.53&(n=0.5)\\ 1.38-1.39&(n=0.1)\end{cases}, \tag{32}\] for \(a=0.2\), \(r=0.06\), and \(-1\leq c_{\phi}\leq 1\). 
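For reference, the estimates quoted in this subsection follow directly from the closed-form expressions; the following minimal sketch (Python) evaluates \(n_{\rm tk}=r(r-8\delta_{\rm ns})/64\) and Eq. (31), reproducing the values of Eq. (32).

```python
def n_tk(r, delta_ns):
    """Running of the tensor spectral index, n_tk = r(r - 8*delta_ns)/64."""
    return r * (r - 8.0 * delta_ns) / 64.0

def Lambda_GeV(a, n, r, c_phi):
    """Energy scale of inflation from Eq. (31), in GeV."""
    return 3.89e16 * (2.0 ** (n - 2.0) * r * (1.0 + a * c_phi) ** (-n)) ** 0.25

print(f"n_tk = {n_tk(0.06, 0.034):.3e}")         # -1.99e-04, as quoted above
for n in (1.0, 0.5, 0.1):                        # reproduces Eq. (32)
    lo = Lambda_GeV(0.2, n, 0.06, 1.0)           # c_phi = +1 end of the range
    hi = Lambda_GeV(0.2, n, 0.06, -1.0)          # c_phi = -1 end of the range
    print(f"n = {n}: Lambda = {lo / 1e16:.2f} - {hi / 1e16:.2f} x 10^16 GeV")
```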
## IV Summary Although the NI model is attractive because the origin of the inflaton potential is well-motivated, this model is inconsistent with the recent measurements of the scalar spectral index and tensor-to-scalar ratio. Moreover, apart from this inconsistency, a large symmetry breaking scale is required in the NI model, which may cause large gravitational corrections to the potential. Thus, the NI model should be extended. To this end, the HNI model is an inflation model based on NI. In the HNI model, a low-scale inflation at \(f\simeq M\) can be realized if the value of the running of the scalar spectral index is not restricted. However, as demonstrated in this study, the symmetry breaking scale should be \(f\geq 3.83M\) in such a model when a negative value of the running of the scalar spectral index is required. Although the positive value of the running of the scalar spectral index is still permissible in the observations, its negative value is favored. Overall, this study proposed a generalized HNI model that can realize a low-scale inflation at \(f\simeq 0.05M\). We have shown that the new model is consistent with the observed scalar spectral index and tensor-to-scalar ratio as well as the negative value of the running of the scalar spectral index. If the positive value of the running of the scalar spectral index is excluded by future observations, the validity of the new model would increase. Furthermore, for the completeness of this study, we estimated the running of running of the scalar spectral index, the running of the tensor spectral index, and the energy scale of inflation in the new model.
2310.16600
Balancing central and marginal rejection when combining independent significance tests
A common approach to evaluating the significance of a collection of $p$-values combines them with a pooling function, in particular when the original data are not available. These pooled $p$-values convert a sample of $p$-values into a single number which behaves like a univariate $p$-value. To clarify discussion of these functions, a telescoping series of alternative hypotheses are introduced that communicate the strength and prevalence of non-null evidence in the $p$-values before general pooling formulae are discussed. A pattern noticed in the UMP pooled $p$-value for a particular alternative motivates the definition and discussion of central and marginal rejection levels at $\alpha$. It is proven that central rejection is always greater than or equal to marginal rejection, motivating a quotient to measure the balance between the two for pooled $p$-values. A combining function based on the $\chi^2_{\kappa}$ quantile transformation is proposed to control this quotient and shown to be robust to mis-specified parameters relative to the UMP. Different powers for different parameter settings motivate a map of plausible alternatives based on where this pooled $p$-value is minimized.
Chris Salahub, Wayne Oldford
2023-10-25T12:45:49Z
http://arxiv.org/abs/2310.16600v2
# Balancing central and marginal rejection when combining independent significance tests ###### Abstract A common approach to evaluating the significance of a collection of \(p\)-values combines them with a pooling function, in particular when the original data are not available. These pooled \(p\)-values convert a sample of \(p\)-values into a single number which behaves like a univariate \(p\)-value. To clarify discussion of these functions, a telescoping series of alternative hypotheses are introduced that communicate the strength and prevalence of non-null evidence in the \(p\)-values before general pooling formulae are discussed. A pattern noticed in the UMP pooled \(p\)-value for a particular alternative motivates the definition and discussion of central and marginal rejection levels at \(\alpha\). It is proven that central rejection is always greater than or equal to marginal rejection, motivating a quotient to measure the balance between the two for pooled \(p\)-values. A combining function based on the \(\chi^{2}_{\kappa}\) quantile transformation is proposed to control this quotient and shown to be robust to mis-specified parameters relative to the UMP. Different powers for different parameter settings motivate a map of plausible alternatives based on where this pooled \(p\)-value is minimized. ## 1 Introduction When presented with a collection of \(p\)-values, a natural question is whether they constitute evidence as a whole against the null hypothesis that there are no significant results. The multiple testing problem arises because answering this question requires different analysis than univariate \(p\)-values. A univariate threshold applied to all \(p\)-values, for example, will no longer control the type I error at the level of the threshold. A common approach to control the type I error (often called the family-wise error rate in this context) is to use a function to combine the \(p\)-values into a single value which behaves like a univariate \(p\)-value. Explicitly, consider a collection of \(M\) independent test statistics \(\mathbf{t}=(t_{1},\ldots,t_{M})^{\mathsf{T}}\) having \(p\)-values \(\mathbf{p}=(p_{1},\ldots,p_{M})^{\mathsf{T}}\) for the null hypotheses \(H_{01}\), \(H_{02}\),..., \(H_{0M}\) - for example, \(\chi^{2}\) tests for the association of \(M\) individual genes with the presence of a disease where each \(H_{0i}\) asserts no association. Assessing the overall significance of \(\mathbf{p}\) while controlling the family-wise error rate (FWER) at the outset of analysis is common practice in meta-analysis and big data applications (Heard and Rubin-Delanchy, 2018; Wilson, 2019). The FWER is the probability of rejecting one or more of \(H_{01},\ldots,H_{0M}\) when all are true, equivalent to the type I error of the joint hypothesis \[H_{0}=\cap_{i=1}^{M}H_{0i}.\] To emphasize the null distributions, \(p_{i}\sim U=Unif(0,1)\) for all \(i\in\{1,\ldots,M\}\), this is often written \[H_{0}:p_{1},p_{2},\ldots,p_{M}\stackrel{{\text{iid}}}{{\sim}}U.\] To test \(H_{0}\), a statistic \(l(\mathbf{p}):[0,1]^{M}\mapsto\mathbb{R}\) of the \(p\)-values with a distribution that is known or easily simulated under \(H_{0}\) can be computed. 
If \(l(\mathbf{p})\) has cumulative distribution function (CDF) \(F_{l}(l)\) under \(H_{0}\), then \(l(\mathbf{p})\) admits \(g(\mathbf{p})=1-F_{l}(l(\mathbf{p}))\sim Unif(0,1)\) such that rejecting \(H_{0}\) when \(g(\mathbf{p})\leq\alpha\) controls the FWER at level \(\alpha\).1 \(g(\mathbf{p})\) therefore summarizes the evidence against \(H_{0}\) in a statistic which behaves like a univariate \(p\)-value: its magnitude is inversely related to its significance. Footnote 1: Note that the use of the CDF in \(g(\mathbf{p})=1-F_{l}(l(\mathbf{p}))\) implies that \(g(\mathbf{p})\) is identical for any statistic that is a monotonic transformation of \(l(\mathbf{p})\). If we want \(g(\mathbf{p})\) to additionally have convex acceptance regions like a univariate \(p\)-value, it should be continuous in each argument and monotonically non-decreasing, i.e. \(g(p_{1},\ldots,p_{M})\leq g(p_{1}^{*},\ldots,p_{M}^{*})\leftrightarrow p_{1}\leq p_{1}^{*},\ldots,p_{M}\leq p_{M}^{*}\). Functions failing these criteria can behave counter-intuitively, as they may accept \(H_{0}\) for small \(p_{i}\) only to reject as \(p_{i}\) increases for some margin \(i\). Finally, if there is no reason to favour any margin, \(g\) should be symmetric in \(\mathbf{p}\). The term evidential statistic refers to \(g(\mathbf{p})\) meeting these criteria generally (Goutis et al., 1996), and when testing \(H_{0}\) they are called pooled \(p\)-values. There is no lack of pooled \(p\)-value proposals, including the statistics of Tippett (1931), Fisher (1932), Pearson (1933), Stouffer et al. (1949), Mudholkar and George (1977), Heard and Rubin-Delanchy (2018), and Cinar and Viechtbauer (2022). As all of these methods have convex acceptance regions and control the FWER at \(\alpha\) under the rule \(g(\mathbf{p})\leq\alpha\), statistical power against alternative hypotheses is often used to distinguish them. Ideally, one among them would be uniformly most powerful (UMP) against a very broad alternative but this is not possible because of the generality of \(H_{0}\). Indeed, Birnbaum (1954) proves that if all \(f_{i}\) are strictly non-increasing so that \(p_{i}\sim f_{i}\) is biased to small values when \(H_{0i}\) is false, then there is no UMP test against the negation of \(H_{0}\), \[H_{1}=\neg H_{0}:p_{1}\sim f_{1},p_{2}\sim f_{2},\ldots,p_{M}\sim f_{M}\] where \(f_{i}\neq U\) for at least one \(i\in\{1,\ldots,M\}\). As the simulation studies in Westberg (1985), Loughin (2004), and Kocak (2017) readily demonstrate, the number of false \(H_{0i}\) and the non-null distributions \(f_{i}\) together specify the unique most powerful test. For the particular case of testing \(H_{0}\) against \(H_{1}\) with \(f_{1}=f_{2}=\cdots=f_{M}=Beta(a,b)\) for \(a\in(0,1]\) and \(b\in[1,\infty)\), the Neyman-Pearson lemma proves that the pooled \(p\)-value \(HR\left(\mathbf{p};w\right)\) induced by the statistic \[l_{HR}(\mathbf{p};w)=w\sum_{i=1}^{M}\ln p_{i}-(1-w)\sum_{i=1}^{M}\ln(1-p_{i}) \tag{1}\] with \(w=(1-a)/(b-a)\in[0,1]\) is uniformly most powerful (UMP) (Heard and Rubin-Delanchy, 2018). Though \(HR\left(\mathbf{p};(1-a)/(b-a)\right)\) is UMP against \(H_{1}\) for \(f_{1}=\cdots=f_{M}=Beta(a,b)\), it is rarely assumed that \(f_{1}=\cdots=f_{M}\) in the pairwise search for variables. Rather, some of these are assumed to be non-uniform while others are assumed null. 
A discussion of this setting therefore requires measures of the prevalence and strength of evidence against \(H_{0}\), captured by a series of telescoping alternative hypotheses that bridge the gap between \(H_{1}\) and the setting where \(HR\left(\mathbf{p};w\right)\) is UMP. Section 2 introduces the measures of strength and prevalence used in this paper, as these are required to understand central and marginal rejection later. Prevalence is measured by \(\eta\), the proportion of non-null tests, while strength is measured by the Kullback-Leibler divergence. Assuming that non-null tests come from a restricted beta family, the power of a UMP method for particular beta distributions is investigated for different values of the prevalence and strength in Section 4. A pattern of high power for either strong evidence in a few tests or weak evidence in many tests is noticed and developed into a framework for choosing pooled \(p\)-values in Section 5. Following the necessary definitions to develop this framework, including the concepts of central and marginal rejection, it is proved that the tendency to reject concentrated evidence is always less than that to reject diffuse evidence, allowing for the definition of a coefficient measuring the preference of a pooling function for diffuse evidence. Section 6 proposes a pooling function based on the \(\chi^{2}\) quantile transformation which controls this preference through its degrees of freedom. It is proven that large degrees of freedom give a pooling function which prefers diffuse evidence while small degrees of freedom prefer concentrated evidence. In a simulation study, this proposal is shown to nearly match the UMP when correctly specified and is more robust to errors in specification. These conclusions are extended in Section 7 where a sweep of parameter values is used to identify the most powerful choice for a given sample and suggest a region of most plausible alternative hypotheses within the framework in light of it. ## 2 Measuring the strength and prevalence of evidence When proving that no UMP exists for the general hypothesis \(H_{1}=\neg H_{0}\), Birnbaum (1954) provides a couple of two-dimensional examples. Though these are demonstrative, they are not instructive for the discussion of tests generally. Reaching the same conclusions, each of Westberg (1985), Loughin (2004), and Kocak (2017) simulates a variety of populations with differing proportions of \(\mathbf{p}\) generated under \(U\) or some alternative distribution. We begin by defining a telescoping series of alternative hypotheses which capture the settings explored in these empirical investigations. ### 2.1 Telescoping alternatives Starting at \(H_{1}\), assume \(H_{0i}\) is false only for \(i\in J\subset\{1,\ldots,M\}\) and quantify the proportion of non-null hypotheses by \(\eta=|J|/M\). This implies an alternative hypothesis \[H_{2}:p_{i}\sim\begin{cases}f_{i}\neq U&\text{ if }i\in J,\\ U&\text{ if }i\notin J.\end{cases}\] As no distinctions between \(H_{01},\ldots,H_{0M}\) are made in \(H_{0}\), \(\eta\) captures the prevalence of evidence against \(H_{0}\) without loss of generality. If it is additionally assumed that all \(i\in J\) have the same alternative distribution \(f\neq U\), this gives the alternative hypothesis \[H_{3}:p_{i}\sim\begin{cases}f&\text{ if }i\in J,\\ U&\text{ if }i\notin J.\end{cases}\] Finally, in the particular case where \(\eta=1\), \(|J|=M\) and \[H_{4}:p_{1},p_{2},\ldots,p_{M}\stackrel{{\text{iid}}}{{\sim}}f\neq U\] is obtained. 
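Concretely, samples under \(H_{3}\) are easy to simulate. The following minimal sketch (Python, using NumPy; the beta parameters are illustrative assumptions) draws a proportion \(\eta\) of \(p\)-values from a common \(f=Beta(a,b)\) and the rest from \(U\), so that \(\eta=1\) recovers \(H_{4}\) and \(\eta=0\) recovers \(H_{0}\).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_H3(M, eta, a, b, size=1):
    """Draw `size` samples of M p-values under H3: a proportion eta of
    the tests are non-null (Beta(a, b)), the remainder null (Unif(0, 1))."""
    n_alt = round(eta * M)
    p_alt = rng.beta(a, b, size=(size, n_alt))
    p_null = rng.uniform(size=(size, M - n_alt))
    return np.hstack([p_alt, p_null])

# Three samples of M = 20 p-values with a quarter of the tests non-null.
p = sample_H3(M=20, eta=0.25, a=0.5, b=5.0, size=3)
print(p.shape, p.min(), p.max())
```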
Though restricted compared to \(H_{1}\), this alternative makes sense for meta-analysis or repeated experiments, where we could assume all \(p_{i}\) are independently and identically distributed when \(H_{0}\) is false. \(H_{4}\) was distinguished from \(H_{1}\) as early as Birnbaum (1954) (there called \(H_{A}\) and \(H_{B}\) respectively), but no exploration of intermediate possibilities was considered. All of Westberg (1985), Loughin (2004), and Kocak (2017) explore factorial combinations of \(\eta\) and \(f\), and so use instances of \(H_{3}\) for their investigations. When testing \(H_{4}\) against \(H_{0}\), Heard and Rubin-Delanchy (2018) prove \(HR\left(\mathbf{p};w\right)\) is the UMP pooled \(p\)-value if \(f\) is from a constrained beta family. By stating clearly \(H_{1}\supset H_{2}\supset H_{3}\supset H_{4}\), a framework for alternative hypotheses is created that contextualizes and relates these previous results. Additionally, the parameter \(\eta\) under \(H_{3}\) naturally measures the prevalence in \(\mathbf{p}\) of evidence against \(H_{0}\). ### 2.2 Measuring the strength of evidence Intuitively, if \(f\) has a density highly concentrated near zero then it provides "strong" evidence against \(H_{0}\). This is because \(p\)-values generated by \(f\) will tend to be smaller than those following \(U\) and therefore will be rejected more frequently for any \(\alpha\). Any measure of the strength of evidence in \(f\) should therefore increase as the magnitude of \(f\) for small values increases. This relatively simple criterion is challenging to apply to \(H_{2}\). Every \(f_{i}\) for \(i\in J\) may be distinct and a single value characterizing their multiple, potentially very different, departures from \(U\) introduces ambiguity. Taking a mean of measures, for example, conflates different instances of \(H_{2}\). If \(J=\{1,2\}\), the mean strength of evidence cannot distinguish strong evidence in \(f_{1}\) paired with weak evidence in \(f_{2}\) from moderate evidence in both. This difficulty is avoided if all \(f_{i}\) are equal, i.e. if \(H_{3}\) is chosen as the alternative hypothesis. Indeed, this choice is common in previous empirical investigations. Westberg (1985) generates \(\mathbf{p}\) by testing the difference in means of two simulated normal samples, and measures the strength of evidence by the true difference in means between the generative distributions. This is reasonable, but limits us to tests comparing population parameters and requires assumptions on \(\mathbf{t}\) (the tests generating \(\mathbf{p}\)). Considering \(f\) directly, Loughin (2004) takes \(p_{i}\stackrel{{\text{iid}}}{{\sim}}Beta(a,b)\) for all \(i\in J\), restricts \(a=1\leq b\) so that \(f\) is non-increasing, and measures the strength of evidence with one minus the median of \(f\): \(1-0.5^{1/b}\).2 This measure of strength is limited to \(Beta(1,b)\), though it does achieve the intuitive ordering desired. A more general measure that applies to any \(f\) is the Kullback-Leibler (KL) divergence, given by \[D(p,q)=\int_{\mathcal{X}}p(x)\ln\left(\frac{p(x)}{q(x)}\right)dx\] from density \(q(x)\) to density \(p(x)\) with mutual support on \(\mathcal{X}\). Widespread application of the KL divergence in information theory and machine learning aside, one interpretation of this measure suits pooled hypothesis testing nicely. Joyce (2011) describes the KL divergence as the extra information encoded in \(q(x)\) when expecting \(p(x)\). 
The explicit assumption underlying the pooled test of \(H_{0}\) is that \(f_{i}=U\) for all \(i\in\{1,\ldots,M\}\), which gives a natural expected density \(q(x)=U(x)\). Furthermore, this density is, in some sense, minimally informative: no region of \([0,1]\) is distinguished from any other by \(U\). Any additional information which discriminates particular regions of \([0,1]\), in particular values near \(0\), will help inform rejection. ### 2.3 Choosing a family for the alternative distribution The beta family of distributions is appealing as a model for the alternative distribution \(f\) under \(H_{3}\) for two main reasons. First, it has the same support as \(U\) without the need for adjustment. Second, a wide variety of different density shapes can be achieved by changing its two parameters and it has a non-increasing density whenever \(a\leq 1\leq b\). This latter quality makes it ideal to model alternative \(p\)-value distributions under the assumption that \(p_{i}\) is biased to small values when \(H_{0}\) is false. These features are likely why it is commonly used in the literature (e.g. Loughin (2004), Kocak (2017)). More significantly, the Neyman-Pearson lemma proves that \(HR\left(\mathbf{p};w\right)\) is UMP for \(p_{1},\ldots,p_{M}\stackrel{{\text{iid}}}{{\sim}}Beta(a,1/w+a(1-1/w))\) when \(0\leq w,a\leq 1\), and so a best-case benchmark for power exists to compare to other tests (Heard and Rubin-Delanchy, 2018). Were another distribution chosen, this important reference point would be absent. When \(f=Beta(a,b)\) the KL divergence also has a relatively simple expression. Letting \(u(x)=1\) be the uniform density and \(f(x)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{a-1}(1-x)^{b-1}\) be the beta density with parameters \(a\) and \(b\), \(D(u,f)\) is given by \[D(u,f)=-\int_{0}^{1}\ln f(x)dx=a+b+\ln\left(\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\right)-2.\] For the case of the beta densities where \(HR\left(\mathbf{p};w\right)\) is UMP, i.e. \(a\leq 1\leq b\), we can express this in terms of \(a\) and \(w=(1-a)/(b-a)\): \[D(u,f):=D(a,w)=2a+\frac{1-a}{w}+\ln\left(\frac{\Gamma(a)\Gamma\left(\frac{1}{w}+a\left[1-\frac{1}{w}\right]\right)}{\Gamma\left(2a+\frac{1-a}{w}\right)}\right)-2. \tag{2}\] This is a less convenient expression, but provides a direct link between the strength of evidence \(D(a,w)\) and the UMP test against \(H_{4}\) by the shared parameter \(w\). Interestingly, though the strength depends on both \(a\) and \(w\), the UMP test only depends on the latter. To visualize the strength of evidence provided by different beta distributions, shaded inset densities for different choices of \(w\) and the KL divergence \(D(a,w)\) are displayed in Figure 1 placed at coordinates \((\ln(w),\ln D(a,w))\).3 When \(a=w=1\), \(f=u\) so \(D(a,w)=0\). Decreasing either of \(a\) or \(w\) from \(1\) causes a larger magnitude of \(f\) near zero, with the limiting case \(a=w=0\) corresponding to a degenerate distribution at zero. This general trend in shapes is seen in Figure 1: decreasing \(w\) or increasing \(D(a,w)\) increases the concentration of \(f\) near zero. Footnote 3: The densities of these inset plots were determined for each \(w\), \(D(a,w)\) pair by finding the corresponding \(a\) value numerically using Equation (2). This is the cause of the irregular plots in the upper right corner: when \(w\) is large enough the required \(a\) to obtain a set \(D(a,w)\) is too small to be represented as a floating point double alongside \(w\approx 1\). This is inconvenient, but these cases correspond to densities that are effectively degenerate at zero in any case. 
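The inversion described in footnote 3 is a one-dimensional root-finding problem. A minimal sketch (Python; not the authors' code) is given below, assuming, as Figure 1 suggests, that \(D(\cdot,w)\) decreases monotonically in \(a\) from \(+\infty\) near \(a=0\) to \(0\) at \(a=1\).

```python
from math import lgamma

def D(a, w):
    """KL divergence D(u, f) of Eq. (2) for f = Beta(a, 1/w + a(1 - 1/w))."""
    b = 1.0 / w + a * (1.0 - 1.0 / w)
    # a + b = 2a + (1 - a)/w, and ln(G(a)G(b)/G(a+b)) via log-gamma.
    return a + b + lgamma(a) + lgamma(b) - lgamma(a + b) - 2.0

def a_for(w, target_D, lo=1e-12, hi=1.0 - 1e-12, iters=200):
    """Numerically invert D(., w) for a, as in footnote 3 (bisection,
    using the monotone decrease of D in a)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if D(mid, w) > target_D:
            lo = mid          # solution lies at larger a
        else:
            hi = mid
    return 0.5 * (lo + hi)

w = 0.5
a = a_for(w, target_D=1.0)
print(a, D(a, w))             # recovered a, and a check that D(a, w) = 1
```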
This suggests a limitation in the KL divergence. Though the ordering of beta densities by \(D(a,w)\) generally conforms to the intuitive rule (larger divergences correspond to inset densities with greater magnitude near zero), the parameter \(w\) is still relevant to the shape. The KL divergence does not distinguish between departures from uniform near \(1\) and near \(0\), despite their relevance for rejection when, for example, rejecting the null hypotheses of \(p\)-values below a threshold. This is particularly obvious in the final row of inset plots in Figure 1. When \(\ln(w)=-6\), the density for \(\ln D(a,w)=-5\) is mostly flat, with a slight increase in density near zero and a large decrease near one. When \(\ln(w)=0\), however, a much larger spike in the density near zero is present. Nonetheless, the ordering on beta densities imposed by the KL divergence is still very informative. It classifies, generally, which densities are biased to small values. Therefore, \(D(a,w)\) provides a convenient measure of the strength of evidence contained in \(f\) under the alternative hypothesis, and has a computationally convenient form for the case of interest where \(f=Beta(a,b)\). Figure 1: Densities and log KL divergences of \(Beta(a,1/w+a(1-1/w))\) from \(U\) by \(w\). Insets display the densities over \([0,1]\) horizontally and \([0,2]\) vertically and are centred at the \((\ln(w),\ln D(a,w))\) coordinates corresponding to the density. These densities range from nearly vertical at \(0\) when \(D(a,w)\approx e^{5}\) to nearly uniform when \(D(a,w)\approx e^{-5}\). ## 3 Pooled \(p\)-values Having explored the possible alternatives to \(H_{0}\) and some specific instances of these alternatives, we can now focus on the pooled \(p\)-values meant to test \(H_{0}\) against these alternatives. Recall that any pooled \(p\)-value \(g(\mathbf{p})\) is derived from the null distribution of a corresponding statistic \(l(\mathbf{p})\). The statistics underlying pooled \(p\)-values are of two basic kinds, based either on the \(k^{th}\) order statistic \(p_{(k)}\) of \(\mathbf{p}\) (e.g. Tippett (1931) and Wilkinson (1951)) or on transformations of each \(p_{i}\) using some quantile function \(F^{-1}(p)\) (e.g. Fisher (1932), Pearson (1933), Stouffer et al. (1949), Lancaster (1961), Edgington (1972), Mudholkar and George (1977), Heard and Rubin-Delanchy (2018), Wilson (2019), and Cinar and Viechtbauer (2022)). The former case takes the general form \[ord\left(\mathbf{p};k\right)=\sum_{l=k}^{M}\binom{M}{l}p_{(k)}^{l}(1-p_{(k)})^{M-l}, \tag{3}\] and the latter \[g(\mathbf{p})=1-F_{M}\left(\sum_{i=1}^{M}c_{i}F^{-1}(1-p_{i})\right) \tag{4}\] where \(c_{1},\ldots,c_{M}\in\mathbb{R}\) are known constants, typically \(c_{1}=\cdots=c_{M}=1\). Equation (3) gives the pooled \(p\)-value based on \(l_{Ord}(\mathbf{p};k)=p_{(k)}\), as with \(Tip(\mathbf{p})=ord\left(\mathbf{p};1\right)=1-(1-p_{(1)})^{M}\) (Tippett, 1931), while different choices of \(F\) and \(F_{M}\) in Equation (4) give the pooled \(p\)-value based on \(l(\mathbf{p})=\sum_{i=1}^{M}F^{-1}(1-p_{i})\). Obvious choices include the normal and gamma families, as these are closed under addition. 
For example, letting \(\Phi\) be the \(N(0,1)\) CDF and choosing \(F(x)=\Phi(x)\) and \(F_{M}(x)=\Phi(x/\sqrt{M})\) gives \[Sto(\mathbf{p})=1-\Phi\left(\sum_{i=1}^{M}\Phi^{-1}(1-p_{i})/\sqrt{M}\right) \tag{5}\] based on \(l_{Sto}(\mathbf{p})=\sum_{i=1}^{M}\Phi^{-1}(1-p_{i})\) from Stouffer et al. (1949). Letting \(G_{k,\theta}(x)\) be the CDF of the gamma distribution with shape parameter \(k\) and scale parameter \(\theta\), taking \(F(x)=G_{k,\theta}(x)\) and \(F_{M}(x)=G_{Mk,\theta}(x)\) gives \[gam(\mathbf{p})=1-G_{Mk,\theta}\left(\sum_{i=1}^{M}G_{k,\theta}^{-1}(1-p_{i})\right) \tag{6}\] based on \(l_{gam}(\mathbf{p})=\sum_{i=1}^{M}G_{k,\theta}^{-1}(1-p_{i})\). \(gam\) requires the choice of a parameter \(\theta\); choosing \(\theta=1\) gives the gamma method from Zaykin et al. (2007) while \(k=1\) and \(\theta=2\) gives Fisher's method from Fisher (1932).4 Footnote 4: Should the \(p\)-values be weighted for some reason, the gamma distribution also allows more stable weighting than the constants \(c_{1},\ldots,c_{M}\) in Equation (4) by analogously giving each \(p_{i}\) an individual shape parameter \(k_{i}\) (or equivalently \(\chi^{2}\) degrees of freedom \(\kappa_{i}\)) (Lancaster, 1961). R. A. Fisher's method deserves some additional consideration alongside an analogous proposal from Karl Pearson around the same time5. Both of \(l_{Fis}(\mathbf{p})=-2\sum_{i=1}^{M}\ln p_{i}\) (Fisher, 1932) and \(l_{Pea}(\mathbf{p})=-2\sum_{i=1}^{M}\ln(1-p_{i})\) (Pearson, 1933) were originally proposed only as computational tricks for the distribution of \(\prod_{i=1}^{M}p_{i}\) (Wallis, 1942), but are also quantile transformations based on the \(\chi_{2}^{2}\) distribution. Let \[F_{\chi}(x;\kappa)=\int_{0}^{x}\frac{1}{2^{\kappa/2}\Gamma(\kappa/2)}t^{\kappa/2-1}e^{-t/2}dt, \tag{7}\] be the CDF of the \(\chi_{\kappa}^{2}\) distribution, in particular \(F_{\chi}(x;2)=1-e^{-x/2}\). Therefore, \(F_{\chi}^{-1}(1-p;2)=-2\ln p\) and so taking \(F(x)=F_{\chi}(x;2)\) and \(F_{M}(x)=F_{\chi}(x;2M)\) gives \[Fis(\mathbf{p})=1-F_{\chi}\big{(}l_{Fis}(\mathbf{p});2M\big{)}=1-F_{\chi}\left(-2\sum_{i=1}^{M}\ln p_{i};2M\right)=1-F_{\chi}\left(\sum_{i=1}^{M}F_{\chi}^{-1}(1-p_{i};2);2M\right)\] consistent with Equation (4). In contrast, \(l_{Pea}(\mathbf{p})\) uses lower tail probabilities by taking \(F_{\chi}^{-1}(p_{i};2)\), and so \[Pea(\mathbf{p})=F_{\chi}\left(\sum_{i=1}^{M}F_{\chi}^{-1}(p_{i};2);2M\right)\] departs from the general quantile transformation equation. ## 4 Benchmarking the most powerful test Alone, \(Fis(\mathbf{p})\) is preferred to \(Pea(\mathbf{p})\), as Birnbaum (1954) found \(Pea(\mathbf{p})\) inadmissible for the alternative hypothesis \(H_{1}\) if the \(t_{i}\) independently follow particular distributions in the exponential family. \(Fis(\mathbf{p})\), in contrast, was admissible in this setting and is optimal in some sense for others (Littell and Folks, 1971; Koziol and Perlman, 1978). Together, the statistics for these two pooled \(p\)-values are combined in \(l_{HR}(\mathbf{p};w)=-\frac{w}{2}l_{Fis}(\mathbf{p})+\frac{1-w}{2}l_{Pea}(\mathbf{p})\), the UMP statistic under \(H_{4}\) when \(f=Beta(a,1/w+a(1-1/w))\) (Heard and Rubin-Delanchy, 2018). Intuitively, then, \(HR\left(\mathbf{p};w\right)\) is a test based on a linear combination of the lower and upper tail probabilities of \(\mathbf{p}\) transformed to \(\chi_{2}^{2}\) quantiles. When \(w=1\) it considers the upper tail alone and when \(w=0\) the lower tail alone. 
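For concreteness, the pooled \(p\)-values discussed so far can be sketched in a few lines (Python with NumPy and SciPy; an illustration, not the authors' implementation). Since \(l_{HR}(\mathbf{p};w)\) has no closed-form null distribution, \(HR\left(\mathbf{p};w\right)\) is estimated by simulation; the helper below works with \(-l_{HR}\) so that, under the sign convention of Eq. (1), large values indicate evidence against \(H_{0}\) and rejection is in the upper tail, matching the simulation study that follows.

```python
import numpy as np
from scipy import stats

def fisher(p):
    """Fis(p): chi^2_{2M} upper-tail probability of -2 * sum(ln p_i)."""
    p = np.asarray(p, dtype=float)
    return stats.chi2.sf(-2.0 * np.log(p).sum(), df=2 * p.size)

def stouffer(p):
    """Sto(p), Eq. (5): the normal quantile transformation."""
    p = np.asarray(p, dtype=float)
    return stats.norm.sf(stats.norm.isf(p).sum() / np.sqrt(p.size))

def neg_l_hr(p, w):
    """-l_HR(p; w) from Eq. (1), row-wise; small p_i push it up."""
    p = np.atleast_2d(np.asarray(p, dtype=float))
    return -w * np.log(p).sum(axis=1) + (1.0 - w) * np.log1p(-p).sum(axis=1)

def hr(p, w, n_null=100_000, seed=1):
    """HR(p; w) by Monte Carlo: the upper-tail probability of -l_HR
    estimated from simulated uniform samples under H0."""
    rng = np.random.default_rng(seed)
    null = neg_l_hr(rng.uniform(size=(n_null, np.size(p))), w)
    return float((null >= neg_l_hr(p, w)[0]).mean())

p = [0.01, 0.04, 0.30, 0.50, 0.80]
print(fisher(p), stouffer(p), hr(p, w=0.5))
```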
Note that \(l_{HR}(\mathbf{p};w)\), the statistic, and \(HR\left(\mathbf{p};w\right)\), the unique corresponding pooled \(p\)-value, will be used interchangeably throughout this paper.

Unfortunately, this imbues \(HR\left(\mathbf{p};w\right)\) with some practical shortcomings. While both \(l_{Fis}(\mathbf{p})\) and \(l_{Pea}(\mathbf{p})\) are \(\chi_{2M}^{2}\) distributed under \(H_{0}\), they are not independent and so their combination in \(l_{HR}(\mathbf{p};w)\) does not have a closed-form distribution. Approximation as in Mudholkar and George (1977) or simulation must be used to determine the \(\alpha\) quantiles or visualize their distribution. This means, for example, the kernel density estimates of \(l_{HR}(\mathbf{p};w)\) by \(w\) in Figure 2 required the generation of 100,000 independent simulated samples of the case \(p_{1},\ldots,p_{10}\overset{\text{iid}}{\sim}U\). Further, \(H_{4}\) is the least general of the telescoping alternative hypotheses \(H_{1}\supset H_{2}\supset H_{3}\supset H_{4}\) and is a less natural choice than \(H_{3}\) if only a subset of tests are thought to be significant. Empirical and theoretical investigations show the most powerful test depends on \(\eta\) and \(f\) under \(H_{3}\), so \(HR\left(\mathbf{p};w\right)\) may be less exceptional under this more general hypothesis. Finally, \(HR\left(\mathbf{p};w\right)\) is only UMP if \(w\) is known, which is seldom true in practice.

Figure 2: Densities of \(l_{HR}\) by \(w\) when \(M=10\). Solid lines indicate \(w=e^{-6}\), dashed lines \(w=e^{-3}\), dotted lines \(w=1/2\), and dot-dashed lines \(w=1\). Note how \(w=1\) and \(w=e^{-6}\) are nearly mirrored distributions skewed away from zero and \(w=1/2\) is symmetric at zero.

These practical difficulties manifest in two possible errors, assuming \(H_{4}\) when \(H_{3}\) is true and choosing \(w\) when the true parameter is \(\omega\), and four cases of mis-specification depending on which are present. The power of \(HR\left(\mathbf{p};w\right)\) under all four cases was investigated by a simulation study at level \(\alpha=0.05\). For both of \(H_{3}\) and \(H_{4}\) and a range of mis-specified \(w\), \(HR\left(\mathbf{p};w\right)\) was applied to factorial combinations of \(D(a,w)\), \(w\), and \(M\) covering their respective ranges. \(D(a,w)\) was chosen on the log scale ranging from \(-5\) to \(5\) at \(0.5\) increments, \(w\) was chosen on the log scale at values \(-6,-5,\ldots,0\), and the values of \(M\) were \(2\), \(5\), \(10\), and \(20\).

For each of the parameter settings, \(l_{HR}(\mathbf{p};w)\)'s \(0.95\) quantiles under \(H_{0}\) given \(w\) and \(M\) are simulated by generating 100,000 independent samples \(\mathbf{p}_{i}=p_{i1},\ldots,p_{iM}\overset{\text{iid}}{\sim}U\), computing \(l_{HR}(\mathbf{p}_{i};w)=l_{HRi}\), and taking the 0.95 quantile of the sequence \(l_{HR1},\ldots,l_{HR100,000}\) as the 0.95 quantile of \(l_{HR}(\mathbf{p};w)\) under \(H_{0}\). Note that the value of \(a\) and the case do not impact this simulation, and so these quantiles are used across all \(a\) values under both \(H_{3}\) and \(H_{4}\). Next, a Monte Carlo estimate of the probability of rejecting \(H_{0}\) using \(l_{HR}(\mathbf{p};w)\) (i.e. the power of \(l_{HR}(\mathbf{p};w)\)) is generated for each case. The details of this estimate depend on whether \(H_{3}\) or \(H_{4}\) was used to generate the data and whether \(w\) was known or not when choosing \(l_{HR}(\mathbf{p};w)\).
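A sketch of this Monte Carlo procedure follows (an illustrative reconstruction, not the original code); it simulates the null 0.95 quantile of \(l_{HR}(\mathbf{p};w)\) and then estimates power under \(H_{4}\) for a beta alternative.

```python
# A minimal sketch, assuming numpy, of the null-quantile simulation and the
# Case 1 power estimate described in the text.
import numpy as np

rng = np.random.default_rng(1)

def l_hr(p, w):
    # l_HR(p; w) = (w/2) l_Fis - ((1 - w)/2) l_Pea, summed over the last axis
    return -w * np.sum(np.log(p), axis=-1) + (1.0 - w) * np.sum(np.log1p(-p), axis=-1)

def null_quantile(M, w, n_sim=100_000, level=0.95):
    p = rng.uniform(size=(n_sim, M))            # p_i1, ..., p_iM ~iid~ U under H0
    return np.quantile(l_hr(p, w), level)

def power_h4(a, w, M, n_sim=10_000, alpha=0.05):
    b = 1.0 / w + a * (1.0 - 1.0 / w)           # f = Beta(a, 1/w + a(1 - 1/w))
    crit = null_quantile(M, w, level=1 - alpha)
    p = rng.beta(a, b, size=(n_sim, M))
    return np.mean(l_hr(p, w) > crit)           # proportion correctly rejecting H0

print(power_h4(a=0.5, w=0.5, M=10))
```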
To begin, consider the benchmark case when \(w\) is known and the data are generated according to \(H_{4}\).

### Case 1: correct hypothesis and \(w\)

If the data are generated under \(H_{4}\) and \(w\) is chosen correctly, \(HR\left(\mathbf{p};w\right)\) is UMP and so provides the greatest power of any test. In this case, the probability of rejection for a given \(a\), \(w\), and \(M\) setting is estimated by generating 10,000 independent samples \(\mathbf{p}_{i}=p_{i1},\ldots,p_{iM}\overset{\text{iid}}{\sim}Beta(a,1/w+a(1-1/w))\). \(l_{HR}(\mathbf{p}_{i};w)\) is computed for each sample and compared to the simulated 0.95 quantile of \(l_{HR}(\mathbf{p};w)\) under \(H_{0}\)6. If the value is larger than the quantile, \(H_{0}\) is correctly rejected; if it is smaller, the test incorrectly fails to reject \(H_{0}\). Power is estimated as the proportion of the 10,000 generated samples which correctly lead to the rejection of \(H_{0}\), giving a worst-case standard error less than 0.005 based on the binomial distribution.

Footnote 6: This corresponds to testing whether \(HR\left(\mathbf{p}_{i};w\right)\leq 0.05\).

This procedure is applied to all settings of \(M\), \(w\), and \(D(a,w)\) outlined in Section 4, corresponding to the beta densities of Figure 1. The beta parameter \(a\) was determined for a given \(w\) and \(D(a,w)=D\) by finding the root of \(f(x)=D(x,w)-D\), while the parameter \(b\) is given by \(1/w+a(1-1/w)\). This results in an imbalance in the settings, as values of \(w>e^{-4}\) require \(a\) less than the typical floating point minimum value to achieve \(\ln D(a,w)=5\). The impact of this on the coverage of \(D(a,w)\) for each \(w\) choice, however, was slight, as shown by the small gap in the plots of Figure 1.

Figure 3(b) shows a scatterplot of the power of \(HR\left(\mathbf{p};w\right)\) by \(D(a,w)\) for every setting when \(M=2\) and \(M=20\), and Figure 3(a) shows the power curves of \(HR\left(\mathbf{p};w\right)\) by \(D(a,w)\) coloured by \(w\). Generally, power increases in both \(M\) (the number of \(p\)-values) and \(D(a,w)\) (the KL divergence), which is unsurprising. Decreasing \(D(a,w)\) for \(f=Beta(a,1/w+a(1-1/w))\) necessarily gives a density closer to \(u\) which is therefore less likely to cause rejection, thus reducing the power. At a certain threshold on \(D(a,w)\), \(f\approx u\) and so the power will be \(\sim\alpha\) for all \(D(a,w)\) less than the threshold. Similarly, when \(D(a,w)\) is large enough, rejection occurs almost certainly and so the power is constant at one. This suggests that most changes in the power of \(HR\left(\mathbf{p};w\right)\) occur for moderate levels of evidence; when the evidence is too weak or too strong, all pooled \(p\)-values will perform equally poorly or well. Regardless of \(D(a,w)\), increasing the number of \(p\)-values makes any distributional differences between \(f\) and \(u\) more easily detectable, as the whole sample KL divergence is given by \(MD(a,w)\). This is why the impact of \(M\) on the power in Figure 3(b) increases in \(D(a,w)\).

An interesting feature of Figure 3(a) is the ordering of the lines by decreasing \(w\) for essentially every KL divergence. This pattern holds almost everywhere with the exception of several crossings of the lowest power curves. Referring to Figure 1, increasing \(w\) for a given \(D(a,w)\) increases the magnitude of the density near zero, thereby increasing the probability of extremely small \(p\)-values.
This difference was noted in Section 2.2, and causes the expected increase in the power of the UMP.

Figure 3: Power of \(HR\left(\mathbf{p};w\right)\) by the KL divergence coloured by \(w\) and scaled by \(M\) displayed using (a) power curves when \(M=2\) joined by \(w\) and (b) lines from \(M=2\) to \(M=20\) for \(\ln(w)=-6\) and \(0\). Increasing either \(M\) or the KL divergence increases the power and the greatest rate of change in both occurs when the divergence is in the interval \((e^{-2},e^{2})\).

### Case 2: correct hypothesis with mis-specified \(w\)

Of course, the curves of Figure 3(a) are not realistic. In practice, \(D(a,w)\) is set by the data generating process and we lack the perfect knowledge of \(w\) and \(f\) needed to attain them. Suppose that \(\mathbf{p}\) is generated under \(H_{4}\) with \(f=Beta(a,\beta)\) and \(a\in[0,1]\), \(\beta\in[1,\infty)\) but that \(HR\left(\mathbf{p};w\right)\) is used instead of the correct \(HR\left(\mathbf{p};\omega\right)\) with \(\omega=(1-a)/(\beta-a)\). Though \(HR\left(\mathbf{p};w\right)\) is from the same family of tests, it is no longer UMP and so will not match the power achieved by \(HR\left(\mathbf{p};\omega\right)\).

The reduction of power from using \(HR\left(\mathbf{p};w\right)\) when the UMP is \(HR\left(\mathbf{p};\omega\right)\) for each \(a\), \(w\), \(\omega\), and \(M\) setting from Section 4.1 was determined by generating 10,000 independent samples \(p_{i1},\ldots,p_{iM}\overset{\text{iid}}{\sim}Beta(a,1/\omega+a(1-1/\omega))\) and computing \(HR\left(\mathbf{p}_{i};w\right)\). The proportion of samples rejected based on the simulated null 0.95 quantiles was recorded as the power of \(HR\left(\mathbf{p};w\right)\) when data were truly generated with the parameter \(\omega\). This procedure was repeated for each of \(w=e^{-6},e^{-3},1/2\), and 1 for every parameter setting.

Figure 4 displays the results when \(M=2\) for \(\omega=e^{-6}\) and 1. As one setting of \(w\) matches \(\omega\) in this case, the highest curve displays the power of the UMP. Two patterns stand out in this plot. First, it is clear that mis-specification impacts the power less than the distribution of \(p\)-values under \(H_{4}\). Despite the incorrect value \(w\) in \(HR\left(;w\right)\), the mis-specified curves have the same shape in \(D(a,\omega)\) as the UMP \(HR\left(;\omega\right)\). In most cases, mis-specification results in only a slight decrease in power. Second, larger mis-specifications lead to larger decreases in power. The curves for every \(w\) are ordered by their distance from \(\omega\) for both \(\omega=1\) and \(e^{-6}\). When \(\omega=1\), for example, the lines are ordered so \(w=1\) has the greatest power followed closely by \(w=1/2\) and more distantly by \(e^{-3}\) and \(e^{-6}\). This pattern is reversed when \(\omega=e^{-6}\). In both cases, mis-specification has the greatest impact on power for moderate \(D(a,\omega)\) while it has scarcely any impact for large or small values of \(D(a,\omega)\), where all powers converge to 1 or \(\alpha\), respectively. It seems prudent, therefore, to choose a middling value such as \(w=1/2\) if \(\omega\) is not known but \(H_{4}\) is suspected, to avoid the worst impacts of mis-specification when using \(HR\left(\mathbf{p};w\right)\).

Figure 4: Power curves for \(HR\left(\mathbf{p};w\right)\) against \(D(a,\omega)\) when \(M=2\) with colours giving the value of \(w\) and points along the curves giving \(\omega\). Mis-specification of \(\omega\) has less impact on power than the non-null distribution \(f\), but the greater the difference between \(w\) and \(\omega\), the greater the reduction in power.

### Case 3: incorrect hypothesis, correctly specified \(w\)

Assuming all \(p_{i}\) are non-null is not always appropriate, however.
The investigations of the previous sections are a useful benchmark and exploration of the impact of mis-specification, but do not address the natural case of \(H_{3}\), when a handful of significant variables are assumed to exist in a host of insignificant ones. We therefore repeat the benchmark experiment of Section 4.1 under \(H_{3}\) by generating 10,000 independent samples \(\mathbf{p}_{i}\) of size \(M=10\) with the first \(M\eta\) distributed according to \(f=Beta(a,1/w+a(1-1/w))\) and the latter \(M(1-\eta)\) distributed according to \(U\) for \(\eta\in\{0,0.1,\ldots,1\}\) under each of the settings explored previously. As \(HR\left(\mathbf{p};w\right)\) is symmetric in all of its arguments, this gives no loss of generality.

Figure 5 displays contours of the power surface, the proportion of correct rejections of \(H_{0}\), as a function of \(\eta\) and \(D(a,w)\) facetted by \(\ln(w)\). Figure 5 places the measure of strength of evidence horizontally and the measure of prevalence vertically. Including \(\eta=0\) captures the behaviour of \(HR\left(\mathbf{p};w\right)\) under \(H_{0}\) along the bottom edge of the power surface and including \(\eta=1\) captures its behaviour under \(H_{4}\) along the top edge for a range of KL divergences. In particular, this means the top left part of each subplot corresponds to a generative process for \(\mathbf{p}\) with relatively weak evidence in all \(p\)-values of \(\mathbf{p}\) and the bottom right part corresponds to a generative process with strong evidence concentrated in a small number of \(p\)-values. Figures 3 and 4 show that rejection occurs at a rate of \(\alpha\) once the evidence is weak enough, so the top left corner is less interesting than the top centre, which gives the power for tests of moderate strength spread throughout \(\mathbf{p}\).

When \(w\) is small and \(l_{HR}(\mathbf{p};w)\approx-l_{Pea}(\mathbf{p})/2\), \(HR\left(\mathbf{p};w\right)\) is relatively weak when strong evidence is concentrated in a few tests. The contours for \(w=e^{-6}\) are nearly horizontal for log KL divergences between \(2.5\) and \(5\) and the power is nearly identical in the bottom right and the top left corners. On the other hand, when \(w\approx 1\) and \(l_{HR}(\mathbf{p};w)\approx l_{Fis}(\mathbf{p})/2\), \(HR\left(\mathbf{p};w\right)\) is relatively powerful when evidence is strong and concentrated in a few tests. This is starkly visible when \(w=1\), where the power contours are nearly vertical and the bottom right corner has a power of almost \(1\). Between these extremes, the power contours display a mix of these seemingly oppositional sensitivities to strength and prevalence.

### Case 4: when both the hypothesis and \(w\) are incorrect

Finally, consider the most pessimistic case. In the preceding section, \(w\) at least matched the non-null distribution \(f\) under \(H_{3}\), but this may not always be so. Suppose, instead, everything is mis-specified.
That is, generate \(M\eta\) \(p\)-values according to \(f=Beta(a,1/\omega+a(1-1/\omega))\) and \(M(1-\eta)\) from the uniform distribution, compute \(HR\left(\mathbf{p};w\right)\) for each of \(w=e^{-6},e^{-3},1/2\), and \(1\) for every setting from Section 4.3, and repeat this generation 10,000 times. The power, given by the proportion of rejections, follows an interesting pattern in the strength and prevalence of evidence, which is illustrated by the power contours of \(HR\left(\mathbf{p};1\right)\) in Figure 6 and the difference in power contours in Figure 7.

Figure 5: The power of \(HR\left(\mathbf{p};w\right)\) under \(H_{3}\) by \(D(a,w)\) and \(\eta\) split by \(\ln(w)\). Note how the contour for a power of one extends nearly to the bottom of the plot when \(w=1\), but stops near \(0.75\) when \(w=e^{-6}\).

Compared to the power contours of \(HR\left(\mathbf{p};\omega\right)\) in Figure 5, the contours of \(HR\left(\mathbf{p};1\right)\) in Figure 6 show greater red saturation - and thus greater power - in the lower right corner of every panel. Thus, \(HR\left(\mathbf{p};1\right)\) has power greater than or equal to that of \(HR\left(\mathbf{p};\omega\right)\) in this region for every \(\omega\). The magnitude of this improvement is unclear, however, due to the difficulty of comparing the contours between plots.

Figure 6: Power contours of \(HR\left(\mathbf{p};1\right)\) under \(H_{3}\) by \(D(a,\omega)\) and \(\eta\) displayed using the saturation palette of Figure 5.

Figure 7 facilitates the comparison by plotting the difference between the power contours of \(HR\left(\mathbf{p};1\right)\) and \(HR\left(\mathbf{p};\omega\right)\) directly for all settings. This more precise plot demonstrates that the most powerful \(HR\left(\mathbf{p};\omega\right)\) to test \(H_{3}\) against \(H_{0}\) depends on the strength and prevalence of evidence alone, _not_ \(\omega\). Specifically, Figure 7 shows that \(HR\left(\mathbf{p};1\right)\) is more powerful than the correctly-specified \(HR\left(\mathbf{p};\omega\right)\) in the lower right corner of the \(D(a,\omega)\), \(\eta\) space and does worse left of centre at the top for both \(\omega=e^{-6}\) and \(e^{-3}\). In the final panel where \(\omega=1\), \(HR\left(\mathbf{p};1\right)=HR\left(\mathbf{p};\omega\right)\) and so the difference in their powers is zero everywhere. Recalling the interpretation of these regions for these facets, this indicates \(HR\left(\mathbf{p};1\right)\) is more powerful than \(HR\left(\mathbf{p};\omega\right)\) at testing \(H_{3}\) against \(H_{0}\) when strong evidence is concentrated in a few tests, but is less powerful when evidence is weaker and spread widely. This pattern holds for every \(\omega\), albeit with differences in magnitude. Additionally, it is not symmetric: when \(HR\left(\mathbf{p};1\right)\) is more powerful, the magnitude of the difference tends to be greater than in regions where it is less powerful.

This parallels Loughin (2004), who found \(Fis(\mathbf{p})\) more powerful than other alternatives against \(H_{3}\) when strong evidence was concentrated in a few tests. As \(l_{Fis}(\mathbf{p})\propto l_{HR}(\mathbf{p};1)\implies Fis(\mathbf{p})=HR\left(\mathbf{p};1\right)\), this means that Fisher's method is once again relatively powerful for the same setting among the UMP family of tests \(HR\left(\mathbf{p};\omega\right)\). The consistency of this result in both simulation studies warrants further investigation.
To aid in this, marginal and central rejection levels are introduced to capture the tendency of a test to reject weak evidence spread among all tests and strong evidence concentrated in a few, along with a meaningful quotient that combines them.

Figure 7: Contours of the difference in power of \(HR\left(\mathbf{p};1\right)\) and \(HR\left(\mathbf{p};\omega\right)\) under \(H_{3}\) by \(D(a,w)\) and \(\eta\), displayed using a divergent palette. \(HR\left(\mathbf{p};1\right)\) is more powerful for strong evidence concentrated in a few tests and less powerful when weak evidence is spread among most tests.

## 5 Central and marginal rejection levels in pooled \(p\)-values

Recall that any pooled \(p\)-value \(g(\mathbf{p})\) behaves like a univariate \(p\)-value, that is \(g(\mathbf{p})\in[0,1]\) and under \(H_{0}\), \(g(\mathbf{p})\sim U\). It may be based on order statistics and use Equation (3), \[ord\left(\mathbf{p};k\right)=\sum_{l=k}^{M}\binom{M}{l}p_{(k)}^{l}(1-p_{(k)})^{M-l},\] or it may be based on the quantile transformations of Equation (4), \[g(\mathbf{p})=1-F_{M}\left(\sum_{i=1}^{M}F^{-1}(1-p_{i})\right).\] In either case, \(g(\mathbf{p})\) must be non-decreasing in every argument if it is to create convex acceptance regions and therefore be admissible7, continuous in every \(p_{i}\) if it is to have well-defined rejection boundaries, and symmetric in the \(p_{i}\) if no margin is to be favoured. Given these common properties, concepts of marginal and central rejection can be defined in order to describe rejection in the cases of evidence against \(H_{0}\) concentrated in one test and evidence against \(H_{0}\) spread among all tests, respectively.

Footnote 7: See Birnbaum (1954) and Owen (2009) for a detailed discussion of admissibility and convexity.

These correspond to the largest \(p\)-values for a given FWER \(\alpha\) which still result in rejection in two separate cases. The first of these, the marginal level of the pooled \(p\)-value, is the largest value of the minimum \(p\)-value which still leads to rejection at level \(\alpha\). The second, the central level of the pooled \(p\)-value, is the largest value which all \(p\)-values can take simultaneously while still resulting in rejection.

### Characterizing central behaviour

The simulations of Section 4 suggest a pooled \(p\)-value \(g(\mathbf{p})\) which is powerful at rejecting weak evidence spread among all \(p\)-values will reject \(H_{0}\) when all tests give relatively large \(p\)-values compared to \(\alpha\). Under the rejection rule \(g(\mathbf{p})\leq\alpha\), this is captured by the largest \(p_{c}\) such that \(g(\mathbf{p}_{c})=\alpha\) for \(\mathbf{p}_{c}=\left(p_{c},\ldots,p_{c}\right)^{\mathsf{T}}\). Explicitly:

**Definition 1** (The central rejection level).: _For a pooled \(p\)-value \(g(\mathbf{p})\), the central rejection level, \(p_{c}\), is the largest \(p\)-value for which \(g(\mathbf{p})\leq\alpha\) when \(\mathbf{p}=\left(p_{c},\ldots,p_{c}\right)^{\mathsf{T}}\). That is_ \[p_{c}(g)=\sup\left\{p\in[0,1]:g(p,p,\ldots,p)\leq\alpha\right\}. \tag{8}\]

\(p_{c}\) quantifies the maximum \(p\)-value shared by all tests which still leads to rejection and therefore is directly related to the power of \(g(\mathbf{p})\) along \(\mathbf{p}_{c}\). As \(g(\mathbf{p})\) is non-decreasing, rejection occurs for any \(\mathbf{p}\) in the hypercube \([0,p_{c}]^{M}\) and so a larger \(p_{c}\) implies rejection for a larger volume of \([0,1]^{M}\).
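Although the propositions below give closed forms, Definition 1 can also be evaluated numerically for any continuous, monotone \(g\) by solving \(g(p,\ldots,p)=\alpha\) along the diagonal. A sketch assuming scipy, with the illustrative sto helper from the earlier sketch:

```python
# A minimal numerical sketch of Definition 1 via root finding.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def sto(p):
    p = np.asarray(p, dtype=float)
    return stats.norm.sf(np.sum(stats.norm.ppf(1.0 - p)) / np.sqrt(p.size))

def p_central(g, M, alpha=0.05):
    # g(p, ..., p) is increasing in p, so g(p, ..., p) - alpha has a single root
    return brentq(lambda p: g(np.full(M, p)) - alpha, 1e-12, 1.0 - 1e-12)

print(p_central(sto, M=5))   # central rejection level of Sto at alpha = 0.05
```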
It also implies pooled \(p\)-values no greater than \(\alpha\) within this hypercube if \(g(\mathbf{p})\) is continuous and monotonic. This general definition can be applied to the pooled \(p\)-values of Equations (3) and (4) to obtain simple expressions for \(p_{c}\).

**Proposition 1** (The central rejection level of order statistics).: \(p_{c}(ord\left(\mathbf{p};k\right))\) _is given by the largest \(p\in[0,1]\) which satisfies_ \[\sum_{l=k}^{M}\binom{M}{l}p^{l}(1-p)^{M-l}\leq\alpha. \tag{9}\]

Proof.: The definition of \(p_{c}\) forces \(p_{(1)}=p_{(2)}=\cdots=p_{(M)}=p_{c}\). Therefore \(p_{(k)}=p_{c}\) and so \(p_{c}\) is the largest \(p\in[0,1]\) which satisfies \(ord\left((p,\ldots,p);k\right)\leq\alpha\). Expanding \(ord\left((p,\ldots,p);k\right)\) gives Equation (9).

While this cannot generally be solved for a closed-form \(p_{c}\), the particular cases of \(ord\left(\mathbf{p};1\right)=Tip(\mathbf{p})\) and \(ord\left(\mathbf{p};M\right)\) admit \[p_{c}(Tip)=1-(1-\alpha)^{\frac{1}{M}} \tag{10}\] and \[p_{c}(ord\left(;M\right))=\alpha^{\frac{1}{M}}, \tag{11}\] respectively. Also note that \(p_{c}(ord\left(;k\right))\) defines a constant rejection boundary around the regions of \([0,1]^{M}\) where \(k-1\) \(p\)-values are less than or equal to \(p_{c}\), \(M-k\) elements are greater, and exactly one is equal to \(p_{c}\). In particular, this means that \(p_{c}(Tip)=p_{c}(ord\left(;1\right))\) is constant along each margin, as \(M-1\) points are greater than \(p_{c}(Tip)\) along each margin.

Next, consider \(p_{c}\) for the general quantile transformation of Equation (4).

**Proposition 2** (The central rejection level of quantile pooled \(p\)-values).: _Given a pooled \(p\)-value based on quantile transformations as in Equation (4),_ \[g(\mathbf{p})=1-F_{M}\left(\sum_{i=1}^{M}c_{i}F^{-1}(1-p_{i})\right),\] _the central rejection level is given by_ \[p_{c}(g(\mathbf{p}))=1-F\left(\frac{1}{\sum_{i=1}^{M}c_{i}}F_{M}^{-1}(1-\alpha)\right) \tag{12}\] _if \(F\) and \(F_{M}\) are continuous CDFs._

Proof.: As \(F\) and \(F_{M}\) are CDFs, they are monotonically non-decreasing real-valued functions over their ranges. If \(F\) and \(F_{M}\) are also continuous, then \(F^{-1}\) and \(F_{M}^{-1}\) are continuous. Therefore, we can drop the supremum from Equation (8) and consider the equality \[\alpha=g(p_{c},p_{c},\ldots,p_{c}).\] Expanding \(g\) and solving for \(p_{c}\) implies \[p_{c}=1-F\left(\frac{1}{\sum_{i=1}^{M}c_{i}}F_{M}^{-1}(1-\alpha)\right).\]

If the \(p\)-values are unweighted, then \(c_{1}=c_{2}=\cdots=c_{M}=1\), \(\sum_{i=1}^{M}c_{i}=M\), and the behaviour of \(p_{c}\) depends on the relative growth of \(F_{M}^{-1}\) in \(M\). If \(F_{M}^{-1}(1-\alpha)\) grows in \(M\) such that \(\frac{1}{M}F_{M}^{-1}(1-\alpha)\) is unbounded, \(p_{c}\) will go to zero. If, on the other hand, \(\lim_{M\to\infty}\frac{1}{M}F_{M}^{-1}(1-\alpha)=c<\infty\), \(p_{c}\) will go to \(1-F(c)\). This provides a general expression for the soft truncation threshold of Zaykin et al. (2007) and suggests interesting asymptotic behaviour for pooled \(p\)-values based on quantile functions along the line \(p_{1}=p_{2}=\cdots=p_{M}\).

This behaviour can be demonstrated concretely for several quantile transformations. Stouffer et al. (1949) takes \(F(x)=\Phi(x)\) and \(F_{M}(x)=\Phi(x/\sqrt{M})\) in Equation (4) to give \(Sto(\mathbf{p})\) and so \[\lim_{M\to\infty}\frac{1}{M}F_{M}^{-1}(1-\alpha)=\lim_{M\to\infty}\frac{1}{\sqrt{M}}\Phi^{-1}(1-\alpha)=0\] for all \(\alpha>0\).
This implies \[\lim_{M\to\infty}p_{c}(Sto)=1-\Phi(0)=\frac{1}{2}. \tag{13}\] Indeed, taking \(Sto(\mathbf{p})\), substituting \(p_{1}=\cdots=p_{M}=p_{c}(Sto)\), and taking the limit gives \[\lim_{M\to\infty}1-\Phi\left(\frac{1}{\sqrt{M}}\sum_{i=1}^{M}\Phi^{-1}(1-p_{c}(Sto))\right)=1-\Phi\left(\lim_{M\to\infty}\sqrt{M}\Phi^{-1}(1-p_{c}(Sto))\right).\] Evaluating further gives \[\lim_{M\to\infty}\sqrt{M}\Phi^{-1}(1-p_{c})=\begin{cases}-\infty&\text{ when }p_{c}>\frac{1}{2}\\ 0&\text{ when }p_{c}=\frac{1}{2}\\ \infty&\text{ when }p_{c}<\frac{1}{2},\end{cases}\] and so \(Sto(\mathbf{p})\) is either \(0\), \(\frac{1}{2}\), or \(1\) for large \(M\) when \(p_{1}\approx p_{2}\approx\cdots\approx p_{M}\). Furthermore, it will reject \(H_{0}\) for _any_ FWER level \(\alpha\) if \(p_{1},p_{2},\ldots,p_{M}\) are all less than \(\frac{1}{2}\) when \(M\) is large enough.

\(F_{\chi}(x;\kappa)\) admits similar analysis. By the central limit theorem, the \(\chi_{\kappa}^{2}\) distribution is approximately \(N(\kappa,2\kappa)\) for large \(\kappa\), where \(N(\mu,\sigma^{2})\) is a normal distribution with mean \(\mu\) and variance \(\sigma^{2}\). Therefore, as \(M\to\infty\), \[F_{\chi}(x;M\kappa)\to\Phi\left(\frac{x-M\kappa}{\sqrt{2M\kappa}}\right).\] This implies that the pooled \(p\)-value based on the \(\chi^{2}\) quantile has the limiting value \[\lim_{M\to\infty}1-F_{\chi}\Big{(}MF_{\chi}^{-1}\big{(}1-p_{c};\kappa\big{)};M\kappa\Big{)}=1-\lim_{M\to\infty}\Phi\left(\frac{MF_{\chi}^{-1}\big{(}1-p_{c};\kappa\big{)}-M\kappa}{\sqrt{2M\kappa}}\right)\] when \(p_{1}=\cdots=p_{M}=p_{c}\). As \(\Phi\) is absolutely continuous the limit can be taken inside the argument to give \[1-\Phi\left(\lim_{M\to\infty}\sqrt{\frac{M}{2\kappa}}\left[F_{\chi}^{-1}(1-p_{c};\kappa)-\kappa\right]\right).\] Now, \[\lim_{M\to\infty}\sqrt{\frac{M}{2\kappa}}\left[F_{\chi}^{-1}(1-p_{c};\kappa)-\kappa\right]=\begin{cases}-\infty&\text{ when }p_{c}>1-F_{\chi}(\kappa;\kappa)\\ 0&\text{ when }p_{c}=1-F_{\chi}(\kappa;\kappa)\\ \infty&\text{ when }p_{c}<1-F_{\chi}(\kappa;\kappa),\end{cases}\] and so the pooled \(p\)-value based on the \(\chi^{2}\) quantile transformation behaves similarly to \(Sto(\mathbf{p})\). It is asymptotically either \(1\), \(\frac{1}{2}\), or \(0\) when \(p_{1}\approx p_{2}\approx\cdots\approx p_{M}\) depending on whether all are greater than, equal to, or less than \(1-F_{\chi}(\kappa;\kappa)\). Additionally, this implies that \[p_{c}=1-F_{\chi}(\kappa;\kappa) \tag{14}\] asymptotically for the \(\chi^{2}\) quantile case. So, although \(p_{c}\) depends on \(\kappa\), all \(\chi^{2}\) quantile pooled \(p\)-values have identical behaviour about their respective \(p_{c}\).

The form of Equation (4) suggests that this result applies generally. Suppose the random variable with CDF \(F\) has a mean \(\mu\) and variance \(\sigma^{2}\). Under \(H_{0}\), \(F^{-1}(1-p_{i})\) for \(i=1,\ldots,M\) are independent and identically-distributed realizations of this random variable and so \(\sum_{i=1}^{M}F^{-1}(1-p_{i})\) is asymptotically normally distributed with mean \(M\mu\) and variance \(M\sigma^{2}\) by the central limit theorem. Therefore \(F_{M}(x)\xrightarrow[M\to\infty]{}\Phi\left(\frac{x-M\mu}{\sqrt{M}\sigma}\right)\) for the pooled \(p\)-value based on \(F\), and the harsh asymptotic boundary at \(p_{c}\) derived for \(Sto\) will occur for _any_ evidential statistic that uses quantile functions.
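A short numerical sketch (assuming scipy) of Equation (12) illustrates this: for Stouffer's method \(p_{c}\) climbs towards \(\frac{1}{2}\) as \(M\) grows, and the finite-\(M\) central rejection level for the \(\chi^{2}\) quantile case approaches the limit in Equation (14).

```python
# A minimal sketch of the closed-form central rejection levels.
import numpy as np
from scipy import stats

def p_c_sto(M, alpha=0.05):
    # Equation (12) with F = Phi and F_M^{-1}(q) = sqrt(M) Phi^{-1}(q)
    return stats.norm.sf(stats.norm.ppf(1 - alpha) / np.sqrt(M))

def p_c_chi(M, kappa, alpha=0.05):
    # finite-M central rejection level for the chi-squared quantile case;
    # its M -> infinity limit is 1 - F_chi(kappa; kappa), Equation (14)
    return stats.chi2.sf(stats.chi2.ppf(1 - alpha, df=M * kappa) / M, df=kappa)

for M in (2, 10, 100, 10_000):
    print(M, round(p_c_sto(M), 3))   # approaches 0.5 as M grows
```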
### Characterizing marginal behaviour

Besides the central behaviour of a pooled \(p\)-value \(g(\mathbf{p})\), the simulations of Section 4 indicate large differences in power occur for the rejection rule \(g(\mathbf{p})\leq\alpha\) when strong evidence exists in a single test. This is captured by the _marginal rejection level at b_.

**Definition 2** (The marginal rejection level at \(b\)).: _For a symmetric pooled \(p\)-value \(g(\mathbf{p})\), the marginal rejection level at \(b\), \(p_{r}(g;b)\), is the largest individual \(p\)-value in \([0,b]\) for which \(g(\mathbf{p})\leq\alpha\) when all other \(p\)-values are \(b\in[0,1]\). Without loss of generality, define_ \[p_{r}(g;b)=\sup\big{\{}p_{1}\in[0,b]:g(p_{1},b,\ldots,b)\leq\alpha\big{\}}. \tag{15}\] _In particular, the marginal value when \(b=1\) is of interest, that is when there is minimal evidence against all hypotheses other than \(H_{01}\). Therefore, also define_ \[p_{r}(g)=\lim_{b\to 1}\sup\big{\{}p_{1}\in[0,b]:g(p_{1},b,\ldots,b)\leq\alpha\big{\}}. \tag{16}\]

Note that symmetry is only necessary to avoid defining marginal rejection levels for each index \(i\in\{1,\ldots,M\}\) separately and that the term _marginal rejection level_ refers to Equation (16). If \(g\) is non-decreasing in all of its arguments, \(p_{r}(g;b)\) gives the largest value of \(p_{(1)}\) that still leads to rejection at \(\alpha\) when the evidence in all other \(p\)-values is bounded at \(b\). The most extreme version of this measure is given by \(p_{r}(g)\). By taking \(b=1\), it measures the power of \(g\) for evidence in a single test when all other tests provide no evidence against \(H_{0}\), and so the sensitivity of \(g\) to evidence in a single test. This leads to a key lemma for \(ord\left(\mathbf{p};1\right)=Tip(\mathbf{p})\).

**Lemma 1** (The marginal rejection level for the minimum statistic).: _The marginal rejection level for \(Tip(\mathbf{p})\) has two cases:_ \[p_{r}(Tip;b)=\begin{cases}b&\text{ for }b<1-(1-\alpha)^{\frac{1}{M}}\\ 1-(1-\alpha)^{\frac{1}{M}}&\text{ for }b\geq 1-(1-\alpha)^{\frac{1}{M}}.\end{cases}\]

Proof.: Recall that \[Tip(\mathbf{p})=1-(1-p_{(1)})^{M}\] is a function of the minimum alone. Rejection occurs when \(Tip(\mathbf{p})\leq\alpha\), or rather when \(p_{(1)}\leq 1-(1-\alpha)^{\frac{1}{M}}\). When \(b<1-(1-\alpha)^{\frac{1}{M}}\) and \(p_{1}\leq b\), all values are below the rejection threshold and so \(p_{(1)}\) attains its upper bound. Therefore \[p_{r}(Tip;b)=\sup\left\{p_{1}\in[0,b]:Tip(p_{1},b,\ldots,b)\leq\alpha\right\}=b.\] When \(b\geq 1-(1-\alpha)^{\frac{1}{M}}\), rejection will only occur if \(p_{(1)}\) is below the rejection threshold at \(\alpha\), and so \[p_{r}(Tip;b)=\sup\left\{p_{1}\in[0,b]:Tip(p_{1},b,\ldots,b)\leq\alpha\right\}=1-(1-\alpha)^{\frac{1}{M}}.\]

A direct consequence of Lemma 1 is that \(p_{r}(Tip)=p_{c}(Tip)\), which Theorem 1 proves is uniquely true for \(Tip(\mathbf{p})\). As with \(p_{c}\), a larger \(p_{r}\) indicates greater power in a particular region of the unit hypercube. While \(p_{c}\) defines the rejection cube \([0,p_{c}]^{M}\), \(p_{r}\) defines the rejection shell \(\{\mathbf{p}\in[0,1]^{M}:p_{(1)}\leq p_{r}\}\) with a flat boundary at \(p_{r}\) along each margin. A larger \(p_{r}\) implies a larger shell and therefore a greater volume of \([0,1]^{M}\) where \(H_{0}\) is rejected, and smaller pooled \(p\)-values within this volume if \(g(\mathbf{p})\) is monotonic.
Again, general expressions are provided for \(p_{r}\) for the order statistic and quantile transformation pooled \(p\)-values.

**Proposition 3** (The marginal rejection level for order statistics).: _For \(k\geq 2\), \(p_{r}(ord\left(;k\right),b)=b\) when \(\sum_{l=k}^{M}\binom{M}{l}b^{l}(1-b)^{M-l}\leq\alpha\) and does not exist otherwise._

Proof.: Recall that \[ord\left(\mathbf{p};k\right)=\sum_{l=k}^{M}\binom{M}{l}p_{(k)}^{l}(1-p_{(k)})^{M-l}\] and note that Equation (15) forces \(p_{(k)}=b\) for all \(k\geq 2\). If \(\sum_{l=k}^{M}\binom{M}{l}b^{l}(1-b)^{M-l}\leq\alpha\), then the supremum of \(p_{1}\) is \(b\). On the other hand, if \(\sum_{l=k}^{M}\binom{M}{l}b^{l}(1-b)^{M-l}>\alpha\) there is no value of \(p_{1}\) which leads to rejection and so \(p_{r}(ord\left(;k\right),b)\) does not exist.

In particular, this implies that \(p_{r}(ord\left(;k\right))\) does not exist for \(k\geq 2\); in other words, the pooled \(p\)-value based on \(p_{(k)}\) has a value independent of \(p_{(1)}\) for \(k\geq 2\). So long as \(k\) tests are less than a particular bound, \(ord\left(\mathbf{p};k\right)\) will reject. If fewer than \(k\) are below that bound, the values of these small \(p\)-values are irrelevant.

**Proposition 4** (The marginal rejection level of quantile transformation statistics).: _Given an unweighted evidential statistic based on quantile transformations as in Equation (4),_ \[g(\mathbf{p})=1-F_{M}\left(\sum_{i=1}^{M}F^{-1}(1-p_{i})\right),\] _if \(F_{M}\) and \(F\) are both continuous then_ \[p_{r}(g;b)=1-F\Big{(}F_{M}^{-1}(1-\alpha)-[M-1]F^{-1}(1-b)\Big{)}.\] _Further, if both are absolutely continuous,_ \[p_{r}(g)=1-F\left(F_{M}^{-1}(1-\alpha)-[M-1]\lim_{x\to 0+}F^{-1}(x)\right). \tag{17}\]

Proof.: Substituting Equation (4) into Equation (15) gives \[p_{r}(g;b)=\sup\Bigg{\{}p:1-F_{M}\bigg{(}F^{-1}(1-p)+[M-1]F^{-1}(1-b)\bigg{)}\leq\alpha\Bigg{\}}.\] As both \(F_{M}\) and \(F\) are CDFs, they are non-decreasing; if they are also continuous their inverses exist and the supremum can be dropped to give \[p_{r}(g;b)=1-F\Big{(}F_{M}^{-1}(1-\alpha)-[M-1]F^{-1}(1-b)\Big{)}.\] If they are both absolutely continuous, then the limit \[1-\lim_{b\to 1}F\Big{(}F_{M}^{-1}(1-\alpha)-(M-1)F^{-1}(1-b)\Big{)}\] can be taken into the argument of \(F\) to give Equation (17).

Many proposals use absolutely continuous CDFs, so this can be readily applied. \(Sto(\mathbf{p})\), for example, has \[p_{r}(Sto)=1-\Phi\left(\sqrt{M}\Phi^{-1}(1-\alpha)-[M-1]\lim_{x\to 0+}\Phi^{-1}(x)\right)=0\] for any \(\alpha\) and \(M\), as \(\lim_{x\to 0+}\Phi^{-1}(x)=-\infty\). Similarly, the proposal by Mudholkar and George (1977) has \(p_{r}=0\), as it uses the logistic distribution, whose quantile function also has \(\lim_{x\to 0+}F^{-1}(x)=-\infty\). This suggests that, in the limit \(b\to 1\), no level of evidence in a single test will cause the rejection of \(H_{0}\) for either of these pooled \(p\)-values; their marginal rejection levels \(p_{r}(g)\) are always \(0\).

### The centrality quotient

Beyond providing definitions that clarify the power of a pooled \(p\)-value to detect evidence spread among all tests and evidence in a single test, \(p_{c}\) and \(p_{r}\) as defined in Equations (8) and (16) can be combined into a single value summarizing the relative preference for diffuse or concentrated evidence. First, a key relationship between \(p_{c}\) and \(p_{r}\) is proven.
**Theorem 1** (Order of \(p_{c}\) and \(p_{r}\)).: _For a pooled \(p\)-value \(g(\mathbf{p})\) that is continuous, symmetric, and monotonically non-decreasing in all arguments, \(p_{c}\geq p_{r}\) if both exist. Furthermore, equality occurs iff \(g(\mathbf{p})\) is constant in \(p_{k}\) for \(p_{k}\neq p_{(1)}\), that is if \(g(\mathbf{p})=f(p_{(1)})\) is a function of the minimum \(p\)-value alone._

Proof.: Consider \(p_{c}(g)\) as in Definition 1. Then \[p_{c}(g)=\sup\big{\{}p\in[0,1]:g(p,\ldots,p)\leq\alpha\big{\}}.\] Suppose \[p_{r}(g)=\lim_{b\to 1}\sup\big{\{}p_{1}\in[0,b]:g(p_{1},b,\ldots,b)\leq\alpha\big{\}}\] exists. If \(g\) is symmetric, \(p_{r}(g)\) captures the marginal rejection level of \(g\) in all margins. If \(g\) is continuous, then both \(p_{c}(g)\) and \(p_{r}(g)\) lie on the \(\alpha\) level surface of \(g\). Therefore \[g(p_{c},\ldots,p_{c})=\alpha=g(p_{r},1,\ldots,1).\] But \(g\) is non-decreasing in all of its arguments, so \[g(p_{c},1,\ldots,1)\geq g(p_{c},p_{c},\ldots,p_{c})=g(p_{r},1,\ldots,1)\] and therefore \[p_{c}\geq p_{r}.\]

If \(p_{c}=p_{r}\), then substitute \[\alpha=g(p_{c},\ldots,p_{c})=g(p_{r},\ldots,p_{r}).\] As \(g\) is non-decreasing, \[g(p_{r},\ldots,p_{r})\leq g(p_{r},1,\ldots,1)=\alpha\] and so \[g(p_{r},1,\ldots,1)=g(p_{r},p_{r},\ldots,p_{r}).\] This implies that the average slope of \(g\) over \([p_{r},1]\), equivalently \([p_{c},1]\), is zero for all \(p_{k}\neq p_{1}\). As \(g\) is continuous and non-decreasing, this implies that the slope must be zero for every point in this interval for all \(p_{k}\neq p_{1}\). As \(p_{k}\geq p_{1}\) for all \(p_{k}\in\mathbf{p}\) over this region, \(p_{1}=p_{(1)}\) by definition. By the symmetry of \(g\), the same argument holds for every \(p_{k}\). Therefore \(g(\mathbf{p})=f(p_{(1)})\) for some non-decreasing function \(f\). To prove the reverse direction, note that if \(g(\mathbf{p})=f(p_{(1)})\), then \[\alpha=g(p_{c},p_{c},\ldots,p_{c})=g(p_{c},1,\ldots,1)\] and so \(p_{c}=p_{r}\) by the definition of \(p_{r}\) and the continuity of \(g\). By symmetry, this same argument holds for any margin.

Two facts follow directly from this proof. First, Theorem 1 implies that \(p_{c}=p_{r}\) only for \(Tip(\mathbf{p})=ord\left(\mathbf{p};1\right)\) among symmetric, continuous, monotonically non-decreasing pooled \(p\)-values, as \(Tip(\mathbf{p})\) is the unique pooled \(p\)-value defined by \(p_{(1)}\). A second corollary is the existence of a sensible _centrality quotient_ to quantify the balance between central and marginal rejection levels in pooled \(p\)-values.

**Definition 3** (The centrality quotient).: _Suppose \(g\) is a continuous, symmetric, and monotonically non-decreasing pooled \(p\)-value for which \(p_{r}(g)\) and \(p_{c}(g)\) defined as in Equations (16) and (8) exist; define the centrality quotient_ \[q(g)=\frac{p_{c}(g)-p_{r}(g)}{p_{c}(g)}. \tag{18}\]

Theorem 1 implies that \(q(g)\in[0,1]\) with meaningful bounds. If \(q(g)=0\), \(g(\mathbf{p})\) will reject based on the smallest \(p\)-value alone, making the marginal rejection level as large as possible while remaining non-decreasing. Moreover, \(q(g)=0\) implies \(g(\mathbf{p})\) is the pooled \(p\)-value based on \(p_{(1)}\) alone, \(Tip(\mathbf{p})\). In contrast, when \(q(g)=1\), \(g\) cannot reject based on the evidence contained in a single test; instead it requires evidence in many or all tests, for example \(Sto(\mathbf{p})\).
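As a check on these definitions, the following sketch (assuming scipy) computes \(p_{c}\), \(p_{r}\), and \(q\) for Fisher's method from the closed forms of Propositions 2 and 4; the \(M=2\) value reproduces the 0.91 entry for Fisher (1932) in Table 1 below.

```python
# A minimal sketch of the centrality quotient for Fisher's method, where
# F = F_chi(. ; 2) and F_M = F_chi(. ; 2M).
from scipy import stats

def centrality_fisher(M, alpha=0.05):
    crit = stats.chi2.ppf(1 - alpha, df=2 * M)   # F_M^{-1}(1 - alpha)
    p_c = stats.chi2.sf(crit / M, df=2)          # Proposition 2, Equation (12)
    # lim_{x -> 0+} F_chi^{-1}(x; 2) = 0, so Equation (17) reduces to:
    p_r = stats.chi2.sf(crit, df=2)
    return p_c, p_r, (p_c - p_r) / p_c

for M in (2, 5, 10, 20):
    print(M, round(centrality_fisher(M)[2], 2))  # 0.91 at M = 2, then ~1.00
```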
Between these extremes, pooled \(p\)-values with larger centrality quotients will reject \(H_{0}\) for a larger range of \(p_{c}\) values and a smaller range of \(p_{r}\) values, and so will be more powerful at detecting evidence spread broadly at the cost of power when evidence is concentrated in a small number of \(p\)-values. Indeed, increasing \(w\) decreases the centrality quotient of \(HR\left(\mathbf{p};w\right)\). This matches the empirical results obtained in Section 4.4 and in particular Figure 7, where larger \(w\) values provided greater power when the prevalence of evidence was small but the strength of evidence was large and smaller \(w\) values gave greater power in the case of weak evidence with high prevalence.

For \(\alpha=0.05\), the centrality quotients of a range of \(w\) values in \(HR\left(\mathbf{p};w\right)\) are compared to those of several quantile transformation proposals in Table 1 over a range of \(M\) values. As predicted by the asymptotic argument at the end of Section 5.1, every method tends towards a centrality of \(1\) as \(M\) increases and each \(F_{M}\) converges to a corresponding normal CDF.

\begin{table}
\begin{tabular}{|c|c c c c|}
\hline
 & \multicolumn{4}{c|}{\(M\)} \\
Pooled \(p\)-value & 2 & 5 & 10 & 20 \\
\hline
Tippett (1931) & 0 & 0 & 0 & 0 \\
Cinar and Viechtbauer (2022) & 0.83 & 0.99 & 1.00 & 1.00 \\
Stouffer et al. (1949) & 1 & 1 & 1 & 1 \\
Fisher (1932) & 0.91 & 1.00 & 1.00 & 1.00 \\
Mudholkar and George (1977) & 1 & 1 & 1 & 1 \\
Wilson (2019) & 0.49 & 0.79 & 0.90 & 0.95 \\
\(HR\left(\mathbf{p};e^{-6}\right)\) & 1.00 & 1.00 & 1.00 & 1.00 \\
\(HR\left(\mathbf{p};e^{-3}\right)\) & 1.00 & 1.00 & 1.00 & 1.00 \\
\(HR\left(\mathbf{p};1\right)\) & 0.91 & 1.00 & 1.00 & 1.00 \\
\hline
\end{tabular}
\end{table}

Table 1: Centrality quotients for certain pooled \(p\)-values.

Beyond \(HR\left(\mathbf{p};w\right)\), other methods show the same relationship between the centrality quotient and regions of relative power in the empirical explorations in Westberg (1985), Loughin (2004), and Kocak (2017). Pooled \(p\)-values with larger centrality quotients are more powerful for weak evidence spread among all tests than those with smaller centrality quotients, but are relatively weak against strong evidence concentrated in a few tests. This is suggestive of an inverse relationship between \(p_{r}\) and \(p_{c}\) over different pooled \(p\)-values, but this is not the case generally. Consider, as a counter-example, \(Sto(\mathbf{p})\) and the proposal of Mudholkar and George (1977): both have \(p_{r}=0\) but different values of \(p_{c}\).

## 6 Controlling the centrality quotient

Table 1, and others which could be constructed like it, provide only a limited ability to select a centrality quotient. Most of the proposals have centrality near 1, and all proposals approach 1 as \(M\) increases. Rather than choose among these other limited proposals when power is desired in a particular region, this work proposes a family of quantile pooled \(p\)-values based on \(\chi_{\kappa}^{2}\) which precisely controls the centrality for any \(M\).
Following Equation (4), define the \(\chi_{\kappa}^{2}\) quantile pooled \(p\)-value \[chi\left(\mathbf{p};\kappa\right)=1-F_{\chi}\left(\sum_{i=1}^{M}F_{\chi}^{-1}(1-p_{i};\kappa);M\kappa\right) \tag{19}\] where \(\kappa\in[0,\infty)\) is the degrees of freedom of the quantile transformation applied to the \(p_{i}\) and doubles as a centrality parameter that sets \(q(chi\left(;\kappa\right))\) arbitrarily.9 This family of pooled \(p\)-values includes several widely-used previous proposals. Setting \(\kappa=2\) gives \(Fis(\mathbf{p})\), \(\kappa=1\) gives the proposal from Cinar and Viechtbauer (2022), taking \(\lim_{\kappa\rightarrow\infty}chi\left(\mathbf{p};\kappa\right)\) gives \(Sto(\mathbf{p})\), and taking \(\lim_{\kappa\to 0}chi\left(\mathbf{p};\kappa\right)\) gives \(Tip(\mathbf{p})\). While the former two hold by definition, the latter two must be proven. First, prove \(\lim_{\kappa\rightarrow\infty}chi\left(\mathbf{p};\kappa\right)=Sto(\mathbf{p})\) by applying the central limit theorem.

Footnote 9: This is similar to the gamma method of Zaykin et al. (2007), but with a different parameter choice. It is possible the same control of centrality may be obtained with a general gamma CDF, but sticking to the \(\chi_{\kappa}^{2}\) simplifies the number of parameters from two to one.

**Theorem 2** (Limiting value of \(chi\left(\mathbf{p};\kappa\right)\) as \(\kappa\rightarrow\infty\)).: \[\lim_{\kappa\rightarrow\infty}chi\left(\mathbf{p};\kappa\right)=Sto(\mathbf{p})\]

Proof.: Note that Equation (19) is always a pooled \(p\)-value, i.e. has a uniform distribution under \(H_{0}\) for any choice of \(\kappa\). By the CLT, \(F_{\chi}(x;\kappa)\) converges to a normal CDF as \(\kappa\rightarrow\infty\), and so in the limit \(chi\left(\mathbf{p};\kappa\right)\) becomes the pooled \(p\)-value derived from the sum of standard normal quantile transformations, \(Sto(\mathbf{p})\).

The proof for \(chi\left(\mathbf{p};0\right)\) is slightly more involved, and relies on Theorem 1.

**Theorem 3** (Limiting value of \(chi\left(\mathbf{p};\kappa\right)\) for \(\kappa=0\)).: \[\lim_{\kappa\to 0}chi\left(\mathbf{p};\kappa\right)=Tip(\mathbf{p})=ord\left(\mathbf{p};1\right)\]

Proof.: Theorem 1 proves that \(p_{r}=p_{c}\) for a pooled \(p\)-value if and only if that pooled \(p\)-value is \(Tip=ord\left(;1\right)\). Therefore, the limit is proven if \[\lim_{\kappa\to 0}p_{r}\big{(}chi\left(;\kappa\right)\big{)}=\lim_{\kappa\to 0}p_{c}\big{(}chi\left(;\kappa\right)\big{)}.\] Expressing these quantities as probability statements gives \[p_{c}\big{(}chi\left(;\kappa\right)\big{)}=P\left(\chi_{\kappa}^{2}\geq\frac{1}{M}F_{\chi}^{-1}(1-\alpha;M\kappa)\right),\] and \[p_{r}\big{(}chi\left(;\kappa\right)\big{)}=P\left(\chi_{\kappa}^{2}\geq F_{\chi}^{-1}(1-\alpha;M\kappa)\right).\] The case of \(\chi_{0}^{2}\) is a degenerate distribution at \(0\). That is, \[F_{\chi}(x;0)=\begin{cases}0&x<0\\ 1&x\geq 0.\end{cases}\] This can also be seen from the limit of Markov's inequality for the \(\chi_{\kappa}^{2}\) distribution, \[P(\chi_{\kappa}^{2}\geq a)\leq\frac{\kappa}{a},\] which goes to zero for any \(a>0\) as \(\kappa\to 0\).
As \(F_{\chi}(x;\kappa)\) is continuous and monotonically increasing for all \(\kappa\), this also implies \[F_{\chi}\left(\frac{\kappa}{\alpha};\kappa\right)\geq 1-\alpha\implies 1-F_{\chi}\left(\frac{\kappa}{\alpha};\kappa\right)\leq\alpha\implies P\left(\chi_{\kappa}^{2}\geq\frac{\kappa}{\alpha}\right)\leq\alpha\implies F_{\chi}^{-1}(1-\alpha;\kappa)\leq\frac{\kappa}{\alpha}.\] This bound is not particularly tight; for \(\alpha=0.05\) it only restricts the \(0.95\) quantile to be less than \(20\) times the mean. However, it suffices to evaluate \[\lim_{\kappa\to 0}\left|\frac{1}{M}F_{\chi}^{-1}(1-\alpha;M\kappa)-F_{\chi}^{-1}(1-\alpha;M\kappa)\right|=\frac{M-1}{M}\lim_{\kappa\to 0}F_{\chi}^{-1}(1-\alpha;M\kappa)\leq\frac{M-1}{M}\lim_{\kappa\to 0}\frac{M\kappa}{\alpha}=0\] and therefore \[\lim_{\kappa\to 0}\frac{1}{M}F_{\chi}^{-1}(1-\alpha;M\kappa)=\lim_{\kappa\to 0}F_{\chi}^{-1}(1-\alpha;M\kappa)\] for any \(\alpha>0\). This implies that \(p_{c}\big{(}chi\left(;\kappa\right)\big{)}=p_{r}\big{(}chi\left(;\kappa\right)\big{)}\) in the limit \(\kappa\to 0\).

The result of Theorem 3 can be understood intuitively using the non-central \(\chi_{0}^{2}\) of Siegel (1979) with a non-centrality parameter \(\lambda\), call it \(\chi_{0}^{2}(\lambda)\). When \(\lambda\to 0\), \(\chi_{0}^{2}(\lambda)\rightarrow\chi_{0}^{2}\) in distribution, so taking \(\chi_{0}^{2}(\lambda)\) with small \(\lambda\) should provide some sense of how \(\chi_{0}^{2}\) behaves. Unlike \(\chi_{0}^{2}\), however, \(\chi_{0}^{2}(\lambda)\) has a discrete probability mass at \(0\) for all \(\lambda>0\). As a result, the quantile function of \(\chi_{0}^{2}(\lambda)\), \(F_{\lambda}^{-1}\), returns zero for any input less than \(e^{-\frac{\lambda}{2}}\) and so the terms in the sum \[\sum_{i=1}^{M}F_{\lambda}^{-1}(1-p_{i})\] are non-zero only for those \(i\) where \(p_{i}\leq 1-e^{-\frac{\lambda}{2}}\). As \(\lambda\to 0\), this sum becomes arbitrarily close to the statistic underlying \(chi\left(;0\right)\), but only the smallest \(p\)-values contribute. Eventually, only the minimum contributes to the sum, and so \(chi\left(;\kappa\right)\approx f(p_{(1)})\) for very small \(\kappa\) values.

The limits \(\lim_{\kappa\to 0}chi\left(\mathbf{p};\kappa\right)=Tip(\mathbf{p})\) and \(\lim_{\kappa\rightarrow\infty}chi\left(\mathbf{p};\kappa\right)=Sto(\mathbf{p})\) are also demonstrated empirically by generating \(n_{sim}\) independent realizations of \(\mathbf{p}\) assuming \(H_{0}\) is true. For each vector \(\mathbf{p}_{i}\), compute \(chi\left(\mathbf{p}_{i};\kappa\right)\) for a range of \(\kappa\), \(Sto(\mathbf{p}_{i})\), and \(Tip(\mathbf{p}_{i})\), and compare \(chi\left(\mathbf{p}_{i};\kappa\right)\) to the other two pooled \(p\)-values. Figure 8 shows this pattern for a few \(\kappa\) when \(M=5\). As expected, the agreement between \(chi\left(\mathbf{p};\kappa\right)\) and \(Tip(\mathbf{p})\) is perfect for small enough \(\kappa\); the two functions have identical outputs for \(\kappa=0.0035\) in Figure 8(a). Similarly, \(chi\left(\mathbf{p};\kappa\right)\) and \(Sto(\mathbf{p})\) match for large \(\kappa\), as when \(\kappa\approx 3000\) in Figure 8(c). Note that the particular values of \(\kappa\) where this close agreement occurs will depend on \(M\).
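A sketch of Equation (19) and of this empirical check (illustrative Python, with scipy's chi-squared distribution standing in for \(F_{\chi}\)):

```python
# A minimal sketch of chi(p; kappa) and its limiting behaviour.
import numpy as np
from scipy import stats

def chi_pool(p, kappa):
    """Equation (19): pool p with chi-squared quantile transformations."""
    p = np.asarray(p, dtype=float)
    stat = np.sum(stats.chi2.ppf(1.0 - p, df=kappa))
    return stats.chi2.sf(stat, df=p.size * kappa)

def tip(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - (1.0 - np.min(p)) ** p.size

def sto(p):
    p = np.asarray(p, dtype=float)
    return stats.norm.sf(np.sum(stats.norm.ppf(1.0 - p)) / np.sqrt(p.size))

rng = np.random.default_rng(2)
p = rng.uniform(size=5)
print(chi_pool(p, 0.0035), tip(p))    # nearly identical for very small kappa
print(chi_pool(p, 2981.0), sto(p))    # nearly identical for very large kappa
```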
Figure 8: A comparison of \(chi\left(\mathbf{p}_{i};\kappa\right)\), \(Sto(\mathbf{p}_{i})\), and \(Tip(\mathbf{p}_{i})\) values for 1000 independently generated \(\mathbf{p}_{i}\sim Unif([0,1]^{5})\) in the case of (a) small \(\kappa\), (b) moderate \(\kappa\), and (c) large \(\kappa\).

Perhaps more interesting is the curved boundary of the points along the top of the plot of \(chi\left(\mathbf{p};0.0035\right)\) against \(Sto(\mathbf{p})\) in Figure 8(a); many points populate the lower right corner of this plot but there are none in the upper left. This pattern is mirrored in Figure 8(c) for the plot of \(chi\left(\mathbf{p};2981\right)\) against \(Tip(\mathbf{p})\). As \(chi\left(\mathbf{p};\kappa\right)\) is essentially identical to one of \(Tip(\mathbf{p})\) or \(Sto(\mathbf{p})\) in these cases, this pattern reflects the relationship between \(Tip(\mathbf{p})\) and \(Sto(\mathbf{p})\). By definition, \(Tip(\mathbf{p})\) considers only \(p_{(1)}\), but any of the \(p_{i}\) can impact \(Sto(\mathbf{p})\). As a result, there will be many cases where a small \(Tip(\mathbf{p})\) occurs despite a large \(Sto(\mathbf{p})\) because a very small \(p_{(1)}\) happens by chance. The reverse is impossible: if \(Tip(\mathbf{p})\) is large then \(p_{(1)}\) is large and therefore all values in \(\mathbf{p}\) are large, suggesting a large \(Sto(\mathbf{p})\).

### Choosing a parameter

In addition to these meaningful limits, there seems to be a monotonically increasing relationship between \(\kappa\) and \(q(chi\left(;\kappa\right))\). Let \(\chi_{\kappa}^{*}(\alpha)\) be the \(1-\alpha\) quantile of the \(\chi^{2}\) distribution with \(\kappa\) degrees of freedom; then \(chi\left(\mathbf{p};\kappa\right)\) has the central rejection level \[p_{c}\big{(}chi\left(;\kappa\right)\big{)}=1-F_{\chi}\left(\frac{1}{M}F_{\chi}^{-1}(1-\alpha;M\kappa);\kappa\right)=P\left(\chi_{\kappa}^{2}\geq\frac{1}{M}\chi_{M\kappa}^{*}(\alpha)\right) \tag{20}\] and the marginal rejection level \[p_{r}\big{(}chi\left(;\kappa\right)\big{)}=1-F_{\chi}\bigg{(}F_{\chi}^{-1}(1-\alpha;M\kappa);\kappa\bigg{)}=P\bigg{(}\chi_{\kappa}^{2}\geq\chi_{M\kappa}^{*}(\alpha)\bigg{)}, \tag{21}\] implying \[q(chi\left(;\kappa\right))=\frac{p_{c}\big{(}chi\left(;\kappa\right)\big{)}-p_{r}\big{(}chi\left(;\kappa\right)\big{)}}{p_{c}\big{(}chi\left(;\kappa\right)\big{)}}=P\left(\chi_{\kappa}^{2}\leq\chi_{M\kappa}^{*}(\alpha)\left|\vphantom{\frac{1}{M}}\chi_{\kappa}^{2}\geq\frac{1}{M}\chi_{M\kappa}^{*}(\alpha)\right.\right). \tag{22}\] That is, the centrality quotient of \(chi\left(\mathbf{p};\kappa\right)\) is the conditional probability that a \(\chi_{\kappa}^{2}\) random variable is less than \(\chi_{M\kappa}^{*}(\alpha)\) given that it is greater than \(\frac{1}{M}\chi_{M\kappa}^{*}(\alpha)\).

A better sense of the region corresponding to this conditional probability for \(\alpha<0.5\) is garnered by writing \(\chi_{M\kappa}^{*}\) in terms of the mean of the \(\chi_{M\kappa}^{2}\) distribution, \(M\kappa\). Taking an arbitrary remainder function \(R_{M\kappa}(\alpha)>0\) such that \(\chi_{M\kappa}^{*}(\alpha):=M\kappa+R_{M\kappa}(\alpha)\), substitution gives \[q(chi\left(;\kappa\right))=P\left(\chi_{\kappa}^{2}\leq M\kappa+R_{M\kappa}(\alpha)\left|\vphantom{\frac{1}{M}}\chi_{\kappa}^{2}\geq\kappa+\frac{1}{M}R_{M\kappa}(\alpha)\right.\right),\] clarifying that \(q(chi\left(;\kappa\right))\) is a conditional probability on the right tail of the \(\chi_{\kappa}^{2}\) distribution when \(\alpha<0.5\).
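The conditional probability in Equation (22) is simple to compute numerically, and inverting it recovers the \(\kappa\) giving a desired quotient; a sketch assuming scipy (compare the printed value to the \(M=5\) row of Table 2 below):

```python
# A minimal sketch of Equation (22) and its numerical inverse.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def q_chi(kappa, M, alpha=0.05):
    crit = stats.chi2.ppf(1 - alpha, df=M * kappa)   # chi*_{M kappa}(alpha)
    p_c = stats.chi2.sf(crit / M, df=kappa)          # Equation (20)
    p_r = stats.chi2.sf(crit, df=kappa)              # Equation (21)
    return (p_c - p_r) / p_c                         # Equation (22)

def kappa_for_quotient(q_target, M, alpha=0.05):
    # q is increasing in kappa, so bracket the root on the log10 scale
    f = lambda log10_k: q_chi(10.0 ** log10_k, M, alpha) - q_target
    return brentq(f, -8.0, 8.0)                      # returns log10(kappa)

print(kappa_for_quotient(0.5, M=5))   # roughly -1.9, matching Table 2
```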
Making more precise statements about \(R_{M\kappa}(\alpha)\) is challenging due to the small values of \(\kappa\) which may be chosen for \(chi\left(;\kappa\right)\). Most approximations of \(\chi^{2}\) tail probabilities and quantiles either break down when the degrees of freedom are less than one or explicitly assume more than one degree of freedom (Hawkins and Wixley, 1986; Canal, 2005; Inglot, 2010). Nonetheless, the above probability can be computed numerically, as was done for the curves of \(q(chi\left(;\kappa\right))\) by \(\log_{10}(\kappa)\) for \(M\) ranging from 2 to 10,000 in Figure 9.

The curves of \(q(chi\left(;\kappa\right))\) by \(\kappa\) have a consistent sigmoid shape for all \(M\). Most of the change in the centrality quotient occurs for values in a three-unit range in \(\log_{10}(\kappa)\) for any \(M\), though the centre of this range decreases as \(M\) grows. When \(\kappa=10^{-3}\), for example, the centrality quotient when \(M=100\) is greater than 0.8 while the same \(\kappa\) value corresponds with a centrality quotient of less than 0.05 when \(M=2\). Just as with any other pooled \(p\)-value, increasing \(M\) increases the centrality of \(chi\left(;\kappa\right)\) for a given \(\kappa\) as the sum of independent \(p\)-values becomes more normally distributed by the central limit theorem.10

Footnote 10: This can also be understood geometrically. For a pooled \(p\)-value in \(M\) dimensions, the volume of the marginal shell of width \(p_{r}\) is \(1-(1-p_{r})^{M}\), which approaches \(1\) for any \(p_{r}>0\) as \(M\rightarrow\infty\). As the total volume of the rejection region is \(\alpha\) for the rejection rule \(g(\mathbf{p})\leq\alpha\), \(p_{r}\) must decrease in \(M\) to hold the volume constant.

In practice, the inverse of these curves may be of greater interest, allowing the centrality quotient of \(chi\left(\mathbf{p};\kappa\right)\) to be controlled rather than simply reported. Figure 9 does allow the selection of \(\kappa\) for a given centrality quotient, by estimating the \(\kappa\) value where a curve crosses a horizontal line at the desired quotient, but a table displaying the numerically estimated inverse for evenly-spaced centrality quotients, as in Table 2, is more precise and straightforward to use. Determining the desired \(\log_{10}(\kappa)\) for a given centrality quotient and \(M\) proceeds as for a table of critical values. The user searches down the first column for the \(M\) most closely corresponding to the setting at hand, and then along that row to the desired centrality quotient. If \(q(chi\left(;\kappa\right))=1\) or \(0\) is desired, the table is unnecessary as \(Sto(\mathbf{p})\) or \(Tip(\mathbf{p})\) can be used directly. Unlike with critical value tables, there is no need to be conservative: linear interpolation between the provided \(\log_{10}(\kappa)\) values is a reasonable approach to choosing \(\kappa\).

Using the parameter \(\kappa\) of \(chi\left(\mathbf{p};\kappa\right)\), the relative preference of \(chi\left(\mathbf{p};\kappa\right)\) for rejection along the margins or in the centre can be directly controlled. Large \(\kappa\) produce a pooled \(p\)-value which is powerful at detecting evidence spread among all tests, while small \(\kappa\) favour the detection of concentrated evidence in a single test, with the extremes giving the widely-used \(Tip(\mathbf{p})\) and \(Sto(\mathbf{p})\).
\begin{table}
\begin{tabular}{|r|r r r r r r r r r|}
\hline
 & \multicolumn{9}{c|}{Centrality quotient} \\
\(M\) & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\
\hline
2 & -2.1 & -1.7 & -1.4 & -1.2 & -0.9 & -0.7 & -0.4 & -0.1 & 0.3 \\
5 & -2.8 & -2.5 & -2.3 & -2.1 & -1.9 & -1.7 & -1.4 & -1.2 & -0.8 \\
20 & -3.7 & -3.4 & -3.1 & -2.9 & -2.8 & -2.6 & -2.4 & -2.1 & -1.8 \\
100 & -4.6 & -4.3 & -4 & -3.8 & -3.7 & -3.5 & -3.3 & -3 & -2.7 \\
500 & -5.4 & -5.1 & -4.8 & -4.7 & -4.5 & -4.3 & -4.1 & -3.9 & -3.5 \\
2000 & -6.1 & -5.8 & -5.5 & -5.3 & -5.2 & -5 & -4.8 & -4.6 & -4.2 \\
10000 & -6.9 & -6.6 & -6.3 & -6.1 & -6 & -5.8 & -5.6 & -5.4 & -5 \\
\hline
\end{tabular}
\end{table}

Table 2: \(\log_{10}(\kappa)\) values by \(q(chi\left(;\kappa\right))\) and \(M\) to aid in parameter selection for the desired balance of central and marginal rejection.

The parameter \(\kappa\) orders pooled \(p\)-values of the \(chi\left(\mathbf{p};\kappa\right)\) family by relative centrality, simplifying the choice of pooled \(p\)-value and the communication of results. Finally, as it is based on Equation (4), it is an exact quantile-based method which does not rely on asymptotic behaviour and which could, hypothetically, be computed by hand with the aid of \(\chi^{2}\) quantile tables.

### Comparing the chi-squared pooled \(p\)-value to the UMP benchmark

Recall the simulation studies that motivated the exploration of central and marginal rejection levels. After a benchmark power computation, the power of \(HR\left(\mathbf{p};w\right)\) for \(\alpha=0.05\) was evaluated under \(H_{4}\) for a range of beta alternatives (\(f=Beta(a,1/\omega+a(1-1/\omega))\)) with KL divergences from uniform (\(D(a,\omega)\)) spanning \(e^{-5}\) to \(e^{5}\). Correct specification of \(w\) was important: the larger the magnitude of \(w-\omega\), the larger the decrease in power of \(HR\left(\mathbf{p};w\right)\) from \(HR\left(\mathbf{p};\omega\right)\). Under \(H_{3}\), mis-specification did not matter at all; the power of \(HR\left(\mathbf{p};w\right)\) was dictated by the proportion of false null hypotheses (\(\eta\)) and the strength of evidence against \(H_{0}\) in each non-null hypothesis (\(D(a,\omega)\)). The parameter \(w\) tunes \(HR\left(\mathbf{p};w\right)\) to favour either weak evidence spread among all tests, or strong evidence in only a few.

Finer selection of this tradeoff is achieved with the parameter \(\kappa\) using the \(chi\left(\mathbf{p};\kappa\right)\) family of pooled \(p\)-values, but controlling centrality is of little use if \(chi\left(\mathbf{p};\kappa\right)\) is not powerful under the settings that motivated its definition. The power of \(chi\left(\mathbf{p};\kappa\right)\) at level \(\alpha=0.05\) for each \(\kappa\in\left\{e^{-8},e^{-4},1,2,e^{4},e^{8}\right\}\) was therefore determined under every setting from Section 4 using the same simulated samples generated under \(H_{4}\). Prior to the simulation, it was expected that large \(\kappa\) would be uniformly more powerful than small \(\kappa\), as under \(H_{4}\) evidence is spread among all tests. The results confirmed this expectation: the most powerful \(chi\left(\mathbf{p};\kappa\right)\) for all settings under \(H_{4}\) was \(chi\left(\mathbf{p};e^{8}\right)\approx chi\left(\mathbf{p};2981\right)\). It is compared to both the UMP and the mis-specified \(HR\left(\mathbf{p};w\right)\) in Figure 10 by adding a dark grey line to Figure 4.
\(chi\left(\mathbf{p};2981\right)\) has higher power than most mis-specified \(HR\left(\mathbf{p};w\right)\) for all settings in this case and is close to the UMP more consistently than any \(HR\left(\mathbf{p};w\right)\). Only when \(w\approx\omega\) does \(HR\left(\mathbf{p};w\right)\) beat \(chi\left(\mathbf{p};2981\right)\), and so \(HR\left(\mathbf{p};w\right)\) is less robust to mis-specification of \(f\) than \(chi\left(\mathbf{p};2981\right)\). It may therefore be advisable to use \(chi\left(\mathbf{p};\kappa\right)\) with a large \(\kappa\) (or simply \(Sto(\mathbf{p})\)) when testing \(H_{4}\) with beta alternatives in the case where \(\omega\) is not known, rather than risk the penalty of choosing \(w\) wrong when using \(HR\left(\mathbf{p};w\right)\). This is despite the fact that \(HR\left(\mathbf{p};\omega\right)\) is UMP for this setting.

For the case where \(\mathbf{p}\) was generated under \(H_{3}\), \(chi\left(\mathbf{p};\kappa\right)\) was again computed for each \(\kappa\in\{e^{-8},e^{-4},1,2,e^{4},e^{8}\}\) over the 10,000 independent samples for each setting of \(D(a,\omega)\), \(\omega\), and \(\eta\) with \(M=10\) from Section 4. Contour plots analogous to Figure 7 showing the differences in power between \(chi\left(\mathbf{p};\kappa\right)\) and \(HR\left(\mathbf{p};1\right)=chi\left(\mathbf{p};2\right)\) were generated. The reference \(HR\left(\mathbf{p};1\right)\) was chosen because it is a test shared by both the \(chi\left(\mathbf{p};\kappa\right)\) and \(HR\left(\mathbf{p};w\right)\) families. The patterns of power for \(chi\left(\mathbf{p};\kappa\right)\) mimic those of \(HR\left(\mathbf{p};w\right)\): large \(\kappa\) favour evidence spread among all tests, as do small \(w\) in \(HR\left(\mathbf{p};w\right)\). Despite this similarity, \(chi\left(\mathbf{p};2981\right)\) has higher power when applied to the case of concentrated evidence and so is more robust under \(H_{3}\). This is seen clearly in a comparison of the bottom right corner of Figure 7 to Figure 11(a); the former shows a much larger and darker red region than the latter.

The \(chi\left(\mathbf{p};\kappa\right)\) family also extends the range of possible centrality parameters compared to \(HR\left(\mathbf{p};w\right)\). As \(HR\left(\mathbf{p};1\right)=chi\left(\mathbf{p};2\right)\) is one of the boundaries of the \(w\) parameter range, no pooled \(p\)-values comparable to \(chi\left(\mathbf{p};\kappa\right)\) for \(\kappa<2\) exist in the \(HR\left(\mathbf{p};w\right)\) family. Using \(chi\left(\mathbf{p};\kappa\right)\) therefore gives greater control over the balance of central and marginal rejection than \(HR\left(\mathbf{p};w\right)\), though it seems exceptionally small \(\kappa\) in \(chi\left(\mathbf{p};\kappa\right)\), or equivalently \(Tip(\mathbf{p})\), should only be used sparingly. Figure 11(b) shows that \(chi\left(\mathbf{p};0.003\right)\) loses power almost everywhere compared to \(chi\left(\mathbf{p};2\right)\) in exchange for higher power only in the case of extreme evidence in a single test. Under \(H_{3}\), very small values of \(\kappa\) should probably only be used if such a pattern of evidence is strongly suspected.

Figure 10: A comparison of \(chi\left(\mathbf{p};2981\right)\) to the UMP and mis-specified \(HR\left(\mathbf{p};w\right)\) under \(H_{4}\). \(chi\left(\mathbf{p};2981\right)\) achieves nearly UMP power more consistently than any \(HR\left(\mathbf{p};w\right)\), and so is more robust to \(f\).
Figure 11: Contours for the power of \(HR\left(\mathbf{p};1\right)=chi\left(\mathbf{p};2\right)\) minus (a) \(chi\left(\mathbf{p};e^{8}\right)\approx chi\left(\mathbf{p};2981\right)\) and (b) \(chi\left(\mathbf{p};e^{-8}\right)\approx chi\left(\mathbf{p};0.003\right)\) by \(\eta\) and \(D(a,\omega)\), facetted by \(\omega\). Compared to Figure 7, (a) displays less of a penalty for the case of concentrated evidence while still outperforming \(HR\left(\mathbf{p};1\right)\) for evidence spread among all tests.

The \(chi\left(\mathbf{p};\kappa\right)\) family is therefore of interest both practically and theoretically. It provides control over central and marginal rejection under \(H_{3}\) and robustly gives nearly UMP power for large values of \(\kappa\) under \(H_{4}\). It has interpretable endpoints which cover a greater range of centrality quotients than \(HR\left(\mathbf{p};w\right)\) and gives a means of controlling the bias towards central rejection present in all quantile pooled \(p\)-values as \(M\) increases. \(chi\left(\mathbf{p};\kappa\right)\) is a pooled \(p\)-value with great potential as a practical tool for controlling the FWER when testing \(H_{0}\).

## 7 Identifying plausible alternative hypotheses and selecting tests

The link between \(\kappa\), the centrality quotient, and relative power in regions of the \(D(a,w),\eta\) plane under \(H_{3}\) can be exploited to identify alternatives to \(H_{0}\) that could have plausibly generated \(\mathbf{p}\). Rather than selecting a particular \(\kappa\) value, one can consider all possible \(\kappa\) values simultaneously, compute \(chi\left(\mathbf{p};\kappa\right)\) for each, and record

\[\kappa_{\min}=\operatorname*{arg\,min}_{\kappa\in\left[0,\infty\right)}chi\left(\mathbf{p};\kappa\right) \tag{23}\]

As each \(\kappa\) value is associated with a particular centrality quotient, each \(\kappa\) identifies a particular region of relative power against others in the \(D(a,w),\eta\) plane under \(H_{3}\). At the same time, \(\kappa_{\min}\) reports the value of \(\kappa\) which produces the smallest pooled \(p\)-value for \(\mathbf{p}\) and therefore suggests the \(\kappa\) value where evidence against \(H_{0}\) is the strongest relative to other \(\kappa\) values. As stronger evidence leads to more frequent rejection and higher power when \(H_{0}\) is false, \(\kappa_{\min}\) therefore links the evidence present in \(\mathbf{p}\) to a region in \(D(a,w),\eta\) if we assume \(H_{3}\) truly generated the data with \(f=Beta(a,1/w+a(1-1/w))\).

### Non-increasing beta densities

We begin with a demonstration of the sweep of \(\kappa\) values for previously explored cases by generating curves of \(chi\left(;\kappa\right)\) by \(\kappa\) for different densities under \(H_{4}\). In each of the following, samples of 100 i.i.d. \(p\)-values from different beta distributions are generated independently 1,000 times and \(chi\left(;\kappa\right)\) is computed for a sequence of \(\kappa\) values chosen uniformly on the log scale. Let the \(i^{\text{th}}\) sample be \(\mathbf{p}_{i}\) and the pooled \(p\)-value computed using parameter \(\kappa_{j}\) for \(\mathbf{p}_{i}\) be \(\chi_{ij}=chi\left(\mathbf{p}_{i};\kappa_{j}\right)\). To provide context for \(\chi_{ij}\), a larger simulation of 100,000 samples was generated under \(H_{0}\) and the minimum of \(chi\left(\mathbf{p}_{i};\kappa_{j}\right)\) over the same sequence of \(\kappa_{j}\) values was recorded.
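Such a sweep is straightforward to script. The sketch below reuses the hypothetical chi_pool from earlier and records both the curve and the \(\kappa_{\min}\) of Equation (23) over an illustrative log-uniform grid of \(\kappa\) values.

```r
# A minimal sketch of the kappa sweep: evaluate chi(p; kappa) over a
# log-uniform grid and record kappa_min, the minimizer in Equation (23).
sweep_kappa <- function(p, log_kappa = seq(-8, 8, length.out = 65)) {
  kappas <- exp(log_kappa)
  pooled <- vapply(kappas, function(k) chi_pool(p, k), numeric(1))
  list(kappa = kappas, pooled = pooled, kappa_min = kappas[which.min(pooled)])
}

# Example: 100 p-values from Beta(0.5, 1); the minimum should fall near
# kappa = 2, matching the discussion surrounding Figure 13 below.
res <- sweep_kappa(rbeta(100, 0.5, 1))
res$kappa_min
```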
Figure 12 displays the median and 0.5, 0.95, and 0.99 central quantiles for \(f=Beta(1,1)\) (equivalent to the null case) alongside the \(Beta(1,1)\) density in Figure 12(a). For reference, three dashed red lines at the observed 0.05, 0.01, and 0.001 quantiles of the minimum pooled \(p\)-value over the 100,000 simulated null cases have been added. This case shows a flat median curve and flat central quantiles which are all slightly above the corresponding minimum quantiles. The null case performs as expected: \(\kappa_{\min}\) is distributed uniformly over the range of \(\kappa\) values and would produce values below the null quantiles at the expected proportions.

A contrasting case is shown in Figure 13, which uses the same layout for an identical simulation carried out when \(f=Beta(0.5,1)\). Displaying the median and the same central quantiles as before, the curve has a unique minimum at \(\kappa=2\), a right end lower than the left end, and a generally lower value across its entire length. If one of these curves was observed in practice, \(\kappa_{\min}\approx 2\) would be chosen and larger \(\kappa\) values may not be fully ruled out. This conclusion would be correct: under \(H_{4}\) with \(f=Beta(0.5,1)\) the UMP pooled \(p\)-value is \(HR\left(\mathbf{p};w\right)\) with \(w=(1-a)/(b-a)=1\) and \(HR\left(\mathbf{p};1\right)=chi\left(\mathbf{p};2\right)=Fis(\mathbf{p})\). Furthermore, the power investigations in Section 6.2 demonstrate that \(chi\left(\mathbf{p};2981\right)\approx Sto(\mathbf{p})\) is nearly as powerful as the UMP for all \(w\) under \(H_{4}\). This confirms empirically that the level of the curve of \(chi\left(\mathbf{p};\kappa\right)\) over \(\kappa\) corresponds to the relative power of pooled \(p\)-values in \(chi\left(\mathbf{p};\kappa\right)\) for this beta distribution.

Of course, this conclusion should be extended to \(H_{3}\), and so a mixture of \(Beta(0.1,1)\) and \(Beta(1,1)\) distributions is considered. The first distribution provides strong evidence against the null hypothesis while the second corresponds to the uniform distribution, and so contains no evidence against the null. Mixing these such that the probability of drawing from \(Beta(0.1,1)\) is \(0.05\) and the probability of drawing from the null is \(0.95\) places us in the \(D(a,w),\eta\) space at \((0.3,0.05)\). Simulating this as for the null and \(Beta(0.5,1)\) cases and displaying the central quantiles of \(\chi_{ij}\) alongside the mixture density gives Figure 14. The central quantiles are more variable for this case than for the unmixed densities because the probability of generating any \(p\)-values from \(Beta(0.1,1)\) is small, and so many samples would have included only null \(p\)-values. Nonetheless, the median has a unique minimum near \(\kappa=0.01\), and is generally lower for small \(\kappa\) than large \(\kappa\). Considering the coordinates of this case in the \(D(a,w),\eta\) plane, this is completely consistent with the earlier investigations of power, where small \(\kappa\) values were most powerful for strong evidence concentrated in a few tests. In practice, seeing the median curve would cause us to suspect this case correctly.

Figure 12: The (a) density and (b) central quantiles and median by \(\kappa\) for the null case. All lines are flat and above the null quantiles from the larger simulation, as expected.

Figure 13: The (a) density and (b) central quantiles and median by \(\kappa\) for \(p\)-values generated identically and independently from a Beta(0.5, 1) distribution.
The minimum around \(\kappa=2\) corresponds to the UMP.

### Identifying a region of alternative hypotheses

While these one-to-one comparisons between densities and \(\kappa\) curves help to demonstrate the link between \(\kappa_{\min}\) and the alternative densities used to generate \(\mathbf{p}\), they are of limited practical use on their own. Given \(\mathbf{p}\), and supposing we generate such a curve of \(chi\left(\mathbf{p};\kappa\right)\) by \(\kappa\), we would need to sort through an enormous number of density-curve pairs to identify plausible alternatives corresponding to the curve obtained. Instead, consider a more automated approach. Given a collection of \(p\)-values, the approach generates a curve by sweeping parameter values of \(\kappa\), identifies the minimum \(\kappa\) values (or any below a particular threshold), and maps these back onto the plane of strength and prevalence depicted in, for example, Figure 11. This requires a detailed guide of where each \(\kappa\) value is most powerful in the \(\eta,D(a,w)\) plane so that \(\kappa_{\min}\) or the range of significant \(\kappa\) values can be placed accurately.

Therefore, a simulation was carried out over 20 \(\ln(w)\) values evenly spaced from \(-6\) to \(0\), \(\eta\) values from \(0\) to \(1\) in increments of \(1/80\), and 80 \(\ln D(a,w)\) values evenly spaced between \(-5\) and \(5\). For each combination, 10,000 samples of 80 \(p\)-values were generated, with \(80\eta\) of them following the beta distribution specified by \(\ln(w)\) and \(\ln D(a,w)\) and \(80(1-\eta)\) following the uniform distribution.11

Footnote 11: This resolution was not the only one tried; similar experiments were carried out for \(M=10\), \(20\), and \(40\), and the only impact of increasing \(M\), the number of steps in \(\eta\), and the number of steps in \(\ln D(a,w)\) was an increase in the resolution of the same patterns. This suggests that these patterns do not depend on the sample size.

Each of the 10,000 samples then had pooled \(p\)-values computed over a sweep of 65 \(\ln\kappa_{i}\) values evenly spaced from \(-8\) to \(8\), and the power at level \(\alpha=0.05\) was computed for the rejection rule \(chi\left(\mathbf{p};\kappa_{i}\right)\leq 0.05\). The \(\kappa_{i}\) with the greatest power for each combination corresponds to \(\kappa_{\min}\) for that combination: rejection is determined by thresholding the pooled \(p\)-value, so higher power implies a distribution of the pooled \(p\)-value with a lower location, which in turn produces a quantile curve equal to or lower than all others. Bivariate discretized Gaussian smoothing was applied to each \(chi\left(\mathbf{p};\kappa_{i}\right)\) power surface in \(\eta,D(a,w)\) for each \(w\) value in order to obtain a smoothed estimate of the power surface minimally impacted by random binomial noise. This was justified because none of the investigations carried out indicated discontinuities in the power or distribution of \(p\)-values by \(\kappa_{i}\). For each \(w\), the power surfaces of every \(chi\left(\mathbf{p};\kappa_{i}\right)\) in \(\eta,D(a,w)\) were then compared point-wise to the maximum among them.

Figure 14: The (a) density and (b) central quantiles and median by \(\kappa\) for the mixture 0.05Beta(0.1, 1) + 0.95Beta(1, 1). Small \(\kappa\) values provide the smallest pooled \(p\)-values, and hence the greatest power at detecting this alternative.
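One cell of this simulation can be sketched as follows — a minimal sketch assuming the mixture form of \(H_{3}\) described above and reusing the hypothetical chi_pool from earlier; the helper names are illustrative, with pBetaH3 in the PoolBal package playing the analogous role.

```r
# Hypothetical H_3 sampler: a proportion eta of the M p-values follows
# a Beta(a, b) alternative and the remainder follows the uniform null.
r_h3 <- function(M, eta, a, b) {
  n_alt <- round(M * eta)
  c(rbeta(n_alt, a, b), runif(M - n_alt))
}

# Monte Carlo power of the rule chi(p; kappa) <= alpha at one setting.
power_chi <- function(kappa, M, eta, a, b, alpha = 0.05, n_sim = 10000) {
  mean(replicate(n_sim, chi_pool(r_h3(M, eta, a, b), kappa) <= alpha))
}
```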
The point-wise comparison is motivated by the simpler case shown in Figure 13, as several \(\kappa_{i}\) values are often equally powerful for a given setting. Specifically, the comparison was a binomial test of the difference in proportions using a normal approximation at 95% confidence. A surface was deemed equal to the maximum power at that point if the test failed to reject the null hypothesis of equal proportions.12 For each \(\kappa_{i}\) and \(w\), all of this pre-processing gave a matrix in \(\eta\) and \(\ln D(a,w)\) indicating whether \(chi\left(\mathbf{p};\kappa_{i}\right)\) achieved the maximum power for that combination. To produce a final summary in \(\eta\) and \(\ln D(a,w)\) alone, these indicators were summed over \(w\) for each \(\kappa_{i}\). Finally, masks were added in the top right and bottom left corners, where all methods are equally powerful with powers \(1\) and \(0.05\) respectively, to make the meaningful patterns more visible.

Figures 15(a) - (d) display these sums (counts of cases in \(w\) where \(\kappa_{i}\) achieved maximum power) for several \(\kappa_{i}\) in a given \(\eta,\ln D(a,w)\) region, with guide histograms on each row and column added to quickly indicate the relative marginal frequencies. Each plot is therefore rich with both marginal and joint information on the regions where a particular \(\kappa_{i}\) is most powerful. Consistent with previous investigations, this map shows that the regions where the small \(\kappa\) values are most powerful correspond to small \(\eta\) values. The mode of the histogram of \(\eta\) values increases steadily in \(\kappa\) until it is near one when \(\kappa=e^{8}\). For \(\kappa=e^{-4}\), the pooled \(p\)-value is only most powerful for settings with \(\eta<0.1\), such that a minimum of the parameter curve below \(e^{-4}\approx 0.02\) indicates a small minority of tests are significant.

Given that these plots display counts of cases where a particular \(chi\left(\mathbf{p};\kappa\right)\) is most powerful, choosing to select \(w\) evenly-spaced on the log scale inadvertently places greater weight on small values of \(w\) and under-samples large values in order to achieve more complete coverage of \(D(a,w)\). Even for moderate \(w\), floating point representation limits prevent the computation of \(a\) and \(b\) for the beta distributions with large KL divergences. This complete coverage of the strength of evidence is one of two possible perspectives, the other being an even exploration of the parameter space. For this parameter-based perspective, the exact same procedure was performed but with 20 \(w\) values selected at even increments from 0.05 to 1. Figures 16(a) - (d) present the same heatmaps as Figure 15 from this perspective.

There are some noteworthy differences between this and the first set of heatmaps. The bias towards smaller proportions and stronger evidence in the first is quite clear when it is compared to the second, which generally shows similar shapes but more evenly distributed saturation across them. This leads to changes in the regions suggested for a particular \(\kappa_{\min}\), but these are typically minor. The biggest difference occurs for large \(\kappa\), where the bias towards small values in Figure 15 obscures all of the internal variation in the middle top that can be seen in Figure 16.

Figure 15: Likely alternatives for a range of \(\kappa\) values. For full coverage of \(D(a,w)\), \(w\) was chosen uniformly on a log scale.
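The tie-detection step can be sketched as below — a hypothetical two-proportion comparison with a normal approximation; the exact statistic used in the study may differ in detail.

```r
# Hypothetical version of the tie test: treat two estimated powers as
# binomial proportions from n_sim trials each and call them tied when a
# normal-approximation test at 95% confidence fails to separate them.
tied_with_max <- function(p_hat, p_max, n_sim = 10000) {
  se <- sqrt(p_hat * (1 - p_hat) / n_sim + p_max * (1 - p_max) / n_sim)
  abs(p_hat - p_max) <= qnorm(0.975) * se
}
```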
Without these plots, an analyst would be left trying to identify alternatives from a density estimate alone. Besides conveying, via the histogram along the right side of each plot, information about the prevalence of evidence comparable to a density estimate, these plots of plausible alternatives give information about the likely strength of evidence and about regions for the combination of both. By leveraging the links between the centrality quotient, \(\kappa\), and the distribution and strength of evidence in \(\mathbf{p}\), these maps provide richer and clearer information.

Figure 16: Likely alternatives for a range of \(\kappa\) values. \(w\) was chosen uniformly for these images, leading to worse coverage of \(D(a,w)\) but more appropriate coverage of \(w\).

### Selecting a subset of tests

Perhaps the most important components of the alternative heatmaps presented in Figures 15 and 16 are the histograms along the right margin that indicate the likely prevalence of evidence in the data. Once \(\kappa_{\min}\) has been determined using a sweep of \(\kappa\) values, and a plausible set of alternatives has been identified using these alternative heatmaps, the corresponding range of proportions can be used to identify a subset of tests of interest. If false positives are less problematic to the analysis than false negatives, the upper bound of this range might be taken, with the lower bound taken if the opposite is true. In either case, suppose the chosen proportion is \(\eta^{*}\); then the \(M\eta^{*}\) largest values of \(F^{-1}(1-p_{i};\kappa_{\min})\) are the tests contributing most to the small value of \(chi\left(\mathbf{p};\kappa_{\min}\right)\) and so are the tests of greatest interest that can be selected for further investigation.

### Centrality in other beta densities

Until now, it was always assumed that \(p\)-values follow a non-increasing beta density when the null hypothesis is false. This is a reasonable assumption: many statistical tests have this property for the rejection rule thresholding the \(p\)-value at \(\alpha\). Relaxing this assumption, however, allows an exploration of how \(chi\left(\mathbf{p};\kappa\right)\) behaves for a broader variety of densities and whether centrality is still a useful concept under these other distributions of non-null \(p\)-values. First, consider the case of a strictly increasing density under \(H_{4}\). Whether or not this case is interesting is a matter of opinion, as under the convention that small \(p\)-values are evidence against \(H_{0}\), such a density produces even less evidence against \(H_{0}\) than the null distribution itself. It would be reasonable to expect, then, that this case produces only very large \(chi\left(\mathbf{p};\kappa\right)\) for all \(\kappa\) values. Following the same procedure as Section 7.1, this expectation is tested for \(Beta(1,0.5)\), resulting in the curve and density displayed in Figure 17. As expected, this setting gives only large \(chi\left(\mathbf{p};\kappa\right)\) values for every \(\kappa\). There is a slight dip toward smaller pooled \(p\)-values at small \(\kappa\), likely a result of the small \(p\)-values that still occur occasionally for this density, but it barely crosses the null reference lines. Perhaps a more realistic case is a density that rarely, if ever, produces minimum \(p\)-values small enough to warrant rejection alone, but does tend to produce far more \(p\)-values less than \(0.5\) than expected under the null hypothesis.
For such a setting, the previous investigations into centrality suggest that \(\kappa_{\min}\) should be large. An example is \(f=Beta(2,4)\) under \(H_{4}\), shown in Figure 18. Once again, this plot matches the expectation reasoned from centrality, despite the relaxation of the assumptions used to motivate centrality. Both of these examples suggest that the concepts of central and marginal rejection may have use beyond non-increasing densities, and provide a promising framework for future investigation.

Figure 17: The (a) density and (b) central quantiles and median by \(\kappa\) for \(p\)-values from a Beta(1, 0.5) distribution. The pooled \(p\)-value is generally large compared to the empirical curve minimum quantiles.

Figure 18: The (a) density and (b) central quantiles and median of \(p\)-values following a Beta(2, 4) distribution. The absence of very small \(p\)-values and the bias towards smaller ones mean large \(\kappa\) values are most powerful.

## 8 The PoolBal package

There is no lack of packages available to pool \(p\)-values in \(\mathsf{R}\). The most recent of these, poolr (Cinar and Viechtbauer, 2021), lists 8 others, all providing piecemeal coverage of the pooled \(p\)-value proposals.13 Rather than re-implement the functionality provided by these packages, the PoolBal package aims primarily to support the evaluation of the central rejection level, marginal rejection level, and centrality quotient for these and any future packages which pool \(p\)-values. As they both allow some tuning of centrality, these core functions are supported by implementations of \(chi\left(\mathbf{p};\kappa\right)\) and \(HR\left(\mathbf{p};w\right)\), along with functions to evaluate the Kullback-Leibler divergence for general densities and to compute it for the beta density in particular. This is meant to make the adoption of the framework provided in this work as simple as possible.

Footnote 13: These are Dewey (2022); Zhang et al. (2020); Wilson (2019); Yi and Pachter (2018); Poole and Gibbs (2015); Dai et al. (2014); Schroder et al. (2011); Zhao (2008). Most of these packages cover a subset of pooling functions or implement adjustments for dependence rather than attempting to be the complete package for pooling \(p\)-values.
Briefly summarized, the functionality of PoolBal includes:

- klDiv, betaDiv: compute the Kullback-Leibler divergence for arbitrary densities and for the uniform-to-beta case, respectively
- findA: invert a given Kullback-Leibler divergence and most powerful test parameter \(w\) to identify the unique beta parameter \(a\) that corresponds to this setting
- pBetaH4, pBetaH3: helpers to generate \(\mathbf{p}\) under \(H_{4}\) and \(H_{3}\)
- estimatePc, estimateProb, estimateQ: wrappers for uniroot from base that estimate the central rejection level, the marginal rejection level at \(b\), and the centrality quotient for an arbitrary function
- chiPool: an implementation of \(chi\left(\mathbf{p};\kappa\right)\)
- chiPc, chiPr, chiQ: functions to compute the central rejection level, marginal rejection level, and centrality quotient of \(chi\left(\mathbf{p};\kappa\right)\) using Equations (20), (21), and (22)
- chiKappa: a wrapper for uniroot from base that inverts a given centrality quotient to give the \(\kappa\) value in \(chi\left(\mathbf{p};\kappa\right)\) with the corresponding quotient
- hrStat, hrPool: compute \(l_{HR}(\mathbf{p};w)\) and \(HR\left(\mathbf{p};w\right)\) for \(\mathbf{p}\), with the \(p\)-value determined empirically using simulated null data
- hrPc, hrPr, hrQ: functions to compute the central rejection level, marginal rejection level, and centrality quotient of \(HR\left(\mathbf{p};w\right)\) using simulation and uniroot
- altFrequencyMat: a function allowing access to a summarized version of the simulation results from Section 7.2
- marHistHeatMap: a function which generates heatmaps with marginal histograms, that is, visualizations such as those in Figure 16

The package can be found on the author's GitHub.

## 9 Conclusion

When presented with \(M\) \(p\)-values from independent tests of hypotheses \(H_{01},\ldots,H_{0M}\), a natural way to control the family-wise error rate (FWER) is by pooling these \(p\)-values using a function \(g(\mathbf{p})\). If \(g(\mathbf{p})\) is constructed using the sum of quantile transformations or the order statistics of the \(p\)-values, then the rejection rule \(g(\mathbf{p})\leq\alpha\) controls the FWER at \(\alpha\). Selecting between the many possible \(g(\mathbf{p})\) requires the choice of an alternative hypothesis from the telescoping set \(H_{1}\supset H_{2}\supset H_{3}\supset H_{4}\) in order to determine their powers against these alternatives. \(H_{3}\) and \(H_{4}\), though the most restrictive, still require the choice of \(\eta\), the prevalence of non-null \(p\)-values in \(\mathbf{p}\), and \(f\), the distribution of these non-null values. An obvious choice for \(f\) is the beta distribution restricted to be non-increasing, as this biases non-null \(p\)-values lower than null \(p\)-values. By using \(\eta\) and the Kullback-Leibler divergence of \(f\) from the uniform distribution, both the prevalence and strength of non-null evidence can be measured. If all the evidence is non-null, i.e. \(\eta=1\), the pooled \(p\)-value based on \(l_{w}(\mathbf{p})=w\sum_{i=1}^{M}\ln p_{i}-(1-w)\sum_{i=1}^{M}\ln(1-p_{i})\) is uniformly most powerful (UMP) but is sensitive to the specification of its parameter \(w\in[0,1]\). Incorrectly choosing this parameter, i.e. selecting a value that does not match the true generative distribution, costs power when testing against the alternative hypothesis \(H_{4}\). When \(\eta\neq 1\), both the prevalence and strength of non-null evidence dictate the most powerful choice of \(w\).
Small values of \(w\) are more powerful for weak evidence spread among all tests, while large values are better at detecting strong evidence in a few tests. This reflects a more universal pattern in pooled \(p\)-values and motivates a new paradigm for selecting and analyzing them. The marginal level of rejection at \(\alpha\), the largest individual \(p\)-value that leads to rejection at \(\alpha\) when all other \(p\)-values are \(1\), and the central
2303.16738
Application of Distributed Fiber Optic Strain Sensors to LMQXFA Cold Mass Welding
The future High Luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN will include the low-beta inner triplets (Q1, Q2a/b, Q3) for two LHC insertion regions. The Q1, Q3 components consist of eight 10 m-long LMQXFA cryo-assemblies fabricated by the HL-LHC Accelerator Upgrade Project. Each LMQXFA cold mass contains two Nb3Sn magnets connected in series. A stainless-steel shell is welded around the two magnets before the insertion into the cryostat. There is a limit on how much coil preload increase induced by the shell welding is allowed. Distributed Rayleigh backscattering fiber optic sensors were used for the first time to obtain a strain map over a wide area of a Nb3Sn magnet cold mass shell. Data were collected during welding of the first LMQXFA cold mass and the results confirm that the increase of the coil pole azimuthal pre-stress after welding does not exceed requirements.
M. Baldini, S. Krave, R. Bossert, S. Feher, T. Strauss, A. Vouris
2023-03-29T14:41:34Z
http://arxiv.org/abs/2303.16738v1
# Application of Distributed Fiber Optic Strain Sensors to LMQXFA Cold Mass Welding

###### Abstract

The future High Luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN will include the low-beta inner triplets (Q1, Q2a/b, Q3) for two LHC insertion regions. The Q1, Q3 components consist of eight 10 m-long LMQXFA cryo-assemblies fabricated by the HL-LHC Accelerator Upgrade Project. Each LMQXFA cold mass contains two Nb3Sn magnets connected in series. A stainless-steel shell is welded around the two magnets before the insertion into the cryostat. There is a limit on how much coil preload increase induced by the shell welding is allowed. Distributed Rayleigh backscattering fiber optic sensors were used for the first time to obtain a strain map over a wide area of a Nb3Sn magnet cold mass shell. Data were collected during welding of the first LMQXFA cold mass and the results confirm that the increase of the coil pole azimuthal pre-stress after welding does not exceed requirements.

ACEC2022-3LPo1A-12

## Introduction

The USA Accelerator Upgrade for the HiLumi-LHC (US-HL-LHC AUP) project is fabricating ten Q1/Q3 cold masses for the interaction regions of the High Luminosity Large Hadron Collider (HL-LHC) [1]. The HL-LHC interaction region magnet triplet consists of three optical elements: Q1, Q2, and Q3. Q1/Q3 cryo-assemblies contain two MQXFA quadrupole magnets. The LMQXFA cold mass is the He pressure vessel assembly containing two 4.2 m long Nb3Sn MQXFA superconducting magnets. A stainless-steel (SS) shell is used as a pressure vessel [2, 3]. The two half shells are simultaneously welded together on both sides of the MQXFA magnets. The interference between the stainless-steel shell and the magnet Aluminum (Al) shells needs to be controlled. Due to the fragile nature of the Nb3Sn conductor, there is a limit on how much pre-stress generated by the SS shell welding the MQXFA magnet can tolerate. The MQXFA Magnet Interface Specification [3] states how much stress is acceptable to prevent damaging the coil [4]. A maximum of 3.2 MPa azimuthal coil pre-load increase is allowed in the pole. This value has been determined considering that the difference between the inner circumference of the SS shell and the outer circumference of the Al shell of the magnet needs to be \(\geq\)-0.2 mm. This corresponds to a SS vessel average azimuthal stress of around 15.5 MPa (i.e., 21 MPa for the outer circumference of the SS shell and 10 MPa for the inner circumference of the same shell) [5]. In case of weld repairs, a maximum of 8 MPa coil pre-load increase in the pole and a SS vessel stress of 40 MPa are locally allowed [5]. The allowed SS shell stress has been calculated using finite element analysis from the difference between the stainless shell inner diameter circumference and the magnet outer diameter circumference. To prevent a high coil preload, the design incorporates a 2 mm thick and 15 mm wide shim tacked to the inside surface of the cold mass shell, located at the top and bottom center and running through the full length of the magnets. This shim allows the stainless shell to bend at the gaps rather than stretch during the longitudinal welding and thus reduces the load on the shell, resulting in a lower preload on the coils [5]. A detailed discussion can be found in References [5-9]. After the welding process is completed, the weld shrinkage and shell stretching (comparing measured length values before and after welding) are measured. Actual strain measurements are also performed.
This manuscript is focused on presenting and discussing the strain measurement data collected during the welding process for the first LMQXFA cold mass. A high-definition distributed fiber sensor was used to measure the strain variations during welding and verify that the coil preload increase meets the requirements [5]. The working principle of the optical sensor exploits the Rayleigh backscattering due to natural imperfections in the fiber. These defects act as scattering centers for elastic Rayleigh scattering. Some fraction of the scattering events results in backscattering and is used to calculate the spectral shift as a function of position along the fiber. The optical system required to measure the spectral shift is based on Swept Wavelength Interferometry (SWI) [10]. The elastic Rayleigh spectrum can be measured, and it is expected to be modified by strain and temperature variations [11]. Such fibers have been demonstrated to be very promising tools for quench detection in high temperature superconducting magnets. For example, fibers were integrated into a REBCO conductor architecture and demonstrated strain sensing capabilities as well as thermal perturbation detection and quench localization with higher spatial resolution than voltage taps [12-13]. In this manuscript, a novel approach to measure strain on a Nb3Sn magnet using distributed fiber optics was developed and tested for the first time. Therefore, the goals of this study are twofold: verify that the strain increase on the SS-shell during the welding of the LMQXFA cold mass meets requirements, and demonstrate the novel use of distributed optical strain sensors to generate strain maps of Nb3Sn superconducting magnets.

## I Fiber installation on the LMQXFA SS-shell

A 10 m long high-definition fiber optic sensor from LUNA INNOVATION has been employed for the welding test. The diameter is 155 \(\upmu\)m and the fiber is coated with polyimide. A grid of 700 x 400 mm was designed according to the fiber length, as shown on the top left panel of Fig. 1. A one-to-one model was used to draw the pattern on the upper quadrant of the SS shell (see top panel of Fig. 1). The fiber was installed by transferring the model drawing onto the shell and connecting the dots on the drawing (Fig. 1 top and bottom left panels). The fiber was then glued with epoxy, as displayed on the bottom right panel of Fig. 1. The SS shell radius is 310 mm, which corresponds to a 480 mm arc length. Therefore, the fiber sensor grid covers most of the shell quadrant. Its longitudinal position is close to the center of the cold mass. The welding seam is 100 mm below the grid, as shown on the bottom right panel of Fig. 1. Specific positions along the fiber were identified by touching the fiber with a Q-tip before collecting the data, as suggested by LUNA procedures. Those positions are the ones labeled by increasing numbers in the grid design (see the upper panel of Fig. 1) and allow one to identify in which part of the fiber a particular value of strain is measured. These fiber sensors can acquire data with several spatial resolutions. During the welding of the first LMQXFA cold mass, a 0.65 mm spatial pitch was used together with a sample rate of 0.52 samples/second. This is equivalent to having 15000 measuring points along the 10 m optical sensor. A tare was collected before the welding process and used to normalize the final strain values. The spectrometer provides real-time data as a function of fiber length and time.
On the top panel of Fig. 2, an example of a real-time data set is reported. The raw data consist of strain variations measured along the entire length of the fiber sensor at a specific time. An easier-to-visualize plot of the same data set is made using the fiber grid to produce a strain color map, as displayed in Fig. 2b.

Fig. 1: Fiber grid design (upper panel), where the red line marks the section from which the data plotted in Fig. 3 are taken; grid drawing on the shell (bottom left); fiber installation and gluing (bottom right).

Fig. 2: (a): Example of raw strain data taken at a specific point in time; (b): strain variation measured along the fiber grid.

Fig. 3: Data collected over a longitudinal section of the grid underlined with a red line in Fig. 1. The section is around 33 cm long. The axial strain variation measured as a function of time is displayed here. Each curve corresponds to a data point taken on the selected section of the sensor. Strain measurements are taken every 0.65 mm; there are around 513 data points.

The actual welding of the LMQXFA cold mass took place at the y = -100 mm location, starting from the left side and moving toward the right. An initial strain wave was observed from the welder passing the gage area. As the shell cooled down, the strain redistributed around the skin. This is evident looking at the strain data measured along the straight section of the grid that is closest to the welder passage (see Fig. 3). Each data set corresponds to a spatial position along the fiber line connecting point 23 to point 2 on the grid displayed in Fig. 1. As soon as the welder got closer to the grid area, strain was observed to increase, reach a maximum and then decrease as the area thermalized. Indeed, the presence of a strain peak is consistent with the rise of temperature due to welding.

## II Strain map

The fiber grid was designed with isometric triangles to exploit one of the special cases of strain rosette layouts [14, 15]. Similarly to what is done with strain gauges, the 60-degree strain rosette equations were used to calculate the strain in each direction for each fiber crossing on the grid. In this case the x strain direction is the 0 orientation of the fiber. The principal strain was obtained together with the strain along the x (axial) and y (azimuthal) directions. Each direction was then interpolated over a rectangular grid, generating a strain map of the entire area covered by the grid. It is worth noting that this interpolation is valid only for the area bounded by points in which strain values exist for each direction. There was an initial concern about the strain measured at crossover regions, but the data did not show any evidence of discontinuities. The fiber sensor grid covers a 100,000 times larger area than a single strain gauge, making it a very powerful and reliable tool. A strain map movie of the entire welding process was obtained using this method. In Fig. 4 three strain map snapshots are displayed (each contour plot corresponds to a 20 \(\upmu\)e variation). The top one provides a strain representation at the very beginning of the welding, the second one a representation of the strain in the middle of the welding process, and the bottom one a strain representation at the end of the welding process. The temperature effect due to the welder passage can be easily identified since the strain variation observed in the bottom part of the grid is much more prominent in the middle of the welding process, and it is almost gone at the end of the welding.
On the other hand, strain appears to be consistently maximum on the upper part of the grid, which is closer to the top of the SS shell. This reflects the fact that the MQXFA magnets inside the shell do not have a perfectly circular shape, but something closer to a diamond-like shape, making the upper area of the magnet the first one to enter into contact with the SS-shell. Before presenting the quantitative results, it is important to underline that these strain data reflect both the bending and the stretching effects on the SS-shell. The only way to decouple bending from stretching effects is to have an equivalent grid installed on the inner circumference of the shell. On the other hand, the shim layer installed between the SS-shell and the Al cylinder of the MQXFA magnet favors the SS shell bending at the gaps versus stretching during the longitudinal welding. For this reason, it is reasonable to believe that the strain values measured during this test are mostly due to bending and thus are overall overestimated.

Fig. 4: Azimuthal strain map snapshots of the SS vessel during the second welding passage.

Fig. 5: Azimuthal, axial and shear average strain measured during the first and second welding passages.

The average azimuthal (Y-direction on the strain map displayed in Fig. 4), axial (X-direction) and shear strain was calculated using all the points observed in the strain map. Three welding passages were performed in a span of three days. The tare strain values collected at the beginning of the process were used to normalize the datasets. In Fig. 5 the average strain versus recorded data points is displayed for the first two welding passages. The fiber was broken before disconnecting it from the spectrometer after the last welding passage, and the last dataset was lost. The tare data collected before the third passage were normalized using the pre-welding tare and used as a last data point. Axial and shear strain are observed to peak and then quickly decrease. The azimuthal strain peak is less pronounced, and the strain is overall increasing, as expected. The average axial and azimuthal strain values are observed to decrease overnight between the first and the second passage, suggesting that it takes some time for the SS shell to thermalize and adjust. The final values reached after the second passage are very close to those observed after the first one. This behavior suggests that the last strain data point values used to calculate the shell stress are a good representation of the actual strain on the shell at the end of the third passage. We measured an average axial strain of -14.3207 \(\upmu\)e, an azimuthal strain of 78.7379 \(\upmu\)e and a shear strain of -39.1052 \(\upmu\)e. The average stress was calculated using 200 GPa for the stainless-steel Young modulus [15]. We found that the average azimuthal stress generated on the SS outer shell during welding is around 15.7 MPa, which is very close to what has been computed. However, this measured stress value is conservative with respect to the computational data. The effect of bending cannot be quantified. Moreover, the computed SS-shell stress value (15.5 MPa) is the average between the outer (21 MPa) and inner (10 MPa) SS shell stress values [6]. In conclusion, the SS-shell welding has been performed according to requirements. This test demonstrated that the use of a sensor grid is more effective than the use of traditional strain gauges to validate strain during welding.
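As a rough illustration of the reduction and stress estimate described above — a minimal sketch assuming the standard 0/60/120-degree rosette relations, one special case of the layouts cited in [14, 15]; the function name and the uniaxial estimate \(\sigma=E\varepsilon\) are illustrative only:

```r
# A sketch of a 0/60/120-degree strain rosette reduction; e0, e60, e120
# are strains measured along the three fiber directions (microstrain).
rosette_60 <- function(e0, e60, e120) {
  list(
    axial     = e0,                          # x direction, 0-degree fiber
    azimuthal = (2 * (e60 + e120) - e0) / 3, # y direction
    shear     = 2 * (e60 - e120) / sqrt(3)   # engineering shear strain
  )
}

# Uniaxial stress estimate as in the text: sigma = E * epsilon with
# E = 200 GPa; the reported azimuthal strain gives about 15.7 MPa.
200e9 * 78.7379e-6  # Pa
```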
Indeed, it was not only possible to evaluate the average strain during welding but also to obtain a strain map of a very large area, providing more precise and realistic information about the strain variation on the SS vessel. A stress validation plan has been established for the future pre-series and series LMQXFA cold masses that are going to be fabricated. This plan consists of installing two fiber grids on each cold mass: a 20 m sensor on the top shell quadrant that covers a 1400 mm x 500 mm area and a 10 m sensor on the bottom shell quadrant (700 x 500 mm). This is because blocks from the welding tooling are in contact with the bottom shell.

## III Conclusion

A novel way to use optical sensors, based on distributed fiber optics, was developed to obtain a strain map over a large area of the stainless-steel shell of Nb\({}_{3}\)Sn magnets. A fiber grid was designed and successfully tested to measure the strain variations of the first LMQXFA SS shell during welding. The measured stress allows us to infer compliance with specifications and validate the welding process requirements as well as the shell design. This technique presents several advantages compared to strain gauges. Indeed, a 100 thousand times larger area was covered with a single data acquisition, making the measured values more representative in describing the actual strain on the SS vessel. This novel way to use distributed fiber is very promising and could be employed to obtain, for the first time, a strain map of a superconducting Nb3Sn coil during magnet cooldown, powering and training.
2310.19533
Role of wave scattering in instability-induced Langmuir circulation
We consider a classical problem about the dynamic instability that leads to Langmuir circulation. The problem statement assumes that there is initially a wind-driven shear flow and a plane surface wave propagating in the direction of the flow. The unstable mode is a superposition of i) shear flow and ii) surface waves, both modulated in the horizontal spanwise direction, and iii) circulation made up of vortices forming near-surface rolls whose axes are aligned along the shear flow streamlines and whose transverse size corresponds to the modulation period. Usually, the Langmuir circulation is understood as the vortical part of the mode slowly varying in time, which is the combination of the first and the last flows. The novelty of our approach is that we, firstly, take into account the scattering of the initial surface wave on the slow current. Second, we find that the interference of the scattered and the initial waves generates a Stokes drift modulated in the same direction. Third, we establish the subsequent amplification of the circulation by the vortex force created by the nonlinear interaction of the initial shear flow and the modulated part of the Stokes drift. S. Leibovich & A.D.D. Craik previously showed that the third part of the mechanism could maintain the Langmuir circulation. We calculate the growth rate, which is approximately half that obtained by A.D.D. Craik. The vertical structure of the circulation in the mode consists of two vortices, which corresponds to the next mode in Craik's model. Considering the wave scattering, we describe the fast-wave motion as a potential flow with a relatively weak vortical correction. Application of the technique can be extended to other flows where fast oscillating surface waves coexist with a slow current.
S. S. Vergeles, I. A. Vointsev
2023-10-30T13:36:59Z
http://arxiv.org/abs/2310.19533v2
# Langmuir instability: wave scattering on circulation flow

###### Abstract

We consider a classical problem about the dynamic instability that leads to Langmuir circulation. The problem statement assumes that there is initially a wind-driven shear flow and a plane surface wave propagating in the direction of the flow. The unstable mode is a superposition of the shear flow and the surface wave, both modulated in the horizontal spanwise direction, and the circulation, that is, vortices in the form of near-surface rolls with axes aligned along the shear streamlines and of transverse size corresponding to the modulation period. The novelty of our approach is that we account for the scattering of the initial surface wave on the slow current component of the unstable mode, which produces the modulated wave; the interference of the scattered and the initial waves, which produces the modulation of the Stokes drift in the spanwise direction; and the subsequent additional amplification of the circulation by the vortex force produced by the nonlinear interaction of the initial shear flow and the modulated part of the Stokes drift. Previously, it was shown by S. Leibovich & A.D.D. Craik that the third part of the mechanism can maintain the Langmuir circulation. We calculate the growth increment, which is larger than that obtained by A.D.D. Craik. Considering the wave scattering, we describe the fast wave motion as a potential flow with a relatively weak vortical correction. Application of the technique can be extended to other flows where fast oscillating surface waves coexist with a slow current.

## I Introduction

Langmuir circulation (LC) is an important mechanism responsible for the enhancement of turbulent mixing in the ocean surface boundary layer, see e.g. [1] and reviews [2; 3]. The circulation arises on the background of a plane surface wave propagating along the streamlines of a vertical shear flow, typically driven by air wind co-directed with the wave. It includes vortical near-surface rolls with axes oriented in the wave-wind direction and a co-directional shear flow modulated in the horizontal spanwise direction. Although the most interesting question is the description of established LC [4; 5], the process of emergence and growth of the circulation is also of certain interest [6; 7], as it provides a tool for analysis of experimental [8] and numerical [9] data. This emergence can be described by an instability mechanism. If one neglects wave scattering on the circulation, the following positive feedback leads to the instability. Consider a small spanwise modulation of the shear flow. The flow possesses vertical vorticity, in contrast to the initial shear flow. The wave affects the modulated current; the impact is described by the vortex force [10], the cross product of the Stokes drift produced by the wave motion and the vorticity of the slow current. Thus the vertical vorticity component and the Stokes drift produce a spanwise vortex force that drives the roll flow. The latter interacts with the initial shear flow via the Lamb term in the Navier-Stokes equation, which drives the spanwise modulation of the shear flow, so the feedback loop is closed. Treatment [6] does not consider the wave scattering on the modulated flow, although it exists and leads to a spanwise modulated wave according to further analytical investigations [11]. Subsequently, this modulation was observed in a large variety of experimental and numerical works.
For example, large-eddy simulations of interacting turbulent shear flow and surface waves [12] showed a rather noticeable variation of the wave field in the cross-wind direction that cannot be omitted when describing the oceanic boundary layer. In [13], strong LC-like modulation of the wave field was demonstrated in laboratory experiments. The results of recent numerical calculations also indicate that the scattering of waves by the flow and the further interference of waves cannot be neglected [14]. A general conclusion is that the influence of established LC on the surface motion is an important factor in the evolution of the ocean surface boundary layer. Within an analytical approach, the effect of the interference of two waves propagating at small angles of equal magnitudes and opposite signs to the shear flow was considered separately in [15; 10] and was shown to cause linear growth of LC if it is absent or weak, and to maintain the circulation when the growth is saturated due to the turbulent viscosity. In this paper we show that the scattering leads to an enhancement of the positive feedback, so the instability develops faster than predicted in [6]. In [11], the wave modulation by LC was considered and the instability problem was examined when the spanwise modulation period is short compared to the wavelength. In that limit, no wave scattering occurs since the wavenumber of the modulation exceeds the wavenumber of the plane (fundamental) wave, and the scattered wave would have almost the same frequency because the time scale associated with the vortical flow should be assumed to be much greater than the wave period. Here we consider the opposite case of wavelength short compared to the modulation period, so the fundamental wave does scatter on the circulation and the scattered wave should be taken into account. The stream function used in [11] is convenient when there is a single wave vector characterising the wave motion. As we consider an essentially three-dimensional wave flow which consists of waves spreading in different directions, we use a potential approximation for the flow. Along the way, the relatively weak vortical correction to the wave flow is described in terms of vorticity [10]. This technical solution allows us to consistently take into account wave scattering on a vortex flow. The only requirement for the applicability of the developed general mathematical apparatus is that the wave frequency has to be much greater than the velocity gradient and the rate of change of the vortex flow. The time scale separation allows one to consider the wave motion and the vortical flow as independent subsystems in the zero approximation which weakly interact due to hydrodynamic nonlinearity. In the problem of Langmuir instability, we show that the spanwise modulation of the wave is essential at any strength of the shear flow. In particular, we obtain an increment for the unstable mode larger than that found in [6]. The established analytical results are applicable if the shear rate is much less than the wave frequency. The general mathematical scheme is developed in Section II. The derivation of the vortex force is reproduced in Subsection II.1 and the wave scattering process is considered in Subsection II.2. Using this technique, the Langmuir instability mechanism is considered in Section III. The unperturbed flow and the perturbation structure are defined in Subsections III.1, III.2. Equations governing the linearized development of the perturbation are derived in Subsection III.3.
Then we consider the limit of small Langmuir number \(\mathrm{La}\ll 1\) and present a complete analytical solution for the problem in the case of linear shear in Subsection III.4. Some calculations are relegated to the Appendix.

## II General description of interacting waves and current

We consider an incompressible flow with a free surface, along which surface gravity waves can propagate. We consider the deep-water case, with the equation \(z=\zeta(x,y,t)\) describing the surface shape. The velocity field is a sum of a surface gravity wave term \(\mathbf{u}\) and a slow current \(\mathbf{V}\) excited against this background, i.e. \(\mathbf{v}=\mathbf{u}+\mathbf{V}\). The characteristic time scale \(T\) of the slow current, \(T\sim|\mathbf{V}|/|\partial_{t}\mathbf{V}|\) or \(T\sim 1/|\operatorname{grad}\mathbf{V}|\), is much larger than the inverse gravity wave frequency \(\omega\), \(\varepsilon=1/\omega T\ll 1\), so we call \(\mathbf{u}\) the high-frequency component of the whole flow. In particular, this means that the surface elevation is determined by the wave motion only, so the Froude number of the slow current is considered to be small, \(\mathrm{Fr}\sim|\mathbf{V}|\cdot|\operatorname{grad}\mathbf{V}|/g\ll 1\), where \(g\) is the gravitational acceleration. Note that the spatial scale of the current \(L\sim|\mathbf{V}|/|\operatorname{grad}\mathbf{V}|\) is not assumed to be necessarily large in relation to the wavelength, but the smallness of the parameter \(\varepsilon\) is an essential condition for our analytical approach to be applicable. The fluid motion is described by the Navier-Stokes equation, which can be written in the Lamb form [16]

\[\partial_{t}\mathbf{v}=[\mathbf{v},\,\mathbf{\varpi}]-\operatorname{grad}\left(p+\frac{v^{2}}{2}\right)+\nu\Delta\mathbf{v}+\mathbf{g}, \tag{1}\]

where \(p\) is the fluid pressure, \(\mathbf{\varpi}=\operatorname{curl}\mathbf{v}\) is the vorticity and the mass density is set equal to one. The equation should be supplemented with boundary conditions: the kinematic boundary condition

\[\partial_{t}\zeta=v_{z}-v_{\alpha}\partial_{\alpha}\zeta, \tag{2}\]

where the indices \(\alpha,\beta,\ldots\) run over the values \(\{x,y\}\), and the dynamic boundary conditions

\[\left(p-2\nu n_{i}n_{k}\partial_{i}v_{k}\right)\bigr|_{z=\zeta}=0, \tag{3}\]

\[\delta_{li}^{\perp}\left(\partial_{i}v_{k}+\partial_{k}v_{i}\right)n_{k}\bigr|_{z=\zeta}=0, \tag{4}\]

where \(\delta_{li}^{\perp}=\delta_{li}-n_{l}n_{i}\) is the projector onto the plane tangent to the free surface, \(\mathbf{n}\) is the unit normal vector on the surface and the indices \(i,j,k,\ldots\) run over the values \(\{x,y,z\}\). Hereinafter, summation is assumed over repeated indices. In the absence of the current and the viscosity, the non-breaking wave flow \(\mathbf{u}\) is irrotational and so can be described in terms of a flow potential \(\phi\). The interaction of the wave with the vortical current means that the fast component of the flow, oscillating with the wave frequency, ceases to be purely irrotational. There is an analytical description of the phenomenon based on representing the wave flow through a stream function instead of the potential \(\phi\), which is good for two-dimensional problems [17]. When the wave motion is substantially three-dimensional, the technique becomes cumbersome since it is impossible to describe the wave flow with only one scalar stream function, so one should introduce more functions [11]. Our aim is to describe the dynamics of the wave motion itself including its interaction with the current.
The result of the interaction at large distances is scattered waves, which are naturally described in terms of the potential \(\phi\) as well. There is also a local response to the interaction, which is relatively small in \(\varepsilon\) and can be described in terms of the oscillating vorticity \(\mathbf{\varpi}^{u}=\operatorname{curl}\mathbf{u}\) and the pressure part \(p^{\varpi}\) (see below). Thus, in our approach we keep the potential \(\phi\), which determines the wave motion itself, and the vortical correction \(\mathbf{u}^{\varpi}\), so that

\[\mathbf{u}=\mathbf{u}^{\phi}+\mathbf{u}^{\varpi},\quad\mathbf{u}^{\phi}=\operatorname{grad}\phi,\qquad u^{\varpi}\sim\varepsilon\cdot u^{\phi}. \tag{5}\]

For the division to be completely defined, we assume that \(\mathbf{u}^{\varpi}\) is not related to the surface dynamics in the linear approximation,

\[u_{z}^{\varpi}|_{z=0}=0, \tag{6}\]

since the surface shape dynamics is closely related to the potential wave motion.

### Influence of the wave motion on the slow current

The impact of the waves on the current is well established [15; 10] and is called the vortex force. In the course of its derivation one should find the high-frequency part \(\mathbf{\varpi}^{u}\) of the vorticity, which is also used when finding the scattered waves. So here we briefly repeat the derivation of the vortex force \(\mathbf{f}^{V}\). The projection of the Navier-Stokes equation (1) onto the slow motion is

\[\partial_{t}\mathbf{V}+(\mathbf{V}\,\nabla)\mathbf{V}=-\operatorname{grad}\left\langle p+\frac{u^{2}}{2}\right\rangle+\nu_{\mathcal{T}}\Delta\mathbf{V}+\langle[\mathbf{u}^{\phi},\;\mathbf{\varpi}^{u}]\rangle. \tag{7}\]

Here we have replaced the molecular viscosity with a turbulent viscosity \(\nu_{{}_{T}}\), assuming that there is some small-scale turbulent flow on the background of the flow \(\mathbf{V}\) and the wave motion \(\mathbf{u}\). The last term in (7) stems from the high-frequency part of the flow, which is purely potential in the main approximation, so \(\mathbf{u}^{\phi}\) was taken instead of \(\mathbf{u}\). To find the high-frequency part \(\mathbf{\varpi}^{u}\) of the vorticity, we linearize the vorticity equation

\[\partial_{t}\mathbf{\varpi}=\operatorname{curl}[\mathbf{v},\mathbf{\varpi}]+\nu\Delta\mathbf{\varpi} \tag{8}\]

with respect to the wave motion, which leads to

\[\big{(}\partial_{t}+(\mathbf{V}\,\nabla)\big{)}\mathbf{\varpi}^{u}-(\mathbf{\varpi}^{u}\nabla)\mathbf{V}-\nu\Delta\mathbf{\varpi}^{u}=\operatorname{curl}\big{[}\mathbf{u}^{\phi},\;\mathbf{\Omega}\big{]}, \tag{9}\]

where the vorticity of the slow flow is \(\mathbf{\Omega}=\operatorname{curl}\mathbf{V}\), so the full vorticity is \(\mathbf{\varpi}=\mathbf{\Omega}+\mathbf{\varpi}^{u}\). The second term on the left-hand side of equation (9) can be estimated as \((\mathbf{\varpi}^{u}\nabla)\mathbf{V}\sim\Omega\varpi^{u}\), so it can be neglected compared to \(\partial_{t}\mathbf{\varpi}^{u}\). The right-hand side serves as a driving force for \(\mathbf{\varpi}^{u}\). We assume that the spatial scale of the right-hand side of the equation is much greater than the thickness of the viscous sublayer. Thus, the influence of the viscosity is negligible and the equation becomes local in space.
Its solution is \[\mathbf{\varpi}^{u}=\operatorname{curl}\left[\mathbf{s},\;\mathbf{\Omega}\right],\quad\mathbf{s}(t,\mathbf{r})=\int\limits^{t}dt^{\prime}\,\mathbf{u}^{\phi}(t^{\prime},\,\mathbf{r}(t^{\prime})), \tag{10}\] where \(\dot{\mathbf{r}}(t)=\mathbf{V}(t,\mathbf{r})\) is the Lagrangian trajectory produced by the slow flow, \(\mathbf{r}(t)\equiv\mathbf{r}\), and \(\mathbf{s}\) is the particle displacement during the wave oscillations. Using (10), one can rewrite the last term in (7) in a more convenient form, \[\langle[\mathbf{u}^{\phi},\;\mathbf{\varpi}^{u}]\rangle=\mathbf{f}^{V}+\frac{1}{2}\operatorname{grad}(\mathbf{\Omega}\cdot\mathbf{\mathcal{A}}),\qquad\mathbf{f}^{V}=[\mathbf{U}^{s},\;\mathbf{\Omega}], \tag{11}\] where \(\mathbf{\mathcal{A}}=\langle[\mathbf{u}^{\phi},\mathbf{s}]\rangle/2\) and the Stokes drift \[\mathbf{U}^{s}=\operatorname{curl}\mathbf{\mathcal{A}}=\langle\operatorname{curl}[\mathbf{u}^{\phi},\mathbf{s}]\rangle \tag{12}\] is determined by the potential part of the flow associated with the wave motion. The gradient term in (11) should be included in the effective pressure in (7). Note that all time averaging \(\langle\ldots\rangle\) in (7-12) should be performed along the Lagrangian trajectories \(\mathbf{r}(t)\) defined after (10). Finally, equation (7) takes the form \[\partial_{t}\mathbf{V}+(\mathbf{V}\,\nabla)\mathbf{V}=-\operatorname{grad}\overline{P}+\nu\Delta\mathbf{V}+\mathbf{f}^{V}, \tag{13}\] where the effective time-averaged pressure \(\overline{P}\) collects the pressure contributions from (7) and (11). Concerning the boundary conditions, we neglect the virtual wave stress produced by the viscous damping of the wave [18; 19; 20], which is justified if the vortex flow is strong enough that the velocity gradient greatly exceeds the viscous damping rate of the wave, \(|\nabla\mathbf{V}|\gg\nu k^{2}\). Then the boundary conditions for the slow flow correspond to a stress-free rigid boundary, \[\partial_{z}V_{\alpha}\big{|}_{z=0}=0,\quad V_{z}\big{|}_{z=0}=0. \tag{14}\]

### Wave scattering process

The wave flow dynamics is determined by the Navier-Stokes equation linearized in the wave amplitude, \[\partial_{t}\mathbf{u}+(\mathbf{u}\nabla)\,\mathbf{V}+(\mathbf{V}\nabla)\,\mathbf{u}=-\nabla p^{u}+\mathbf{g}, \tag{15}\] where \(p^{u}\) is the high-frequency part of the pressure. We omitted the viscous term in (15), so we deal with the Euler equation, under the assumption that the viscosity, which alters the flow only in the narrow viscous sublayer beneath the surface, is not essential for the wave dynamics. Our goal is to describe the dynamics of the potential part of the wave flow (5), taking into account the scattering process. Due to the incompressibility condition, the potential still satisfies the Laplace equation \[\Delta\phi=0. \tag{16}\] Now we impose the boundary conditions. The kinematic boundary condition (2) linearized in the wave amplitude is \[\big{(}\partial_{t}+V_{\alpha}\big{|}_{z=0}\partial_{\alpha}\big{)}\,\zeta=\partial_{z}\phi+\zeta\partial_{z}V_{z}\big{|}_{z=0}, \tag{17}\] where we used condition (6) for \(u_{z}^{\varpi}\). Because the viscosity was neglected in (15), we need only one dynamical boundary condition, for the pressure (3). To obtain this condition one should express the fast oscillating part of the pressure in terms of the potential \(\phi\) and the surface elevation \(\zeta\). In the case of a pure wave flow, this is done with the aid of the Bernoulli equation, which should be linearized in the wave amplitude within our approximations, so we have \(p^{u}=-gz-\partial_{t}\phi\).
We generalize this relation to the presence of the slow current, which amounts to replacing the time derivative \(\partial_{t}\) by the material derivative \((\partial_{t}+V_{i}\partial_{i})\). We denote by \(p^{\varpi}\) the residual part of the pressure, so by definition \[p^{u}\equiv-gz-\left(\partial_{t}+V_{i}\partial_{i}\right)\phi+p^{\varpi}. \tag{18}\] Thus, the dynamic boundary condition is \[\big{(}\partial_{t}+V_{\alpha}\big{|}_{z=0}\partial_{\alpha}\big{)}\,\phi\big{|}_{z=0}=-g\zeta+\left.p^{\varpi}\right|_{z=0}, \tag{19}\] where the pressure part \(p^{\varpi}\) satisfies \[\Delta p^{\varpi}=\partial_{i}\phi\,\Delta V_{i}+2\partial_{j}u_{i}^{\varpi}\,\partial_{i}V_{j} \tag{20}\] \[\partial_{z}p^{\varpi}\big{|}_{z=0}=\partial_{\alpha}\phi\partial_{z}V_{\alpha},\quad p^{\varpi}\big{|}_{z\to-\infty}\to 0. \tag{21}\] In (20), the vortical part \(\mathbf{u}^{\varpi}\) of the high-frequency flow should be reconstructed from \(\mathbf{\varpi}^{u}=\mathrm{curl}\,\mathbf{u}^{\varpi}\), with \(\mathbf{\varpi}^{u}\) given by (10), using boundary condition (6) at the free surface and \(\mathbf{u}^{\varpi}\to 0\) at infinity. Note that if the velocity \(\mathbf{V}\) is purely potential, then \(\Delta\mathbf{V}=0\) and \(\mathbf{u}^{\varpi}=0\), so \(p^{\varpi}=0\) as well. The contribution \(p^{\varpi}\) to the pressure is related to the vortical part \(\mathbf{u}^{\varpi}\) of the wave motion, but the relation is not linear in \(\mathbf{V}\). According to (10), the ratio of the terms on the right-hand side of (20) is estimated as \[\frac{2\partial_{j}u_{i}^{\varpi}\,\partial_{i}V_{j}}{\partial_{i}\phi\,\Delta V_{i}}\sim\frac{V}{\omega/k}. \tag{22}\] Thus, the second term in (20) should be taken into account only if the slow flow is strong, with velocity variations in the horizontal plane comparable to the phase velocity \(\omega/k\) of the waves. This limit can also be analysed within the ray approximation [21].

## III Langmuir instability mechanism

In this Section we consider the Langmuir circulation instability problem. We assume that the unperturbed flow is a plane monochromatic wave propagating on the background of a co-directed, vertically sheared flow. The unperturbed flow is thus uniform in the spanwise direction, while the perturbation is modulated in that direction with some period, which is much larger than the wavelength. A sketch of the flow is depicted in Figure 1.

### Unperturbed flow

We assume that the wave travels in the \(x\) direction of a Cartesian coordinate system. The wave potential is \[\phi^{(0)}=\mathrm{Re}\,\Big{(}\psi^{(0)}\exp[i\varphi^{(0)}+kz]\Big{)} \tag{23}\] with \(\psi^{(0)}\) and \(k\) being the potential amplitude and the wave number, respectively, and the phase \(\varphi^{(0)}=kx-\omega t\). The corresponding surface elevation is \[\zeta^{(0)}=\mathrm{Re}\,\Big{(}h^{(0)}\exp[i\varphi^{(0)}]\Big{)}. \tag{24}\] Without loss of generality we assume \(h^{(0)}>0\). The slow current is a shear flow aligned with the same \(x\) direction, \[\mathbf{V}^{(0)}=\big{\{}V_{x}^{(0)},\,0,\,0\big{\}},\qquad\mathbf{\Omega}^{(0)}=\big{\{}0,\,\Omega^{(0)},\,0\big{\}}, \tag{25}\] where \(\Omega^{(0)}(z)=\partial_{z}V_{x}^{(0)}(z)\). The relative slowness of the flow means that \(\varepsilon\sim\Omega^{(0)}/\omega\ll 1\). One can think of the resulting motion as a generalized Gouyon wave, see e.g. [22]. Here we apply our general scheme to find the influence of the shear on the wave, reproducing the well-known result [17].
We start from the Stokes drift, whose \(x\)-component is \[U_{x}^{s(0)}=k\omega\big{(}h^{(0)}\big{)}^{2}e^{2kz}. \tag{26}\] The vortex force (11) turns out to be purely potential, \[\mathbf{f}^{V}=k\omega\Omega^{(0)}\big{(}h^{(0)}\big{)}^{2}e^{2kz}\big{\{}0,\,0,\,1\big{\}}, \tag{27}\] and, according to equation (7), it produces no contribution to the slow flow and only modifies the pressure. The dispersion relation, and thus the phase velocity, can be obtained from the boundary conditions (17) and (19) on the free surface: \[\big{(}-i\omega+ikV_{x}^{(0)}(0)\big{)}h^{(0)}=k\psi^{(0)}, \tag{28}\] \[\big{(}-i\omega+ikV_{x}^{(0)}(0)\big{)}\psi^{(0)}=-gh^{(0)}+p^{\varpi(0)}, \tag{29}\] with the pressure (note that \(\mathbf{u}^{\varpi(0)}=0\) for this geometry) \[p^{\varpi(0)}=2ik\psi^{(0)}\Big{(}V_{x}^{(0)}(0)-2k\,\int\limits_{-\infty}^{0}dz\,e^{2kz}V_{x}^{(0)}(z)\Big{)}. \tag{30}\] The solution of (28)-(29), to linear order in \(\Omega^{(0)}/\omega\ll 1\), is \[\omega=\sqrt{gk}+\omega^{\prime},\qquad\omega^{\prime}=2k^{2}\int\limits_{-\infty}^{0}dz\,e^{2kz}V_{x}^{(0)}(z), \tag{31}\] and the phase velocity \(\omega/k\) turns out to be the same as in [17]. One can choose the reference frame in which \(V_{x}^{(0)}|_{z=0}=0\). We assume that the shear flow is produced by a surface stress co-directed with the \(Ox\)-axis, as the Langmuir instability exists only in this case [23]. Then \(\Omega^{(0)}>0\) in the simplest case of constant-sign shear, and the correction to the wave frequency is \(\omega^{\prime}<0\). For linear shear, \(V_{x}^{(0)}=\Omega^{(0)}z\), the correction to the wave frequency is \(\omega^{\prime}=-\Omega^{(0)}/2\).

Figure 1: Sketch for Langmuir instability development.
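The frequency correction in (31) for the linear-shear case can be verified with a one-line symbolic computation. The sketch below (in Python with sympy; the tool choice is ours and not part of the original derivation) evaluates the integral and recovers \(\omega^{\prime}=-\Omega^{(0)}/2\):

```python
# Check of (31) for linear shear V_x(z) = Omega*z:
# omega' = 2 k^2 * Integral_{-oo}^{0} e^{2kz} V_x(z) dz  should be  -Omega/2.
import sympy as sp

z = sp.symbols('z', real=True)
k, Omega = sp.symbols('k Omega', positive=True)

V_x = Omega * z                                        # linear shear profile
omega_prime = 2 * k**2 * sp.integrate(sp.exp(2*k*z) * V_x, (z, -sp.oo, 0))
print(sp.simplify(omega_prime))                        # -> -Omega/2
```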
### Perturbation

In the CL2 mechanism [6] of the Langmuir instability, the unstable mode is assumed to include a streamwise shear flow and a circulation in the vertical-spanwise plane, both modulated in the spanwise direction. Here we add the wave modulation to the mode structure; it can be thought of as the result of scattering of the initial wave on the modulated slow flow. We call the modulated wave the oblique wave, following [24]. Hereinafter, we mark the contributions to the physical quantities describing the unstable mode with the symbol '\(\delta\)', so the full fields are now \[\mathbf{V} =\mathbf{V}^{{}^{(0)}}+\delta\mathbf{V},\qquad\phi=\phi^{{}^{(0)}}+\delta\phi, \tag{32}\] \[\mathbf{\Omega} =\mathbf{\Omega}^{{}^{(0)}}+\delta\mathbf{\Omega},\qquad\zeta=\zeta^{{}^{(0)}}+\delta\zeta.\] The wave potential \(\delta\phi\) and the surface elevation \(\delta\zeta\) are \[\delta\phi = \mathrm{Re}\,\Big{(}\delta\psi\,e^{\lambda t}\exp\big{[}i\varphi^{{}^{(1)}}+kz\big{]}\Big{)} \tag{33}\] \[\delta\zeta = \mathrm{Re}\,\Big{(}\delta h\,e^{\lambda t}\exp\big{[}i\varphi^{{}^{(1)}}\big{]}\Big{)}, \tag{34}\] where the phase \(\varphi^{{}^{(1)}}=kx\cos\theta+ky\sin\theta-\omega t\) contains the spanwise modulation with wave number \(k\sin\theta\). The instability growth rate \(\lambda\) is assumed to be much smaller than the wave frequency, \(\lambda\ll\omega\), so the wave is quasi-monochromatic. The correction to the oscillating part of the pressure can be represented in the same form (see Appendix A), \[\delta p^{\varpi}=\mathrm{Re}\,\Big{(}\delta\tilde{p}^{\varpi}(z)\ e^{\lambda t}\exp\big{[}i\varphi^{{}^{(1)}}\big{]}\Big{)}. \tag{35}\] The dynamics and the spatial structure of the slow flow match those of the interference between the initial and scattered waves, averaged over the fast oscillations. Thus, the time dependence is exponential, \(\exp(\lambda t)\), and the spatial structure of the slow flow is determined by the phase difference \(\varphi^{{}^{(1)}}-\varphi^{{}^{(0)}}=ky\sin\theta+kx(\cos\theta-1)\): \[\delta\mathbf{V}=\mathrm{Re}\,\Big{(}\delta\tilde{\mathbf{V}}(z)\,e^{\lambda t}\exp\big{[}i\big{(}\varphi^{{}^{(1)}}-\varphi^{{}^{(0)}}\big{)}\big{]}\Big{)}. \tag{36}\] The same notation is used for \(\delta\mathbf{\Omega}\).

### Deriving the equations

The next step is to derive the equations which define the dynamics of the perturbation in the approximation linear in its amplitude. The full system of equations to be linearized in the perturbation amplitude, on the background of the flow described in Section III.1, consists of equations (16,17,19,21) for the fast oscillating motion and equations (13,14) for the slow flow. We note that one should keep only the first term on the right-hand side of Eq. (20), since the amplitude of the perturbation is small; note also that in this case we do not need to reconstruct \(\mathbf{u}^{\varpi}\) from \(\mathbf{\varpi}^{u}\). The slow-flow part of the perturbation is completely defined by the \(x\)-components of the velocity and vorticity, \(\delta\tilde{V}_{x}(z)\) and \(\delta\tilde{\Omega}_{x}(z)\), while its fast oscillating part is set by the oblique-wave parameters \(\delta\psi\), \(\delta h\); we therefore establish equations for these quantities. We examine the problem in the limit \(\theta\ll 1\), so we neglect all corrections of relative order \(\theta^{2}\). In particular, this means that \(\cos\theta=1\) in our approximation, and the \(x\)-derivatives of the slow-flow variables vanish, since they are proportional to \(1-\cos\theta\). This approximation corresponds to the \(x\)-independence of the slow-flow perturbation adopted in [6] and allows one to introduce the stream function \(\delta\Psi\) for the circulation in the \(Oyz\)-plane: \[\delta V_{y} =\partial_{z}\delta\Psi,\quad\delta V_{z}=-\partial_{y}\delta\Psi=-ik\theta\delta\Psi,\] \[\delta\Omega_{x} =-(\partial_{z}^{2}-k^{2}\theta^{2})\delta\Psi. \tag{37}\] To obtain the final answer, we specialize to the case of a linear shear flow, \(V_{x}^{{}^{(0)}}=\Omega^{{}^{(0)}}z\), consider the limit of small Langmuir number \(\mathrm{La}\ll\theta^{2}\), see (59), and also neglect corrections of relative order \(\theta\). It is convenient to divide the solution into two steps. The first step is to determine the slow-current part via the oblique-wave parameters; this step can be carried out analytically in the case of the linear unperturbed shear \(V_{x}^{{}^{(0)}}=\Omega^{{}^{(0)}}z\). At the second step, we determine the instability growth rate \(\lambda\) from the requirement that the free-surface equations (17) and (19) have a nontrivial solution. We also adopt below that all quantities describing the unstable mode are complex; in particular, one should drop \(\mathrm{Re}\) in the definitions (33,34,35,36) of the perturbation and (23,24) of the initial wave.
Let us first linearize equations (13) and (8), written for the slow flow, in the perturbation amplitude: \[\big{(}\lambda-\nu_{T}\Delta+V_{x}^{{}^{(0)}}\partial_{x}\big{)}\delta\mathbf{V}+\delta V_{z}\,\mathbf{\Omega}^{{}^{(0)}}+\mathrm{grad}\,\delta\overline{P}=\delta\mathbf{f}^{{}^{V}}, \tag{38}\] \[\big{(}\lambda-\nu_{T}\Delta+V_{x}^{{}^{(0)}}\partial_{x}\big{)}\delta\mathbf{\Omega}-\Omega^{{}^{(0)}}\big{[}\delta\Omega_{z}\mathbf{e}^{x}+\partial_{y}\delta\mathbf{V}\big{]}=\mathrm{curl}\,\delta\mathbf{f}^{{}^{V}}, \tag{39}\] where we also took into account the variation of the vortex force, \[\delta\mathbf{f}^{{}^{V}}=\big{[}\mathbf{U}^{s{}^{(0)}},\,\delta\mathbf{\Omega}\big{]}+\big{[}\delta\mathbf{U}^{s},\,\mathbf{\Omega}^{{}^{(0)}}\big{]}. \tag{40}\] The correction to the Stokes drift velocity defined in (12) is \[\delta\mathbf{U}^{s}=\frac{1}{2}\,\partial_{l}\big{(}\delta s^{*}_{l}\mathbf{u}^{{}^{(0)}}\big{)}+\frac{1}{2}\,\partial_{l}\big{(}s^{{}^{(0)*}}_{l}\delta\mathbf{u}^{\phi}\big{)}, \tag{41}\] where the symbol \({}^{\star}\) denotes complex conjugation. In the limit of small angle \(\theta\), the Stokes drift perturbation is \[\delta U^{s}_{x}=2ik^{2}h^{{}^{(0)}}\delta\psi\,e^{\lambda t+2kz}\exp\big{[}i(\varphi^{{}^{(1)}}-\varphi^{{}^{(0)}})\big{]}, \tag{42}\] and \(\delta U^{s}_{y}=\theta\,\delta U^{s}_{x}\), but the \(y\)-component is of no interest because it does not contribute to the vortex force (40). The perturbation (40) of the vortex force turns out to lie in the \(yz\)-plane. Now we take the \(x\)-components of equations (38,39) and obtain, within our approximations, \[\big{[}\lambda-\nu_{{}_{T}}\big{(}\partial_{z}^{2}-k^{2}\theta^{2}\big{)}\big{]}\delta\tilde{V}_{x}+\Omega^{{}^{(0)}}\delta\tilde{V}_{z}=0, \tag{43}\] \[\big{[}\lambda-\nu_{{}_{T}}\big{(}\partial_{z}^{2}-k^{2}\theta^{2}\big{)}\big{]}\delta\tilde{\Omega}_{x}+2ik^{2}\theta U^{s{}^{(0)}}\delta\tilde{V}_{x}=ik\theta\Omega^{{}^{(0)}}\delta\tilde{U}_{x}^{s}. \tag{44}\] Note that the \(x\)-component of the square bracket in (39) was neglected, as it equals \(\partial_{x}\delta V_{y}\). The equations should be supplemented with the boundary conditions (14) on the free surface, \[\partial_{z}\delta V_{x}\big{|}_{z=0}=\partial_{z}\delta V_{y}\big{|}_{z=0}=0,\quad\delta V_{z}\big{|}_{z=0}=0. \tag{45}\] Next we rewrite equations (17,19): \[(\lambda-i\omega)\delta h=k\delta\psi+h^{{}^{(0)}}\partial_{z}\delta V_{z}\big{|}_{z=0}, \tag{46}\] \[(\lambda-i\omega)\delta\psi= -g\delta h+\delta p^{\varpi}\big{|}_{z=0}. \tag{47}\] Taking into account the boundary conditions (45) for the slow current, the pressure \(\delta\tilde{p}^{\varpi}\) taken at the surface can be presented in the form (see Appendix A) \[\delta\tilde{p}^{\varpi}\big{|}_{z=0} = -\psi^{{}^{(0)}}\int\limits_{-\infty}^{0}dz\,e^{2kz}(\partial_{z}^{2}-k^{2}\theta^{2})\big{(}i\delta\tilde{V}_{x}+\delta\tilde{V}_{z}\big{)}+ \tag{48}\] \[+i\Omega^{{}^{(0)}}\big{|}_{z=0}\delta\psi.\]

### Solution

Now we assume that the perturbation growth rate \(\lambda\) is large compared to the viscous decay rate of the slow-flow component of the perturbation. Since the Stokes drift forcing in (44) penetrates to a depth \(\sim 1/k\), the vertical scale of \(\delta\Omega_{x}\) is the same. This means that the viscous damping rate satisfies \[\nu_{{}_{T}}k^{2}\ll\lambda. \tag{49}\] Below we show that this inequality is equivalent to the limit of small Langmuir number.
In this limit, the viscous scale \(\delta_{\nu}\sim\sqrt{\nu_{{}_{T}}/\lambda}\) of the slow flow is smaller than the wave penetration depth \(1/k\), i.e. \(k\delta_{\nu}\ll 1\), and \(\lambda\) dominates the viscous terms in the square brackets in (43,44). We neglect the viscous term in equation (43) and obtain \[\delta\tilde{V}_{x}=-\frac{\Omega^{{}^{(0)}}}{\lambda}\delta\tilde{V}_{z}=\frac{ik\theta\Omega^{{}^{(0)}}}{\lambda}\delta\tilde{\Psi} \tag{50}\] due to the definition (37). According to (50), the mutual signs of \(\delta\tilde{V}_{x}\) and \(\delta\tilde{V}_{z}\) are fixed, since \(\Omega^{{}^{(0)}}>0\) by assumption and, as will be shown below, the growth rate is real, \(\lambda>0\). The downward flow in the circulation corresponds to a maximum in the amplitude of the shear flow. On the other hand, the downward flow corresponds to converging streamlines on the surface, i.e. areas where surface contamination collects along \(x\)-oriented lines. Under the same approximation, equation (44) gives \[(k^{-2}\partial_{z}^{2}-\theta^{2})\delta\tilde{\Psi}+\mu^{2}\,e^{2kz}\delta\tilde{\Psi}=\sqrt{\varepsilon}\,\mu\,e^{2kz}\delta\psi, \tag{51}\] where the dimensionless quantities \(\mu\) and \(\varepsilon\) are \[\mu=\frac{\sqrt{2\Omega^{{}^{(0)}}\omega}}{\lambda}\,kh^{{}^{(0)}}\theta,\quad\varepsilon=\frac{2\Omega^{{}^{(0)}}}{\omega}\ll 1. \tag{52}\] As we are considering an inviscid problem, the boundary conditions for the stream function are \[\delta\tilde{\Psi}(0)=\delta\tilde{\Psi}(-\infty)=0 \tag{53}\] instead of (45). Note that we recover the CL2-model equation [6] if we neglect the variation of the Stokes drift, i.e. set the right-hand side of (51) equal to zero. In what follows we consider a linear shear flow, so the parameters \(\mu\) and \(\varepsilon\) are \(z\)-independent, as is \(\Omega^{{}^{(0)}}\). In the absence of the Stokes drift variation in (51), the eigenvalue problem (51,53) leads to the solution \(\delta\tilde{\Psi}=\mathrm{J}_{\theta}(\mu\,e^{kz})\) with the boundary condition \(\mathrm{J}_{\theta}(\mu)=0\) [6], where \(\mathrm{J}_{n}(\mu)\) is the Bessel function of order \(n\). Thus the perturbation growth rate in the CL2 model corresponds to the lowest root \(\mu_{{}_{\mathrm{CL2}}}\approx 2.4\) in the limit \(\theta\ll 1\). Now we restore the right-hand side of (51). The equation (51) can be solved analytically for constant initial shear: \[\delta\tilde{\Psi}=\frac{\sqrt{\varepsilon}}{\mu}\Bigg{[}1-\Big{(}1+\frac{1-\mathrm{J}_{0}(\mu)}{\mathrm{J}_{0}(\mu)}\,e^{|\theta|kz}\Big{)}\mathrm{J}_{0}\big{(}\mu\,e^{kz}\big{)}\Bigg{]}\delta\psi. \tag{54}\] Expression (54) is not exact: the corrections to the exact solution are of relative order \(\theta\). To this accuracy, one should set the exponent \(e^{|\theta|kz}\to 1\) when \(kz\sim 1\), so the square bracket in (54) equals \(1-\mathrm{J}_{0}\big{(}\mu\,e^{kz}\big{)}/\mathrm{J}_{0}(\mu)\). At large depths \(|z|\gtrsim 1/(k\theta)\), the stream function decays exponentially, as the square bracket in (54) tends to \((1-1/\mathrm{J}_{0}(\mu))\,e^{|\theta|kz}\). Thus, the circulation penetrates to a depth \(1/(k\theta)\). However, the vorticity, related to the stream function according to (37), penetrates only to a depth \(1/k\), \[\delta\tilde{\Omega}_{x}=i\sqrt{\varepsilon}\,\frac{\mu\mathrm{J}_{0}(\mu\,e^{kz})}{\mathrm{J}_{0}(\mu)}\,e^{2kz}\omega k\delta h, \tag{55}\] where we used the approximate equality \(\delta\psi=-i(\omega/k)\delta h\).
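One can also check numerically that \(\mathrm{J}_{0}(\mu\,e^{kz})\) solves the homogeneous part of (51) at \(\theta=0\), which underlies both the CL2 solution and (54). A minimal sketch follows; the numerical values of \(\mu\) and \(k\) are arbitrary test choices:

```python
# Check that f(z) = J0(mu * e^{kz}) obeys k^{-2} f'' + mu^2 e^{2kz} f = 0,
# i.e. the homogeneous part of (51) at theta = 0.
import numpy as np
from scipy.special import j0

mu, k = 1.67, 2.0                            # arbitrary test values
z = np.linspace(-3.0, -0.1, 4001)
dz = z[1] - z[0]

f = j0(mu * np.exp(k * z))
f_zz = np.gradient(np.gradient(f, dz), dz)   # finite-difference f''

residual = f_zz / k**2 + mu**2 * np.exp(2 * k * z) * f
print(np.abs(residual[2:-2]).max())          # ~0 up to finite-difference error
```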
Note that in the CL2 model the vorticity has the same vertical dependence within our approximations, \(\delta\tilde{\Omega}_{x}=(\mu k)^{2}e^{2kz}\mathrm{J}_{0}(\mu e^{kz})\). Equations (50,54) express the slow flow via the oblique-wave parameters \(\delta h\), \(\delta\psi\). To be able to write equations (46,47) explicitly in terms of these parameters, we first need to do so for the residual part of the pressure \(\delta p^{\varpi}\) using (48): \[\delta\tilde{p}^{\varpi}\big{|}_{z=0}=\big{(}(F(\mu)+1)\lambda+iF(\mu)\Omega^{{}^{(0)}}\big{)}\delta\psi, \tag{56}\] where the function \[F(\mu)=1+\Big{(}\mu-\frac{1}{\mu}\Big{)}\frac{\mathrm{J}_{1}(\mu)}{\mathrm{J}_{0}(\mu)} \tag{57}\] is even in \(\mu\). It also follows from (55) that \(\partial_{z}\delta\tilde{V}_{z}|_{z=0}=-i\sqrt{\varepsilon}(\mathrm{J}_{1}(\mu)/\mathrm{J}_{0}(\mu))k^{2}\theta\,\delta\psi\) in (46). The linear system of equations (46,47) for \(\delta h\), \(\delta\psi\) has a nontrivial solution if \(F(\mu)=0\), to leading order in the small parameters \(\varepsilon\) and \(\theta\). Among all existing solutions we should choose the one corresponding to the smallest positive root \(\mu\), which yields the largest real growth rate \(\lambda\). The numerical solution is \[\mu_{\star}\simeq 1.67,\quad\lambda=\frac{\sqrt{2\Omega^{(0)}\omega}}{\mu_{\star}}\,kh^{(0)}|\theta|. \tag{58}\] Since \(\mu_{\star}<\mu_{\text{\tiny{CL2}}}\), the obtained growth rate \(\lambda\) (58) is greater than that found in [6]. Hence, the unstable mode does contain the oblique-wave component. Knowing that \(\mu\sim 1\), let us rewrite the inequality (49). We introduce the friction velocity \(u_{\star}\) according to the definition \(\Omega^{(0)}=u_{\star}^{2}/\nu_{\text{\tiny{$\mathcal{T}$}}}\). Then (49) means that the Langmuir number satisfies \[\text{La}\equiv\frac{\nu_{\text{\tiny{$\mathcal{T}$}}}^{3}k^{2}}{\omega(h^{(0)})^{2}u_{\star}^{2}}\ll\theta^{2}. \tag{59}\] Let us also examine how the solution behaves under a change of sign of the angle \(\theta\), which leads to \(\mu\to-\mu\) as well. Without loss of generality we can assume that \(\delta h\) is purely real, \(\delta h<0\), and that the phase difference between the initial and oblique waves along the streamwise direction is small, \(k\theta^{2}x\ll 1\), so the phase difference in (36) is \(\varphi^{(1)}-\varphi^{(0)}=k\theta y\). Then both \(\delta\tilde{V}_{z}\) (37) and \(\delta\tilde{V}_{x}\) (50) are purely real and even in \(\theta\), so \(\delta V_{z}\propto\cos(k\theta y)\) (including the sign) is an even function of \(\theta\), as is \(\delta V_{x}\), whereas \(\delta\tilde{V}_{y}\) is purely imaginary and odd in \(\theta\), so \(\delta V_{y}\propto-\theta\sin(k\theta y)\) (the sign is taken at the surface) is an even function of \(\theta\) as well. This means that the unstable mode is doubly degenerate: the two possible configurations have identical slow-flow spatial structure and oblique waves with opposite spanwise wave numbers. The derived spatial distribution is schematically plotted in Figure 1.
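To get a feel for the numbers, the scalings (58) and (59) can be evaluated for representative open-water parameters; all values in the sketch below are illustrative assumptions, not measurements:

```python
# Illustrative magnitudes for the growth rate (58) and Langmuir number (59).
# All parameter values are assumptions chosen only to probe the scalings.
import numpy as np

g = 9.8                    # gravitational acceleration, m/s^2
k = 2 * np.pi / 10.0       # wave number of a 10 m wave, 1/m
h0 = 0.1                   # wave amplitude h^(0), m
theta = 0.1                # spanwise modulation angle, rad
nu_T = 1e-3                # turbulent viscosity, m^2/s
u_star = 0.01              # friction velocity, m/s

omega = np.sqrt(g * k)                   # deep-water dispersion law
Omega0 = u_star**2 / nu_T                # constant shear, Omega = u*^2 / nu_T
mu_star = 1.67

lam = np.sqrt(2 * Omega0 * omega) / mu_star * k * h0 * abs(theta)   # Eq. (58)
La = nu_T**3 * k**2 / (omega * h0**2 * u_star**2)                   # Eq. (59)

print(f"epsilon = {2*Omega0/omega:.3f}  (must be << 1)")
print(f"lambda  = {lam:.2e} 1/s  (nu_T k^2 = {nu_T*k**2:.2e} 1/s)")
print(f"La      = {La:.2e}  (must be << theta^2 = {theta**2:.2e})")
```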
## IV Discussion

The presented analysis of the Langmuir instability assumes that the flow has an exactly periodic structure in space. In reality, the periodicity should be lost at some length \(L_{c}\). For the analysis to be valid, this length should be greater than the distance the wave travels along the streamwise direction during the characteristic time \(1/\lambda\), that is \(L_{c,x}>\omega/(k\lambda)\). Taking into account (58,52), one can rewrite the inequality in the form \(L_{c,x}>1/(\sqrt{\varepsilon}\,\theta\,kh^{(0)}k)\), which much exceeds the period in the spanwise direction. In the spanwise direction, the periodicity need only be maintained over the smaller length \(L_{c,y}\sim\theta L_{c,x}\). If the periodicity is disrupted at shorter distances, then the oblique wave is not correlated with the Langmuir circulation and one should drop it from the Langmuir instability analysis. As a result, one arrives back at the CL2 model [6], where only the \(x\)-independence of the flow at distances greater than \(1/(k\theta)\) is assumed. As for extensions of the scope of our approach, we believe that the developed mathematical scheme should be appropriate for the analysis of wave-current interaction in a flow confined to a basin, see e.g. [25; 26]. The confinement preserves the correlation between the vortical flow and the surface waves, which are mostly standing waves in this case. At the initial stage, when only surface waves are excited, the leading interaction mechanism is the virtual wave stress [20; 26], which can be enhanced by surface contamination in the form of a liquid elastic film [27]. However, during the development of the large-scale vortical flow, oblique waves appear, which is evidence of wave scattering by the vortical flow [25]. The presented approach, combined with a more detailed analysis of experimental data, can answer whether a wave-current interaction loop plays any role in the formation of the large-scale flow, alongside other mechanisms, including the two-dimensional inverse energy cascade [28].

## V Acknowledgments

The work was supported by the Russian Science Foundation (project no. 23-72-30006).

## Appendix A Contribution to the pressure related to the high-frequency part of the vorticity

The equation for the oscillating part of the pressure follows from the Navier-Stokes equation for the wave motion, \[\partial_{t}\mathbf{u}^{\phi}+\left(\mathbf{u}^{\phi}\nabla\right)\mathbf{V}+\left(\mathbf{V}\nabla\right)\mathbf{u}^{\phi}+\partial_{t}\mathbf{u}^{\varpi}+\left(\mathbf{u}^{\varpi}\nabla\right)\mathbf{V}+\left(\mathbf{V}\nabla\right)\mathbf{u}^{\varpi}=-\nabla p^{u}+\mathbf{g}. \tag{10}\] Let us group the potential and vortical terms, \[\left(\partial_{t}\mathbf{u}^{\varpi}+\left(\mathbf{u}^{\varpi}\nabla\right)\mathbf{V}+\left(\mathbf{V}\nabla\right)\mathbf{u}^{\varpi}\right)-\left[\mathbf{u}^{\phi}\mathbf{\Omega}\right]=-\nabla\Big{(}p^{u}+gz+\left(\partial_{t}+V_{i}\partial_{i}\right)\phi\Big{)}, \tag{11}\] using the identity \[\left(\mathbf{u}\nabla\right)\mathbf{V}+\left(\mathbf{V}\nabla\right)\mathbf{u}=\nabla\left(\mathbf{u}\mathbf{V}\right)-\left[\mathbf{V}\mathbf{\omega}\right]-\left[\mathbf{u}\mathbf{\Omega}\right],\] valid for incompressible vector fields. We interpret the gradient term on the right-hand side of (11) as the vortical part of the pressure, which we denote \(p^{\varpi}\): \[p^{\varpi}\equiv p^{u}+gz+\left(\partial_{t}+V_{i}\partial_{i}\right)\phi, \tag{12}\] \[\Delta p^{\varpi}=f^{\varpi},\qquad f^{\varpi}=\partial_{i}\phi\,\Delta V_{i}+2\partial_{j}u_{i}^{\varpi}\,\partial_{i}V_{j}. \tag{13}\] The boundary condition for \(p^{\varpi}\) on the free surface follows from the \(z\)-component of equation (11). Taking into account the smallness of the spatial derivatives of \(\mathbf{V}\), we obtain \[\left.\partial_{z}p^{\varpi}\right|_{z=0}=\partial_{\alpha}\phi\,\partial_{z}V_{\alpha},\qquad\left.p^{\varpi}\right|_{z\to-\infty}\to 0. \tag{14}\]
The differential problem (13), (14) can be solved analytically in Fourier space (\(\mathbf{q}\) denotes the Fourier wave vector in the \(Oxy\)-plane): \[\left(\partial_{z}^{2}-q^{2}\right)\delta p_{\mathbf{q}}^{\varpi}=f_{\mathbf{q}}^{\varpi},\qquad\delta p_{\mathbf{q}}^{\varpi}=-\frac{e^{qz}}{2q}\int\limits_{-\infty}^{0}dz^{\prime}e^{qz^{\prime}}f_{\mathbf{q}}^{\varpi}(z^{\prime})-\int\limits_{-\infty}^{0}dz^{\prime}\frac{e^{-q|z-z^{\prime}|}}{2q}f_{\mathbf{q}}^{\varpi}(z^{\prime})+\frac{e^{qz}}{q}\big{(}\partial_{\alpha}\phi\partial_{z}V_{\alpha}\big{)}_{\mathbf{q}}. \tag{15}\] Now we turn to the calculations needed for the Langmuir instability analysis. Using the estimate (22), we set \(f^{\varpi}=\partial_{i}\phi\,\Delta V_{i}\) in (13). Thus, \[\left.\delta p_{\mathbf{q}}^{\varpi}\right|_{z=0}=-\frac{1}{q}\int\limits_{-\infty}^{0}dz\,e^{qz}\big{(}\partial_{i}\phi\Delta V_{i}\big{)}_{\mathbf{q}}+\frac{1}{q}\partial_{\alpha}\big{(}\phi\partial_{z}V_{\alpha}\big{)}_{\mathbf{q}}. \tag{16}\] Using this expression, let us prove the equality (35). We substitute (33) and (36) into (16); a direct calculation shows that only the harmonics oscillating as \(\exp[ikx\cos\theta+iky\sin\theta]\) are nonzero (note that one should set \(\mathbf{q}=k\{\cos\theta,\sin\theta\}\)). According to the definition (35) we obtain \[\left.+ik\psi^{\text{\tiny(0)}}\sin\theta\partial_{z}\delta\tilde{V}_{y}\right|_{z=0}. \tag{17}\]
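The quadrature formula (15) can be cross-checked against a direct finite-difference solution of the boundary-value problem (13), (14); the test source and all parameter values in the sketch below are arbitrary assumptions made for the check:

```python
# Cross-check of (15): solve (d^2/dz^2 - q^2) p = f on z in (-H, 0] with
# dp/dz(0) = s and p -> 0 at depth, and compare with the explicit formula.
import numpy as np

def trap(y, dz):
    # trapezoidal rule on a uniform grid
    return 0.5 * dz * (y[0] + y[-1] + 2.0 * y[1:-1].sum())

q, s, H = 1.3, 0.7, 20.0                  # arbitrary test parameters
z = np.linspace(-H, 0.0, 1501)
dz = z[1] - z[0]
f = np.exp(2.0 * z) * np.sin(3.0 * z)     # arbitrary decaying test source

# Explicit formula (15): two quadratures plus the homogeneous e^{qz} part.
I0 = trap(np.exp(q * z) * f, dz)
conv = np.array([trap(np.exp(-q * np.abs(zi - z)) * f, dz) for zi in z])
p_formula = -np.exp(q*z) / (2*q) * I0 - conv / (2*q) + np.exp(q*z) / q * s

# Finite-difference solve of the same boundary-value problem.
n = z.size
A = np.zeros((n, n)); b = f.copy()
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0 / dz**2
    A[i, i] = -2.0 / dz**2 - q**2
A[0, 0] = 1.0; b[0] = 0.0                                # p(-H) ~ 0
A[-1, -1] = 1.0 / dz; A[-1, -2] = -1.0 / dz; b[-1] = s   # dp/dz(0) = s
p_fd = np.linalg.solve(A, b)

print(np.abs(p_formula - p_fd).max())     # small (up to discretization error)
```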
2304.14836
Training Large Scale Polynomial CNNs for E2E Inference over Homomorphic Encryption
Training large-scale CNNs that during inference can be run under Homomorphic Encryption (HE) is challenging due to the need to use only polynomial operations. This limits the adoption of HE-based solutions. We address this challenge and pioneer in providing a novel training method for large polynomial CNNs such as ResNet-152 and ConvNeXt models, and achieve promising accuracy on encrypted samples on large-scale datasets such as ImageNet. Additionally, we provide optimization insights regarding activation functions and skip-connection latency impacts, enhancing HE-based evaluation efficiency. Finally, to demonstrate the robustness of our method, we provide a polynomial adaptation of the CLIP model for secure zero-shot prediction, unlocking unprecedented capabilities at the intersection of HE and transfer learning.
Moran Baruch, Nir Drucker, Gilad Ezov, Yoav Goldberg, Eyal Kushnir, Jenny Lerner, Omri Soceanu, Itamar Zimerman
2023-04-26T20:41:37Z
http://arxiv.org/abs/2304.14836v2
# Training Large Scale Polynomial CNNs for E2E Inference over Homomorphic Encryption

###### Abstract

Training large-scale CNNs that during inference can be run under Homomorphic Encryption (HE) is challenging due to the need to use only polynomial operations. This limits the adoption of HE-based solutions. We address this challenge and pioneer in providing a novel training method for large polynomial CNNs such as ResNet-152 and ConvNeXt models, and achieve promising accuracy on encrypted samples on large-scale datasets such as ImageNet. Additionally, we provide optimization insights regarding activation functions and skip-connection latency impacts, enhancing HE-based evaluation efficiency. Finally, to demonstrate the robustness of our method, we provide a polynomial adaptation of the CLIP model for secure zero-shot prediction, unlocking unprecedented capabilities at the intersection of HE and transfer learning.

## 1 Introduction

We are interested in the problem of training Convolutional Neural Networks (CNNs) in a way that allows inference on encrypted data, without the owner of the model being exposed to either the inputs or the outputs. This is achievable through Homomorphic Encryption (HE), see e.g., [18; 1; 5]. Most modern HE schemes, however, limit network operations to polynomials, creating training and inference challenges. Training large-scale polynomial CNNs is a challenging task that often fails to achieve the same performance as the original network. Thus, previous studies have only achieved promising results on shallow networks [26; 5], and their methods have not scaled to larger networks or large datasets such as ImageNet [19]. The hardness of training polynomial networks is well established, as we explain in Sect. 2. Previous studies in HE (e.g., [44; 40; 38]) focused on modifying pre-trained networks by substituting non-polynomial ReLU activations with polynomial approximations; however, when applied naively to deep networks, this can cause explosions or imprecise results. In this study, we observe that one of the factors behind those explosions is the input range of the activation, which dominates the approximation error (Fig. 2). Hence the approximation requires extremely high-degree polynomials, leading to computational inefficiency or instability. To address this, we develop a novel training method that controls the input range during the fine-tuning process, which enables approximating activations using low-degree polynomials. This method enables, for the first time, the training of **HE-friendly CNNs** at the scale of ResNet [31] and ConvNeXt [47] over large datasets such as ImageNet. Another challenge for running encrypted inference is reducing the inference latency, which is significantly influenced by two key factors: the _multiplication depth_ of high-degree polynomials and the HE _chain-index_ mismatch resulting from skip-connections, as identified in Obs. 3.1. To this end, we propose new design and training techniques for polynomial CNNs that translate into faster inference of polynomial CNNs under HE. Specifically, in Sect. 3.2 we provide a solution to efficiently handle skip-connections under HE through chain-index-aware design, resulting in a substantial reduction in inference time. For instance, when employing the HElayers SDK, a notable speedup factor of 2.5 is achieved. Additionally, in Sect.
3.3 we analyze how to select an appropriate backbone that minimizes the computational resources needed under HE.

**Our Contributions.**

1. Our main contribution is a novel training method, grounded in our insight from Sect. 3.1, which controls the input range to the non-polynomial layers. This method enables us to achieve low-degree polynomial approximations while maintaining the accuracy of the original model.
2. We provide several insights about the design choices of HE-friendly CNNs, which can lead to better latency efficiency with a lower approximation error. Specifically, we refer to techniques such as handling neural activations (Sect. 3.1), _Skip-Connections (SCs)_ (Sect. 3.2), and the CNN backbone in the context of HE (Sect. 3.3).
3. Using the above techniques, we demonstrate, for the first time, the feasibility of training HE-friendly (polynomial) CNNs such as ConvNeXt and ResNet over large-scale datasets. These models achieve accuracy comparable to state-of-the-art (SOTA) approaches when trained on realistic datasets like ImageNet (see Tab. 2). Our code is available online.1
4. We extend the capabilities of secure transfer learning over HE. This enables, for the first time, several key techniques such as encryption of the entire pre-trained model, fine-tuning the entire model rather than optimizing only the last layer, and exploiting ZSL as an alternative to training on encrypted data (see Sect. 5). This demonstration represents a significant milestone in making HE applicable.

Footnote 1: For reproducing our main results, please refer to our anonymous repository: shorturl.at/lvNXZ. The entire Git repository will be shared upon acceptance.

**Empirical Contributions.** We implemented and tested our methods using the HElayers framework [1]; see the results in Section 4. Consequently, we report the first non-interactive Privacy-Preserving Machine Learning (PPML) solution that can run secure prediction of the above **large and accurate CNNs over large-scale datasets** in minutes, which demonstrates the practicality of HE-based secure prediction. In addition, we take polynomial networks to the next level by demonstrating the practicality of our approach on the first secure zero-shot and multi-modal foundation model over encrypted data, using CLIP (Section 5).

## 2 Background

To clarify the difficulty of producing polynomial networks, several theoretical intuitions and proofs have been proposed. For example, [74] proved that under some conditions polynomial Feed-Forward Networks (FFNs) are unstable, and concluded that the more complicated a polynomial activation is, the more likely it is to face instability. Another paper, [28], suggests that the problem with polynomial activations is that the gradients and outputs are unbounded and can be arbitrarily large, in contrast to other activations such as ReLU, GELU, Sigmoid, or TanH. The paper also points out that in deeper networks \(f_{(d,l)}\) with \(l\) layers and \(d\)-degree polynomial activations, the gradients explode exponentially in the degree of the entire network, since for input \(x>1,\lim_{x\to\infty}f_{(d,l)}(x)/x=\infty\). Additionally, [16], [28] and [27] attempted to implement deep polynomial networks but faced optimization instability. They resolved the issue by incorporating non-polynomial components like tanh or max, resulting in a non-polynomial model.
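The unboundedness problem described above is easy to observe directly: stacking \(l\) polynomial activations of degree \(d\) makes outputs grow like \(x^{d^{l}}\). A minimal sketch follows; the specific degree-2 activation is an arbitrary illustration:

```python
# Stacked polynomial activations explode for |x| > 1: the output grows like
# x^(d^l), so a modest depth already approaches the float64 range limit.
def poly_act(x):
    return 0.5 * x**2 + 0.5 * x        # an arbitrary degree-2 "activation"

x = 1.5
for layer in range(1, 13):
    x = poly_act(x)
    print(f"layer {layer:2d}: {x:.3e}")
# A 1-Lipschitz activation such as ReLU would keep the same stack bounded.
```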
**Polynomial Approximations.** Instead of training a polynomial network from scratch, a commonly used method is to approximate the non-polynomial functions of pre-trained networks with polynomials. For example, the \(\mathrm{ReLU}\) activation function is approximated by a polynomial in the studies of [42; 65; 53; 33] or is replaced by a trainable polynomial in [5]. One commonly used way to approximate a function is the well-known Remez algorithm [60] and its follow-up algorithms [55; 20], which were proved to be optimal tools for finding the polynomial approximation of a function \(f(x)\) given the range of \(x\) and the polynomial degree. Nevertheless, the range of the input \(x\) to the different CNN layers may not be known in advance, which may lead to a non-negligible error due to the approximation's poor performance outside the fitted range. See Appendix E for more details. One notable work is [44], which approximated \(\mathrm{ReLU}\) using a composition of three polynomials of degrees {15, 27, 29}. While the reported accuracy was only 0.11% lower than state-of-the-art (SOTA), the authors of [44] did not test their approach in a low-precision environment such as HE.

**Motivation for training polynomial networks.** Our principal motivation in this work is to enable E2E secure inference over HE (in contrast to client-aided solutions, see App. B). This allows data owners to use third-party cloud environments while complying with regulations such as GDPR [21] and HIPAA [13]. An example problem setting is provided in Fig. 1. For brevity, we only state that HE-friendly CNNs should be polynomial and refer the interested reader to App. A for more details about HE.

**Related art.** HE-based secure prediction solutions should be both efficient and accurate. In App. B we provide a detailed comparison of SOTA HE-based PPML solutions. Here, we only summarize that the most efficient solutions today [40; 38] reported accuracy only for CIFAR-10/100, and [38] mentions that they have not yet succeeded in training ResNet-18 over ImageNet. In contrast, as mentioned above, the most accurate attempt to run HE-friendly inference is that of [44], who did not implement their solution under HE. Follow-up works, e.g., [24], claimed that due to the large polynomial degree (e.g., more than 10K), the solution latency when evaluated under HE is large, and [43] showed practicality under HE only for ResNet-20 on CIFAR-10. Furthermore, our experiments (see supplementary material) were not able to reproduce the results of [44] for ResNet-50/150 over ImageNet, even when using 96-bit floating-point precision on plaintext. In conclusion, our work provides the first accurate and performant implementation of large polynomial CNNs on large datasets.

Figure 1: **(Motivation)** An E2E PPML solution for running CNNs over HE. The flow involves a client and a cloud server. The client **trains a polynomial (HE-friendly) CNN model**, either encrypts it or not, and uploads the model to the cloud. Then, the client requests the cloud to run this model on its behalf. For that, the client encrypts its private samples and uploads them to the cloud, which **processes the encrypted data using the (possibly encrypted) model** and returns the results to the client for decryption.

Figure 3: **(Solution)** Range-aware training: accuracy and ranges of ConvNeXt trained on CIFAR-10.

Figure 2: **(Problem)** Maximal error when using the \(p_{\alpha=7,10,14}\) polynomials of [44] to approximate \(\mathrm{ReLU}\) over different ranges (\(B\)). A small error is achieved through a small range or a large polynomial degree.
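The effect shown in Fig. 2 is easy to reproduce with a plain least-squares fit (used here in place of Remez for brevity; the ranges and degrees below are illustrative choices, not the exact configuration of [44]):

```python
# Max error of fixed-degree polynomial fits to ReLU over growing ranges B:
# for a fixed degree, the error grows with B (cf. Fig. 2).
import numpy as np
from numpy.polynomial import chebyshev as C

relu = lambda x: np.maximum(x, 0.0)

for B in (5.0, 10.0, 50.0, 100.0):
    x = np.linspace(-B, B, 20001)
    for deg in (8, 18, 32):
        fit = C.chebval(x, C.chebfit(x, relu(x), deg))
        err = np.abs(fit - relu(x)).max()
        print(f"B={B:6.1f}  deg={deg:2d}  max|p-ReLU| = {err:.4f}")
```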
## 3 Training Method

Our method for generating practical HE-friendly CNNs is based on three essential factors: a) minimizing the input ranges to activation layers and generating suitable polynomial approximations based on these ranges (Sect. 3.1); b) effectively handling SCs under HE (Sect. 3.2); and c) choosing an appropriate backbone architecture (Sect. 3.3).

### Input Range Tuning for Accurate Polynomial Approximation of Activation Functions

Approximating activation functions with polynomials accurately and efficiently, and applying them in deep CNNs, is a hard task. We identified that the main issue in approximating non-polynomial networks is that the input range for these polynomial approximations is not known in advance, often spanning hundreds of units [44]; consequently, the deviation of the approximated activation from the original one increases (see Fig. 2, Observation E.1). In practice, when dealing with large networks with multiple approximated layers, the error from the initial layers accumulates and eventually causes explosions and instability. Traditionally, to reduce the approximation error and thus the accumulated error, the network designer is forced to use high-degree polynomials [44]. However, as detailed below, we take a different approach, in which we reduce the polynomial degree by reducing the input range of every polynomial. This reduces the accumulated error and hence the chance of a network explosion.

Alg. 1 provides a high-level overview of our method. Let \(\mathcal{M}\) be a pre-trained non-polynomial model and \(\mathrm{NPL}\) be the ordered list, of length \(L\), that contains the non-polynomial layers of \(\mathcal{M}\). Let \(c_{i}=|\,\mathrm{NPL}[i]|\) be the number of neurons at layer \(i\) in \(\mathrm{NPL}\), \(\mathbf{x}^{i}\) be the vector input of that layer, and \(d\) be the polynomial degree. The first phase of the algorithm involves adding a novel regularization term, the _range loss_ (\(rl\)), to \(\mathcal{M}\)'s original objective function. This loss term aims to reduce the range of the inputs to the \(\mathrm{NPL}\) layers around the value of 0, and can be written as \(rl=\|(\|\mathbf{x}^{i}\|_{p})_{0\leq i<L}\|_{q}\), where we often set \(p=\infty\) and \(q\in\{1,2,\infty\}\). The new loss function for input \((X,y)\) is defined as \(loss(\mathcal{M})=CE(\mathcal{M}(X),y)+w\cdot rl\), where CE is the cross-entropy loss of the model. When using the \(L_{1}\) norm for \(rl\) and when the size of \(\mathrm{NPL}\) increases, the range-loss term may become more significant than the original CE loss, which is why we introduce a weight \(w\) to balance the two terms. In Step 2, the algorithm fine-tunes \(\mathcal{M}\) using the new loss function. This phase ends when the loss is minimized, at which point the activation functions should expect inputs in a minimal range while the model preserves its performance. Fig. 3 demonstrates the effectiveness of this procedure.
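A minimal PyTorch sketch of this first phase is given below. The toy model, the hook-based bookkeeping, and the choices \(p=\infty\), \(q=1\) are our illustrative assumptions, not the authors' exact implementation:

```python
# Range loss rl = ||(||x^i||_inf)_i||_1 over the NPL layers, collected with
# forward hooks and added to the cross-entropy loss (Alg. 1, Steps 1-2).
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy stand-in for a pre-trained CNN
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)

act_ranges = []                             # ||x^i||_inf per NPL layer

def range_hook(module, inputs, output):
    act_ranges.append(inputs[0].abs().amax())

for m in model.modules():
    if isinstance(m, nn.ReLU):              # the NPL layers
        m.register_forward_hook(range_hook)

criterion, w = nn.CrossEntropyLoss(), 1e-3  # w balances CE and rl
X, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))

act_ranges.clear()
logits = model(X)
rl = torch.stack(act_ranges).sum()          # q = 1 aggregation over layers
loss = criterion(logits, y) + w * rl
loss.backward()                             # gradients flow through rl too
print(f"loss = {loss.item():.4f}, rl = {rl.item():.2f}")
```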
In the second phase of Alg. 1 (Steps 3-4), the framework uses **empirical analysis** to estimate the input ranges per layer, \(([x^{i}_{min},x^{i}_{max}])_{0\leq i<L}\), with confidence level \(\alpha\). This is done by estimating the input range of each activation function on a sampled subset of the training data that has not been used for training or validation. This stage takes into account the error generated by the approximation of the previous layer, i.e., it pre-bounds the error by \(e_{i}\) and requires that, in the last step, the approximation be bounded by \(|p_{i}(x)-f_{i}(x)|<e_{i}\). In the last phase (Steps 5-6), we replace the original activation functions with polynomial approximations, using, e.g., Remez or the faster but less accurate least-squares polynomial fit. Each activation layer is replaced by a separate polynomial designed for the estimated range. The output of the algorithm is an HE-friendly model \(\mathcal{M}_{HE-f}\). However, since the polynomial activation layers provide only an approximation of the original activations, the accuracy of the model normally decreases. Therefore, we added Step 6 to fine-tune the model for a few more epochs with the added \(rl\) term until the desired performance is achieved.

### Efficiently Handling Skip-Connections in HE

CKKS and other HE schemes have a limit on the number of multiplications that can be performed on a ciphertext, known as the "multiplication chain index" (a.k.a. modulus chain index) or \(\mathrm{CIdx}\). This limit is set by the client to achieve the desired level of security and performance [3]. Every ciphertext starts with a \(\mathrm{CIdx}\) of 0. After each multiplication of two ciphertexts with \(\mathrm{CIdx}\) values \(x\) and \(y\), the result has a \(\mathrm{CIdx}\) of \(\max(x,y)+1\). Once a ciphertext's \(\mathrm{CIdx}\) reaches the limit, it can no longer be multiplied, unless a costly \(\mathrm{Bootstrap}\) operation is performed to reduce its \(\mathrm{CIdx}\), or even reset it back to \(0\). One design goal when generating an HE-based solution is to minimize the number of \(\mathrm{Bootstrap}\) invocations. Hereafter, we define the term _multiplication depth_ as the longest chain of sequential multiplication operations in an HE-evaluated function. As noted above, longer chains, i.e., higher multiplication depths, result in more bootstrapping operations. Using these definitions, we observe that

**Observation 3.1**.: _Given a Skip-Connection layer \(SC_{f}(x)=x+f(x)\), where \(f\) is a combination of some layers, when running under HE, \(\mathrm{CIdx}\left(SC_{f}(x)\right)\in\{\mathrm{CIdx}(x),\mathrm{CIdx}(f(x))\}\), and when \(\mathrm{CIdx}(x)\neq\mathrm{CIdx}(f(x))\) the SC implementation may increase the overall multiplication depth of the network by \(|\mathrm{CIdx}(x)-\mathrm{CIdx}(f(x))|\)._

In practice, the cost of \(SC_{f}(x)\) can be even higher, because the input \(x\) or \(f(x)\) may need to go through some transformation before the addition. This is the case with the HElayers SDK, which requires the inputs to an \(\mathrm{ADD}\) operator to use the same ciphertext parameters. Specifically, it applies transformations \(g,h\) to \(x\) and \(f(x)\), respectively, and replaces \(S_{f}(x)\) with the operator \(S_{f,g,h}(x)=g(x)+h(f(x))\). In this case, for Observation 3.1, we need to consider \(\mathrm{CIdx}(g(x))\) instead of \(\mathrm{CIdx}(x)\) and \(\mathrm{CIdx}(h(f(x)))\) instead of \(\mathrm{CIdx}(f(x))\).
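Observation 3.1 can be illustrated with a few lines of level bookkeeping; the sketch below is a toy model of ciphertext chain indices, not the HElayers API:

```python
# Toy CIdx bookkeeping: the skip branch x (level 0) must meet f(x) at
# max(CIdx(x), CIdx(f(x))), so a mismatch wastes |CIdx(x) - CIdx(f(x))| levels.
from dataclasses import dataclass

@dataclass
class Ct:
    cidx: int = 0                   # multiplication chain index

def mul(a: Ct, b: Ct) -> Ct:        # ciphertext-ciphertext multiplication
    return Ct(max(a.cidx, b.cidx) + 1)

def add(a: Ct, b: Ct) -> Ct:        # addition requires aligned levels
    return Ct(max(a.cidx, b.cidx))

def f(x: Ct, depth: int) -> Ct:     # a block consuming `depth` levels
    for _ in range(depth):
        x = mul(x, x)
    return x

x = Ct()
out = add(x, f(x, depth=4))
print(out.cidx)                     # 4: the cheap identity branch is dragged
                                    # up to the level of the deep branch
```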
Given the latency costs associated with implementing SCs under HE, we propose two methods for the placement and removal of SCs, where our goal is to maintain accuracy while improving latency. Note that this was not required previously for commonly used networks, where addition is free. However, with our methods we can offer new network designs that are better suited to the HE world.

```
Require: A pre-trained CNN model (\(\mathcal{M}\)), a training set (\(\mathcal{DS}_{train}\)), a small disjoint set (\(\mathcal{DS}_{trainRange}\)), and a positive integer degree (\(d\)).
Ensure: A trained HE-friendly model (\(\mathcal{M}_{HE-f}\)).
1: Add a regularization range-loss term \(rl\) to loss(\(\mathcal{M}\)).
2: Fine-tune \(\mathcal{M}\) over \(\mathcal{DS}_{train}\) until the input ranges to the \(\mathrm{NPL}\) layers are small enough and the network performance is satisfactory. The resulting model is \(\mathcal{M}\)'.
3: Evaluate \(\mathcal{M}\)' over \(\mathcal{DS}_{trainRange}\) and compute the pairs \((\min\mathbf{x}^{i},\max\mathbf{x}^{i})_{0\leq i<L}\) per sample.
4: Using the above pairs, estimate the range \(([x^{i}_{min},x^{i}_{max}])_{0\leq i<L}\) of the values of \(\mathbf{x}^{i}\) with confidence level \(\alpha\).
5: Replace the functions \(f_{i}(x)\) of the \(\mathrm{NPL}\) layers with polynomial approximations \(P_{i}(x)\) of degree \(d\) over the estimated ranges \([x^{i}_{min},x^{i}_{max}]\). The new model is \(\mathcal{M}_{HE-f}\).
6: Fine-tune \(\mathcal{M}_{HE-f}\) over \(\mathcal{DS}_{train}\) until convergence.
7: return \(\mathcal{M}_{HE-f}\).
```
**Algorithm 1** Training HE-friendly CNNs

Figure 4: \(\mathrm{ReLU}\) (red) versus \(\mathrm{GELU}\) (blue). Panel (a): Maximum error \(|p(x)-f(x)|\) for different degrees of the polynomial approximation \(p(x)\) of \(\mathrm{ReLU/GELU}\) over different ranges (x-axis). Panel (b): A degree-4 polynomial approximation of \(\mathrm{ReLU/GELU}\). In both (a) and (b), \(\mathrm{GELU}\) is better approximated. Panel (c): Error range [min, max] (y-axis) after \(l\) training epochs (x-axis). Panel (d): Model accuracy (y-axis) after training ConvNeXt with \(\mathrm{ReLU/GELU}\) for a number of epochs (x-axis). In the initial 10 epochs, max-pooling and LayerNorm are substituted with HE-friendly components (mean-pooling and BatchNorm). Our range-aware training technique is then applied from epochs 10 to 50. Finally, at epoch 50, the activations are replaced by polynomials. While the ranges exhibit similarity, only \(\mathrm{GELU}\) can be precisely approximated.

**Removing Skip-Connections.** Our first method aims to generate a CNN without SCs while maintaining near-SOTA performance. Removing SCs outright results in poor performance, as previously demonstrated in the study of He et al. [31], because of the gradient flow across layers. Therefore, in our method we start by training a CNN to achieve SOTA performance while using SCs. We then gradually eliminate them by replacing \(S_{f,g,h}(x)\) with a new layer \(S^{\prime}_{f,g,h,a}(x)=a\cdot g(x)+h(f(x))\), and continue training for \(N\) more epochs. At every epoch index \(0<E\leq N\), we set \(a=(1-\frac{E}{N})\). After \(N\) epochs, \(a=0\) and the SCs are removed. Finally, we continue training for several more epochs.
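A minimal PyTorch sketch of this removal schedule follows; the block structure and training placeholder are illustrative assumptions, with \(g\) and \(h\) taken as identities, so that only the annealed factor \(a=1-E/N\) matches the text:

```python
# Gradual skip-connection removal: S'(x) = a*x + f(x), with a annealed from
# 1 to 0 over N epochs, after which the network is effectively skip-less.
import torch.nn as nn

class AnnealedSkipBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
        )
        self.a = 1.0                      # skip scaling factor

    def forward(self, x):
        return self.a * x + self.f(x)

model = nn.Sequential(AnnealedSkipBlock(8), AnnealedSkipBlock(8))
N = 10                                    # annealing epochs
for epoch in range(1, N + 1):
    for m in model.modules():
        if isinstance(m, AnnealedSkipBlock):
            m.a = 1.0 - epoch / N         # a = (1 - E/N); a = 0 at E = N
    # ... one ordinary training epoch would run here ...
print([m.a for m in model.modules() if isinstance(m, AnnealedSkipBlock)])
```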
**Skip-Connection Placement.** When SCs are required, we propose Alg. 2 for designing efficient HE-friendly CNNs. The algorithm takes as input a network architecture \(\mathcal{A}\) and an HE network analyzer (\(\mathrm{HEAnalyzer}\)), such as the optimizer of \(\mathrm{HElayers}\). We start by removing all SC layers and feeding the new network \(\mathcal{A}^{\prime}\) to the analyzer. For every pair of layers \(i<j\) in the network, the analyzer computes the latency costs of adjusting the outputs \(x\) and \(f(x)\) of layers \(i\) and \(j\), using transformations \(g\) and \(h\), respectively, together with the required \(\mathrm{Bootstrap}\) operations. It returns the results in a matrix \(costsMatrix\) of size \(L\times L\), where \(L\) is the number of layers in \(\mathcal{A}^{\prime}\); for \(i\geq j\), \(costsMatrix[i][j]=\infty\). Using \(costsMatrix\), we can now place SCs so as to minimize the latency overhead while maintaining accuracy. One possible heuristic is to start from layer \(i=0\), find \(j=\arg\min_{j}\left(costsMatrix[i][j]\right)\), place an SC between layers \(i\) and \(j\), and repeat with \(i=j+1\). Note that when adding an SC in a way that increases \(\mathrm{CIdx}(h(f(x)))\), the costs matrix should be re-computed. Alg. 2 already considers the bootstrapping costs discussed earlier, and is most likely to choose SCs for layers with \(\mathrm{CIdx}(g(x))\leq\mathrm{CIdx}(h(f(x)))\).

### Choosing the Right Backbone

Many variations of ResNet backbones have been proposed over the years [47; 70; 71; 34; 67]. While most prior HE-related works use the vanilla ResNet [40; 44; 4] (see App. B), its superiority has not been proven or explored in the context of HE. Surprisingly, even seemingly insignificant design choices such as the choice of backbone can have a large performance impact when working with encrypted data. Therefore, we decided to study and compare three ResNet backbones: the original ResNet [31], ConvNeXt [47], and DenseNet [34].

**ConvNeXt.** ConvNeXt is a modern variant of ResNet that achieves the highest performance [69]. It involves two relatively minor design decisions that make it attractive when working over encrypted data: 1) it has a reduced number of activations; 2) it uses \(\mathrm{GELU}\). Every ConvNeXt block includes only one activation function, as opposed to three in ResNet blocks. Although this provides only a relatively small accuracy improvement (\(0.7\%\)), it has crucial implications in HE, since a smaller number of non-polynomial components reduces the overall multiplication depth by a factor of around \(3\), making it easier to achieve an HE-friendly network. ConvNeXt uses the \(\mathrm{GELU}\) [32] activation function instead of \(\mathrm{ReLU}\). Fig. 4, Panels (a) and (b), shows that \(\mathrm{ReLU}\) is much more difficult to approximate than \(\mathrm{GELU}\), specifically in the area near 0, where \(\mathrm{ReLU}\) is not smooth. Panels (c) and (d) show that using \(\mathrm{GELU}\) is more robust, allowing us to reduce the polynomials' degree. The result is that the overall replacement process is easier, and the aggregated multiplication depth is smaller.

Figure 5: Latency breakdown for running ResNet-50 over ImageNet using \(\mathrm{HElayers}\) 1.5.2. Here, \(g(x)\) is the SC adaptation of \(\mathrm{HElayers}\). _Other_ refers to BN, FC, and AVG-Pool layers.

**DenseNet.** The DenseNet network uses a unique type of SCs: each layer receives the outputs of all preceding layers as inputs, and its output is used as input for all subsequent layers. While this architecture has advantages in terms of optimization, it increases the number of bootstrapping operations, since it forces bootstrapping after almost every DenseNet block. Thus, it is not recommended for HE-based PPML solutions.
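The approximability gap between \(\mathrm{ReLU}\) and \(\mathrm{GELU}\) discussed above (Fig. 4) can be reproduced in a few lines; the range and the degree 4 (the value used in our ConvNeXt-Tiny experiments) are illustrative choices:

```python
# Degree-4 least-squares fits over the same range: GELU, being smooth near 0,
# is approximated noticeably better than ReLU (cf. Fig. 4, panels (a)-(b)).
import numpy as np
from numpy.polynomial import chebyshev as C

def relu(x):
    return np.maximum(x, 0.0)

def gelu(x):  # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2/np.pi) * (x + 0.044715 * x**3)))

x = np.linspace(-4.0, 4.0, 10001)
for name, fn in (("ReLU", relu), ("GELU", gelu)):
    fit = C.chebval(x, C.chebfit(x, fn(x), 4))
    print(f"{name}: max|p - f| = {np.abs(fit - fn(x)).max():.4f}")
```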
## 4 Experiments and Results

To test our methods, we performed a series of experiments, which we report next. For brevity, we refer to ResNet-XX as RNXX. All training experiments used the PyTorch framework.

**HE-friendly large-scale CNNs.** We start by evaluating the accuracy of our proposed training method for HE-friendly models on three datasets: CIFAR-10 [39] and the large-scale datasets ImageNet [19] and Places-365 [73]. All datasets were evaluated at a resolution of \(224\times 224\times 3\). The accuracy results, presented in Tab. 2, were analyzed in three stages: (1) the original, non-HE-friendly model; (2) the original model after replacing max-pooling with mean-pooling, but with non-polynomial activations; and (3) the proposed HE-friendly model with polynomial activations. Additionally, we compared our results to those reported by [40], who evaluated a ResNet-56 model on CIFAR-10 using images of size \(32\times 32\times 3\). While they also report on other networks, we compared our results to their ResNet-56, as it is the closest to ResNet-50. The smaller image size used by Lee et al. [40] may have contributed to the lower accuracy observed in their experiments. During the range-minimization phase (Alg. 1, Step 2), the input ranges to the activation layers are large, which leads to the \(rl\) loss being significantly higher than the original model loss. To this end, we set \(w\) to be in the ranges 0.0001-0.001 and 0.01-0.1, before (Step 2) and after (Step 5), respectively, replacing the \(\mathrm{ReLU}\) activations with polynomials; the exact value depends on the dataset. The polynomial degree used in these experiments is set to 18, which provides a good approximation for small ranges of around \([-10,10]\), as in our case. Tab. 2 shows that the HE-friendly models trained by our method preserved the original accuracy when applied on CIFAR-10 and Places-365, and reached 94% accuracy when trained on ImageNet for ResNet-101 and 96% for ConvNeXt-T.

**HE-friendly Skip-Connections.** Tab. 3 demonstrates the effect of our method of carefully removing SCs on secure classification latency. For that, we use ResNet-50 and set the polynomial activation degree to 2, 8, 16, or 18. We see that using HE-friendly skip-less models can save up to \(75\%\) of the bootstrapping operations as well as provide a significant speedup. Recall that bootstrapping is a critical bottleneck in inference under HE; see App. D. When it comes to accuracy, the situation is more complicated. We used our training method with only \(18\)-degree polynomial activation functions. We found that the accuracy degradation depends on whether the network starts from pre-trained weights. Without pre-training, we observe that HE-friendly ResNet-50 and skip-less HE-friendly ResNet-50 achieve \(93.72\%\) and \(93.21\%\), respectively, where the total degradation is relatively small (\(0.53\%\)). In contrast, when using a pre-trained model, the impact on accuracy is more significant; see Tab. 1. This phenomenon is somewhat predictable, since removing the SCs changes the network's dynamics, thus diminishing the impact of the pre-training data.

**HE-friendly Backbones.** We tested different settings for CNNs, including ResNets of various sizes (18, 50, 101), ResNet-50 without SCs, ResNet-50 with adaptive removal of SCs, and two variations of ConvNeXt-Tiny, which is equivalent to ResNet-50: one with a polynomial degree of 4 and the other with a polynomial degree of 8. All of the experiments were applied on CIFAR-10 at a resolution of \(224\times 224\times 3\). Results are detailed in Tab. 1.
We find that the ConvNeXt-Tiny model with \(4\)-degree polynomial activations has a significantly lower multiplication depth compared to the ResNet-50 model with 18-degree polynomial activations. Despite similar numbers of blocks, FLOPs, and accuracy, the number of \(\mathrm{Bootstrap}\) operations in ConvNeXt-Tiny is reduced from \(7{,}712\) to \(1{,}360\). This improvement is significant, as the bootstrap operation is a major bottleneck in the inference time of deep CNNs, as shown in the profiling provided in App. D. Results are provided in Tab. 1. When implementing an HE-friendly ConvNeXt, we replaced the LayerNorm layers with BatchNorm layers, which resulted in some degradation of accuracy. We also assume that this replacement negatively impacts the model's transfer-learning capabilities. For a comprehensive overview of the experimental setup we use for evaluation over HE via the \(\mathrm{HElayers}\) SDK, please see Appendix C.

A Comparison with the SOTA. Lee et al. [40] were the first to report promising latency results for secure evaluation of ResNet-20/110. For example, it takes only 37 minutes to run ResNet-20 on a single CPU thread. However, this implementation is tailored to datasets such as CIFAR-10/100 with small images of size \(32\times 32\). Our approach relies on a different objective function, training low-degree large polynomial CNNs, which allows us to perform secure prediction over large images. By using the HElayers SDK we can support larger images that do not fit within a single ciphertext and provide a more generic solution.

## 5 The Potential of HE-friendly Foundation Models for Transfer Learning

Our method enables the training of large-scale polynomial CNNs on unencrypted data, which can then be leveraged for two new capabilities in secure transfer learning: (i) **Zero-Shot Learning (ZSL) as an alternative to training over encrypted data:** direct training of polynomial models under HE poses two main challenges. First, certain training techniques such as batch normalization, gradient clipping, and CE loss, which are non-polynomial, are not natively supported under HE. Second, the solution's latency increases linearly with the number of training iterations. Hence, inspired by recent advancements in foundation models [8], we propose training an **HE-friendly foundation model** on large-scale unencrypted data. This model can be applied to unseen encrypted data for downstream tasks without additional training, which avoids the limitations of polynomial training. (ii) **Polynomial pre-trained models for transfer learning:** previous studies have utilized frozen pre-trained non-polynomial models as feature extractors, followed by secure training of logistic regression on top of these representations [41]. This technique has two major drawbacks. First, the non-polynomial pre-trained model cannot be encrypted via HE and cannot be applied over encrypted data at inference, as it is not polynomial. Our method from Sect. 3 opens the door to solving this problem by employing polynomial (HE-friendly) pre-trained models. Second, when considering fine-tuning over encrypted data, instead of utilizing pre-trained models as **frozen** feature extractors, our technique allows E2E secure fine-tuning. This approach allows **optimizing the weights** of the pre-trained polynomial model.

Experimental Results - Polynomial CLIP. We focus on a specific model that uses contrastive learning [15; 54; 29] - CLIP [58].
Fig. 6 illustrates the training and inference flows of using an HE-friendly CLIP model. A Service Provider (SP) starts from a pre-trained CLIP model, modifies its visual encoder to become polynomial, and trains the network. To use the trained model, the service provider shares the text encoder with the clients and uses the visual encoder locally. We consider our polynomial visual encoder the **first polynomial foundation model**. Despite its relatively small size (23M parameters), it is one of the largest polynomials trained: it was established on 400M (image, text) pairs and adapted to become a polynomial encoder through the ImageNet-1K dataset. We chose RN50-CLIP because its image encoder is based on a variation of ResNet-50 that we have managed to transform into an HE-friendly model. As in previous HE studies, which offloaded the softmax computation to the client side, we also calculated the output attention-pool layer and the cosine similarity on the client side. Lacking the massive training data used by CLIP, we adapted the original model into an HE-friendly model by fine-tuning it on ImageNet, with the prompts provided by the authors of CLIP. We then evaluated the model on four datasets that were used by [58]: CIFAR-10, CIFAR-100, OxfordPet and STL10. The results are shown in Tab. 5, where we see that even though the HE-friendly adaptation process relied on a low-resource training set, its prediction capabilities on unseen data remain comparable across various tasks. Moreover, some degradation is expected, as the authors of CLIP have noted that pre-trained models trained on ImageNet achieve lower transfer scores compared to CLIP-based models (see [58], Fig. 12). This is a first step towards HE zero-shot transfer and even few-shot learning on encrypted images. Furthermore, similar to CLIP, linear probing on top of our polynomial model leads to improved accuracy, as observed in the second part of Tab. 5. In contrast to previous adaptations of pre-trained models in HE, our approach enables complete encryption of the entire model, as mentioned in the preceding paragraph.

## 6 Conclusions

The question of whether running real-size HE-based polynomial CNNs is possible and practical has been studied by many researchers over the last decade. However, most studies used some form of relaxation, such as client-aided protocols or toy networks and toy datasets, to achieve this goal. We answer the above question affirmatively, working on real-size datasets with standard-size images of \(224\times 224\) and modern CNNs such as ResNet and ConvNeXt. This achievement was unlocked by our method to control and minimize the input ranges to the non-polynomial activations. We demonstrate that evaluating these networks under HE is practical. Specifically, we run ResNet-18/50/101/152 secure prediction in 7, 31, 57, and 75 minutes, respectively, on a GPU with 128-bit security. In addition, we discuss two insights that can further improve secure prediction performance, namely, special handling of SCs and working with different backbones. For example, we explain the benefits of using ConvNeXt in the context of HE. Finally, our research opens the door to alternatives to expensive training under HE. This can be achieved through the introduction of ZSL-based methods, or by leveraging new transfer-learning capabilities.

Limitations. Our work focuses on training modern polynomial CNNs for HE, but we have not yet evaluated our method on transformers.
Transformers face barriers with the Softmax (attention) operation and layer normalization, which are not easily approximated. We plan to address these in future research. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **CLIP** & **CIFAR** & **CIFAR** & **STL10** & **Pets** \\ **Type** & **10** & **100** & [17] & [56] \\ \hline \multicolumn{5}{|c|}{Zero-Shot Classification} \\ \hline RN50 & 75.6 & 41.6 & 94.3 & 85.4 \\ \hline **Poly** & 73.3 & 38.4 & 90.7 & 73.9 \\ \hline \multicolumn{5}{|c|}{Linear Probing} \\ \hline RN50 & 88.7 & 70.3 & 96.6 & 88.2 \\ \hline **Poly** & 89.28 & 67.71 & 96.96 & 90.62 \\ \hline \end{tabular} \end{table} Table 5: ZSL and linear probe performance of HE-friendly RN50-CLIP fine-tuned on ImageNet and evaluated on various datasets. Figure 6: HE-friendly CLIP training + secure ZSL.
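To summarize the deployment flow of Fig. 6 in code form, here is a schematic sketch of the client/server split described in Sect. 5. All names are placeholders rather than a real API: `he` stands in for an arbitrary CKKS-style HE library, and the encoders are assumed to be given as callables.

```python
import numpy as np

# --- Client side (plaintext) ---------------------------------------------
def client_prepare(image, class_prompts, text_encoder, he):
    # Text embeddings are computed in the clear with the shared text encoder.
    text_emb = np.stack([text_encoder(p) for p in class_prompts])
    return he.encrypt(image), text_emb

# --- Service provider (operates only on ciphertexts) ----------------------
def server_encode(enc_image, poly_visual_encoder):
    # The polynomial (HE-friendly) visual encoder runs fully under HE;
    # the attention-pool head is deferred to the client, as in the text.
    return poly_visual_encoder(enc_image)

# --- Client side again -----------------------------------------------------
def client_classify(enc_features, text_emb, attention_pool, he):
    feats = attention_pool(he.decrypt(enc_features))
    feats = feats / np.linalg.norm(feats)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return int(np.argmax(text_emb @ feats))  # cosine similarity per class
```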
2301.08419
Scalable Quantum Error Correction for Surface Codes using FPGA
A fault-tolerant quantum computer must decode and correct errors faster than they appear. The faster errors can be corrected, the more time the computer can do useful work. The Union-Find (UF) decoder is promising with an average time complexity slightly higher than $O(d^3)$. We report a distributed version of the UF decoder that exploits parallel computing resources for further speedup. Using an FPGA-based implementation, we empirically show that this distributed UF decoder has a sublinear average time complexity with regard to $d$, given $O(d^3)$ parallel computing resources. The decoding time per measurement round decreases as $d$ increases, a first time for a quantum error decoder. The implementation employs a scalable architecture called Helios that organizes parallel computing resources into a hybrid tree-grid structure. We are able to implement $d$ up to 21 with a Xilinx VCU129 FPGA, for which an average decoding time is 11.5 ns per measurement round under phenomenological noise of 0.1\%, significantly faster than any existing decoder implementation. Since the decoding time per measurement round of Helios decreases with $d$, Helios can decode a surface code of arbitrarily large $d$ without a growing backlog.
Namitha Liyanage, Yue Wu, Alexander Deters, Lin Zhong
2023-01-20T04:23:00Z
http://arxiv.org/abs/2301.08419v2
# Scalable Quantum Error Correction for Surface Codes using FPGA

###### Abstract

A fault-tolerant quantum computer must decode and correct errors faster than they appear. The faster errors can be corrected, the more time the computer can do useful work. The Union-Find (UF) decoder is promising with an average time complexity slightly higher than \(O(d^{3})\). We report a distributed version of the UF decoder that exploits parallel computing resources for further speedup. Using an FPGA-based implementation, we empirically show that this distributed UF decoder has a sublinear average time complexity with regard to \(d\), given \(O(d^{3})\) parallel computing resources. The decoding time per measurement round decreases as \(d\) increases, a first for a quantum error decoder. The implementation employs a scalable architecture called Helios that organizes parallel computing resources into a hybrid tree-grid structure. Using Xilinx's cycle-accurate simulator, we present cycle-accurate decoding times for \(d\) up to 15, under the phenomenological noise model with \(p=0.1\%\). We are able to implement \(d\) up to 7 with a Xilinx ZCU106 FPGA, with an average decoding time of 120 ns per measurement round. Since the decoding time per measurement round of Helios decreases with \(d\), Helios can decode a surface code of arbitrarily large \(d\) without a growing backlog.

## 1 Introduction

The high error rates of quantum devices pose a significant obstacle to the realization of a practical quantum computer. As a result, the development of effective quantum error correction (QEC) mechanisms is crucial for the successful implementation of a fault-tolerant quantum computer. One promising approach for implementing QEC is the use of surface codes [1, 2, 3], in which the information of a single qubit (called a logical qubit) is redundantly encoded across many physical data qubits, with a set of ancillary qubits interacting with the data qubits. By periodically measuring the ancillary qubits, one can detect and potentially correct errors in physical qubits. Once the presence of errors has been detected through the measurement of ancillary qubits, a classical algorithm, or decoder, guesses the underlying error pattern based on the measurement results. The faster errors can be corrected, the more time a quantum computer can spend on useful work. Given the error rates of state-of-the-art qubits, very large surface codes (\(d>25\)) are necessary to achieve fault-tolerant quantum computing [2, 4, 5]. See §2 for more background.

As surveyed in §3, previously reported decoders capable of decoding errors as fast as they are measured, or backlog-free, either exploit limited parallelism [6, 7] or sacrifice accuracy [8, 9]. The largest \(d\) reported for any backlog-free implementation is 5 [6], based on a design that is physically infeasible beyond \(d=5\). In this paper we report a _distributed Union-Find (UF) decoder_ (§4) and its FPGA implementation called _Helios_ (§5). Given \(O(d^{3})\) parallel resources, our decoder achieves sublinear average time complexity according to empirical results for \(d\) up to 15, the first to the best of our knowledge. Notably, adding more parallel resources will not reduce the time complexity of the decoder, due to the inherent nature of error patterns. Our decoder is a distributed design of, and logically equivalent to, the UF decoder first proposed in [10]. We implement the distributed UF decoder with Helios, a scalable architecture for organizing the parallel computation units.
Helios is the first architecture of its kind that can scale to arbitrarily large surface codes by exploiting parallelism at the vertex level of the model graph. In §6, we report experimental validations of the distributed UF decoder and Helios with a ZCU106 FPGA board [11], which is capable of running surface codes up to \(d=7\). For \(d=7\) the decoder has an average decoding time of 120 ns per measurement round, faster than any existing decoder. We validate our design for surface codes of \(d>7\) by using the Xilinx Vivado cycle-accurate simulator [12]. These validations successfully demonstrate, for the first time, a decoder design with decreasing average time per measurement round as \(d\) increases. This provides evidence that the decoder can scale to arbitrarily large surface codes without a growing backlog.

## 2 Background

### Qubit and Errors

The qubit is the basic unit of quantum computing and is represented as \(|\psi\rangle=\alpha|0\rangle+\beta|1\rangle\). Here \(\alpha\) and \(\beta\) are complex numbers such that \(|\alpha|^{2}+|\beta|^{2}=1\), and \(|0\rangle\) and \(|1\rangle\) are the basis states of a qubit. Unlike classical bits, qubits are highly susceptible to errors. A qubit can unintentionally interact with its surroundings, resulting in a change of its quantum state. Even the latest quantum computers still have an error rate of \(10^{-3}\) [4], which is significantly worse than classical computers, whose error rates are lower than \(10^{-18}\). In contrast, a useful quantum application requires an error rate of \(10^{-15}\) or below, necessitating error correction. Errors in qubits can be modeled as bit flip errors and phase flip errors. A bit flip is marked by the \(X\) operator, i.e., \(X|\psi\rangle=\beta|0\rangle+\alpha|1\rangle\), while a phase flip is marked by the \(Z\) operator, i.e., \(Z|\psi\rangle=\alpha|0\rangle-\beta|1\rangle\).

### Error Correction and Surface Code

Quantum Error Correction (QEC) is more challenging than classical error correction due to the nature of quantum bits. First, qubits cannot be copied to achieve redundancy, due to the no-cloning theorem. Second, the value of a qubit cannot be directly measured, as measurements perturb the state of qubits. Therefore, QEC is achieved by encoding the _logical state_ of a qubit as a highly entangled state of many physical qubits. Such an encoded qubit is called a _logical_ qubit. The surface code is a widely used error correction code for quantum computing due to its high error correction capability and its ease of implementation, requiring only connectivity between adjacent qubits. A distance-\(d\) surface code is a topological code made out of a \((2d-1)\times(2d-1)\) array of qubits, as shown in Figure 1. A key feature of surface codes is that a larger \(d\) can exponentially reduce the rate of logical errors, making them advantageous. For example, even if the physical error rate is 10 times below the threshold, \(d\) should be greater than 17 to achieve a logical error rate below \(10^{-10}\) [2]. A surface code contains two types of qubits, namely data qubits and ancilla qubits. The data qubits collectively encode the _logical state_ of the qubit. The ancilla qubits (called X-type and Z-type) entangle with the data qubits, and by periodically measuring the ancilla qubits, physical errors in all qubits can be discovered and corrected. An X error occurring in a data qubit will flip the measurement outcome of the Z ancilla qubits connected with the data qubit, and a Z error will likewise flip the X ancillas.
Such a measurement outcome is called a _non-trivial measurement value_. Because ancilla qubits themselves could also suffer from physical qubit errors, multiple rounds of measurements are necessary. Figure 2 shows some example physical qubit errors occurring in a surface code and how they are detected by ancilla qubits. We show X and Z errors separately because they can be dealt with independently in the same way.

The outcomes from these multiple rounds of measurements of ancilla qubits constitute a _syndrome_. A syndrome can be conveniently represented by a graph called the _decoding graph_, in which a vertex represents a measurement outcome of an ancilla and an edge represents a data qubit. Vertices with non-trivial measurement outcomes are specially marked. The weight of an edge is determined by the probability of error in the corresponding data qubit or measurement. For a distance-\(d\) surface code, there are \(d\times(d-1)\) vertices in each layer. This decoding graph can be extended to three dimensions, in which multiple identical planar layers are stacked on each other. Each layer represents a round of measurement. The total number of rounds is usually the same as the distance of the surface code. Corresponding vertices in adjacent layers are connected by edges, which represent the probability of measurement error of the corresponding ancilla. That is, there are \(d\times d\times(d-1)\) vertices in this three-dimensional graph. Figure 2f shows the decoding graph for a syndrome from a \(d=3\) surface code.

Figure 1: (a): CSS surface code (\(d=3\)), a commonly used type of surface code. The white circles are data qubits and the black the Z-type and X-type ancillas. (b) and (c): Measurement circuits of Z-type and X-type ancillas. Excluding the ancillas on the border, each Z-type and X-type ancilla interacts with 4 adjacent data qubits.

Figure 2: (a) to (d): Various error patterns on a \(d=3\) surface code. X and Z mark the corresponding physical qubit errors. Ancillas reporting non-trivial measurements are shown in red. The red lines visualize error chains. (a) isolated X error; (b) isolated Z error; (c) error chain of three X errors; (d) error chain introducing a logical error, which has no non-trivial measurements. Note that even though (a) and (c) are different error patterns, they produce the same syndrome. (e) Error patterns spread across multiple measurement rounds. Here single X and Z errors can also spread across two rounds, and error chains can include measurement errors (indicated by 'M') as well. (f) Decoding graph, with vertices with non-trivial measurements marked red, for the error pattern in (e).

### Error Decoders

Given a syndrome, an error decoder identifies the underlying error pattern, which will be used to generate a correction pattern. As multiple error patterns can generate the same syndrome, the decoder has to make a probabilistic guess of the underlying physical error. The objective is that when the correction pattern is applied, the chance of the surface code entering a different logical state (i.e., a logical error) is minimized.

Metrics. The two important aspects of decoders are accuracy and speed. A decoder must correct errors faster than syndromes are produced to avoid a backlog. A faster decoder also allows more time for the quantum hardware to do actual useful work. The average decoding time per measurement round is a widely used criterion for speed. A decoder must make a careful tradeoff between speed and accuracy.
A faster decoder with lower accuracy requires a larger \(d\) to achieve any given logical error rate, which may require more computation overall.

Union-Find (UF) Decoder. The UF decoder is a fast surface code decoder design first described by Delfosse and Nickerson [10]. According to [13], it can be viewed as an approximation to the blossom algorithm that solves minimum-weight perfect matching (MWPM) problems. It has a worst-case time complexity of \(O(d^{3}\alpha(d))\), where \(\alpha\) is the inverse of Ackermann's function, a slowly growing function that is less than three for any practical code distance. Based on our analysis, it has an average-case time complexity slightly higher than \(O(d^{3})\).

```
Algorithm 1: Union-Find Decoder
input : A decoding graph G(V, E) with X (or Z) syndrome
output: A correction pattern

 1  % Initialization
 2  for each v ∈ V do
 3      if v is non-trivial then
 4          Create a cluster {v}
 5      end if
 6  end for
 7  while there is an odd cluster do
 8      % Growing
 9      F ← ∅
10      for each odd cluster C do
11          for each e = <u, v>, u ∈ C, v ∉ C do
12              if e.growth < e.w then
13                  e.growth ← e.growth + 1
14                  if e.growth = e.w then
15                      F ← F ∪ {e}
16                  end if
17              end if
18          end for
19      end for
20      % Merging
21      for each e = <u, v> ∈ F do
22          UNION(u, v)
23      end for
24  end while
25  Build correction within each cluster
```

Algorithm 1 describes the UF decoder. It takes a decoding graph \(\mathcal{G}(\mathbf{V},\mathbf{E})\) as input. Each edge \(e\in\mathbf{E}\) has a weight and a growth, denoted by \(e.w\) and \(e.\)growth, respectively. \(e.\)growth is initialized to 0, and the decoder may grow \(e.\)growth until it reaches \(e.w\). When that happens, we say the edge is _fully grown_. The decoder maintains a set of odd clusters, denoted by \(\mathcal{L}\). \(\mathcal{L}\) is initialized to include every \(\{v\}\) such that \(v\in\mathbf{V}\) is non-trivial (lines 2-6). Each cluster \(C\) keeps track of whether its cardinality is odd or even, as well as its root element. The UF decoder iterates over growing and merging the odd clusters until there are no more odd clusters (the **while** loop of Algorithm 1). Each iteration has two stages: Growing and Merging. In the _Growing_ stage, each odd cluster "grows" by increasing the _growth_ of the edges incident to its boundary. This process creates a set of _fully grown_ edges \(\mathcal{F}\) (lines 9-19). The Growing stage is the more time-consuming step, as it requires traversing all the edges on the boundary of all the odd clusters and updating the global edge table. Since the number of edges is \(O(d^{3})\), the UF decoder is not scalable for surface codes with large \(d\). In the _Merging_ stage, the decoder goes through each fully grown edge to merge the two clusters connected by the edge. When two clusters merge, the new cluster may become even. When there is no more odd cluster, the decoder finds a correction within each cluster and combines them to produce the correction pattern (line 25).
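As a companion to the listing above, here is a minimal Python sketch of the clustering loop (Growing and Merging) built on a standard union-find structure. It is our own rendering, not the paper's code: correction construction and the boundary vertices of a real decoding graph are omitted, so it assumes the overall syndrome parity is even, and the edge encoding (frozensets with integer weights) is an arbitrary choice.

```python
from collections import defaultdict

class UnionFindDecoder:
    def __init__(self, vertices, edges, syndrome):
        # edges: {frozenset({u, v}): weight}; syndrome: set of non-trivial vertices
        self.parent = {v: v for v in vertices}
        self.odd = {v: (v in syndrome) for v in vertices}  # parity, valid at roots
        self.weight = dict(edges)
        self.growth = {e: 0 for e in edges}

    def find(self, v):                        # root lookup with path halving
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def union(self, u, v):                    # merge clusters, XOR their parities
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[rv] = ru
            self.odd[ru] ^= self.odd[rv]

    def cluster(self):
        while any(self.odd[self.find(v)] for v in self.parent):
            fully_grown = []
            for e in self.growth:             # Growing: boundary edges of odd clusters
                u, v = tuple(e)
                ru, rv = self.find(u), self.find(v)
                if ru == rv or self.growth[e] >= self.weight[e]:
                    continue
                inc = int(self.odd[ru]) + int(self.odd[rv])  # one unit per odd side
                if inc:
                    self.growth[e] = min(self.weight[e], self.growth[e] + inc)
                    if self.growth[e] == self.weight[e]:
                        fully_grown.append(e)
            for e in fully_grown:             # Merging
                self.union(*tuple(e))
        clusters = defaultdict(set)           # all remaining clusters are even
        for v in self.parent:
            clusters[self.find(v)].add(v)
        return clusters

# Toy example: one weight-2 edge whose two endpoints are both non-trivial.
dec = UnionFindDecoder([0, 1], {frozenset({0, 1}): 2}, syndrome={0, 1})
print(dec.cluster())  # one even cluster containing {0, 1}
```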
## 3 Related Work

There is a large body of literature on fast QEC decoding, e.g., [14, 15, 16]. The most related are solutions that leverage parallel compute resources. Fowler [17] describes a method for decoding at the rate of measurement (\(O(d)\)). The proposed design divides the decoding graph among specialized hardware units arranged in a grid. Each unit contains a subset of vertices and can independently decode error chains contained within it. The design is based on the observation that large error patterns spanning multiple units are exponentially rare, so inter-unit communication is not frequently required. It, however, paradoxically assumes that the number of vertices per unit is "sufficiently large" and that a unit can find an MWPM for its vertices within half the measurement time on average. Not surprisingly, to date, no implementation or empirical data have been reported for this work. Our approach distributes computation to the vertex level and leverages the same observation that communication between distant vertices is infrequent.

NISQ+ [8] and QECOOL [9] parallelize computation at the ancilla level, where all vertices in the decoding graph representing measurements of one ancilla are handled by a single compute unit. This results in an increase in decoding time per measurement round as \(d\) increases. In contrast, we allocate a processing element per vertex, which results in decoding time per measurement round that decreases with \(d\), at the expense of the number of parallel units growing as \(O(d^{3})\). Furthermore, they both implement the same greedy decoding algorithm, which has much lower accuracy than the UF decoder used in this work. QECOOL has an accuracy that is approximately four orders of magnitude lower than that of a UF decoder [7], and NISQ+ ignores measurement errors, further lowering its accuracy below QECOOL's.

Skoric et al. [18] propose a method using measurement-round-level parallelism, in which a decoder waits for a large number of measurement rounds to be completed and then decodes multiple blocks of measurement rounds in parallel. With sufficient parallel resources, this method can achieve a rate of decoding faster than the rate of measurement. However, the latency of this approach grows with the number of measurement rounds the decoder needs to batch to achieve a throughput equal to the rate of measurement. In contrast, our approach exploits vertex-level parallelism and completes decoding of every \(d\) rounds of measurements with an average latency that grows sublinearly with \(d\).

Pipelining can be considered a special form of using compute resources in parallel, i.e., in different pipeline stages. AFS [7] is a UF decoder architected in three pipeline stages. The authors estimate the decoder will have a 42 ns latency for a \(d=11\) surface code, which is three times lower than what we report based on implementation and measurement. The authors assume specialized hardware capable of running at 4 GHz, and as a result, the decoding latency will be dominated by memory access. However, no implementation or cycle-accurate simulation is known for this decoder. Importantly, pipelining is limited in how much parallelism it can leverage: the number of pipeline stages. In contrast, the parallelism of our decoder grows with \(d^{3}\), which enables us to achieve a sublinear average-case latency.

LILLIPUT [6] is a three-stage look-up-table-based decoder similar to AFS. Look-up-table-based decoders can achieve fast decoding but are not scalable beyond \(d=5\), as the size of the look-up table grows as \(O(2^{d^{3}})\). For a \(d=7\) surface code with 7 measurement rounds, it would require a memory of \(2^{168}\) bytes, which is infeasible in any foreseeable future.

## 4 Distributed UF Decoder Design

Our goal in building a QEC decoder is scalability with the number of qubits.
As surface codes can exponentially reduce the logical error rate with respect to \(d\), larger surface codes with hundreds or even thousands of qubits are necessary for fault-tolerant quantum computing. Therefore, the average decoding time per measurement round should not grow with \(d\), to avoid an exponential backlog for larger \(d\).

We choose the UF decoder for two reasons. First, it has much lower time complexity than the MWPM algorithm. Although in general the UF decoder achieves lower decoding accuracy than MWPM decoders, it is as accurate in many interesting surface codes and noise models [13]. Second, the UF decoder maintains far less intermediate state, which makes it easier to implement in a distributed manner. We observe, first, that the Growing stage (lines 9-19 in Algorithm 1) operates on each vertex independently, without dependencies on other vertices; a vertex requires only the parity of the cluster it is a part of. Second, during the Merging stage, a vertex only needs to interact with its immediate neighbors (lines 21-23).

Like the original UF decoder, our distributed UF decoder is also based on the decoding graph. Logically, the distributed decoder associates a processing element (PE) with each vertex in the decoding graph. Therefore, when describing the distributed decoder, we often use PE and vertex interchangeably. PEs operate with the same algorithm, specified by Algorithm 2. The PE algorithm iterates over three stages.

### PE States

A PE has direct read access to its local states and some states of incident PEs. A PE can only modify its local states. Thanks to the decoding graph, a PE has immediate access to the following objects.

* \(v\), the vertex it is associated with.
* \(v.E\), the set of edges incident to \(v\).
* \(v.U\), the set of vertices that are incident to any \(e\in v.E\). We say these vertices are adjacent to \(v\).

The algorithm augments the data structures of the vertices and edges of the decoding graph, according to the UF decoder design [10]. For each vertex \(v\in V\), the following information is added:

* \(id\): a unique identity number, which ranges from 1 to \(n\) where \(n=|V|\). \(id\) is statically assigned and never changes.
* \(m\): a binary indicating whether the measurement outcome is trivial (false) or not (true). \(m\) is initialized according to the syndrome.
* \(cid\): a unique integer identifier for the cluster to which \(v\) belongs, equal to the lowest \(id\) of all the vertices inside the cluster. The vertex with this lowest \(id\) is called the cluster root. \(v.cid\) is initialized to \(v.id\); that is, each vertex starts as its own single-vertex cluster. When \(cid=id\), the vertex is the root of a cluster.
* \(odd\): a binary indicating whether the cluster is odd. \(odd\) is initialized to \(m\).
* \(codd\): a copy of \(odd\).
* stage: indicates the stage the PE currently operates in.
* busy: a binary indicating whether the PE has any pending operations.

For each edge \(e\in E\), the decoder maintains \(e.\)growth, which indicates the growth of the edge, in addition to \(e.w\), the weight. \(e.\)growth is initialized as 0. The decoder grows \(e.\)growth until it reaches \(e.w\) and \(e\) becomes _fully grown_. For clarity of exposition, we introduce a mathematical shorthand \(v.\)nb, the set of vertices connected with \(v\) by fully grown edges, i.e., \(v.\)nb = {\(u\,|\,e=\langle v,u\rangle\in v.E\) & \(e.\)growth = \(e.w\)}. We call these vertices the _neighbors_ of \(v\).
Note that neighbors are always adjacent, but not all adjacent vertices are neighbors.

### Shared memory based communication

We use coherent shared memory for shared state that has a single writer. For all shared memories, given the coherence, a read always returns the most recently written value. Like ordinary memory, we also assume both reads and writes are atomic.

* S1: memory read/write for PE \(v\) and read-only for adjacent PEs, i.e., all \(u\in v.U\). \(v.\)_cid_ and \(v.\)_odd_ reside in this memory.
* S2: memory read/write for PE \(v\) and read-only for the controller. The PE local states \(v.\)_codd_, \(v.\)stage and \(v.\)busy reside in this memory.
* S3: memory for \(e.\)growth, which can be written by the incident PEs.
* S4: memory read/write for the controller and read-only for all PEs. The controller state global_stage is stored in this memory.

### Message based communication

The only instance in our decoder where a PE needs to communicate with a distant PE is when a PE joins a new cluster and needs to notify the root (L32). Implementing this using shared memory is costly because the PE is not necessarily adjacent to the root. As there is only one type of message in our decoder, each message M contains only the destination of the message. The destination takes a value from 1 to \(n\), which represents the vertex identifier. For the correctness of the decoder we only assume guaranteed delivery of messages and do not assume a time bound for message delivery.

### PE Algorithm

All PEs iterate over three stages of operation. Within each stage, they operate independently but transition from one stage to the next when the controller updates global_stage. When a PE enters a stage, it sets \(v.stage\) accordingly and keeps \(v.\)busy true until it finishes all work in the stage. The controller uses these two pieces of information from all PEs to determine whether a stage has started and completed, respectively (see §4.5). We next describe the three stages of the PE algorithm.

In the **Growing** stage, vertices at the boundary of an odd cluster increase \(e.\)growth for boundary edges (L16). As PEs perform Growing simultaneously, two adjacent PEs may compare \(e.w\) and \(e.\)_growth_ and update \(e.\)_growth_ for the same \(e\). Such compare-and-update operations must be atomic to avoid a data race.

In the **Merging** stage, two clusters connected through a fully grown edge merge by adopting the lower of their two cluster ids (_cid_). To achieve this, each PE compares its _cid_ with those of PEs connected through fully grown edges (L31). If the other incident vertex of a fully grown edge has a lower _cid_, the PE adopts the lower _cid_ as its own. The merging process continues until every PE in the cluster has the same _cid_, which is the lowest \(v.\)_id_ in the cluster. This procedure is related to leader election in distributed systems: vertices in a newly formed cluster must adopt the lowest \(id\). The Merging stage also calculates the parity of the cluster. Each PE representing a non-trivial measurement (\(m\) is true) messages the root of the cluster it joins (L32). Likewise, the root updates its parity when it receives a message from a PE (L38).

In the **Syncing** stage, a root broadcasts its \(v.\)_odd_ to all PEs in its cluster, which is necessary for the next Growing stage. We achieve this using a modified version of the flooding algorithm, which uses shared memory instead of message passing.
Every non-root node initially sets its \(v.\)_odd_ to false and keeps comparing \(v.\)_odd_ with the PEs connected to it by fully grown edges. If any PE connected by a fully grown edge has \(v.\)_odd_ true, the PE sets its own \(v.\)_odd_ to true (L53). If a cluster's root has \(v.\)_odd_ true, this results in propagating true to all vertices in the cluster, similar to a flooding algorithm.

### Controller Algorithm

The controller moves all PEs and itself along the three stages. In each stage, it checks the \(v.\)busy signals, and in the Merging stage it additionally checks for outstanding messages. The controller determines that a stage has completed when all PEs have \(v.\)busy false and there are no outstanding messages. Upon completion, the controller updates the global_stage variable to move to the next stage, and the PEs acknowledge this update by updating their own v.stage variable. The controller also determines the presence of odd clusters. At the end of the Syncing stage, it reads the _v.odd_ value of each vertex. If any vertex has _v.odd_ = _true_, the controller updates the global stage variable to Growing to continue the algorithm. Otherwise, it updates it to Terminate to end the algorithm.

### Time Complexity Analysis

The worst-case time complexity of our distributed UF decoder is \(O(d^{3})\). The worst case occurs when parallelism is maximally lost in the system: all vertices are non-trivial and merge into a single cluster, and the root must process all incoming messages from all other vertices (L38). However, the occurrence of the worst-case scenario is extremely rare, as larger clusters are exponentially unlikely to occur. Empirical results reported in §6 show that the average time grows sublinearly with \(d\).

The time complexity of the controller depends on the implementation of the shared memory for v.busy and of the check for outstanding messages in the system. As both checks are logical ORs over individual PE information, the most efficient implementation is a logical tree of OR operations, which results in a time complexity of \(O(\log(d))\). Thus, the overhead of coordination is significantly smaller than the worst-case time complexity.

PE Communication Complexity. The communication complexity of the shared-memory-based communication is \(O(d^{3})\). The leader election in the Merging stage and the broadcasting of _v.odd_ in the Syncing stage are implemented using a shared-memory-based flooding algorithm. The time complexity of a flooding operation is \(O(D)\), where \(D\) is the diameter of the cluster. Therefore, in the worst case the time complexity of flooding messages is \(O(d^{3})\).

The communication complexity of the message-based communication is \(O(d^{6})\). The number of messages from each non-trivial measurement to the root of its cluster is proportional to the number of non-trivial vertices in the cluster and the number of changes of _cid_ of each vertex. Thus, in the worst case there would be \(O(d^{6})\) messages, and the time complexity will be \(O(d^{3})\).

## 5 Helios Architecture and Implementation

We next describe Helios, the architecture for the distributed UF decoder.

### Overview

Helios organizes the PEs and controller in a custom topology that combines a 3-D grid and a B+ tree, as illustrated by Figure 3 and explained below.

* PEs are organized according to the positions of the vertices they represent in the model graph. We assign _v.id_ sequentially, starting with 1 from the bottom left corner and continuing in row-major order for each measurement round.
* Shared memory S1 (_v.cid_ and _v.odd_) and S2 (_v.codd_, _v.stage_, and _v.busy_) are added alongside each PE.
* Shared memory S3 (_e._growth) is added to the incident PE with the lower _id_.
* A link between every two adjacent PEs, used to read from each other's S1 and, for the one with the higher _id_, to read the other's S3. This results in a network of links in a 3-D grid topology. As a PE represents a vertex in the model graph, a link represents an edge. Broad pink lines in Figure 3 represent these links.
* A directional link between two adjacent PEs and between PEs with consecutive _v.id_ values, for message passing (L32). These links are directed from the PE with the higher _v.id_ to the other and are buffered. They are represented by blue arrows in Figure 3.
* The controller, realized as a tree of control nodes (§5.3). The leaf control nodes of the tree contain shared memory S4.
* A link between each PE and the controller, for the controller to read from S2 and for the PEs to read from S4. Dashed orange lines in Figure 3 represent these links.

### Message-passing between PEs

To implement the vertex merging algorithm (Algorithm 4), a PE may send and receive messages from another PE, which is not necessarily adjacent. Helios implements this with the directional links and allows a PE to forward messages over them. The forwarding logic is trivially simple because (1) a PE only messages another PE with a lower _id_, per Algorithm 4, and (2) the links are directed from a PE with a higher _id_ to one with a lower _id_. We note that the directional links comprise the 3-D grid structure derived from the edges of the model graph, plus additional links between PEs with consecutive _v.id_ values, i.e., the "diagonal" ones in Figure 3. The 3-D grid topology is optimal for exchanging messages between nearby PEs, which is frequent. The additional "diagonal" links prevent deadlocks by breaking potential circular dependencies amongst several PEs, e.g., PE 1 to PE 4 in Figure 3.

Figure 4: The bottom left corner of the PE array shown in Figure 3. Only part of the logic and memory inside PE 1 is shown: growth (S3) is per edge and is stored in the PE with the lower _id_. The grow logic (in pink) calculates the updated growth value (Figure 5). logic_busy (in green) (Figure 6) is per adjacent PE and is used to calculate the busy signal.

Figure 3: Helios architecture for a d=3 surface code with 3 measurement rounds. As a d=3 surface code has 6 (3 by 2) ancilla qubits, Helios consists of a 3x2x3 PE array. PE \(n\) indicates the PE with _v.id_ = \(n\).

The directional links between PEs are buffered because a PE can receive multiple messages at a time. Because these buffers have a finite size, the sending PE can stall if a buffer is full. In §6.2, we show empirical evidence that stalls rarely happen.

### Controller

Helios implements the controller as a tree of nodes to avoid a scalability bottleneck. The controller requires four pieces of information: each PE's _v.codd_, _v.stage_ and _v.busy_, plus the presence of outstanding messages in the system. Each leaf node of the tree is directly connected with a subset of PEs. We can consider these PEs the children of the leaf node. Each node in the tree gathers vertex information from its children and reports it to its parent. With information from all vertices, the root node runs Algorithm 6 and decides whether to advance the stage. We leave the height, the branching factor and the subset of PEs connected to each leaf node as implementation choices.
The necessary requirement is that the controller should not slow down the overall design.

### FPGA Implementation

We next describe an implementation of Helios targeting a single FPGA. We choose an FPGA for two reasons. First, it supports massively parallel logic, which is essential as the number of PEs grows proportionally to \(d^{3}\) in our distributed UF design. Second, it provides deterministic latency for each operation, which facilitates synchronizing all the PEs. Figure 4 shows a minimal diagram of a PE and the controller in the FPGA implementation.

_Controller_: Since we only use a single FPGA and evaluate with \(d\) below 20, a single-node controller suffices.

_Directional links_: We implement the directional links as first-in-first-out (FIFO) buffers, which are mapped by Xilinx Vivado to LUT-based RAMs. We choose a buffer size of four because our evaluations in §6.2 show that increasing the buffer size beyond four does not improve decoding time, while reducing it below four slightly increases decoding time (by 0.01%) and uses the same number of LUTs as memory as a buffer of size four (up to 32).

_Shared memory_: We implement all shared memories as FPGA registers, i.e., **reg** in Verilog. FPGA registers by design guarantee that a read returns the last written value. To ensure that the S3 memory has a single writer, we modify the PE logic as shown in Figure 5. The compare-and-update operation (L15) is implemented in the PE in which the S3 memory resides, and that PE increases \(e.\)growth by two if both endpoints of the edge have \(v.\)_odd_ true.

_Detecting outstanding messages_: Each PE updates its busy state based on pending messages, in addition to the conditions in L33 and L50, as shown in the code snippet in Figure 6. The sub-circuit logic_busy checks the conditions in L33 and L50 for each incident edge. In our FPGA implementation of the FIFO buffers, when a value is written to a FIFO (using the we signal), the nonempty state of the FIFO becomes true in the next cycle. This results in at least one PE having busy true whenever there are outstanding messages in the system. The controller reads busy every clock cycle to identify the completion of a stage.

In total, our implementation contains approximately 6000 lines of Verilog code. The code is available at [19]. On the ZCU106 FPGA development board [11], we are able to support the distributed UF decoder with \(d\) up to 7, due to resource limits. Table 1 shows the resource usage for various \(d\). While the numbers of vertices and edges grow as \(O(d^{3})\), the resource usage grows faster for the following reasons. First, resource usage per PE grows due to the increased bitwidth required for _v.id_ and _v.cid_: a PE for \(d=7\) with six adjacent PEs requires 182 LUTs, while a similar PE for \(d=3\) requires only 127 LUTs. Second, PEs on the surface of the three-dimensional array shown in Figure 3 use fewer resources than those inside, because the latter have more incident edges; when \(d\) increases, a higher portion of PEs are inside the array. We find that LUTs are the most critical FPGA resource for our design. It may be possible to run a design with \(d=15\) on a Xilinx VU19 FPGA [20], which has the highest number of LUTs among commercially available FPGAs at the time of this writing. Existing commercial FPGAs like the ZCU106 often dedicate a lot of silicon to digital signal processing (DSP) units and block RAMs (BRAMs).
However, our design does not use any DSPs, because it only requires comparison operators and fixed-point additions. Our design does not use any BRAMs either, because the FIFOs have a depth of four and can be efficiently implemented using LUTs. Each BRAM tile in Xilinx has a default size of 18 Kbits, and using BRAM for the FIFOs would result in significant unused space in each BRAM tile. Therefore, an ideal FPGA designed to run our distributed UF decoder would be simpler than current large FPGAs, as it would only need a large number of LUTs, no DSP units and a limited amount of BRAM.

Figure 5: Circuit diagram of the grow sub-module and its Verilog implementation. This implements the atomic compare-and-update operation in L15 as part of the PE module. \(\mathit{odd}[0]\) and \(\mathit{odd}[1]\) represent the \(\mathit{odd}\) states of the two incident PEs of the edge.

Figure 6: Circuit diagram of the logic_busy sub-module and its Verilog implementation. The sub-module is instantiated per adjacent PE, indexed from 1 to the number of edges. The variables \(\mathit{odd}[0]\) and \(\mathit{cid}[0]\) represent the \(\mathit{odd}\) and \(\mathit{cid}\) of the PE, while \(\mathit{odd}[1]\), \(\mathit{cid}[1]\) and \(\mathit{growth}[1]\) represent the corresponding values for the \(l^{th}\) adjacent PE and the edge connecting them.

## 6 Evaluation

The main objective of our evaluation is to assess the scalability of our distributed UF implementation. To that end, we first describe our methodology and then show that the latency of our implementation grows sub-linearly with respect to the surface code size \(d\).

### Methodology

For speed, we measure the number of cycles required to decode a syndrome. To evaluate correctness, we compare the clustering generated by our distributed UF decoder with the clustering generated by the original UF decoder. We compare clusters because the original UF decoder and ours differ only in the clustering process. This comparison shows that both decoders generate identical clusters in all cases tested, confirming the correctness of our decoder. In the rest of our evaluation, we therefore focus only on the speed of the distributed UF decoder and not on the accuracy of its results.

Experimental Setup. We use two setups to evaluate our FPGA implementation. The primary setup is a Xilinx ZCU106 FPGA development board [11], which is capable of handling surface codes with \(d\) up to 7. As an alternative setup, we run our implementation on the Xilinx Vivado simulator [12], which emulates the behavior of the FPGA in a cycle-accurate manner, allowing us to evaluate the performance of our implementation for surface codes of any size. We simulated up to \(d=15\), as this is the upper bound of \(d\) possible on the largest FPGA currently available [20]. We also compare the results obtained from the Vivado simulator with those obtained from the FPGA development board for surface code sizes 7 and smaller, to gain confidence in the correctness of the simulator itself.

Noise Model. We use the phenomenological noise model [1], which accounts for errors in both data and ancilla qubits. As decoding of X-errors and Z-errors is independent and identical, we focus only on decoding X-errors in the evaluation. To emulate noise, we independently flip each qubit with probability \(p\) (the physical error rate) between every two measurement rounds. This is a widely used approach in prior QEC decoders [7, 8, 18]. We then generate the syndrome from the physical errors and provide it as input to our decoder.
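A minimal Python sketch of this sampling procedure (our own indexing, not the authors' code): errors live on the edges of the 3-D decoding graph, space-like edges model data-qubit flips and time-like edges model measurement flips, and a vertex is non-trivial iff an odd number of its incident edges flipped. The boundary treatment here is deliberately approximate (one side only), and only one error type is modeled.

```python
import numpy as np
from itertools import product

def sample_syndrome(d, p, rng):
    # Vertices of the 3-D decoding graph: (round t, row r, col c),
    # with a d x (d-1) ancilla grid per round and d measurement rounds.
    V = list(product(range(d), range(d), range(d - 1)))
    flips = {v: 0 for v in V}

    def err(u, v=None):            # one edge flips with probability p
        if rng.random() < p:
            flips[u] ^= 1
            if v is not None:
                flips[v] ^= 1      # interior edge: toggles both endpoints

    for (t, r, c) in V:
        # Space-like edges: data-qubit errors between adjacent ancillas.
        if r + 1 < d:
            err((t, r, c), (t, r + 1, c))
        if c + 1 < d - 1:
            err((t, r, c), (t, r, c + 1))
        else:
            err((t, r, c))         # boundary data qubit touches one ancilla
        # Time-like edges: measurement errors between consecutive rounds.
        if t + 1 < d:
            err((t, r, c), (t + 1, r, c))
    return {v for v in V if flips[v]}  # the non-trivial measurements

syndrome = sample_syndrome(d=7, p=0.001, rng=np.random.default_rng(0))
```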
For most of our experiments, we use \(p=0.001\) as the default, like other works [7]. This value is reasonable for surface codes, as \(p\) should be sufficiently below the threshold (at least ten times lower) to exponentially suppress errors. We note that the UF decoder has a threshold of \(p=0.026\), calculated by Delfosse and Nickerson [10].

### Decoding Time

We experimentally show how the average decoding time grows with the size of the surface code. Additionally, we show the effect of noise and buffer size on the average time.

Average time. To demonstrate the scalability of our algorithm with respect to the size of the surface code, we plot the average decoding time against the size of the surface code. In Figure 7 (left), the y-axis shows the average FPGA clock cycle count and the x-axis shows the distance (\(d\)) of the surface code. We obtained these values by running the distributed UF decoder on the Vivado simulator, where each data point represents the average of 1000 trials. We see that for all 3 physical error rates we tested, the average decoding time grows sub-linearly with respect to the surface code size, which satisfies the scalability criterion for avoiding an exponential backlog. This implies that the average time to decode a measurement round decreases with increasing \(d\), as shown in Figure 7 (right).

Distribution of decoding time. To understand the growth of decoding time with respect to the code distance, in Figure 8a we plot the distribution of decoding time for different code distances. The y-axis shows the FPGA clock cycle count and the x-axis shows the distance (\(d\)) of the surface code. We ran both of our test setups for this experiment; the distribution of FPGA clock cycle counts for each surface code size is shown in green, while the distribution of clock cycle counts on the Vivado simulator is shown in gray. The average cycle count is indicated with \(\times\). Due to resource limitations on the ZCU106 FPGA, we are unable to run surface codes with \(d>7\) on the FPGA.

\begin{table} \begin{tabular}{|c|c|r|r|} \hline \multirow{2}{*}{\(d\)} & \multicolumn{2}{c|}{\# of LUTs} & \multicolumn{1}{c|}{\# of} \\ \cline{2-4} & as logic & as memory (FIFOs) & registers \\ \hline \hline 3 & 2419 & 608 & 1187 \\ \hline 5 & 18655 & 3236 & 7189 \\ \hline 7 & 61793 & 12636 & 27664 \\ \hline \end{tabular} \end{table} Table 1: Resource usage of Helios on the ZCU106 FPGA board for various code distances
The number of iterations is related to the size of the largest cluster, which in turn correlates with the size of the longest error chain in the syndrome. As the size of the surface code increases, the probability of a longer error chain also increases, resulting in the probability distribution shifting to the right. Furthermore, as seen in Figure 8a, the distribution for each surface code size is right-skewed. For example, for \(d=7\), 90% of trials required two iterations or fewer, which were completed within 140 cycles. In the same test, 99.99% of trials were completed within 237 cycles. Only a very small number of error patterns require long decoding times, corresponding to syndromes with long error chains. Since such syndromes occur rarely and have poor decoding accuracy even if the decoding time is bounded, the impact on accuracy will be minimal. Effect of physical error rateTo understand the effect of the physical error rate on decoding time, in Figure 8b we plot the distribution of latency for three different noise levels. We obtained this distribution by running on the ZCU106 FPGA with \(10^{8}\) trials. The y-axis shows the FPGA clock cycle count and the x-axis shows the physical error rate. As the noise level increases, the probability distribution of latency shifts to the right. This is caused by the increased probability of a longer error chain when the physical error rate increases, which in turn requires more iterations to decode. As a result, the average decoding time increases with the physical error rate. Effect of buffer sizeTo measure the impact of the buffer size on decoding time, we varied the buffer size and analyzed the latency distribution. In Figure 8c, the x-axis shows the cycle count and the y-axis shows the cumulative distribution of the latency. We varied the buffer size from 1 to 32. Our results showed that there was no noticeable difference in latency with respect to the buffer size. The obtained results were identical for all buffer sizes above 4 and showed a slowdown of less than 0.01% for buffer sizes of 1 and 2. This indicates that the communication overhead in our design is minimal for the average case We can explain this result using statistics on the number of messages generated. For example, when the physical error rate is 0.001 and \(d=7\), 97.7% of trials are statistically unaffected by the buffer size. This includes 46% of trials resulting in fully non-trivial syndromes, 47.6% of trials resulting in a single qubit error in each cluster, and 4.1% of trials resulting in a chain of two qubit errors. In all of these cases, at most a single message is generated in each cluster, making the buffer size irrelevant. In the remaining 2.3% of trials, the buffer size will only affect the results if error chains occur close to each other and share a common link in their message paths. In our experiments, such congestion occurred in less than 0.1% of runs. Therefore, the buffer size can be reduced without any significant impact on average decoding time. Figure 7: Average decoding time scales sub-linearly with \(d\). We measure the average decoding time for 3 different noise levels using the Vivado simulator. (Left) The average decoding time in FPGA clock cycles. (Right) The average decoding time per measurement round in FPGA clock cycles. Average time per measurement round reducing continuously justifies that our decoder is scalable for large surface codes. 
### Comparison with related work

Our empirical results, as shown in Figure 7(a), suggest that Helios has a lower asymptotic complexity than any existing MWPM or UF implementation for which asymptotic complexities are available, e.g., [10, 17]. Indeed, the empirical results suggest that our decoder has a sub-linear time complexity: the decoding time per round decreases with the number of measurement rounds, which has never been achieved before. This implies that Helios can support arbitrarily large \(d\), as the rate of decoding will always be faster than the rate of measurement.

Das et al. [7] calculate an average latency for their AFS decoder based on memory access cycles, assuming a design running at 4 GHz. As the number of memory access cycles grows quadratically with \(d\), the average decoding time per measurement round of AFS grows as \(O(d^{2})\). Similarly, Ueno et al. [9] estimate the decoding time of QECOOL from \(d=5\) to \(d=13\) based on SPICE-level simulations with a clock frequency of 5 GHz. For the given range of \(d\), the decoding time per measurement round increases quadratically with \(d\). In comparison, the decoding time per measurement round of Helios decreases.

We would like to point out that AFS and QECOOL assume very high clock frequencies, which is key to their estimated low latency. For example, for \(d=11\), AFS and QECOOL respectively report latencies of 42 ns and 8.32 ns per measurement round. Helios, in contrast, requires 107 ns per measurement round with a 100 MHz clock. In terms of clock cycles, Helios requires on average 10.7 cycles for a \(d=11\) surface code, lower than both AFS (168 cycles) and QECOOL (41 cycles).

To the best of our knowledge, LILLIPUT [6] is the only hardware decoder in the literature that provides implementation-based results, for \(d=5\). The decoder has an average time of 21 ns per measurement round, which is shorter than that of Helios for \(d=5\), i.e., 126 ns. However, as analyzed in §3, LILLIPUT is not scalable beyond \(d=5\). Our work, in contrast, has successfully demonstrated the implementation of a \(d=7\) surface code on a ZCU106 FPGA with 120 ns per measurement round. The architecture of Helios can potentially support larger \(d\) using a larger FPGA, for example \(d=15\) for the Xilinx VU19P [20], and even larger \(d\) using a network of FPGAs.

## 7 Conclusion

We describe a distributed design of the Union-Find decoder for quantum error-correcting surface codes and present Helios, a system architecture for realizing it. We report an FPGA-based implementation of Helios. Using the Xilinx Vivado cycle-accurate simulator, we demonstrate empirically that the average decoding time of Helios grows sub-linearly with \(d\). Using a ZCU106 FPGA, we implement the fastest decoding of distance-7 surface codes, which achieves a 120 ns average decoding time per measurement round. Helios is faster and more scalable than any reported implementation of a surface code decoder. Our results suggest that by leveraging parallel hardware resources, Helios can avoid a growing backlog of syndrome measurements for arbitrarily large surface codes.

## Acknowledgments

This work was supported in part by Yale University and NSF MRI Award #2216030.

Figure 8: Distribution of decoding time, with the average marked with \(\times\). For each error rate we ran \(10^{8}\) trials. Results from the implementation on the Xilinx ZCU106 FPGA are in green; those from the Xilinx Vivado simulator in gray. By default, \(d=7\), \(p=0.001\).
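To close the description of Helios, here is a small Python sketch of the controller's stage protocol from §4.5. PEs are modeled as objects exposing busy and odd flags plus a per-cycle step() method; these names are our own modeling convention, and the OR-reduction tree that makes the checks \(O(\log d)\) in hardware is collapsed into Python's any().

```python
GROWING, MERGING, SYNCING, TERMINATE = "grow", "merge", "sync", "done"

def run_controller(pes, has_outstanding_messages):
    """Drive all PEs through Growing -> Merging -> Syncing rounds until no
    odd cluster remains. `pes` is an iterable of PE objects; the callable
    `has_outstanding_messages` polls the message network."""
    global_stage = GROWING
    while global_stage != TERMINATE:
        done = False
        while not done:
            for pe in pes:
                pe.step(global_stage)            # one clock tick per PE
            done = not any(pe.busy for pe in pes)
            if global_stage == MERGING:          # also wait for message drain
                done = done and not has_outstanding_messages()
        if global_stage == GROWING:
            global_stage = MERGING
        elif global_stage == MERGING:
            global_stage = SYNCING
        elif any(pe.odd for pe in pes):          # end of Syncing: odd clusters left
            global_stage = GROWING
        else:
            global_stage = TERMINATE
```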
2303.11731
Automated service monitoring in the deployment of ARCHER2
The ARCHER2 service, a CPU based HPE Cray EX system with 750,080 cores (5,860 nodes), has been deployed throughout 2020 and 2021, going into full service in December of 2021. A key part of the work during this deployment was the integration of ARCHER2 into our local monitoring systems. As ARCHER2 was one of the very first large-scale EX deployments, this involved close collaboration and development work with the HPE team through a global pandemic situation where collaboration and co-working was significantly more challenging than usual. The deployment included the creation of automated checks and visual representations of system status which needed to be made available to external parties for diagnosis and interpretation. We will describe how these checks have been deployed and how data gathered played a key role in the deployment of ARCHER2, the commissioning of the plant infrastructure, the conduct of HPL runs for submission to the Top500 and contractual monitoring of the availability of the ARCHER2 service during its commissioning and early life.
Kieran Leach, Philip Cass, Steven Robson, Eimantas Kazakevicius, Martin Lafferty, Andrew Turner, Alan Simpson
2023-03-21T10:38:54Z
http://arxiv.org/abs/2303.11731v1
# Automated service monitoring in the deployment of ARCHER2

###### Abstract

The ARCHER2 service, a CPU based HPE Cray EX system with 750,080 cores (5,860 nodes), has been deployed throughout 2020 and 2021, going into full service in December of 2021. A key part of the work during this deployment was the integration of ARCHER2 into our local monitoring systems. As ARCHER2 was one of the very first large-scale EX deployments, this involved close collaboration and development work with the HPE team through a global pandemic situation where collaboration and co-working was significantly more challenging than usual. The deployment included the creation of automated checks and visual representations of system status which needed to be made available to external parties for diagnosis and interpretation. We will describe how these checks have been deployed and how data gathered played a key role in the deployment of ARCHER2, the commissioning of the plant infrastructure, the conduct of HPL runs for submission to the Top500 and contractual monitoring of the availability of the ARCHER2 service during its commissioning and early life.

monitoring, HPC, service management

## I Background

### _ARCHER and ARCHER2_

In this paper we discuss the deployment and utility of automated monitoring during the recent rollout of the ARCHER2 system and service. The ARCHER2 system is an HPE Cray EX supercomputer with an estimated peak performance of 28 Pflop/s. The machine has 5,860 compute nodes, each with dual AMD EPYC 7742 64-core processors at 2.25GHz, giving 750,080 cores in total. ARCHER2 is the successor system to ARCHER, a 4,920-node Cray XC30 system which was also operated and supported by EPCC. These systems have been managed and financed by the Engineering and Physical Sciences Research Council (EPSRC) and UK Research and Innovation (UKRI). In operating and supporting both these services, EPCC has acted in both the Service Provision (SP) and Computational Science and Engineering (CSE) roles. Under the SP role, EPCC is responsible for system management and administration as well as the operation of the Service Desk. Under the CSE role, EPCC is responsible for deploying application software not included in the programming environment supplied by HPE as well as for assisting users with application software development and management, providing training, administering funding calls for software development projects, and running an outreach programme. These responsibilities are in addition to hosting the ARCHER2 service at EPCC's Advanced Computing Facility (ACF) data centre.

### _Deployment timeline_

Owing to a combination of the impacts of COVID-19 and developmental difficulties with the HPE Cray EX and Slingshot technologies, ARCHER2 experienced an extended and somewhat troubled deployment. The originally planned deployment timeline was:

* February 2020: ARCHER to be decommissioned
* May 2020: ARCHER2 to be made available to users

Given the issues faced with the development and scaling of the HPE Cray EX and Slingshot technologies, it was decided to introduce a phased transition. Instead of a direct transfer to the full 23 cabinet system, a 4 cabinet system was temporarily deployed to a separate computer room, to operate in parallel to ARCHER until such time as it was possible to deploy the full 23 cabinet system.
The final deployment timeline was:

* July 2020: ARCHER2 4 cabinet system delivered to the ACF
* October 2020: ARCHER2 4 cabinet system made available to early access users
* November 2020: ARCHER2 4 cabinet system made available to all users
* January 2021: ARCHER system decommissioned and removed from the ACF
* February 2021: ARCHER2 23 cabinet system delivered to the ACF
* November 2021: ARCHER2 23 cabinet system made available to users

### _Motivation_

As is discussed further below, when planning the deployment of ARCHER2 we had in excess of four years' experience of using monitoring technologies to improve response time, reduce staff workloads and provide insight when responding to problems. Having had success in integrating monitoring into our approach to service deployment for other HPC services at EPCC, particularly in supporting the diagnosis of problems with services during deployment, monitoring integration is a key part of our standard approach when commissioning a new system. As such, deploying monitoring was integrated into planning for the ARCHER2 deployment from the start.

## II Checkmk and Graphite

### _General background_

EPCC manages a variety of HPC and research computing services in addition to critical support infrastructure. In the past, EPCC system administrators spent a great deal of time tracking the state of various systems. Problem detection and diagnosis typically required looking in multiple locations and running a variety of monitoring scripts on a regular basis. This was time intensive, complex to operate and maintain, and difficult for team members to manage. This approach also made it problematic to integrate new systems into standard operating procedures, as team members are typically under pressure to get services up and running in a short time. Given all this, it was considered that a "single pane of glass" approach was necessary: using a single screen to monitor the status of all services on site. The solution selected at EPCC was, and remains, Checkmk [1]. Originally developed in 2008 as a Nagios extension, Checkmk is now a full Nagios-derivative monitoring system. Checkmk supports a variety of types of monitoring:

* status-based monitoring relating to the health of a system or process;
* metric-based monitoring recording data on aspects of a system or process over time; and
* log-based monitoring triggering from the detection of events in logs.

Among the motivations for selecting Checkmk were:

* the range of existing checks including CPU, memory, file system and network interface status;
* the ability to simply deploy new checks;
* the ability to simply add new hosts; and
* the ability to simply manage and view checks from a single interface.

Checkmk was first deployed at EPCC in 2015. Since that time it has become core to the management of our HPC services. It has also allowed us to deploy bespoke monitoring solutions for a variety of HPC technologies. Our Checkmk dashboard is monitored during working hours by the "on-shift" team member. This has allowed us to gain awareness of and respond to issues quickly, including partial power failures; system, disk and component failures; networking issues; and system load issues. Certain alerts are also issued by email. We have made a practice of including stakeholders such as our hardware partners on the list for email alerts for relevant systems, often allowing those stakeholders who work outside our working hours to become aware of and resolve issues before anyone from EPCC enters said working hours.
This has included ensuring certain critical alerts email directly to our HPE colleagues' pagers. Checkmk is our first port of call for investigating issues reported by users, colleagues or stakeholders. The availability of an at-a-glance view of system status has been of great utility.

Checkmk is easily deployed to client servers - a Checkmk agent is deployed to the relevant node which conducts most checks when polled by the server. It can also run more heavyweight checks in the background. Polling is available via xinetd or a systemd socket; however, this can also be configured to operate via ssh or any custom command. Deployment is available via rpm/deb and we have historically deployed the Checkmk agent to all management and login servers but not to compute nodes. "Out of the box" Checkmk provides checks including:

* CPU, memory, disk utilisation and load;
* IPMI checks (fans, temperatures, voltages);
* network interface status and statistics;
* file system mounts;
* individual processes which can be assigned for monitoring (e.g. PBS mom or license servers); and
* number of users logged in.

It is also possible for the server to directly query clients via protocols such as SNMP. We have used this to quickly deploy monitoring for systems such as switches and tape libraries. In addition to checks available by default, a variety of checks are available online to import. A number of checks have been imported over time, including checks for monitoring some more specialised systems, such as the RAID controllers for a particular storage system.

Motivated in a large part by the data gathered by Checkmk, EPCC has deployed a Graphite [2] metrics server and a Grafana [3] analytics and visualisation server. This allows metric-based monitoring data gathered by Checkmk, as well as from other sources, to be combined and viewed in graphs and dashboards. This also supports greater visibility of data both within and beyond EPCC, as a variety of stakeholders can be given access to custom dashboards to allow monitoring of data relevant to individual interests.

### _Specialised checks_

One particular utility of Checkmk is the ease with which new monitoring items, or "checks", can be crafted and deployed. New checks can be deployed either as fully-fledged plugins or as simple scripts, outputting health and metric data in the appropriate format when called. When scripts are deployed locally in this second method they will be automatically run by Checkmk once per minute. Over time we have deployed a number of specialised checks in support of HPC services. These have included checks intended to monitor specialised HPC technologies as well as checks to detect recurring problems. Specialised checks deployed at EPCC include:

* a check to monitor the health status of DDN controller servers;
* a check to monitor the status of GPFS clusters;
* a check to capture events observed by SELinux and AppArmor;
* a check to capture metric data regarding Lustre server statistics;
* a check to monitor for the occurrence of unplaceable and orphan jobs in PBS Pro before these could impact system scheduling;
* a check to use cluster manager commands to capture compute node status on HPCM based systems; and
* a check to capture the health status of an Omni-Path network using the Omni-Path Fabric Toolset.

These various checks were of significant benefit in the deployment and management of the systems for which they were implemented.
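To make the local-check mechanism concrete, the sketch below is a minimal, hypothetical example (not one of the production checks listed above). It emits one service line in the format the agent expects - status code, service name, metrics, then detail text - and would simply be dropped into the agent's local checks directory:

```python
#!/usr/bin/env python3
"""Minimal hypothetical Checkmk local check: number of logged-in users."""
import subprocess

WARN, CRIT = 50, 100  # illustrative thresholds

# Count login sessions reported by 'who'.
n = len(subprocess.check_output(["who"], text=True).splitlines())

if n >= CRIT:
    status = 2  # CRIT
elif n >= WARN:
    status = 1  # WARN
else:
    status = 0  # OK

# Local-check output format: <status> <service name> <metrics> <detail text>
print(f"{status} logged_in_users users={n};{WARN};{CRIT} {n} users logged in")
```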
As such, going into the ARCHER2 deployment, use of Checkmk, Graphite, Grafana and defining specialised checks were all key parts of our planning.

## III Monitoring implemented during the ARCHER2 deployment

### _Overview of deployed infrastructure for ARCHER2_

Each system or group of systems has a separate monitoring server that is controlled from the central Checkmk instance [4]. This approach provides many benefits, including:

* There is almost no network communication between central and system specific hosts;
* One monitoring host failure does not affect the overall monitoring setup;
* It is easy to add and remove new monitoring servers.

An outline of the monitoring setup for ARCHER2 is shown in Figure 1. Each monitored host has a Checkmk agent installed which communicates with the server via TCP. This agent collects various host health and performance metrics and posts these to the monitoring server. Once the monitoring data reach the Checkmk server they are passed on to the Graphite graphing server, which processes data using "Carbon" daemons and stores it in Graphite's specialised database [5]. In addition to the motivations listed previously, the default Checkmk metric storage engine struggles to perform appropriately when asked to display a large number of metrics [6].

Fig. 1: An overview of the deployment of monitoring services for ARCHER2

As discussed previously, EPCC has three methods to access system status information. Firstly, all critical notifications are directly dispatched to appropriate personnel email inboxes; for example, if login node DNS resolution fails, all system support staff get an email notification. There are then two graphical user interfaces accessible via web browser: a centralised Checkmk control centre that presents an overview of all hosts, services, and checks (Figure 2); as well as a Grafana analytics and visualisation web application that pulls various metrics from the Graphite metrics server and presents them in the form of customisable and versatile graphs (Figure 3).

As well as collecting the default set of data available from the Checkmk agent and redeploying some checks used elsewhere, we have deployed a number of new custom checks during the ARCHER2 rollout in response to emerging needs. These are discussed in more detail below. In each case, the check is deployed as a bash script placed into the appropriate directory (/usr/lib/check_mk_agent/local). Checks can however be deployed using any language supported by the host - the only requirement is that the output of the check be in the correct format [7]. Once a check has been deployed to the appropriate directory on a client, the Checkmk web interface on the server can be used to discover the new service.

### _Power monitoring_

The ARCHER2 system is noticeably larger than its predecessor and has a larger power profile. This profile sits at the maximum of what the Computer Room in which it sits was designed to support. As such there was a need to work carefully when the system was first brought into full testing, and a requirement for a strong awareness of the power draw of the system at any time. A check was therefore deployed to capture power data, with Grafana supporting the graphing of the data gathered by this check.

### _Node state monitoring_

As deployment of the 23 cabinet system was taking place and a variety of issues were being troubleshot, a requirement emerged for tracking the status of all compute nodes to provide an awareness of the state of the system at any given time. In order to gather this data, a check was scripted and deployed to the login nodes which assesses the state of the compute nodes via the Slurm scheduler. This check does the following on each of the four login nodes (a sketch follows the list):

* runs "sinfo" and stores the output;
* pulls the names of the various partitions from the sinfo output;
* for each partition stores the number of nodes in each of the possible Slurm node states; and
* outputs the total counts for each node type on a per-partition basis.
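A minimal sketch of such a check (illustrative only - the production script is a bash script, and the service naming here is invented) might parse sinfo's partition, state and count fields directly:

```python
#!/usr/bin/env python3
"""Illustrative sketch of the node state check: count nodes per state."""
import subprocess
from collections import Counter

# -h suppresses the header; %P/%T/%D give partition, node state, node count.
out = subprocess.check_output(["sinfo", "-h", "-o", "%P %T %D"], text=True)

counts = Counter()
for line in out.splitlines():
    partition, state, n = line.split()
    counts[(partition.rstrip("*"), state)] += int(n)  # '*' marks the default

for (partition, state), n in sorted(counts.items()):
    # One Checkmk local-check line per partition/state combination.
    print(f"0 slurm_{partition}_{state} nodes={n} {n} nodes in state {state}")
```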
One advantage of the approach taken here, with all partitions assessed and reported automatically, is that we have been able to implement this script against other systems using Slurm on-site without modification. A graph showing data gathered from this check can be seen in Figure 5. In this graph the total number of nodes in any of the states considered "down" is listed - the line shown represents the targeted node availability threshold. This check is reported by each of the four login nodes. In order to have a single metric which is persistent regardless of which login nodes are up or down (so long as at least one is up), a "cluster host" was created within Checkmk which takes in the reporting from each login node and reports a single metric [8].

### _Login availability monitoring_

As part of EPCC's responsibilities there is a requirement to monitor the availability of the service. In order to support this monitoring, a requirement was determined for a check to monitor the availability of the ARCHER2 login service. On the ARCHER2 service there is a single DNS record which serves all login nodes on a round-robin basis. In order to test this, the following setup was put in place (a sketch of the final item follows the list):

* a functional test user account was created;
* login access for this user was limited within the sshd config to the IP address of the Checkmk monitoring server;
* the test user account was added to the allow list in the sshd config for single factor (key based) access;
* credentials were put in place to allow ssh from the Checkmk user on the monitoring server to the login nodes; and
* a simple check script was deployed to the monitoring server which attempts to ssh to the login DNS address with the command "exit" and outputs the status of the login server based on the exit status of the attempted ssh.
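A sketch of that check script might look like the following (the account and host names are placeholders; the real script runs as the Checkmk user on the monitoring server):

```python
#!/usr/bin/env python3
"""Illustrative login-availability probe: ssh in, run 'exit', report status."""
import subprocess

TARGET = "monitor@login.archer2.example"  # placeholder account and DNS name

result = subprocess.run(
    ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=10", TARGET, "exit"],
    capture_output=True,
)

# Map the ssh exit status onto a Checkmk service state (0 = OK, 2 = CRIT).
if result.returncode == 0:
    print("0 login_availability - login service reachable")
else:
    print(f"2 login_availability - ssh failed, exit code {result.returncode}")
```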
## IV Impact of monitoring during the ARCHER2 deployment

### _Support for early service deployment and initial testing of the 23 cabinet system_

As with previous deployments of HPC services at EPCC, the system deployment team found early implementation and integration of monitoring extremely useful. This was in line with the experiences from other services described previously, but a number of incidents are worth noting:

* A number of problems were experienced with the provision of both internal and external DNS during the deployment of ARCHER2. Deploying a DNS resolution check to the login node allowed us to be rapidly alerted when problems occurred. This allowed prompt investigation and restoration of DNS.
* One issue's only original symptom was an inability for users to log into the system. Using Checkmk we were rapidly able to identify the origin of this issue as relating to the relevant file system and hence to the relevant network.
* In the early days of both the 4 cabinet and 23 cabinet systems a number of problems were experienced with the Slingshot High Speed Networks (HSN). The first indicator of this issue was often when Checkmk on the login node indicated a drop in the number listed for available Lustre LFS servers.
* We were able to quickly become aware of and begin troubleshooting of a memory leak on the login servers. Further, as we were graphing all the data gathered, we were able to assess the speed with which the leak was progressing and could reboot the login nodes at appropriate intervals until the problem was resolved.

Fig. 5: Graph showing the number of nodes in the various states considered "down"

### _Support for system testing and benchmarking_

#### IV-B1 Initial testing

As discussed previously, ARCHER2 has a noticeably larger power profile than its predecessor, ARCHER. ARCHER2 sits at the maximum of the design intent for the Computer Room in which it is hosted. As such, additional care was taken during the initial testing of the ARCHER2 system. In initial testing, using non-optimised High Performance Linpack (HPL) at 4,000-5,000 nodes, power use was initially monitored directly from figures gathered at the wall-level Power Distribution Units (PDUs) and via the Building Management System (BMS). The data gathered from these sources was found to be difficult to access, not as accurate as was preferred and not available in a suitable graphed form. HPE identified that appropriate data was available via the cabinet controllers on the system and made this data available - as is described in the section above, this was integrated into our Checkmk monitoring and made available in graph form via Graphite and Grafana. This provision, combined with verifying figures against those gathered from PDUs and the BMS, allowed us to build confidence that the system was operating correctly and safely at scale. We were able to profile the power draw of the system when operating at scale with benchmarks including HPL and the ARCHER2 procurement application benchmarks: OpenSBLI, HadGEM3 (UK Met Office Unified Model), GROMACS and CASTEP. Additionally, given that this data was available remotely, we were able to agree with HPE for their US teams to operate on the system at scale out-of-hours earlier in the life of the service than would otherwise have been possible. HPE's US team had access to the monitoring data and thresholds were agreed for power draw at which work would need to be stopped.

#### IV-B2 HPL Benchmarking

The monitoring of power draw on the service was again useful during efforts to prepare an HPL benchmark suitable for submission to the Top 500 list. Over the course of a week, a number of attempts were made to produce a suitable result. A good number of runs were interrupted by node failures or problems with the HSN; however, we were able to complete a number of runs. Note that the graphs shown in Figures 6-8 have been generated from data gathered directly on the system. The granularity of data retained on Graphite is reduced over time to save disk space, and full granularity for this data is no longer available at the time of writing. It quickly became apparent that we were seeing "power cycling" behaviour on the system during these HPL runs, where power usage dropped suddenly for a short period of time. An example of a run impacted by this problem can be seen in Figure 6. In order to analyse this issue, single node HPL was run across the system and it was identified that certain nodes were performing persistently poorly. With these nodes drained, further testing showed that the problem was removed or reduced. An example of a run where this problem has been significantly reduced can be seen in Figure 7.
Following this work, a number of runs were made to achieve the best available figure for our Top 500 submission. Using the power monitoring we were able to quickly identify jobs impacted by the power cycling issue and then scan for and drain problem nodes. At the end of the week we were able to achieve a score of 19.5PF which, when submitted, placed ARCHER2 at number 22 on the Top 500. The power profile of the submitted run is shown in Figure 8.

Fig. 6: Graph showing the power draw (in kW) during an HPL run impacted by the power cycling issue. This ran on 5,500 nodes and achieved 16.8PF.

Fig. 7: Graph showing the power draw (in kW) during an HPL run which was less impacted by the power cycling issue. This ran on 5,576 nodes and achieved 18PF.

Fig. 8: Graph showing the power draw (in kW) during the HPL run submitted to the Top 500. This ran on 5,600 nodes and achieved 19.5PF.

### _Automated contractual monitoring and system status website_

We have also been able to make use of the monitoring data gathered beyond stakeholders in EPCC and HPE. In order to support UKRI (the funders) in their monitoring of the service over the acceptance trial period, a requirement emerged to prepare a single view which would encompass all attributes of the service relevant to the contractual monitoring of the service. This includes node availability, login availability and job failures. EPCC develops and operates a service management web application known as SAFE. Details of job failures, and other data relevant to job accounting, are uploaded to SAFE. SAFE is also used by users to create and manage their ARCHER2 accounts, and by project managers to allocate and report on project resources. Data on Graphite was exposed to the SAFE via a web API over HTTP. This allowed the SAFE to pull in relevant data relating to node availability and login availability gathered by the checks described previously. Using this data, any authorised stakeholder is able to generate a report in SAFE covering the contractual monitoring of the service for any given period. Critically, SAFE allows for fine-grained access control so only specifically authorised people can run these reports in SAFE. An example of the graph showing the login availability and compute node availability generated by this report is shown in Figure 9. In addition to stakeholders in EPCC, HPE and UKRI, this data is made available to the user community as a whole. Graphs of node availability are generated by the SAFE based on the data from Checkmk/Graphite and are presented on the ARCHER2 System Status web page [9].
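As an illustration of the kind of HTTP pull involved, Graphite's render endpoint can return a metric series as JSON; in the sketch below the server name and metric path are invented placeholders, not the actual ARCHER2 metric hierarchy:

```python
#!/usr/bin/env python3
"""Illustrative pull of a node-availability series from Graphite's render API."""
import json
import urllib.request

# Placeholder server and metric path; format=json returns a list of
# {"target": ..., "datapoints": [[value, timestamp], ...]} objects.
URL = ("http://graphite.example/render"
       "?target=archer2.nodes.down&from=-24h&format=json")

with urllib.request.urlopen(URL) as response:
    series = json.load(response)

for s in series:
    values = [v for v, _ts in s["datapoints"] if v is not None]
    if values:
        print(f"{s['target']}: max {max(values):.0f} nodes down over 24h")
```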
## V Conclusions

We have found that live monitoring and graphing makes an extremely valuable contribution to service management. Furthermore, this value often presents itself in unexpected ways - we would not have anticipated when first deploying Checkmk and Graphite/Grafana that we would see benefits such as those during the testing and benchmarking of ARCHER2, or that we would be able to implement contractual monitoring using these technologies. The ability to rapidly and flexibly deploy new checks in response to emerging events and requirements is also of particular value - an imperfect check implemented rapidly is often superior to an ideal check which might require greater deployment time. It is also clear that automating the contractual monitoring of a service can be extremely valuable. This helps us to assure service partners, funders and users that the system is working. This has been particularly important given the delayed start to ARCHER2. We finally note that ARCHER2 has now been in full service for some months with in excess of 2,500 active users and utilisation on the order of 90%. We consider automated monitoring to have been key in making this possible.

## VI Future work

We are interested in further developing our automated monitoring capabilities going forward. Potential improvements include integrating:

* log analysis;
* Slingshot error feeds;
* per-job Lustre statistics; and
* data-driven intrusion detection.

We are also interested in making the data we collect more generally available to our user community. We would be pleased to coordinate with other sites who use or are interested in using Checkmk for HPC service monitoring and are happy to share our experience.
2310.09046
Laser beam properties and microfluidic confinement control thermocavitation
Thermocavitation, the creation of a vapor bubble by heating a liquid with a continuous-wave laser, has been studied for a wide range of applications. Examples include the development of an actuator for needle-free jet injectors, as the pumping mechanism in microfluidic channels and crystallization or nanoparticle synthesis. Optimal use in these applications requires control over the dynamics of the laser-generated bubble through the laser power and beam radius. In contrast to pulsed lasers, for continuous-wave lasers the influence of the laser beam radius on the bubble characteristics is not fully understood. Here, we present a novel way to control the size of the beam from an optical fiber by changing the distance from the glass-liquid interface. We show that the increase in beam size results in a longer nucleation time. Numerical simulations of the experiment show that the maximum temperature at the moment of nucleation is 237$\pm$5°C and independent of laser parameters. Due to delayed nucleation for larger beam sizes, more energy is absorbed by the liquid at the nucleation instant. Consequently, a larger beam size results in a faster growing bubble, producing the same effect as reducing the laser power. We conclude that the total bubble energy only depends on the amount of absorbed optical energy and is independent of the beam radius and laser power for any amount of absorbed energy. This effect contrasts with pulsed lasers, where an increase in beam radius results in a reduction of bubble energy. Our results are of relevance for the use of continuous-wave laser-actuated cavitation in needle-free jet injectors as well as other applications of thermocavitation in microfluidic confinement.
Jelle J. Schoppink, Jose A. Alvarez-Chavez, David Fernandez Rivas
2023-10-13T12:13:39Z
http://arxiv.org/abs/2310.09046v1
# Laser beam properties and microfluidic confinement control thermocavitation

###### Abstract

Thermocavitation, the creation of a vapor bubble by heating a liquid with a continuous-wave laser, has been studied for a wide range of applications. Examples include the development of an actuator for needle-free jet injectors, as the pumping mechanism in microfluidic channels, and crystallization or nanoparticle synthesis. Optimal use in these applications requires control over the dynamics of the laser-generated bubble through the laser power and beam radius. In contrast to pulsed lasers, for continuous-wave lasers the influence of the laser beam radius on the bubble characteristics is not fully understood. Here, we present a novel way to control the size of the beam from an optical fiber by changing the distance from the glass-liquid interface. We show that the increase in beam size results in a longer nucleation time. Numerical simulations of the experiment show that the maximum temperature at the moment of nucleation is 237\(\pm\)5\({}^{\circ}\)C and independent of laser parameters. Due to delayed nucleation for larger beam sizes, more energy is absorbed by the liquid at the nucleation instant. Consequently, a larger beam size results in a faster growing bubble, producing the same effect as reducing the laser power. We conclude that the total bubble energy only depends on the amount of absorbed optical energy and is independent of the beam radius and laser power for any amount of absorbed energy. This effect contrasts with pulsed lasers, where an increase in beam radius results in a reduction of bubble energy. Our results are of relevance for the use of continuous-wave laser-actuated cavitation in needle-free jet injectors as well as other applications of thermocavitation in microfluidic confinement.

**Keywords:** Thermocavitation, continuous-wave laser, energy transfer, microfluidic confinement, vapor bubble

## 1 Introduction

The creation of a vapor bubble by heating the liquid with a continuous-wave (CW) laser was first reported in 1987 by Rastopov and Sukhodol'skii, who called it thermocavitation [1]. Since then, this thermocavitation process has been studied for numerous applications, including removal of pathological tissues [2], nanoparticle synthesis [3], laser-induced crystallization [4], as a pumping mechanism in a microchannel [5], creation of short laser pulses [6], generation of ultrasound acoustics [7, 8, 9] and trapping or manipulation of bubbles [10, 11]. Over the last decade, thermocavitation has also been investigated for its potential to create microfluidic jets for (bio-)printing and/or needle-free jet injection [12, 13, 14, 15, 16, 17, 18]. Although laser-actuated jet injection was initially studied using pulsed lasers [19, 20, 21, 22, 23], recently we found that continuous-wave lasers generate similar bubble dynamics [24].

For all of these applications of thermocavitation, understanding and control of the bubble formation is vital. However, a thorough understanding of the thermocavitation process is still lacking. Due to the low laser power (P \(\sim\) 1 W) in thermocavitation, the bubble does not form instantaneously upon laser irradiation, but after a short incubation time (\(t_{n}\sim\) ms). Therefore, the delivered optical energy E\({}_{\mathrm{d}}\) is not controlled directly, but depends on the bubble nucleation instant (E\({}_{\mathrm{d}}\) = P\(\times t_{n}\)) [24].
Delaying this nucleation time \(t_{n}\) therefore increases the amount of energy, resulting in a larger bubble [25, 26]. The most reported method for delaying the nucleation is a reduction in laser power [25, 27]. Similarly, a reduction in absorption coefficient also reduces the absorbed energy density, which results in delayed nucleation, a larger superheated volume and consequently a larger bubble [28, 29]. A third method to deliver more energy and create a larger bubble is to increase the beam size. The beam size can be increased by moving the liquid away from the focal point of the focusing lens [18, 25, 28, 30]. However, the exact beam size depends on the optics and positioning accuracy, which are difficult to reproduce. To our knowledge, the quantitative influence of the beam size on the thermocavitation process and its energy transfer has not been reported.

For thermocavitation, the temperature at the moment of nucleation is still debated. Fluorescent measurements using Rhodamine-B resulted in a maximum temperature of 98\({}^{\circ}\)C [31]. However, in this study the calibration was only performed up to 85\({}^{\circ}\)C, and the sensitivity of this dye as a temperature sensor drops rapidly above 80\({}^{\circ}\)C [32], for which reason any extrapolation to higher temperatures should be carefully interpreted. Numerical simulations resulted in temperatures of 295-332\({}^{\circ}\)C [25], which is around or even above the spinodal temperature of 305\({}^{\circ}\)C [33], and therefore unlikely, as nucleation at an interface should happen below the spinodal temperature [34].

In this manuscript, we investigate the influence of the beam size on the thermocavitation process in microfluidic confinement, i.e., near a wall boundary from which the laser is focused. We compare our experimental results on the bubble nucleation with a numerical heat transfer simulation in COMSOL. This data provides a better understanding of the moment of nucleation and the energy transfer from the CW laser into the bubble. Our results are of relevance for the use of continuous-wave laser-actuated cavitation in needle-free jet injectors as well as other applications of thermocavitation in microfluidic confinement.

## 2 Experimental methods

Figure 1 shows the experimental setup, consisting of a microfluidic glass chip with two etched channels along the same axis, separated by 30 µm. The right channel (L×W×H = 2000×100×400 µm\({}^{3}\)) is partially filled with Milli-Q water up to the variable filling level \(F\). The left channel is designated for inserting an optical fiber connected to a CW laser. This single-mode optical fiber (Corning SMF-28e) is positioned inside its channel using a motorized 3-axis stage (Thorlabs Rollerblock) with micrometer accuracy. This allows for accurate alignment of the fiber tip with respect to the microfluidic channel. Due to the divergence of the laser beam from the fiber tip, the beam radius at the interface of the microfluidic channel can be controlled by the distance \(D\). Seven beam radii B\({}_{\mathrm{R}}\) between 10 and 36 µm are used in the experiment. The fiber laser (BK Tel Photonics, HPFL-2-350-FCAPC) has a variable output power between 0.2 and 2 W at a wavelength of 1950 nm, which matches the absorption peak of water (\(\alpha~{}\approx\) 12000 m\({}^{-1}\) [35]). The laser has a secondary fiber output at 1% of the nominal power, which is connected to a photodetector (Thorlabs DET05D2) to monitor the output power in-situ using an oscilloscope (Tektronix MSO2014B).
Upon laser irradiation, the water inside the right channel is heated, and after a short period (\(t_{n}\sim\) ms) nucleation occurs and a fast growing vapor bubble appears. A Photron NOVA SA-X2 high-speed camera was used in combination with a Navitar 12x zoom lens system and a Schott CV-LS light source for visualization of the bubble dynamics. The camera was used at a frame rate of 225k fps, a resolution of 384×96 and a pixel size of 5 µm. Figure 1 (right panel) shows a few typical images during the bubble lifetime. The images were analyzed with a custom-made MATLAB algorithm, which tracks the bubble over time as shown in the red contours. The bubble length is calculated as the area enclosed in the red contour divided by the channel height (400 µm). The growth velocity is obtained by fitting the bubble length from the second to the fifth frame.

The heating phase was simulated in COMSOL until bubble formation. First, using the ray optics module, the beam radius in the water channel is obtained for the different fiber positions used in the experiment. These beam radii are then used in the heat transfer module to simulate the heating. The energy absorption is calculated with Lambert-Beer's law, using the absorption coefficient of water, which reduces with increasing temperature [36, 37]. The simulation also includes the loss of heat due to dissipation into the walls of the glass chip. More details regarding the numerical simulations of the ray tracing and heat transfer can be found in the Supplementary Information Sections SI 1 and SI 2, respectively.

Figure 1: **Left:** Experimental setup consisting of a microfluidic glass chip with two etched channels. The microfluidic channel (right) is filled with water up to distance \(F\). The output fiber of a CW laser is inserted inside the fiber channel (left), and positioned with micrometer accuracy. The distance between the fiber tip and the microfluidic channel \(D\) defines the beam size of the laser at the glass-liquid interface. Upon laser irradiation, nucleation occurs and a vapor bubble will grow and collapse. A high-speed camera with corresponding light source is positioned along the \(y\)-axis to capture \(xz\)-images. **Top right:** Eight selected images of the bubble in an experiment and its contour highlighted in red (\(F=1000\) μm, P = 550 mW, B\({}_{\text{R}}=18.7\) μm). The numbers next to the frames indicate the time after nucleation in μs. Scale bar in first frame indicates 500 μm. **Bottom right:** Bubble length plotted versus time after nucleation. Lengths are calculated from the bubble area inside the red contours divided by the channel height (400 μm). Red dots correspond to the eight frames above. Growth velocity is fitted from the first 5 frames, resulting in a slope of 13.2 m/s.

## 3 Results and discussion

### Nucleation time and temperature

Figure 2A shows the nucleation time as a function of beam radius, for 3 different laser powers. The data points are averaged over at least 6 individual measurements and the error bars indicate the standard deviation. It is clear that the nucleation time increases with increasing beam radius, as well as with reducing laser power. These two effects reduce the laser intensity, resulting in slower localized heating of the liquid and therefore a longer nucleation time. For the middle laser power (750 mW), the experiment was performed for three different filling levels, \(F\) = 600, 1000 and 1700 µm.
It was found that the filling level did not have any significant effect on the nucleation time (see Figure SI 2). This is explained by the fact that the filling levels are 8 to 20 times larger than the absorption length (\(\delta\approx\) 80 µm), and therefore the additional liquid has no effect on the heating, as all the optical power is absorbed before reaching the position of the meniscus even for the smallest filling level (\(F\) = 600 µm). The typical length over which heat diffusion takes place, \(\delta\), is calculated as [38]

\[\delta=\sqrt{4\kappa t_{n}}, \tag{1}\]

where \(\kappa\) is the thermal diffusivity (0.14 \(\mathrm{mm^{2}/s}\) for water). Even for the longest nucleation time (35 ms), \(\delta\) = 70 µm, and therefore at least one order of magnitude smaller than the filling level. Therefore, it can be assumed that changes in the filling level do not affect the nucleation times.

In the numerical simulations using COMSOL, the energy absorption and heat transfer in the experiment are simulated until the moment of nucleation, which we take from the experimental nucleation times. These experimental nucleation times are in agreement with the simulated times at which a maximum local temperature of 237\({}^{\circ}\)C is reached, see the solid lines in Figure 2A. This agreement indicates that the nucleation temperature is independent of laser beam radius and power. However, the two lowest laser intensities, where the beam radius is large and the laser power small, show an exception. In such cases, the experiment gives a smaller temperature due to a larger heated region, which is up to 30 times larger compared to the other cases. In such situations, nucleation may happen at lower temperatures, and thus shorter nucleation times. For all other data points, the temperature is very close to 237\({}^{\circ}\)C, with a standard deviation of 5\({}^{\circ}\)C (see Figure SI 3).

Figure 2: Nucleation time (A) and delivered energy (B) as a function of beam radius, for three different laser powers. In both figures, the error bars indicate the standard deviation for at least 6 individual experiments. The solid lines in (A) indicate the time in the COMSOL simulations at which the maximum temperature in the liquid is equal to 237\({}^{\circ}\)C. The delivered energy in (B) is calculated as \(\mathrm{E_{d}}\) = \(\mathrm{P_{L}}*t_{n}\).

These temperatures are all well above the boiling temperature of water at atmospheric pressure (100\({}^{\circ}\)C), which is explained by the existence of an energy barrier for nucleation. Due to this energy barrier, higher temperatures are needed for bubble formation in microfluidic volumes on short timescales (ms). On the other hand, these temperatures are well below the spinodal temperature at atmospheric pressure (306\({}^{\circ}\)C [33]), which is explained by the fact that the bubble forms at a wall, where the energy barrier for bubble formation is lower [39]. Due to this energy barrier, the nucleation itself is a stochastic event [25, 33], which could further explain the slight variations in nucleation time and temperature. Furthermore, impurities such as gas molecules also reduce the energy barrier and therefore the nucleation temperature. In literature, different temperatures are noted for bubble formation using a CW laser, either through thermocavitation (direct heating of the liquid) or plasmonic heating (indirect heating of the liquid through plasmonic nanoparticles), see Figure 3.
For thermocavitation, studies report different values, either close to the boiling temperature or to the spinodal temperature, both of which are unlikely for the above-mentioned reasons. Our values are in agreement with temperatures found for plasmonic bubbles; therefore, we conclude that our values are closer to the actual temperatures in thermocavitation. Nonetheless, heterogeneous nucleation depends on impurities, which act as nucleation sites [33, 40]. These impurities, such as surface roughness [39], surfactants or dissolved gas molecules [41, 42], reduce the energy barrier and therefore result in earlier nucleation.

Figure 2B shows the delivered energy at the moment of nucleation as a function of beam radius, which is calculated as the laser power multiplied by the nucleation time (\(\mathrm{E_{d}=P_{L}*t_{n}}\)). We observe that an increase of beam radius or a decrease of laser power results in an increase in energy. Furthermore, \(\mathrm{E_{d}}\) spans two orders of magnitude (0.2 - 20 mJ) for a single thermocavitation set-up. The beam radius especially plays a significant role in the delivered energy, which makes this set-up an optimal way to accurately control the amount of delivered energy.

Figure 3: Calculated temperature ranges for nucleation in water with continuous-wave lasers found in literature. Blue colors indicate thermocavitation and orange plasmonic heating. Black outlines indicate numerical calculations, grey outlines indicate experimental measurements. They appear in chronological order, with the earliest work at the bottom.

### Bubble growth and energy conversion

The maximum bubble volumes are shown in Figure 4 as a function of delivered energy. For all experimental parameters, the maximum bubble volume increases linearly with delivered energy (see the logarithmic slope of 1). As all data points are along the same curve, there is little influence of laser power or filling level. However, for large values of delivered energy (E\({}_{\mathrm{d}}\gtrsim\) 2 mJ), the bubble volume plateaus. This plateau is explained by the limited channel length, as the bubble collapses at the moment it coalesces with the surrounding air (see example in Figure SI 4). Therefore, the bubble never reaches its potential maximum volume, and larger bubbles cannot be observed in this configuration. This is most apparent for the smallest filling level (\(F\) = 0.6 mm, green diamonds), where already for smaller bubble volumes the bubble can grow beyond the contact line and coalesce with the air inside the channel, resulting in lower plateau values in Figure 4.

Figure 5 shows the kinetic energy of the bubble as a function of delivered energy. The kinetic energy is E\({}_{\mathrm{kin}}\) = \(\frac{1}{2}\)mv\({}^{2}\), where m is the liquid mass in the channel and v the maximum bubble growth velocity (change of length per unit time). We note that the bubble kinetic energy increases quadratically (log slope = 2) with the delivered energy, independent of the laser or liquid parameters. This means that for a constant filling level, the bubble growth rate increases linearly with the delivered energy, as was also found in our earlier work [24]. Here, we now also find that the mass (m \(\propto F\)) does not affect the energy transfer, and therefore the bubble growth rate v scales as v \(\propto F^{-0.5}\), which matches previous qualitative observations [45].
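These two observations can be combined into a one-line consistency check of the quoted scaling, using only relations already stated above:

\[E_{\mathrm{kin}}=\tfrac{1}{2}mv^{2}\propto E_{\mathrm{d}}^{2}\quad\Longrightarrow\quad v\propto\frac{E_{\mathrm{d}}}{\sqrt{m}}\propto E_{\mathrm{d}}\,F^{-0.5},\qquad\text{since }m\propto F.\]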
For applications such as jet formation for printing or needle-free injection, this means that the liquid velocity can be controlled through the mass of liquid in the confinement. Furthermore, this independence of laser parameters contrasts with pulsed lasers, where an increase in beam radius results in a slower growing bubble [22, 24]. For large values of the delivered energy, the slope in Figure 5 decreases. This is explained by heat diffusion, as this large amount of energy is achieved through long nucleation times, at which point heat dissipation into the glass plays a significant role (see Equation 1). This is especially the case for the smallest laser power (P = 0.55 W, pink stars), which requires the longest nucleation times to reach those energies, resulting in more heat dissipation.

One of the goals of the COMSOL simulations was to investigate the heat dissipation during the absorption of optical energy until the moment of nucleation. As discussed in Section 3.1, nucleation happens at approximately 237\({}^{\circ}\)C. However, after nucleation has occurred and the energy barrier has been overcome, liquid at a lower temperature (but still above 100\({}^{\circ}\)C) may also contribute to the growing bubble. Figure 6 shows the bubble kinetic energy as a function of the volume of superheated water (T \(>\) 100\({}^{\circ}\)C) at the moment of nucleation. This superheated volume is taken from the COMSOL simulations at the moment of nucleation in the experiment. In contrast to Figure 5, where the initial quadratic relation seems to flatten, the slope in Figure 6 remains constant, which is explained by the fact that heat dissipation is included in this simulation.

Microbubbles can also be created by different means, such as pulsed lasers [24], plasmonic bubbles [44], discharge with low [46] or high voltage [47], microheaters [48] and the tube arrest method [49]. These methods can create similar bubble sizes as in this study and require similar amounts of energy [24, 46]. Follow-up studies could focus on a quantitative comparison between the bubble dynamics and their R(t)-curves to find the best method for different applications. However, most of these methods are invasive, which reduces the ease of use and makes chip fabrication more complex. The laser-generated bubbles allow for local heating and generation of bubbles on-chip, and more specifically the use of CW lasers allows for a small and affordable set-up.

## 4 Conclusion

We proposed and developed a novel set-up to accurately control the laser beam size for thermocavitation in microfluidic confinement. We compared experimental results using high-speed imaging to numerical simulations of the energy absorption and heat transfer. This study focused on the influence of laser beam characteristics on thermocavitation in microfluidic confinement and the energy conversion. We found that the nucleation time increases with increasing beam radius as well as decreasing laser power. Numerical simulations of the heat transfer show that the maximum temperature at the moment of nucleation is 237 \(\pm\) 10\({}^{\circ}\)C and independent of laser beam parameters. This temperature is below the spinodal temperature (306\({}^{\circ}\)C), but well above the boiling temperature (100\({}^{\circ}\)C), and is in agreement with earlier work on plasmonic bubbles.

Figure 6: Bubble kinetic energy vs volume of superheated liquid (T \(>\) 100\({}^{\circ}\)C) at the moment of nucleation, taken from the simulation. Each data point is an individual experiment, where the color and symbol indicate the laser power and filling level. Solid black line is a logarithmic fit with slope of 4/3.
As the filling level \(F\) is much larger than the absorption length, it does not influence the nucleation time or temperature. Furthermore, we found that the maximum bubble volume increases linearly with delivered energy and the conversion is independent of laser parameters. For the largest energies, the maximum bubble volume reaches a plateau as the bubble coalesces with the surrounding air at the opening of the microfluidic channel before reaching its maximum potential volume. The bubble kinetic energy increases quadratically with the delivered energy. However, for large energies, the conversion efficiency decreases, which is explained by heat dissipation, as the nucleation time is then on the same timescale as thermal diffusion. From the temperature profiles in the numerical simulations we find that the bubble kinetic energy increases with the volume of superheated liquid (T \(>100^{\circ}\)C), with a power law of 4/3. As heat dissipation is included in these simulations, this relation holds for all data points, independent of the laser or liquid parameters.

Our findings contribute to the understanding and use of thermocavitation, and allow for better control over the bubble characteristics in real-life applications. The laser power and beam radius control the nucleation time and delivered energy, and can therefore control the bubble size and growth rate. This allows for optimal use of thermocavitation in a wide range of applications, including laser-actuated jet injection.

## Acknowledgements

J.J.S. and D.F.R. acknowledge the funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 851630). J.J.S. would like to thank Stefan Schlautmann for the fabrication of the microfluidic chips.

## Competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## CRediT authorship contribution statement

**Jelle J. Schoppink:** Conceptualization, Methodology, Formal analysis, Investigation, Data curation, Writing - Original Draft, Visualization. **Jose A. Alvarez-Chavez:** Conceptualization, Writing - Review & Editing. **David Fernandez Rivas:** Conceptualization, Supervision, Project administration, Funding acquisition, Writing - Review & Editing.
2302.06164
A Logic for Veracity
This paper shows the initial stages of development, from first principles, of a formal logic to characterise and then explore issues in a broadly defined idea of Veracity, which includes properties of demonstrability, truth, trust and authenticity.
Steve Reeves
2023-02-13T08:00:47Z
http://arxiv.org/abs/2302.06164v4
# The "Velocity of a "Velocity" ###### Abstract This paper shows the initial stages of development, from first principles, of a formal logic to characterise and then explore issues in a broadly defined idea of "Vercacity". # A Logic for Veracity Steve Reeves Department of Software Engineering, University of Waikato Private Bag 3105, Hamilton, 3240, New Zealand ###### Abstract We present a logic for Veracity, and we show that Veracity is a logical basis for Veracity, and we show that Veracity is a logical basis for Veracity, and Veracity is a logical basis for Veracity. We show that Veracity is a logical basis for Veracity, and Veracity is a logical basis for Veracity, and Veracity is a logical basis for Veracity. ## 1 Introduction When a piece of information is put out into the world it gets subjected to many attempts, both accidental and deliberate, to degrade it or tamper with it. When we are dealing with precious information, that is information which has value (cultural, monetary, scientific etc.), then having assurance that the information has stayed constant is vital. When that information is not kept hidden or otherwise protected then this becomes a very hard problem. It may even be insoluble. _Veracity_ seems to be a term that is widely used, but it is also hard to pin-down its meaning. In this paper I shall take it to mean, reflecting the concerns in the previous paragraph, that we have an assurance that the information has stayed constant. So, we say _a piece of information has veracity_ when we can check that it has not changed. Even though etymologically we might expect _truth_ to have some role in veracity we avoid this. The main reason is that truth seems to require either reference to some authority (and we want our information to survive in an authority-free world: more on this later), or a belief in some objective and unchanging and always accessible reality against which we can always successfully measure our information and decide on its truth. This line of definition drives us towards accepting (perhaps implicitly, since we tend not to think about such things in everyday life) an idealised Platonist reality of some sort. This leads to all sorts of well rehearsed problems (we go with Dummett's analysis still on this [2]). And, of course, even if you are not a Platonist, the requirement that the reality against which you measure your information is always accessible (leaving aside decidability problems etc.) is sometimes precisely the problem; if the source has actually been obscured or lost then it is no longer accessible and truth cannot be decided. As we will see, raising the Platonist spectre does suggest an alternative. ### Aims My view, of course, is that there's a logical basis, and since we want to formalise this in the project in order to both pin-down and explore the idea of veracity, this seems the only sensible place to start. There's a long, deep, rich heritage to truth and trust in many settings and many of them are formal, and very complicated and subtle. I want to start from scratch, not in order to just do something different, but in order to be able treat well, but lightly, those parts of veracity which can adequately be treated that way for our purposes (so, trust will be so treated). And then other parts (demonstrability, truth) will be looked at in more depth simply because they have not (as far as a literature search can show) been treated, in the setting of being a part of veracity, very much at all. 
I am, therefore, not going to work through a literature review\({}^{1}\). I hope that what I say below will make clear that classical approaches will not work, and trust only needs a light-touch, rather unsubtle and instrumental treatment.

Footnote 1: Work on another paper with Stephen Cranefield will cover some of this ground, anyhow.

To cut to the chase: I will look at intuitionistic logic since it seems to be clearly what's needed, as I argue below. But I'll get there by showing what does not work (pretty clearly), like classical logic or anything based on it (none of the classical modal logics work, for example, because of their classical basis, not because I do not like modal formalisms!). The key to seeing this is that all those classical (and classical-including) logics lose information, which is precisely what a formalisation of veracity, as a starting point, must not do. Intuitionistic logic does not lose information. So, _obviously_ it is the place to start.

The steps:

* We will, as the project does, take Veracity to comprise: authenticity, truth, trust, demonstrability/verifiability;
* Try to pin down and then explore in a logical setting what this means;
* Attempt to formalise as much of veracity as we can in order to understand the way it works better.

### Atomic veracity

Some statements have a sort of _immediate_ veracity, in the sense that they are newly minted by me (or you) and have not passed through any other hands and have not been in any way combined with other information, so we are immediately assured that the information has not changed. The checking of this is a trivial, indeed empty, act. Consider a couple of examples in more detail:

1. A bar code that we have ourselves just printed and associated with a physical object might be an example in a production chain: this act might generate the information that _this_ bar code is stuck to _this_ object that was produced by _this_ person, at _this_ time and _this_ place, and has _these_ characteristics (composition, mass, etc.). We might say the information attests that "this bar code really does identify this object";
2. Or, considering cultural objects, it might be an audio recording of a person giving their whakapapa together with its meta-data that we have just ourselves recorded and catalogued. Here the information is attesting to the association between the meta-data and the audio data.

These cases in some sense wear their veracity on their sleeve: it is immediate; we have "a piece of veracity", the information that _this_ piece of data correctly describes _this_ object, since I, at just this moment, made the association. This is our _atomic veracity_. The claim or statement cannot be further analysed in terms of asking whose hands it has passed through, or how it has been modified or added to, since none of this has ever happened to it. We might say (using logical terminology) that the piece of verifying information is a _witness, proof, testimony, piece of evidence_ for the act of association. It is this that we want to be able to objectify and then track. This track will be what we look to when someone says "how do we know that this bar code correctly identifies this object?".

Note that the information itself may be made up of many pieces of other information, or may have taken work to compile; but the information witnesses, is evidence for, an atomic claim. So, the witness may be complex, but the claim is atomic. We might want to view this evidential information in more detail though.
This will not be in the sense of more detail on the actual veracity claim itself, because this witness \(w\), say, is already formed. But it might come with the information of who \(p\), where \(l\), when \(t\), how \(m\) etc. In this case, the witness might not be an atomic name, a constant, but an "atomic" term. I.e. we might view a witness either as the atomic name \(w\) or the atomic term \(w(p,l,t,m)\). So a witness, a piece of evidence, might contain a lot of information, but from the point of view of the logic it is not further analysable. Of course, as we build up non-atomic claims the witnessing information will correspondingly become both buildable and analysable in the logic. How the data about it "sticks" to the artefact (the bar code on the car part, the meta-data to the whakapapa audio file) is not what we are concerned about here. It is another technological problem that is being worked on and is outside our scope. So, we are assuming it is possible and has been, or will be, done2.

Footnote 2: This might be wishful thinking, but it has such big stakes for such large companies that I think it’s OK to assume it will happen one day. Whether it does or not, though, veracity is still an interesting idea to try to reason about.

### Other methods

Our idea of checking for assurance of veracity is different from the distributed ledger technology (DLT) way of doing it (e.g. through use of a blockchain). There the assurance isn't gained through checking but by making it impossible (or highly unlikely) that the information has been changed once it is put out into the world.3

Footnote 3: In the fuller version of this paper we will look at previous work on intuitionistic logic that we’re drawing on: Martin-Löf [3, 4]; my work on logic from the past, in particular work with Douglas Bridges [1] too as background.

## 2 Considering logics

### Formalisation

We let letters like \(A\), \(B\), etc. stand for a claim of veracity, which is a proposition that is true when the veracity claimed is appropriately witnessed: upheld by data, by a person's statement, by direct knowledge, by evidence, somehow, that the thing is what we say it is, came from where we say it came from, was grown as we say it was grown, etc. Then a _judgement_ \(a\in A\) is the veracity judgement (statement4) that \(A\) has witness \(a\). A judgement like this is upheld, or perhaps we might say that \(A\) has veracity because it is witnessed by \(a\), when this judgement appears as the conclusion of a proof tree constructed according to the rules that follow.

Footnote 4: Later when thinking semantically we might view \(A\) as the set of all its witnesses.

There is a special veracity claim \(\bot\) which has no witnesses, i.e. it is the claim that never has veracity, and a judgement that makes a claim about it can never be upheld. This leads to our first proof rule:

\[\frac{a\in\bot}{a\in A}\bot^{-}\]

This rule says that if you, in the course of your reasoning, somehow have shown that the claim \(\bot\) that can never have veracity does in fact have it, then you can show that _anything_ has veracity. We call this rule \(\bot^{-}\) for "\(\bot\) elimination".
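As a toy concretisation of judgements and atomic witness terms in Python (the datatype names and fields below are our illustrative choices, not part of the logic itself):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Witness:
    """An atomic witness: a name w, optionally carrying who/where/when/how,
    i.e. the 'atomic' term w(p, l, t, m) - rich, but not further analysable."""
    name: str
    p: Optional[str] = None   # who made the association
    l: Optional[str] = None   # where
    t: Optional[str] = None   # when
    m: Optional[str] = None   # how

@dataclass(frozen=True)
class Claim:
    """An atomic veracity claim A; BOTTOM is the claim with no witnesses."""
    label: str

BOTTOM = Claim("bottom")

@dataclass(frozen=True)
class Judgement:
    """a in A: witness `a` upholds claim `A`."""
    witness: Witness
    claim: Claim

# "This bar code really does identify this object", freshly minted:
j = Judgement(Witness("w", p="me", l="factory", t="2024-01-01", m="scan"),
              Claim("bar code identifies object"))
```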
Other rules would be:

\[\frac{a\in A\quad b\in B}{(a,b)\in A\wedge B}\wedge^{+}\]

\[\frac{(a,b)\in A\wedge B}{a\in A}\wedge^{-}1\]

\[\frac{(a,b)\in A\wedge B}{b\in B}\wedge^{-}2\]

Here we are formalising the idea that if two veracity claims \(A\) and \(B\) are witnessed then the combined claim that \(A\) together with \(B\) has veracity is also witnessed, and that witness we choose to denote by the pairing of the component witnesses. Note that this is a simple use of the idea of information being preserved around claims and their witnesses even when they are composed together. One immediate place where this information preservation becomes perhaps a little unfamiliar is when we try to think about what saying "we have claims \(A\) and \(B\) and we know that they each have a witness, so we know that one or the other has one: that is, a claim of \(A\) or \(B\) is witnessed" means. We might choose to formalise this by saying

\[\frac{a\in A\quad b\in B}{a\in A\vee B}\]

The point here is that (first) this rule has exactly the same premises as the one above, and avoiding such points of choice amongst rules is generally (for coherence) a good thing. But more important (at the formalisation level) is that we have lost information here. The conclusion does not record which of the alternatives we have relied on to reach it: did we justify the claim of one or the other because of the first witness, or the second? Righting these two points means doing something like

\[\frac{a\in A}{i(a)\in A\vee B}\vee^{+}1\]

\[\frac{b\in B}{j(b)\in A\vee B}\vee^{+}2\]

So, if we have a witness to a claim of \(A\) then we certainly have a witness to a claim of either \(A\) or \(B\), and we "tag" the witness in the conclusion so that we do not lose the information about which claim the claim of one or the other relies on. Now consider the case where we know that a certain witness \(c\) upholds the claim that \(A\vee B\). What can we deduce, if anything, from this? First note that our two introduction rules mean that a witness to a claim like this must in fact have a tag, since tags are introduced by the only rules that could have allowed us to deduce the claim of \(A\vee B\). So, we have a case analysis to do: if the witness to this composite claim is tagged with \(i\) then we know it is \(A\) that we relied on, and similarly with \(j\) and \(B\). This preservation of all the information allows us to dismantle the composite claim:

\[\frac{i(a)\in A\vee B}{a\in A}\vee^{-}1\]

\[\frac{j(b)\in A\vee B}{b\in B}\vee^{-}2\]

(In fact, as we will see in the fuller paper, these rules need to be more general, but this gives the idea, I hope. We will, I think, need more complex ways of writing things. Cf. Martin-Löf's need for the idea of canonical form, and computation (equality) rules to get to canonical.) Finally, we denote the fact that a judgement \(c\in C\) has been demonstrated to hold without assumptions, i.e. it is the conclusion of a proof tree of uses of these sorts of rules, by

\[\vdash c\in C\]

So think of this as saying that the judgement that \(c\) is a witness to \(C\) has been demonstrated by the rules of the logic. Later we will use this same turnstile notation to allow assumptions to appear (on the left). Imagine that by assuming that claim \(A\) has veracity, i.e. that the judgement \(x\in A\) has been shown for some arbitrary witness \(x\), we can show that claim \(B\) has veracity, i.e. we can show \(b\in B\)5.
Footnote 5: Here \(b\) is a term which may contain \(x\) free. \((x)b\) is a term with the free \(x\)s in \(b\) bound.

Denote this state of affairs by _a claim that depends on an assumption_:

\[x\in A\vdash b\in B\]

Thinking about a typical logic, introduce an _implication claim_ to reflect this, i.e. to discharge the assumption, so the claim becomes

\[A\Rightarrow B\]

but what would a witness to _this_ claim plausibly look like? Given any witness \(x\) to the claim \(A\), it is possible to _construct_ a witness \(b\) for the claim \(B\). That is, there is a function which given any witness to \(A\) will compute a witness to \(B\), so

\[\lambda(x)b\in A\Rightarrow B\]

The witness to an implicative claim like \(A\Rightarrow B\) should be a function that takes a witness to the claim \(A\) and turns it into a witness for the claim \(B\). In general, this allows us to build a function that, given a whole set of basic veracity claims and their witnesses (the assumptions), builds for us a witness for a complex veracity claim. This function can then be read as a process to be followed which, given starting veracity claims, will assure that a complex veracity claim can be successfully and correctly made. Implication allows us to define negation in terms of \(\bot\): \(\neg A\) is \(A\Rightarrow\bot\). A witness to a claim of \(\neg A\) takes a witness to \(A\) and gives us a witness to \(\bot\). But \(\bot\) has no witness, so a witness to \(A\) is not possible, as expected by our informal understanding of saying a claim has no witnesses. The requirement that to justify a disjunction of claims it has to be demonstrated which of the claims was justified before (which is the role that the tags on the witnesses are playing in the rules) means that, for example, the claim \(A\vee\neg A\) is also not justifiable without saying which claim is witnessed: \(A\vee\neg A\) doesn't survive the question: yes, but can you show, whatever \(A\) is, the witness that assures the veracity of the claim here? And the view that witnesses to implications are functions leads us in the same direction... to the thought that this is reinterpreting intuitionistic logic. The argument so far is that the logic work above covers the verifiability (checking a proof is easy) and truth aspects of veracity. What is not yet settled is the trust aspect (the authenticity aspect is left for now: we have yet to have any ideas on how it might be treated, or even what it is), and once we start to think about trust, we think about people and the relationships between them.

## 3 More actors

The section above works well when one person is collecting and making veracity claims. It is a one-person logic because we never mention who is making claims, so we cannot tell how many people there might be, so we can only correctly assume it is one person from the form of the rules. In other words, there are no rules for combining or tracking veracity claims made by several actors. One way to perhaps tackle this is to add a name (of an actor) to each justified judgement. So, if actors \(k\) and \(l\) from a set \(\mathit{Act}\) have made claims then we might have two judgements \(a^{k}\in A\) and \(b^{l}\in B\), that is, actor \(k\) has made claim \(A\) with witness \(a\), and similarly for \(l\), \(b\) and \(B\). This now adds a second dimension to our logic above.
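To see the information-preserving rules operationally, here is a small Python sketch of a witness checker. It is a toy under our own encoding (pairs for \(\wedge\), i/j-tagged pairs for \(\vee\), functions for \(\Rightarrow\)), not the Isabelle development mentioned later:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    label: str            # an atomic claim A

@dataclass(frozen=True)
class And:
    left: object          # A /\ B
    right: object

@dataclass(frozen=True)
class Or:
    left: object          # A \/ B
    right: object

@dataclass(frozen=True)
class Implies:
    ante: object          # A => B
    cons: object

def check(w, claim, atoms):
    """Does witness term w fit `claim`? `atoms` maps atomic labels to
    their accepted witnesses (the assumptions in force)."""
    if isinstance(claim, Atom):
        return w in atoms.get(claim.label, set())
    if isinstance(claim, And):          # pairing, as in rule /\+
        return (isinstance(w, tuple) and len(w) == 2
                and check(w[0], claim.left, atoms)
                and check(w[1], claim.right, atoms))
    if isinstance(claim, Or):           # tagging, as in rules \/+1 and \/+2
        if not (isinstance(w, tuple) and len(w) == 2 and w[0] in ("i", "j")):
            return False
        side = claim.left if w[0] == "i" else claim.right
        return check(w[1], side, atoms)
    if isinstance(claim, Implies):      # a function on witnesses
        return callable(w)              # shape check only, in this sketch
    return False

atoms = {"A": {"a"}, "B": {"b"}}
assert check(("a", "b"), And(Atom("A"), Atom("B")), atoms)
assert check(("i", "a"), Or(Atom("A"), Atom("B")), atoms)       # tag records which side
assert not check(("j", "a"), Or(Atom("A"), Atom("B")), atoms)   # wrong tag is rejected
```

The point of the tags is visible in the last two lines: the checker can tell which disjunct a witness relied on, so no information is lost.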
The first dimension dealt with one actor, so we can think of all the judgements before as being abbreviations (because there's only one actor \(k\)) of judgements of the form \(a^{k}\in A\), so we left the \(k\) out because it never varied. Now the second dimension is around how actors become incorporated into the logic.

### Relating actors

Having introduced more than one actor we now need to think about how, from a veracity point of view, they can be related. Keeping to the idea that we think of simple cases to guide us rather than trying to do everything we might wish all at once, the question is: what relationship between actors is a useful one (there will be others) to consider? Fundamental, surely, is one of trust: does this actor trust that actor? Once we know who trusts who we can plausibly expect things like: \(k\) trusting \(l\) means that any judgement that \(l\) has accepted allows \(k\) to accept that judgement. So, roughly, we would say that \(\vdash a^{l}\in A\) leads to \(\vdash a^{k}\in A\) if \(k\) trusts \(l\). If we denote the trust relation by \(T\subseteq Act\times Act\), then \(k\) trusts \(l\) will be written \(\mathit{kTl}\), and we propose a rule

\[\frac{a^{l}\in A\quad\mathit{kTl}}{a^{k}\in A}\;\textit{trust}\;\;T\]

and we can picture the relation as a diagram of actors joined by arrows (an arrow from \(k\) to \(l\) when \(\mathit{kTl}\)). This can be generalised to a more realistic situation, where there are various relations of trust between actors, not just \(T\), by making clear we are parameterising the demonstration of judgements with the particular trust relationship in play and saying

\[a^{l}\in A,\mathit{kTl}\vdash_{T}a^{k}\in A\]

i.e. assuming \(a^{l}\in A\) and \(\mathit{kTl}\) hold for some trust relationship \(T\), we have demonstrated \(a^{k}\in A\). (This is really just another way of giving the above rule, but makes it clearer (perhaps) that things are dependent on the trust relationship being used.)

### Trust relations

We can explore, even with this simple basis, how veracity works. This is a derived rule from the simple previous one

\[a^{m}\in A,\mathit{kTl},\mathit{lTm}\vdash_{T}a^{k}\in A\]

because, given such a \(T\), we have the derivation

\[\frac{\dfrac{a^{m}\in A\quad\mathit{lTm}}{a^{l}\in A}\;\textit{trust}\;T\quad\mathit{kTl}}{a^{k}\in A}\;\textit{trust}\;\;T\]

Given this example we might ask: can _any_ binary relation between actors be a trust one? No; it surely needs to be at least reflexive, and certainly not symmetric: we trust ourselves, and if we trust someone does it follow that they should trust us? But I would hesitate to say a trust relation requires more properties. Note that the proof above seems to show that trust is also transitive: it turns out to be a property of our simple rule. Does that call the simple rule into question, since it is a stretch to accept that if I trust someone, and they trust someone else, then I should trust that someone else? Well, I make the point that trust here is "100% trust", which explains this rule and how transitivity emerges in this pointwise way. I will return to this below. Another derivable rule which seems to be a good thing: if two people see veracity in two different things and one trusts the other, then the first person believes the conjunction

\[a^{k}\in A,\mathit{kTl},b^{l}\in B\vdash_{T}(a,b)^{k}\in A\wedge B\]

### Degrees of trust

This brings the final augmentation, that we need _degrees of trust_ to make things work. We write

\[a^{k}_{0.5}\in A\]

for \(k\) believes with strength 0.5 that \(a\) supports the claim \(A\) (and we drop the subscript in the case it's 1.0). Then the apparent transitivity above only works if \(kT_{1.0}l\) and \(lT_{1.0}m\), i.e.
\(k\) trusts \(l\) completely, and the same for \(l\) and \(m\), i.e. both arrows in the trust diagram are labelled 1.0, and that makes the apparent transitivity look reasonable. So, we recast the _trust_ \(T\) rule as

\[\frac{kT_{x}l\quad a^{l}_{y}\in A}{a^{k}_{x\cdot y}\in A}\;\textit{trust}\;\;T\]

If instead \(kT_{0.5}l\) and \(lT_{0.4}m\) then I would say \(kT_{0.2}m\), and the proof above supports this, rewritten as

\[\frac{kT_{0.5}l\quad\dfrac{lT_{0.4}m\quad a^{m}\in A}{a^{l}_{0.4}\in A}\;\textit{trust}\;T}{a^{k}_{0.2}\in A}\;\textit{trust}\;\;T\]

i.e. if \(a^{m}_{1.0}\in A\) and \(kT_{0.5}l\) and \(lT_{0.4}m\) then \(a_{0.2}^{k}\in A\). We can also allow terms here with the variables remaining... interesting, and to be explored in the fuller paper.

## 4 Notes for further thought, and discussion, and data

### Combining different trust relations

Most interesting, and hard, and probably not agreed on? In particular we really need some write-ups of the use cases so as to ground this. Also industrial examples, as well as things like the work on my SfTI seed project with Ahau. There's the question of relationships between trust relations (subset, disjoint...). There's the other, harder question of how the incompatible demonstrations of different actors interact. If the actors don't trust each other then it's clear... no merging. If they do trust (in some direction) then (i) how do we spot incompatibility?; and (ii) how do we deal with it, if we wish to. (i) might be formalised as: combining the proofs of two claims (or just two demonstrations) derives \(\bot\); (ii) that's a hard and interesting question...

### Recording and dealing with disputed veracity stories

What happens if we have more than one trail of verification for a claim? Can it happen formally? Hopefully the previous section can deal with this in a clear way. A benefit of a simple formalisation! Two types of statement perhaps. One for veracity and one for resolution of differences?

### Authenticity

This is missing. But one thought is that once a proof tree is constructed, the open assumptions are places to look (in general) for problems in authenticity, since they are the places where a judgement "hits" the real world. Authenticity is about a judgement being genuine.

### Proof construction and checking

Work has started on using Isabelle to implement a proof checker for the veracity logic.
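As a sanity check on the weighted trust rule, here is a small Python sketch that propagates judgement strengths to a fixpoint; the data layout and the routine are our illustrative choices, not part of the logic:

```python
def propagate(trust, judgements):
    """trust: dict (k, l) -> weight in [0, 1], meaning k T_weight l.
    judgements: dict (actor, claim) -> strength of the upheld judgement.
    Repeatedly applies the weighted `trust T` rule until no strength improves."""
    changed = True
    while changed:
        changed = False
        for (k, l), x in trust.items():
            for (actor, claim), y in list(judgements.items()):
                if actor == l:
                    s = x * y                      # strength x.y in the conclusion
                    if s > judgements.get((k, claim), 0.0):
                        judgements[(k, claim)] = s
                        changed = True
    return judgements

T = {("k", "l"): 0.5, ("l", "m"): 0.4}
J = {("m", "A"): 1.0}                              # a^m in A, with full strength
print(propagate(T, J)[("k", "A")])                 # 0.2, matching the derivation above
```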
2304.03392
Personalising Digital Health Behaviour Change Interventions using Machine Learning and Domain Knowledge
We are developing a virtual coaching system that helps patients adhere to behavior change interventions (BCI). Our proposed system predicts whether a patient will perform the targeted behaviour and uses counterfactual examples with feature control to guide personalisation of BCI. We use simulated patient data with varying levels of receptivity to intervention to arrive at the study design which would enable evaluation of our system.
Aneta Lisowska, Szymon Wilk, Mor Peleg
2023-04-06T21:46:48Z
http://arxiv.org/abs/2304.03392v4
# Personalising Digital Health Behaviour Change Interventions using Machine Learning and Domain Knowledge

###### Abstract

We are developing a virtual coaching system that helps patients adhere to behaviour change interventions (BCI). Our proposed system predicts whether a patient will perform the targeted behaviour and uses counterfactual examples with feature control to guide personalisation of BCI. We use simulated patient data with varying levels of receptivity to intervention to arrive at the study design which would enable evaluation of our system.

Keywords: BCI, Personalisation, Machine Learning, mHealth

## 1 Introduction

As part of the Horizon 2020 CAPABLE (CAncer PAtient Better Life Experience) project, we are developing a mobile health (mHealth) application that aims to facilitate the mental wellbeing of cancer patients. It offers multiple evidence-based digital health behaviour change interventions (BCIs) from the domains of mindfulness, physical exercise, and positive thinking, targeting various aspects of wellbeing (e.g., stress, insufficient sleep etc.) [1]. Each BCI defines a target activity that, when performed regularly, may lead to improvement in one or multiple wellbeing outcomes. For example, evidence-based studies showed that meditation may reduce stress and may improve the quality of sleep. Potential prolonged wellbeing benefits depend on regular engagement with the recommended activity. To support patients with BCI adherence, we design a Virtual Coaching (VC) system combining knowledge- and data-driven approaches. VC uses personalisation and prediction models. We utilise the Behaviour Change Intervention Ontology [2] (BCIO) and Fogg's Behaviour Model [3] to select modifiable inputs of the personalisation model, and machine learning to predict patient behaviour. Fogg suggests that a person will perform a target behaviour when they are sufficiently motivated, have the skill/ability to perform the behaviour, and there is a trigger prompting them to do it [3]. In our previous work [4] we identified some factors influencing these three components, and formulated Fogg's Model as:

\[Behaviour=\begin{cases}1&\quad\text{if }Motivation\times Ability\times Trigger>\text{action threshold}\\ 0&\quad\text{otherwise}\end{cases} \tag{1}\]

Each of the Motivation (M), Ability (A) and Trigger (T) components, including the action threshold, varies among individuals and depends on the suggested activity and the user's current context, both internal (e.g., emotional state) and external (e.g., day of the week). Kunzler _et al._ [5] explored factors that influence patient receptiveness to intervention. They discovered that fixed user characteristics (e.g., age, gender, personality) affect participants' general receptiveness, while extrinsic factors (e.g., date, battery status) are associated with receptiveness in the moment. Building upon this distinction, we map general intervention receptiveness to the action threshold and receptiveness in the moment to the trigger magnitude in Fogg's model (refer to the right part of Figure 1). It is important to note that behaviour itself does not have a magnitude in this model. The Fogg Behaviour Model operates on a straightforward binary premise, categorizing behaviour as either performed or not. The focus is not on the "magnitude" of the behaviour but rather on the frequency of engagement for habit formation. In this context, it is more effective to regularly engage in a simplified version of the target behaviour than to not perform it at all.
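A minimal Python sketch of equation (1); the function name and the toy values are illustrative assumptions, not values from the study:

```python
def behaviour(motivation: float, ability: float, trigger: float,
              action_threshold: float) -> int:
    """Fogg's model as a binary outcome: performed (1) or not (0)."""
    return 1 if motivation * ability * trigger > action_threshold else 0

# A moderately motivated, triggered patient for whom a long walk is hard:
print(behaviour(3, 1, 3, action_threshold=10))   # 0: below the action line
# Raising ability (e.g. by shortening the walk) moves them above the line:
print(behaviour(3, 3, 3, action_threshold=10))   # 1
```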
Using walking as an example, some argue that it can be seen as a gradual behaviour with varying durations or frequencies. However, in Fogg's model, these are properties of a walk that can be adjusted. For instance, the length of the walk can be shortened to enhance the patient's ability to engage in it. This concept aligns well with BCIO, where the length of the walk may correspond to the BCI dose, which is a property of the BCI content. Our objective is to modify the BCI in a way that promotes the performance of the well-being behaviours. While prior work focused on specific aspects of BCI for personalisation, our novelty is in leveraging both patient context and all BCI properties to predict whether a patient will perform the target behaviour. Moreover, we propose to utilize counterfactual examples with feature control to guide BCI personalisation. A counterfactual example is a hypothetical input instance that is similar to the original instance, but with certain features modified in order to produce a different prediction from the machine learning (ML) model. For example, the model predicts that "John (age 67, male, etc.), given his current context (stressed, fatigued, outside of home, on a Tuesday afternoon), will not perform a 45-minute brisk walk if the VC sends him a motivating reminder at this point." However, if we improve his ability to perform the BCI and change the walk duration to 30 minutes or the pace to a slower speed, the model predicts that John will perform the suggested activity (see Figure 1). In this preliminary work we:

1) Identify BCI properties that can influence patients' Motivation, Ability, and Trigger (MAT) dimensions. We propose that these properties, along with patient context, can be used to predict whether the suggested behaviour will be performed.
2) Adapt the concept of counterfactual examples, commonly used in the field of explainable AI, to personalise BCI.
3) Utilize simulated data to design a study that enables the evaluation of the proposed personalisation system.

## 2 Related Work

Kunzler _et al._ [5] trained a Random Forest (RF) classifier to predict participants' receptivity in the moment, specifically detecting when participants respond to a chatbot. By utilizing both participant context and their intrinsic characteristics as features, they achieved an accuracy of 77%. In our behaviour prediction model, we also incorporate these features and use an RF. However, unlike Kunzler et al., our objective is not solely to predict when the patient responds to the notification and consequently adjust BCI delivery. We are also interested in determining whether they will perform the target behaviour as a result of the notification. Therefore, we additionally consider the Motivation and Ability dimensions of Fogg's model. These are affected by BCI content. For example, Mair et al. [6] aimed to increase physical activity in older adults. In their study, participants were given the option to select their goal in terms of either time spent on performing physical activity or the number of steps walked. Each goal had three target ranges, allowing users to match the intervention dose to their ability. The participants received messages that aimed to motivate them (e.g., by highlighting the benefits of walking) or to enhance their ability to perform the activity (e.g., supporting activity planning). We note that these messages correspond to different Behaviour Change (BC) techniques.
Following Mair et al., we consider BCI dose and BC technique as variables that can be personalised, and we have incorporated them as input features into our behaviour prediction model.

## 3 Methods

#### 3.0.1 Behaviour Prediction Model

We trained an RF model to predict a patient's behaviour (i.e., the execution of a target activity, framed as a binary classification problem) using their fixed characteristics (age, gender, motivation at enrollment), internal context (affect, cognitive load), external context (motion, location, time of day, day of the week), and BCI properties (activity type, dose, delivery schedule, message phrasing, and content) (see the forward pass in Figure 1). We use the scikit-learn implementation of the RF with default parameters and balanced class weighting.

#### 3.0.2 Personalisation Method

We utilize the DiCE (Diverse Counterfactual Explanation) framework [7] with feature control to identify specific changes needed in the BCI to increase the likelihood of patient adherence to it, i.e., performing the target behaviour (see the red backward arrow in Figure 1). We generate multiple counterfactual examples using a genetic algorithm, but for personalisation we select the one that requires the fewest changes to the patient's context and BCI. To illustrate this, let's consider an example with John. If John were more motivated, relaxed, and well-rested, he would likely be willing to engage in a suggested 45-minute brisk walk. However, achieving these changes in his state might be more challenging compared to increasing his ability to perform the target behaviour by reducing the duration of the walk. Therefore, even though there are multiple counterfactual scenarios in which John takes a walk or engages in other well-being activities like yoga, we select the one that requires the minimal number of changes to his context and BCI, ideally altering only one BCI feature. Feature control is a technique that ensures that generated counterfactual explanations satisfy certain constraints. For example, we might want to increase a patient's ability to perform a target behaviour, but we cannot change the fixed characteristics, e.g. age. We allow changes to the features which relate to the BCI but not to patient characteristics or their internal context.

Figure 1: An example of utilizing counterfactual explanation with feature control to guide the personalisation of BCI. In the forward step, the model predicts that the behaviour will not be performed based on the current patient context and the selected BCI. We generate counterfactual examples and choose one that necessitates the fewest changes to move the patient above the action line. The counterfactual example demonstrates that we can retain the recommended activity type but need to adjust the BCI dose, such as suggesting a shorter walk.

#### 3.0.3 Evaluation Method

There is a gap in health BCI research in terms of availability of public BCI datasets. To be able to evaluate an ML-based intervention personalisation approach prior to commencement of the clinical study, it is necessary to create synthetic data. In our previous work [4] we developed a simulator based on Fogg's model which computes MAT components based on multiple data items and uses equation 1 to generate binary behaviour outputs (behaviour was performed or not). Here we simplify the simulation so that each MAT dimension has an integer value from 0-4, depending only on the model input features described in section 3. Following equation 1, the action thresholds should have values between 0 and 64.
At the action threshold of 0, a behaviour is always performed, and at 64 - never. In this work, drawing inspiration from the findings of our previous survey study [8] and a pilot real-world well-being intervention involving healthy volunteers performing their selected target behaviour daily [9], we simulate patients with different action thresholds to investigate the influence of these thresholds on the performance of the RF model. Figure 1 depicts a non-linear action threshold. However, in the initial exploration, we generated simulated patients with linear thresholds. We acknowledge that this simplified setting may not fully capture the complexities of real-life scenarios, where initial increases in ability or motivation (from 0 to a small value) may have a larger impact on behaviour compared to increases from high to very high. Nevertheless, our objective was to assess whether the behaviour prediction model can be effectively trained using the selected input features in a small annotated data regime that reflects real-life interventions. If it fails to work in this simple setting, it is unlikely to perform well in more complex scenarios. We also generate datasets with diverse numbers of simulated patients (between 1-100) with varying action thresholds (generated at random) to explore how the RF could benefit from learning across multiple patients. We train the model on all patients but use the patient identifier as a feature to distinguish between patients or groups of patients if necessary.

## 4 Experiments

The simulator is utilized to generate both training and testing examples. At the beginning of the intervention, the available data is very limited as it has not yet been gathered. To accurately reflect this situation we initiate the training process with a small training set and gradually increase it as the intervention progresses over time. For testing, we generate 400 samples per patient, while the number of samples for training varies from 2 to 32. It is important to note that during the intervention, we assume we receive one annotated sample per day from each individual, indicating whether the patient performed the target behaviour or not. We consider a maximum data collection period of one month. Consequently, the amount of training data remains relatively low. However, this setup realistically reflects what we could expect in a real-life study.

#### 4.0.1 Learning to Predict Behaviour for Simulated Patients with Different Action Thresholds

We investigate how the action threshold impacts the number of samples required to train the RF model for behaviour prediction. As shown in Figure 2, for a receptive individual (action threshold \(=10\)), the RF needs around 30 samples to achieve an approx. 0.7 macro F1 score. When the action threshold is low, we have only positive examples and the RF struggles to learn the context in which the behaviour is not performed. A similar problem arises when the action threshold is set too high. As expected, in such cases the model's effectiveness in training is compromised due to the limited availability of positive examples. However, it is noteworthy that the range of thresholds from which we can gather samples of performed behaviour is remarkably narrow. This suggests that the personalised model can only be effectively trained for a small subset of patients. From a machine learning perspective, this finding may not be unexpected, but it holds significant practical implications for real-life studies. Specifically, when comparing personalised approaches, it is vital to ensure that each study group exhibits a similar distribution of "general intervention receptivity." This issue becomes even more apparent in the subsequent experiment.
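A minimal Python sketch of this simulation-and-training loop, under the simplifying assumption that the RF sees the MAT values directly rather than the raw context and BCI features; all names and parameter values are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def simulate(n_samples, threshold):
    """MAT dimensions as integers 0..4; labels from equation (1)."""
    mat = rng.integers(0, 5, size=(n_samples, 3))     # Motivation, Ability, Trigger
    y = (mat.prod(axis=1) > threshold).astype(int)
    return mat, y

X_test, y_test = simulate(400, threshold=10)          # 400 test samples per patient
for n_train in [2, 4, 8, 16, 32]:                     # one labelled sample per day
    X_tr, y_tr = simulate(n_train, threshold=10)
    # Note: at extreme thresholds the tiny training set may contain a single
    # class, mirroring the failure mode discussed above.
    clf = RandomForestClassifier(class_weight="balanced", random_state=0)
    clf.fit(X_tr, y_tr)
    print(n_train, f1_score(y_test, clf.predict(X_test), average="macro"))
```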
#### 4.0.2 Learning from Multiple Simulated Patients

Figure 3(a) shows that increasing the number of samples by increasing the number of patients in a study might not necessarily lead to better learning of behaviour prediction, because patients would likely have different general receptivity levels. The proportion of patients with higher general receptivity to intervention might be more predictive of model performance than the total number of simulated patients. For example, learning from a smaller group of 10 patients (with 30 samples per patient), where 80% have an action threshold below 40, leads to a higher macro F1 score for the RF model compared to learning from a larger group of 100 patients, where only 52% have an action threshold below 40.

#### 4.0.3 Adding Supervision for Enhanced Learning

We use the MAT values from the simulation as class output labels for a multi-class RF classifier. There might be variables that commonly contribute to one of the MAT dimensions, as assumed in our simulation. For instance, activity difficulty and the patient's physiological state may both contribute to the patient's ability to perform the target activity. Therefore, instead of directly predicting behaviour, it may be easier for the model to learn where the patient stands on each axis separately. This is especially true when dealing with multiple patients, as shown in Figure 3(c). Hence prediction of MAT could be an intermediate step in intervention personalisation. The MAT values might subsequently be fed alongside the patient ID to the next RF model to learn to predict the behaviour for each individual (see Figure 3(b)).

Figure 2: Impact of action threshold on RF behaviour prediction performance.

Figure 3: Performance of the RF model for varying numbers of simulated participants with different action thresholds. The % of participants with action threshold below 40 for the plots from left (1) to right (100) is: 100, 40, 80, 65, 70, 52.

## 5 Discussion

Our experiments revealed that the RF struggles to directly predict whether a patient performs the target behaviour based on input features alone. Therefore, we revise the Behaviour Prediction Model to be a two-step system: 1) predicting the MAT, and 2) predicting if the patient is above or below the action threshold. In this revised system, to personalise the intervention we use DiCE twice: 1) to identify the most effective MAT dimension to target for a given patient in a given context, and 2) to identify BCI properties that should be modulated to move the patient forward on the selected MAT axis (a code sketch illustrating the DiCE step is given at the end of the paper). This approach assumes that while patients may differ in their general receptivity to interventions, there may be common factors that influence their motivation, ability, or receptivity in the moment. The main limitation of our experiments is that the MAT values are generated by the simulated system. In a real-world intervention, MAT annotation would be obtained from user interactions with the VC. Trigger values could be assigned based on user responsiveness to notifications or ignored messages. Additional information about patients' ability and motivation might come from enrollment questionnaires, direct questions after the performance of the target behaviour, or ratings of activities and motivational messages.
For activities with demonstration videos (e.g., a 'Tai Chi' lesson), we can approximate the patient's ability by measuring how long the video was played. The capture of this information would need to be planned for already at the BCI design stage. In our future work, we aim to explore methods for capturing additional supervision information more seamlessly, without burdening patients, and integrating data collected at various frequencies to drive BCI personalisation.

## 6 Conclusion

In this preliminary work, our objective is to design a study that effectively tests our personalised system. Specifically, we aim to determine the optimal number of patients required in the study to train a behaviour prediction model effectively. Additionally, we investigate which information about the patient and intervention needs to be leveraged throughout the study. Through simulating diverse patient profiles, we analyze how individual receptivity to the intervention impacts the training of the machine learning model. Our findings highlight that patients' receptivity to intervention has a stronger influence on model performance than the number of patients in the study. This emphasizes the importance of ensuring a similar distribution of "general intervention receptivity" in each study group when comparing personalised approaches. Moreover, our findings demonstrate that incorporating an intermediate representation, which considers individual MAT components, contributes to the successful learning of behaviour prediction and subsequent personalisation of BCI. In practice, this necessitates inferring measures of patients' ability and motivation from their interactions with the mHealth application or collecting such data during the study. Lastly, to facilitate patients in performing the target behaviour, we propose employing the DiCE method to generate potential strategies for modulating the BCI.

#### Acknowledgments

This work has received funding from the EU's Horizon 2020 research and innovation programme under grant agreement No 875052.
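As an illustration of the DiCE-based personalisation with feature control discussed above, here is a minimal Python sketch assuming the publicly available dice-ml package; the toy data, column names and model are our own, not the project's actual pipeline:

```python
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# Toy data: one fixed characteristic, one internal-context feature, one BCI
# property (walk length in minutes), and the observed behaviour label.
df = pd.DataFrame({
    "age":       [67, 55, 40, 62, 58, 45, 70, 50],
    "stress":    [3, 1, 2, 0, 3, 1, 2, 0],
    "dose":      [45, 30, 15, 30, 45, 15, 45, 30],
    "behaviour": [0, 1, 1, 1, 0, 1, 0, 1],
})
clf = RandomForestClassifier(random_state=0).fit(
    df.drop(columns="behaviour"), df["behaviour"])

d = dice_ml.Data(dataframe=df, continuous_features=["age", "stress", "dose"],
                 outcome_name="behaviour")
m = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(d, m, method="genetic")     # genetic counterfactual search

john = df.drop(columns="behaviour").iloc[[0]]        # predicted not to walk
cfs = explainer.generate_counterfactuals(
    john, total_CFs=3, desired_class=1,
    features_to_vary=["dose"])                       # feature control: only the BCI dose
cfs.visualize_as_dataframe()                         # e.g. a shorter walk flips the prediction
```

The `features_to_vary` argument plays the role of feature control here: fixed characteristics and internal context are held constant, so every returned counterfactual is an actionable change to the BCI alone.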
2310.10517
Distribution prediction for image compression: An experimental re-compressor for JPEG images
We propose a new scheme to re-compress JPEG images in a lossless way. Using a JPEG image as an input the algorithm partially decodes the signal to obtain quantized DCT coefficients and then re-compress them in a more effective way.
Maxim Koroteev, Yaroslav Borisov, Pavel Frolov
2023-10-16T15:33:58Z
http://arxiv.org/abs/2310.10517v1
# Distribution prediction for image compression: An experimental re-compressor for JPEG images

###### Abstract

We propose a new scheme to re-compress JPEG images in a lossless way. Using a JPEG image as an input, the algorithm partially decodes the signal to obtain quantized DCT coefficients and then re-compresses them in a more effective way.

Keywords: JPEG images, re-compression, probability model, context model, lossless compression

## I Introduction

In recent years, with the emergence of multiple data storage and cloud services, the problem of data compression has become vital again. One of the key and most expensive factors for organizing cloud services is disk space. On the other hand, most of the data currently located in the cloud is media data. This data is normally already compressed in some more or less efficient way, but still it is very desirable to improve its compression further to save disk space in the cloud on a larger scale. There may be multiple approaches to this problem, varying in scale and complexity and related to building models of the data of various types. In this paper we focus on a more traditional approach of compressing, or rather re-compressing, media data, namely, we try to propose some more efficient algorithms for better compression of individual compressed image files. We also focus on JPEG files here as, among media data in the cloud, video and JPEG-compressed files represent a significant fraction.

## II An approach to re-compress JPEG images

When uploading media data into the cloud it is usually assumed by the user that the uploaded data can be retrieved back exactly in the same form as it was uploaded. In other words, no one expects the cloud algorithm would distort the data handed to it. Therefore, when working on re-compression of media data under this assumption, the only possible approach is lossless compression. It is well known that the main source of losses in compression of media data is quantization; once quantized, media data is normally compressed losslessly. In the currently used approaches to re-compression, which we follow as well, we can try to improve the algorithms following quantization, i.e., to re-compress quantized DCT coefficients. Thus it is assumed below that a re-compression algorithm extracts quantized DCT coefficients out of the compressed data stream and transmits them to a core re-compressor which should compress them in a better way, providing an improved compression gain.

## III Some remarks on the statistics of DCT coefficients

The JPEG algorithm deals with \(8{\times}8\) blocks of DCT coefficients, so each block contains \(64\) coefficients. It is well known that the application of the discrete cosine transform results in a certain decorrelation between the adjacent pixels of the block, so that the final DCT coefficients are sufficiently independent in terms of _linear correlation_. To illustrate this we show the correlation coefficient measured between the neighbouring positions of \(8\times 8\) blocks for a dataset of JPEG images (fig. 1). It is seen that the linear correlations are insignificant for all locations of DCT coefficients except when measuring correlations between the DC coefficients and some AC coefficients closest in terms of the zigzag distance. However, quite naturally, even in the latter case the correlation magnitude remains at the level \(\sim 0.2\), which is too small for building a realistic model out of it. Some increase of standard deviations observed in the right part of fig. 1
is accounted for by the significant fraction of zero DCTs at positions of the block remote from the top left corner (the position of the DC coefficient). These observations remain sufficiently stable for various images and indicate that efforts to build _linear_ prediction models based on information from the adjacent positions in the block are doubtful. Let us reiterate: more sophisticated models which would take into account higher-order statistics can be of use; they require further analysis and will not be considered in this paper. Next, the DCT transform typically results in forming a certain pattern in the statistics of DCT coefficients located in different parts of the block. To illustrate this property we plot the behavior of the second moment of the distribution (standard deviation) of DCT coefficients at each position of an image (fig. 2)1. Based on this well-known observation we consider the compression of DCT coefficients separately for each position in the block, which corresponds to separate encoding of the coefficients corresponding to different frequencies. To shorten the explanations we will refer, rather informally, to this procedure as _bucketing_; so we imply below that each of the \(64\) positions in an \(8\times 8\) block corresponds to one bucket and thus we consider the compression of \(64\) buckets. Note that implementation of this approach usually does not require creating separate physical buffers in memory for buckets; instead, it is sufficient to use appropriate pointer shifts in the array of all DCT coefficients.

## IV Probability model

After separating all DCT coefficients of an image into buckets and applying delta coding to the first bucket containing DC coefficients, we would like to predict the probability of each coefficient in each bucket to use these probabilities in a multisymbol arithmetic coder (MSAC). From the point of view of adaptive probability prediction this approach implies forming certain contexts for each position of DCT coefficients. As a probability model for all buckets the Laplace distribution is usually used, which corresponds reasonably well to actual distributions of DCT coefficients and at the same time is sufficiently simple for computations. However, it is clear that when dealing with non-stationary distributions it is difficult to confirm that the distributions inside buckets remain Laplacian or at least keep their parameters fixed (usually they do not). Moreover, if we look at the distributions of DC coefficients, the deviation from Laplace becomes evident (fig. 3). In fig. 4 we show the distributions of AC coefficients for an image compared to the pdfs for Laplace distributions. It is seen that various deviations from the Laplace distribution may occur both in the region of small amplitudes of the coefficients and in the tail regions.

Fig. 1: For each image of a dataset consisting of \(18\) widely spread JPEG images (see the table at the end of the paper) the Pearson correlation coefficient has been computed between DCT coefficients located at different positions of \(8\times 8\) blocks. All distances are measured in terms of zigzag order. Note that the error bars are shown only for one experiment; for the others they are similar. The only statistically significant deviation of the correlation coefficient from zero is observed for measuring correlations with the top left position (DC coefficient position). Even in this case the correlations are extremely weak (\(\sim 0.2\)).

Fig. 2: Distribution of standard deviations for DCT coefficients computed for various positions inside the \(8\times 8\) block of an image. Note the log scale to clearly indicate the trend. All blocks of an image have been collected and stds for quantized DCT coefficients have been computed for each position in the block. The logarithm of the magnitude of the std is shown for each position.

Fig. 3: Distribution of \(\Delta\)DC coefficients (i.e., first forward differences between DC coefficients for an image) for several images from the dataset.
Note the semi-log scale for more convenient comparison with exponential distributions. For each empirical distribution we also plot the Laplace distribution for \(\mu=0\) and the std shown in the legend. These stds correspond to those measured in the empirical distributions. Note also the log-log inset, which demonstrates longer tails for the empirical distributions.

However, there is an essential difference between figs. 3 and 4. For DC buckets the distributions systematically demonstrate longer tails, closer to power-law behavior (see the inset in fig. 3), while the distributions for AC keep the Laplacian form: even though the comparison with Laplace distributions shows some discrepancy, the distributions remain quite straight in the semi-log scale, indicating that the approximation with the Laplace distribution proves to be adequate for AC coefficients and it is just a question of a better estimate for the standard deviation of this distribution which matters; this is valid for the majority of observed cases. On the other hand, for DC coefficients an approximation with a distribution with longer tails and a sharper peak near zero (e.g. generalized Gaussian) may be a better choice. We will discuss this at some other point, but for the moment we will assume the Laplace distribution for DC coefficients as well. We apply the Laplace distribution to the probability computation adaptively, i.e., for each of the \(64\) buckets of the block we assume that the probability density for an underlying random variable \(x\) has the form

\[p(x,\mu,\sigma)=\frac{1}{2\sigma}e^{-\frac{|x-\mu|}{\sigma}}. \tag{1}\]

This random variable will approximate the distribution for the original alphabet of DCT coefficients, for which we additionally take \(\mu=0\). The coefficients are also assumed conditionally independent given the variances. Then the distribution function will be

\[F(x,\sigma)=\begin{cases}\frac{1}{2}e^{x/\sigma},&x<0\\ 1-\frac{1}{2}e^{-x/\sigma},&x\geq 0.\end{cases} \tag{2}\]

Now, if \(X\) is a discrete alphabet \(\ldots-3,-2,-1,0,1,2,3,\ldots\), then the probability2 for each symbol would be approximated by means of (2) as

Footnote 2: Generally speaking this is a conditional probability as it depends on the distribution parameter for the previous coefficients in the bucket.

\[P(X_{i},\sigma)=F(X_{i}+0.5,\sigma)-F(X_{i}-0.5,\sigma). \tag{3}\]

The last formula was used for actual probability computations in the implementation. To make all the necessary computations faster, probability tables corresponding to various \(X\) and \(\sigma\) have been computed using (1), (2) and (3).
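To make the bucketing and formula (3) concrete, a minimal Python sketch follows; it assumes the quantized DCT coefficients are already available as \(8\times 8\) blocks, and all names are ours rather than the implementation's:

```python
import numpy as np

def to_buckets(blocks):
    """blocks: array of shape (n_blocks, 8, 8) with quantized DCT coefficients.
    Returns shape (64, n_blocks): bucket k holds position k of every block,
    i.e. no physical copies are needed beyond this reshaping/transpose view."""
    return blocks.reshape(len(blocks), 64).T

def laplace_cdf(x, sigma):
    """Distribution function (2), with mu = 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, 0.5 * np.exp(x / sigma), 1.0 - 0.5 * np.exp(-x / sigma))

def symbol_probs(symbols, sigma):
    """Formula (3): P(X_i, sigma) = F(X_i + 0.5, sigma) - F(X_i - 0.5, sigma)."""
    return laplace_cdf(symbols + 0.5, sigma) - laplace_cdf(symbols - 0.5, sigma)

# One row of a pre-computed probability table, here for sigma = 2:
print(symbol_probs(np.arange(-3, 4), sigma=2.0))
```

In the actual scheme such rows would be tabulated over a grid of \(\sigma\) values once, and the encoder would only look them up.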
## V Prediction

To compute the probability we have an additional parameter \(\sigma\), the second moment of the Laplace distribution. As the sequence of DCT coefficients cannot be considered stationary, but as we still would like to exploit the redundancy in each bucket, we need to model the parameter \(\sigma\) in each subband. To that end we can try a GARCH-like model approach to predict the second moment of the time series3. Then this parameter can be predicted for each bucket using the simplest exponential smoothing approximation for a time series \(x\):

Footnote 3: see also the next section for additional discussion

\[\sigma_{i}=\beta\sigma_{i-1}+\alpha|x_{i-1}|. \tag{4}\]

This is a purely empirical model and can be justified in several ways, e.g., by the following obvious argument. If a random variable \(X\) is distributed according to (1) and if \(n\) observable values of \(X\) are \(x_{1},x_{2},\ldots,x_{n}\), then the logarithmic likelihood function has the form

\[\log L=\log\frac{1}{(2\sigma)^{n}}-\frac{1}{\sigma}\sum_{i}|x_{i}|.\]

Differentiate the last expression wrt \(\sigma\) to find where the maximum of \(L\) is attained. We have

\[\frac{\partial\log L}{\partial\sigma}=-\frac{n}{\sigma}+\frac{1}{\sigma^{2}}\sum_{i}|x_{i}|=0.\]

It follows that the maximum likelihood estimate (MLE) of \(\sigma\), which maximizes the likelihood of the event when \(x_{1},x_{2},\ldots,x_{n}\) occur, yields

\[\sigma=\frac{1}{n}\sum_{i}|x_{i}|,\]

which for the case \(n=1\) takes on an especially simple form: \(\sigma=|x|\). Thus the best estimate of \(\sigma\) in the case of the Laplace distribution is just the absolute value of the observable \(x\). Therefore the simplest possible model for \(\sigma\) would have the form \(\sigma\sim|x|\), but to take into account a weak dependency on the previous state we introduce a minimal generalization, which immediately results in (4). It is necessary to stress that the above argumentation implies maximization of the posterior probability and thus implies a prior not depending on \(\sigma\), e.g., a uniform one.

Fig. 4: Empirical normalized distribution of AC coefficients in the first three buckets for the walkbridge image. Note the semi-log scale for more convenient comparison with exponential distributions. For comparison three Laplace pdfs are shown, with \(\sigma\) estimated from the data.

Note that in the GARCH model the prediction is usually done for the variance of the random variable, so (4) can be written as

\[\sigma_{i}^{2}=\tilde{\beta}\sigma_{i-1}^{2}+\tilde{\alpha}|x_{i-1}|^{2}. \tag{5}\]

The justification of this model follows from the previous consideration. In practice, it seems there is no way to decide which model to prefer: it should be determined in numerical experiments. Models like (4) or (5) are extremely generic, and it is no wonder they have already appeared from time to time in various compression-related applications (see e.g. [1]). Interestingly, they did not seem to receive much attention.

## VI Connection of the prediction model to a random process

The way formula (4) was introduced in the previous section was purely empirical, and one would be interested in a more strict interpretation, or at least in a comparison with some mathematically more clear construction. The easiest way to do this for a non-stationary signal is to compare it to some elementary random process. Let us assume we construct a random process in the following way.

\[\sigma_{1}=\beta\left|Z_{0}\right|,\]
\[\sigma_{2}=\alpha\sigma_{1}+\beta\left|Z_{1}\right|,\]
\[\ldots\]
\[\sigma_{k}=\alpha\sigma_{k-1}+\beta\left|Z_{k-1}\right|,\]

Here \(Z_{i}\) are random variables having the Laplace distribution with expectation \(EZ_{i}=0\) and variance \(DZ_{i}=s^{2}\). From the above formulas the general term \(\sigma_{k}\) can be easily re-written as

\[\sigma_{k}=\beta\left(\alpha^{k-1}\left|Z_{0}\right|+\alpha^{k-2}\left|Z_{1}\right|+\ldots+\left|Z_{k-1}\right|\right). \tag{6}\]
Noting that from \(EZ_{i}=0\), \(DZ_{i}=s^{2}\) it follows \(E\left|Z_{i}\right|=s\), \(D\left|Z_{i}\right|=s^{2}\), the last formula yields for the expectation and variance of \(\sigma_{k}\)

\[E\sigma_{k}=\beta s\frac{1+\alpha^{k-1}}{2}k,\;D\sigma_{k}=(\beta s)^{2}\frac{1+\alpha^{2k-2}}{2}k.\]

Thus, as in the case of a simplest random walk, the process represents a random walk in the neighbourhood of a line with the slope \(\beta s(1+\alpha^{k-1})/2\), with the increasing deviations proportional to \(\beta s\sqrt{(1+\alpha^{2k-2})/2}\sqrt{k}\). Let us assume \(|\alpha|<1\); then for \(k\gg 1\) the term with \(\alpha\) becomes negligible and the process behaves as

\[E\sigma_{k}\sim\beta sk,\;D\sigma_{k}\sim(\beta s)^{2}k.\]

So the basic behavior is determined by the standard deviation of the random variables \(Z_{i}\). The magnitude of \(\alpha\) controls the influence of adding \(\left|Z_{i}\right|\) into \(\sigma_{k}\). In our applications we deal with the situation when all \(Z_{i}\) 1) have different standard deviations; 2) these standard deviations are unknown. The former problem can be simplified by choosing a smaller \(\alpha\), which results in only the closest terms affecting \(\sigma_{k}\) at step \(k\) in (6); for the latter we have to provide an estimate for \(s\). This is what we discussed in the previous section, where it was demonstrated that \(s\sim\left|Z_{i}\right|\), which results in the estimate \(E\sigma_{i}\sim\beta\left|Z_{i}\right|\).

## VII Adaptation of the prediction model to the encoding scheme

It is not difficult to understand (and check experimentally) that the direct use of the models (4), (5) has a limited efficiency. The main reason for that is that the models imply some 1D time series in which the connection strength between observable values decreases in proportion to the linear distance between these observables. But this is not the case for data organized in a 2D structure, like images. The dependences here are also 2D, and any 1D representation of the coefficients in the buckets cannot be described in terms of the linear distance. It can be expressed another way by saying that the contexts for a particular coefficient in a bucket turn out to be more complicated; when talking about some context of previous coefficients, the word "previous" should not be understood in terms of a linear distance. To take this into account we make some modifications to the basic exponential smoothing model. We start from formula (4) and apply it to each bucket as5

Footnote 5: the following discussion can be literally repeated if instead of using the standard deviation \(\sigma\) one uses the variance \(\sigma^{2}\); it is a matter of experiments with the model, as we pointed out at the end of the previous section.

\[\sigma_{i}^{k}=(1-\alpha)\sigma_{i-1}^{k}+\alpha|DCT_{i-1}^{k}|. \tag{7}\]

In this formula \(k\) is the number of the bucket, i.e., \(k=1,2,\ldots 64\); \(\sigma_{i}^{k}\) is the magnitude of the second moment for bucket \(k\) for block \(i\); \(|DCT_{i-1}^{k}|\) is the absolute value of the DCT coefficient in bucket \(k\) of the block \(i-1\); and \(\alpha\) is an empirical parameter. The formula for variances would have the same form; we deal with (7) for the sake of simplicity. Now, to take into account 2D dependences, we predict \(\sigma\) in three directions: horizontal, vertical, and diagonal, an approach very well known in compression algorithms.
Thus, we compute \(\sigma\) for each bucket three times, which corresponds to the consideration of three 1D time series (horizontal, vertical, and diagonal), the length of which is confined by the sizes of the image. This yields three predicted \(\sigma\): \(\sigma_{ih}^{k},\sigma_{iv}^{k},\sigma_{id}^{k}\); the resulting standard deviation is computed as

\[\sigma_{i}^{k}=A\sigma_{ih}^{k}+A\sigma_{iv}^{k}+B\sigma_{id}^{k},\]

where the weight \(A\) for the horizontal and vertical components is taken to be equal for both, and normally \(B<A\). The prediction of \(\sigma\) can be further improved by taking into account a (weak) dependency existing between various buckets. This can be presented in different forms; to give an idea of how this can be done, we can collect sufficiently large statistics of linear dependencies between the pairs of buckets just by measuring the Pearson linear correlation coefficient and then, for each bucket, rank the other buckets wrt. this coefficient. We then apply the exponential smoothing model as follows

\[\sigma_{i}^{k}=\gamma\sigma_{i-1}^{k}+(1-\gamma)|DCT_{i-1}^{m}|,\]

where \(m\) is the index of the most dependent (in terms of linear correlation) bucket. When the second moment is estimated using the above model, the probabilities of the DCT coefficients can be computed using pre-computed tables. After that the symbols and their probabilities can be transmitted to the multisymbol arithmetic coder (MSAC).

## VIII Application of the RLRG encoder in the encoding scheme

It was noted that for buckets located in the bottom right corner of the block the lengths of runs of zeros become significant. Therefore we found it slightly more effective to use another method for these buckets, the adaptive run-length Rice-Golomb encoder (RLRG). This encoder does not use the moment estimations provided above but exploits its own small set of parameters which can be adapted for a particular task. A detailed discussion of this algorithm is provided in [2]. This algorithm, as its name indicates, belongs to the run-length family and consequently can be effective when encoding long runs of repetitive symbols. In the context of DCT coefficient compression it is applied to our buckets when we have long runs of zeros. Another important feature of this algorithm is its lower complexity compared to the arithmetic coder. We found it provides a better compression ratio for buckets with smaller \(\sigma\). In terms of practical computations, we measured the number of zeros for each bucket for all images in the data set and empirically chose the boundary in terms of the bucket counter. For all buckets with a significant number of zeros (\(>75\%\)) the RLRG encoder was used instead of MSAC. After analyzing a sufficiently large number of images one can determine a threshold for the number of buckets encoded with RLRG.

## IX Remarks on implementation

The general layout of our approach to re-compression is presented in fig. 5. The algorithm takes as input a compressed JPEG stream and decodes quantized DCT coefficients losslessly; we used a modified ffjpeg software for that. Then we apply our prediction algorithm and finally two entropy coders: MSAC and RLRG, which are used as explained in the previous section. For the MSAC case, probability tables have been pre-computed based on analytic formulas for the Laplace distribution.
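The following Python sketch ties together the smoothing model (7), the three-direction combination, and the RLRG/MSAC routing from section VIII; the parameter values and names are illustrative assumptions, not the tuned implementation:

```python
import numpy as np

ALPHA, A_W, B_W = 0.5, 0.4, 0.2   # illustrative smoothing and mixing weights

def predict_sigma(grid):
    """grid: coefficients of one bucket, arranged as the (rows, cols) grid of
    blocks in the image. Applies model (7) along the horizontal, vertical and
    diagonal directions, then mixes the three predictions."""
    sig_h = np.ones_like(grid, dtype=float)
    sig_v = np.ones_like(grid, dtype=float)
    sig_d = np.ones_like(grid, dtype=float)
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if c > 0:   # horizontal predecessor
                sig_h[r, c] = (1 - ALPHA) * sig_h[r, c-1] + ALPHA * abs(grid[r, c-1])
            if r > 0:   # vertical predecessor
                sig_v[r, c] = (1 - ALPHA) * sig_v[r-1, c] + ALPHA * abs(grid[r-1, c])
            if r > 0 and c > 0:   # diagonal predecessor
                sig_d[r, c] = (1 - ALPHA) * sig_d[r-1, c-1] + ALPHA * abs(grid[r-1, c-1])
    return A_W * sig_h + A_W * sig_v + B_W * sig_d

def choose_encoder(bucket, zero_share=0.75):
    """Route buckets dominated by zeros to RLRG, the rest to MSAC."""
    return "RLRG" if np.mean(bucket == 0) > zero_share else "MSAC"
```

Per-block \(\sigma\) values returned by `predict_sigma` would index the pre-computed probability tables of section IV before feeding the MSAC.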
Note that the coefficients in buckets compressed with RLRG can additionally be sorted in the decreasing order of the coefficients in the previous bucket (so this order is available in the decoder); this can help improve the compression gain in some situations, but the improvement is moderate. ## X Re-compression results As a test set we used a range of various jpeg images, both color and grey scale, most of them used in many image compression experiments. The compression results obtained with the proposed method are shown in table I and include both luma and chroma data. On the other hand, in industrial applications the use of two encoding algorithms seems not to be an optimal solution in terms of implementation simplicity. However, it is not a complicated task to remove one of the encoders under consideration and apply the other to all the buckets. We carried out this computation and provide results in table II for the case when MSAC is used for all buckets. Excluding the significant loss occurring on the _peppers-color_ image, the other images demonstrate only slightly worse results than the original scheme. The overall gain loss, excluding peppers-color, is \(0.5\%\), which can be thought of as insignificant, even though the gain loss is observed on the majority of test images and is consequently systematic. Therefore, for purposes of simpler implementation, one MSAC encoder can be used for all buckets; moreover, it seems possible to outperform the combination of MSAC and RLRG with the single MSAC by improving the contexts. This, if possible, will be presented elsewhere. ## XI Discussion Any compression scheme is not ideal. The proposed solution for re-compression of images has its pros and cons. On the one hand, the model for the distribution parameter prediction is simple, being based on explicit prediction formulas, not buried under a bunch of contexts, and easy to implement.

Fig. 5: Generic layout of the re-compression algorithm. The blocks are described in the main text. Block 'sorting' is indicated as optional and applied to the buckets compressed with RLRG encoder.

Moreover, the use of pre-computed tables allows us to avoid losing time on their computation in the process of encoding. One of the goals of this paper was to provide a single-algorithm solution for purposes of re-compression. At the same time, we can currently notice an opposite tendency: some modern approaches to media data compression mix up multiple methods, try to include multiple context models, etc. It would be too naive to try to compete with complex methods. Among the recent achievements in the field of image re-compression, the most noticeable is probably _brunsli_ [3], which represents a pretty nice combination of good ideas, some of which are similar to those used in our approach8. Another purpose of implementing the approach with two entropy encoders was to demonstrate a good potential for the RLRG algorithm compared to even such a powerful method as MSAC. The former algorithm looks elegant and somewhat underestimated. At the same time, it does not require any probability computation and builds the context for encoding directly, using a small set of adaptation parameters, providing lower complexity, as we already pointed out. However, recent advances in the field of data compression show (quite naturally) that various sophisticated algorithms turn out to be competitive with each other in application to various data sources, i.e., many methods become data dependent9.
This is certainly a sign of some saturation achieved with the application of classical compression methods. Nevertheless, the appearance of another compression scheme may not be without use, especially taking into account the fact that this approach demonstrated a significant redundancy reduction over the standard algorithm. Footnote 8: Our tests show that our re-compressor implementation is inferior to brunsli in terms of compression gain by \(2\%\) on average. Yet we have to note that brunsli utilizes additional procedures for gain improvement related to histogram representation. Footnote 9: or overfitted ## Acknowledgment The authors are grateful to their colleagues in Huawei Algorithm Innovation Lab for discussions. MK is also grateful to Phil Chou for multiple stimulating discussions in previous years which were partially embodied in this paper.
2310.17278
Dynamic Factor Models: a Genealogy
Dynamic factor models have been developed out of the need of analyzing and forecasting time series in increasingly high dimensions. While mathematical statisticians faced with inference problems in high-dimensional observation spaces were focusing on the so-called spiked-model-asymptotics, econometricians adopted an entirely different and considerably more effective asymptotic approach, rooted in the factor models originally considered in psychometrics. The so-called dynamic factor model methods, in two decades, have grown into a wide and successful body of techniques that are widely used in central banks, financial institutions, economic and statistical institutes. The objective of this chapter is not an extensive survey of the topic but a sketch of its historical growth, with emphasis on the various assumptions and interpretations, and a family tree of its main variants.
Matteo Barigozzi, Marc Hallin
2023-10-26T09:59:27Z
http://arxiv.org/abs/2310.17278v2
# Dynamic Factor Models: a Genealogy ###### Abstract Dynamic factor models have been developed out of the need of analyzing and forecasting time series in increasingly high dimensions. While mathematical statisticians faced with inference problems in high-dimensional observation spaces were focusing on the so-called _spiked-model-asymptotics_, econometricians adopted an entirely different and considerably more effective asymptotic approach, rooted in the factor models originally considered in psychometrics. The so-called _dynamic factor model_ methods, in two decades, have grown into a wide and successful body of techniques that are widely used in central banks, financial institutions, economic and statistical institutes. The objective of this chapter is not an extensive survey of the topic but a sketch of its historical growth, with emphasis on the various assumptions and interpretations, and a family tree of its main variants. ## 1 Factor models and the analysis of high-dimensional time series With the fast-paced development of computing facilities, high-dimensional datasets are increasingly available, posing a genuine challenge to statisticians and econometricians. Faced with this situation and the need to analyze such datasets, new asymptotic scenarios and methods had to be developed. Mathematical statisticians mostly focused on the so-called spiked models (see, for instance, Johnstone (2001), Onatski et al. (2013, 2014)), which leads to beautiful mathematical results such as the phase
2302.00515
Decentralized Search and Track with Multiple Autonomous Agents
In this paper we study the problem of cooperative searching and tracking (SAT) of multiple moving targets with a group of autonomous mobile agents that exhibit limited sensing capabilities. We assume that the actual number of targets is not known a priori and that target births/deaths can occur anywhere inside the surveillance region. For this reason efficient search strategies are required to detect and track as many targets as possible. To address the aforementioned challenges we augment the classical Probability Hypothesis Density (PHD) filter with the ability to propagate in time the search density in addition to the target density. Based on this, we develop decentralized cooperative look-ahead strategies for efficient searching and tracking of an unknown number of targets inside a bounded surveillance area. The performance of the proposed approach is demonstrated through simulation experiments.
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, Marios M. Polycarpou
2023-02-01T15:34:56Z
http://arxiv.org/abs/2302.00515v1
# Decentralized Search and Track with Multiple Autonomous Agents ###### Abstract In this paper we study the problem of cooperative searching and tracking (SAT) of multiple moving targets with a group of autonomous mobile agents that exhibit limited sensing capabilities. We assume that the actual number of targets is not known a priori and that target births/deaths can occur anywhere inside the surveillance region. For this reason efficient search strategies are required to detect and track as many targets as possible. To address the aforementioned challenges we augment the classical Probability Hypothesis Density (PHD) filter with the ability to propagate in time the search density in addition to the target density. Based on this, we develop decentralized cooperative look-ahead strategies for efficient searching and tracking of an unknown number of targets inside a bounded surveillance area. The performance of the proposed approach is demonstrated through simulation experiments. ## I Introduction One of the biggest challenges today's society faces is ensuring resilience to severe disasters. Unfortunately, first responders currently rely on a number of conventional methods to gather information that are time-consuming, and the descriptive character of the collected information often lacks accuracy, eloquence and the necessary level of detail. In this work, we envision that a team of autonomous mobile agents (e.g. drones) could become an important technological tool to aid the work of rescuers. Under this setting, the mission of one or more drone agents is to assist first responders by conducting the following important tasks: a) searching the area for situational assessment, and b) detecting and tracking victims as accurately as possible. More specifically, in a cooperative search and track (SAT) mission, multiple agents are tasked to cooperatively search a certain area of interest in order to discover survivors while at the same time keeping track of those survivors already detected. This work builds upon the theory of random finite sets (RFS) and proposes a multi-agent framework for SAT missions that takes into account the unknown and time-varying number of survivors, the noisy sensor measurements and the limited sensing range of the agents. In addition, efficient cooperative search and track strategies are devised which allow the agents to generate joint search-plans and to detect and resolve tracking overlaps. The contributions of this paper are as follows: * Devises efficient cooperative searching and tracking strategies for a decentralized multi-agent framework. * Provides a new perspective on the problem of multi-agent cooperative searching and tracking (SAT) through a unified probabilistic approach based on random finite sets (RFS). * Proposes a method to recursively compute and propagate in time the SAT-density by extending the classical probability hypothesis density (PHD) filter to account for the search density in addition to the target density. ## II Related Work Previous works in [1] and [2] investigate the SAT problem but only for the single-agent single-target case. The work in [3] proposes a recursive Bayesian multi-agent SAT solution; however, the agents are required to be in communication range at all times. The work in [4] proposes a task assignment algorithm that integrates area search and target tracking, but requires that the number of agents is larger than the number of targets and that a single agent can only track one target at a time.
The problem of multi-agent SAT is also investigated in [5] but lacks online path generation. Finally, the work in [6] proposes a cooperative search and track framework and a clustering approach for grouping neighboring agents (that have intersecting decision spaces) in order to minimize complexity. This work, however, assumes a clutter-free environment (i.e. no false-alarm measurements are received), perfect target detection, and that targets can be uniquely identified. Relevant works also include the work in [7], which presents an interesting use of random finite sets for collaborative multi-vehicle SLAM, and the works in [8, 9], which implement efficient multi-agent RFS-based simultaneous coverage and tracking algorithms for multiple targets. Complementary to the related work, in this paper we propose a decentralized architecture where multiple agents cooperatively search a region of interest, detect targets in the area and perform tracking of multiple detected targets. We assume that a particular 2D region of interest needs to be continuously searched for potential targets with the aid of a group of mobile agents. The number of targets is not known a priori and changes over time. The agents are equipped with sensors and receive noisy measurements from the targets in the presence of clutter (i.e. false-alarm measurements). The agents have a limited sensing range for detecting targets and a limited communication range for exchanging information with other nearby agents. Importantly, the aforementioned problem has been identified as the hardest version of the SAT problem, one that has not been addressed adequately in the literature, as indicated in [10, 11]. That said, the objective of each agent at an arbitrary time-step is to: a) accurately estimate the number of targets and their states inside its sensing range from noisy measurements in the presence of clutter, and b) generate search-plans for efficiently searching the whole surveillance area. To achieve a) and b), each agent maintains a modified PHD filter, termed in this paper the SAT-PHD filter, which in addition to the target density recursively computes the search density. Finally, the agents opportunistically cooperate by exchanging information in order to tackle the above objectives more efficiently; e.g. when two or more agents are in communication range they cooperate to generate joint search-plans and to resolve tracking overlaps (i.e. a situation where 2 or more agents track the same targets). To summarize, the agents in communication range exchange their search densities, their multi-target states and their mode of operation, i.e. search or track. We should also note that all agents are in search mode, optimizing their local or joint search objective (see subsection V-B), until targets are found in the surveillance area, in which case the respective agents switch to track mode (see subsection V-C). ## III Background on Random Finite Sets The goal of Bayesian filtering [12] is to recursively estimate the conditional posterior distribution \(p(x_{k}|z_{1:k})\) of the target state \(x_{k}\) at time \(k\) based on the history of measurements \(z_{1},z_{2},...,z_{k}\) up to time \(k\). In the single-target tracking scenario the target state \(x_{k}\) and measurement \(z_{k}\) can be represented as random variables or random vectors of fixed size; i.e. the state of a target (e.g. its position) can change over time, but the dimension of the state vector remains constant.
On the other hand, in a multi-target system the number of targets changes over time as targets enter and exit the surveillance area, which results in a multi-target state (i.e. a collection of individual target states) that changes size over time; the dimension of this multi-target state varies over time, as opposed to the dimension of the single-target state, which remains constant. Using the theory of random finite sets (RFSs) [13], the collection of target states can be represented as finite subsets \(X_{k}=\{x_{k}^{1},x_{k}^{2},...,x_{k}^{n_{k}}\}\in\mathcal{F}(\mathcal{X})\), where \(\mathcal{X}\) denotes the state-space of the single target state and \(\mathcal{F}(\mathcal{X})\) denotes the space of all finite subsets of \(\mathcal{X}\). Finally, \(n_{k}\) is the true but unknown number of targets that needs to be estimated. The set \(X_{k}\) is called a random finite set and can be seen as a generalization of a random vector. The multi-object conditional distribution \(f_{k}(X_{k}|Z_{1:k})\) of the RFS \(X_{k}\) based on measurements \(Z_{1:k}\) up to time \(k\) can be estimated using Bayesian multi-object stochastic filtering. However, the optimal multi-object Bayes filter is in general intractable and has no analytical solution. A practical alternative is the Probability Hypothesis Density (PHD) filter [14], which only propagates the first-order statistical moment instead of the full multi-object posterior distribution. More specifically, the PHD at time \(k\) is the conditional density \(D_{k}(x|Z_{1:k})\) which, when integrated over any region \(R\subseteq\mathcal{X}\), gives the expected number of targets \(\hat{n}_{k}\) contained in \(R\), i.e. \(\hat{n}_{k}(R)=\int_{R}D_{k}(x|Z_{1:k})dx\), where the notion of integration is given by the set-integral [13]. Finally, the multi-target state \(\hat{X}_{k}\) can be estimated as the \(\hat{n}_{k}\) highest local maxima of the PHD.
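Since the set-integral is abstract, a hedged sketch may help: in a Sequential Monte Carlo implementation of the PHD (as used later in Section VI), the intensity is approximated by weighted particles, and the expected target count over a region reduces to a sum of weights. The axis-aligned region parameterization below is an illustrative assumption.

```python
import numpy as np

def expected_targets(particles, weights, region):
    """Estimate n_hat(R) = integral of D_k(x) over R for a particle PHD.

    particles: (N, 2) array of xy positions approximating the PHD,
    weights:   (N,)  array whose total sum equals the expected number
               of targets in the whole space,
    region:    (xmin, xmax, ymin, ymax) query region R.
    """
    xmin, xmax, ymin, ymax = region
    inside = ((particles[:, 0] >= xmin) & (particles[:, 0] <= xmax) &
              (particles[:, 1] >= ymin) & (particles[:, 1] <= ymax))
    return weights[inside].sum()  # round to the nearest integer for n_hat
```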
## IV System Model ### _Single Target Dynamics and Measurement Model_ Let the state of a single target have the following form: \[(x,\ell)\in\mathcal{X}\times\{0,1\} \tag{1}\] where \(x\in\mathcal{X}\) is the kinematic state of the target, \(\mathcal{X}\subseteq R^{n_{x}}\) denotes the kinematic state space of the target, \(n_{x}\) is the dimension of the state vector \(x\) and \(\ell\in\{0,1\}\) is the target label taken from the discrete label space \(\{0,1\}\). We denote a true target with label \(\ell=1\) and a virtual target with label \(\ell=0\). True targets represent physical targets inside the surveillance region whose kinematic state \(x\) needs to be estimated from a sequence of noisy measurements, whereas virtual targets represent static locations in the environment which will be used to model the state of searching, i.e. whether or not these locations have been searched. Throughout this paper, the kinematic state spaces of true and virtual targets will be denoted as \(\mathcal{X}^{1}\) and \(\mathcal{X}^{0}\), respectively. The single target kinematic state vector \(x_{k},k\in\mathbb{N}\) evolves in time according to the following equation: \[x_{k}=\left\{\begin{array}{ll}\zeta(x_{k-1})+w_{k}&\text{, if }\;x_{k-1}\in \mathcal{X}^{1}\\ x_{k-1}&\text{, if }\;x_{k-1}\in\mathcal{X}^{0}\end{array}\right. \tag{2a}\] where the function \(\zeta:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{n_{x}}\) models the dynamical behavior of the target. Eqn. (2a) describes the evolution of the state vector as a first-order Markov process with transitional density \(\pi_{k|k-1}(x_{k}|x_{k-1})=p_{w}(x_{k}-\zeta(x_{k-1}))\). The process noise \(w_{k}\in\mathbb{R}^{n_{x}}\) is independent and identically distributed (IID) according to the probability density function \(p_{w}(.)\). In this paper we assume that the kinematic state vector \(x_{k}\in\mathcal{X}\subseteq\mathbb{R}^{4}\) is composed of position and velocity components in Cartesian coordinates, i.e. \(x_{k}=[\mathbf{x},\dot{\mathbf{x}},\mathbf{y},\dot{\mathbf{y}}]^{\top}\). Since a virtual target is static, its kinematic state vector is of the form \(x_{k}=[\mathbf{x},0,\mathbf{y},0]^{\top}\). When an agent detects a true target, i.e. \(x_{k}\in\mathcal{X}^{1}\) at time \(k\), it receives a measurement vector \(z_{k}\in\mathcal{Z}\) (range and bearing observations) which is related to the target kinematic state as follows: \[z_{k}=h(x_{k},s_{k})+\mathbf{v}_{k} \tag{3}\] where \(\mathcal{Z}\subseteq\mathbb{R}^{n_{z}}\) denotes the measurement space (\(n_{z}\) being the dimension of the measurement vector; here \(n_{z}=2\)), \(s_{k}\) is the state of the agent at time \(k\) (described in the next subsection) and the function \(h(.,.)\) projects the state vector to the measurement space. The random process \(\mathbf{v}_{k}\in\mathbb{R}^{n_{z}}\) is IID, independent of \(w_{k}\) and distributed according to \(p_{\mathbf{v}}(.)\). The probability density of measurement \(z_{k}\) for a target with kinematic state \(x_{k}\) when the agent is at state \(s_{k}\) is given by the measurement likelihood function \(g_{k}(z_{k}|x_{k},s_{k})=p_{\mathbf{v}}(z_{k}-h(x_{k},s_{k}))\). On the other hand, virtual targets are observed directly without noise, i.e. the measurement of a virtual target is its actual state. ### _Agent Dynamics_ Let \(S=\{1,2,...,|S|\}\) be the set of all mobile agents at our disposal, operating in a discrete-time setting. At time \(k\), the 2D surveillance region \(\mathcal{A}\subseteq\mathbb{R}^{2}\) is monitored by \(|S|\) mobile agents with states \(s_{k}^{1},s_{k}^{2},...,s_{k}^{|S|}\), each taking values in \(\mathcal{A}\). Each agent \(j\) is subject to the following dynamics: \[s_{k}^{j}=s_{k-1}^{j}+\begin{bmatrix}l_{1}\Delta_{R}\text{cos}(l_{2}\Delta_{\theta})\\ l_{1}\Delta_{R}\text{sin}(l_{2}\Delta_{\theta})\end{bmatrix},\begin{array}{l}l_{2}=0,...,N_{\theta}\\ l_{1}=0,...,N_{R}\end{array} \tag{4}\] where \(s_{k-1}^{j}=[s_{x}^{j},s_{y}^{j}]_{k-1}^{\top}\) denotes the position (i.e. xy-coordinates) of the \(j_{\text{th}}\) agent at time \(k-1\), \(\Delta_{R}\) is the radial step size, \(\Delta_{\theta}=2\pi/N_{\theta}\) and the parameters \((N_{\theta},N_{R})\) control the number of possible control actions. We denote the set of all admissible control actions of agent \(j\) at time \(k\), as computed by Eqn. (4), as \(\mathbb{U}_{k}^{j}=\{s_{k}^{j,1},s_{k}^{j,2},...,s_{k}^{j,|\mathbb{U}_{k}^{j}|}\}\).
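A minimal sketch of enumerating the control set of Eqn. (4) follows; for concreteness it uses the parameter values later reported in Section VI, and the de-duplication of the repeated "stay" action (\(l_{1}=0\)) and of the coinciding angles \(l_{2}=0\) and \(l_{2}=N_{\theta}\) is an assumption about how the 17 actions mentioned there arise.

```python
import numpy as np

def admissible_actions(s_prev, delta_R=2.0, N_R=2, N_theta=8):
    """Enumerate the admissible control actions U_k of Eqn. (4).

    s_prev: current xy position of the agent.  With N_R = 2 and
    N_theta = 8 this yields 17 distinct candidate positions.
    """
    actions = set()
    for l1 in range(N_R + 1):
        for l2 in range(N_theta + 1):
            theta = l2 * 2.0 * np.pi / N_theta
            dx = l1 * delta_R * np.cos(theta)
            dy = l1 * delta_R * np.sin(theta)
            # rounding collapses numerically identical duplicates
            actions.add((round(s_prev[0] + dx, 9), round(s_prev[1] + dy, 9)))
    return sorted(actions)
```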
### _Single Agent Sensing Model_ The ability of an agent to sense its 2D environment is modeled by the function \(p_{D}(x_{k},s_{k})\), which measures the probability that a target with kinematic state \(x_{k}\) at time \(k\) is detected by an agent with state \(s_{k}\). More specifically, when \(x_{k}\in\mathcal{X}^{1}\) the sensing capability of the agent is given by: \[p_{D}(x_{k}\in\mathcal{X}^{1},s_{k})=\left\{\begin{array}{ll}p_{D}^{\text{max}}&\text{, if }x_{k}\in\mathcal{S}_{a}(s_{k})\\ 0&\text{, if }x_{k}\notin\mathcal{S}_{a}(s_{k})\end{array}\right. \tag{5}\] where \(\mathcal{S}_{a}(s_{k})\) denotes the agent's sensing area, which in this work includes all \(xy\) points that satisfy \(\max\{|\mathbf{x}-s_{x}|,|\mathbf{y}-s_{y}|\}\leq\frac{a}{2}\), i.e. a square region with total area \(a^{2}\) units, centered at \(s_{k}=[s_{x},s_{y}]^{\top}\), and \(p_{D}^{\text{max}}\) denotes the probability of the sensor to detect true targets inside its sensing range. On the other hand, the agent detects virtual targets inside its sensing range with probability 1, i.e. \(p_{D}(x_{k}\in\mathcal{X}^{0},s_{k})=1\) when \(x_{k}\in\mathcal{S}_{a}(s_{k})\) and \(p_{D}(x_{k}\in\mathcal{X}^{0},s_{k})=0\) when \(x_{k}\notin\mathcal{S}_{a}(s_{k})\). In addition, any two agents with states \(s_{k}^{i}\) and \(s_{k}^{j}\) are able to communicate with each other when \(\left\|s_{k}^{i}-s_{k}^{j}\right\|_{2}\leq C_{R}\), where \(C_{R}\) is the communication range. ### _Multi-object dynamics and measurement models_ Multiple independent targets can exist and evolve inside the surveillance region. True targets (i.e. with label \(\ell=1\)) can spawn from anywhere in the state space \(\mathcal{X}^{1}\), and target births and deaths occur at random times. This means that at each time \(k\), there exist \(n_{k}^{\ell=1}\) true targets with kinematic states \(x_{k}^{1},x_{k}^{2},...,x_{k}^{n_{k}^{\ell=1}}\), each taking values in the state space \(\mathcal{X}^{1}\), where both the number of true targets \(n_{k}^{\ell=1}\) and their individual states \(x_{k}^{i},\forall i\in n_{k}^{\ell=1}\) are random and time-varying. The multi-target (or multi-object) state of the true targets is thus represented as the RFS \(X_{k}^{\ell=1}\in\mathcal{F}(\mathcal{X}^{1})\) which evolves in time according to: \(X_{k}^{\ell=1}=\bigcup\limits_{x_{k-1}\in X_{k-1}^{\ell=1}}\Psi(x_{k-1})\cup B_{k}\), where \(X_{k-1}^{\ell=1}\) is the multi-target state of the true targets at the previous time-step and \(\Psi(x_{k-1})\) is a Bernoulli RFS [15] which models the evolution of the set from the previous state, with parameters \((p_{S}(x_{k-1}),\pi_{k|k-1}(x_{k}|x_{k-1}))\). Thus a target with kinematic state \(x_{k-1}\) continues to exist at time \(k\) with survival probability \(p_{S}(x_{k-1})\) and moves to a new state \(x_{k}\) with transition probability \(\pi_{k|k-1}(x_{k}|x_{k-1})\). Otherwise, the target dies with probability \(1-p_{S}(x_{k-1})\). The term \(B_{k}\) is the RFS of spontaneous births [14]. The virtual targets (i.e. \(\ell=0\)), on the other hand, do not exhibit any birth and death events, and their number \(n_{k}^{\ell=0}=n^{\ell=0}\) is constant and known (their positions are sampled uniformly from the surveillance area at \(k=0\)). Thus the multi-target state at time \(k\) is given by \(X_{k}=X_{k}^{\ell=1}\cup X_{k}^{\ell=0}\), where \(X_{k}^{\ell=0}\) is the set of all virtual targets in the surveillance region with \(|X_{k}^{\ell=0}|=n^{\ell=0}\)\(\forall\)\(k\). In the rest of the paper we abbreviate \(X_{k}^{\ell=1}=X_{k}^{1}\) and \(X_{k}^{\ell=0}=X_{k}^{0}\). At time \(k\), an agent receives a finite set of measurements (i.e. measurements generated from the detected true targets and from clutter) denoted as \(Z_{k}\). This RFS has the form: \(Z_{k}=\bigcup\limits_{x_{k}\in X_{k}^{1}}\Theta(x_{k})\cup\text{K}_{k}\), where \(\Theta(x_{k})\) is a Bernoulli RFS which models the target-generated measurements with parameters \((p_{D}(x_{k},s_{k}),g_{k}(z_{k}|x_{k},s_{k}))\). Thus a true target with kinematic state \(x_{k}\) at time \(k\) is detected by the agent with state \(s_{k}\) with probability \(p_{D}(x_{k},s_{k})\), in which case the agent receives a measurement \(z_{k}\) with likelihood \(g_{k}(z_{k}|x_{k},s_{k})\); otherwise the target is missed with probability \(1-p_{D}(x_{k},s_{k})\) and generates no measurements.
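The square sensing footprint of Eqn. (5) and the communication-range test introduced above reduce to a few lines; a minimal sketch (the default parameter values are those of the experimental setup in Section VI):

```python
import numpy as np

def p_detect(x, s, a=10.0, p_max=0.99, is_virtual=False):
    """Square-footprint sensing model of Eqn. (5).

    x, s: xy positions of the target and the agent.  Virtual targets
    are sensed deterministically inside the footprint.
    """
    inside = max(abs(x[0] - s[0]), abs(x[1] - s[1])) <= a / 2.0
    if is_virtual:
        return 1.0 if inside else 0.0
    return p_max if inside else 0.0

def can_communicate(s_i, s_j, C_R=50.0):
    """Two agents exchange information iff within communication range."""
    return np.linalg.norm(np.asarray(s_i) - np.asarray(s_j)) <= C_R
```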
Additionally, an agent can receive false-alarm measurements, i.e. the term \(\text{K}_{k}\) is a Poisson RFS which models the set of false alarms or clutter received by an agent at time \(k\), with PHD \(\kappa_{k}(z_{k})=\lambda f_{c}(z_{k})\), where in this paper \(f_{c}(.)\) denotes the uniform distribution over \(\mathcal{Z}\) and \(\lambda\) is the average number of clutter-generated measurements per time-step. ## V Proposed Approach In this section we describe how we have extended the classical PHD filter to propagate the SAT-density in time, and next we discuss how, using the SAT-density, the agents cooperate to produce joint search-plans and resolve tracking overlaps. ### _Search-and-Track Density_ During a search and track mission, a single agent is required to be able to perform the following tasks: a) simultaneously estimate the time-varying number of targets and their states from a sequence of noisy measurements, and b) efficiently search the surveillance region in order to maximize the probability of finding targets. The first task can be accomplished by recursively computing and propagating in time the PHD of the full multi-target posterior distribution using the PHD filter [14]. In order to accomplish the second task, the agent needs to: a) keep track of the visited (i.e. searched) and unvisited regions of the surveillance area, b) intelligently estimate when and how often certain search regions need to be revisited (i.e. searched again), and c) generate efficient search plans for searching the area. To do that, we assume that the agent stores a discrete representation of the environment in its memory in the form of a graph \(G=\{\mathcal{V},\mathcal{E}\}\), termed the _search map_, where each node \(v\in\mathcal{V}\) corresponds to a region \(r_{v}\subset\mathcal{A}\) in the surveillance area and \(\bigcup_{v}r_{v}=\mathcal{A}\). The agent recursively computes the _search value_\(p^{\text{search}}(r_{v})\in[0,1],v\in\mathcal{V}\) of each region and uses this information to decide how often to visit a particular region and how to generate search-plans for efficiently searching the surveillance area. With this in mind, we have extended the classic PHD filter in order to recursively compute the search density in addition to the target density. At each time-step we use the target density to estimate the number of targets inside the agent's sensing range and the search density to compute the search values of every region in the surveillance area. More specifically, the predicted SAT-PHD at \(x\in\mathcal{X}\) can be computed as: \[D_{k|k-1}(x|Z_{1:k-1})=b_{k}(x\in\mathcal{X}^{1})\ + \tag{6}\] \[\int_{\mathcal{X}^{1}}p_{S}(x^{\prime})\pi_{k|k-1}(x\in\mathcal{X}^{1}|x^{\prime})D_{k-1}(x^{\prime}|Z_{1:k-1})dx^{\prime}+\] \[\Big{[}\big{(}1-p_{D}(x\in\mathcal{X}^{0},s_{k-1})\big{)}J_{k}(x\in\mathcal{X}^{0})+p_{D}(x\in\mathcal{X}^{0},s_{k-1})\Big{]}\] \[\cdot D_{k-1}(x\in\mathcal{X}^{0}|Z_{1:k-1})\] where \(b_{k}(x)\) is the PHD of target births, \(p_{S}(x)\) is the probability that a target with state \(x\) will survive in the next time step, \(\pi_{k|k-1}(x|x^{\prime})\) is the single-target transition density, \(p_{D}(x,s_{k})\) is the sensing model defined in Eqn. (5) and \(J_{k}(x)\in[0,1]\) is a function that determines the decay value of the virtual target with state \(x\). The first two lines of Eqn. (6) are due to the classic PHD filter and are used to predict the target density at \(x\), whereas the third line predicts the search density and operates only on virtual targets.
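For a particle-based implementation, the third line of Eqn. (6) acts as a per-particle weight update on the virtual targets; a hedged sketch of that step follows (the shared square-footprint parameter `a` and decay `J` mirror Section VI, while the particle representation itself is an assumption).

```python
import numpy as np

def predict_virtual_weights(weights, positions, s_prev, J=0.999, a=10.0):
    """Prediction step of Eqn. (6) restricted to virtual targets.

    Each virtual target keeps its weight if it was observable from the
    agent's previous position s_prev, and is decayed by J otherwise:
    w <- [(1 - p_D) * J + p_D] * w, with p_D in {0, 1} for virtual targets.
    """
    new_w = np.empty_like(np.asarray(weights, dtype=float))
    for n, (w, x) in enumerate(zip(weights, positions)):
        inside = max(abs(x[0] - s_prev[0]), abs(x[1] - s_prev[1])) <= a / 2.0
        p_d = 1.0 if inside else 0.0
        new_w[n] = ((1.0 - p_d) * J + p_d) * w
    return new_w
```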
In essence, the densities of all virtual targets outside the agent's sensing range are adjusted accordingly to reflect the fact that they are not being observed. This property is used to generate search plans which will guide the agent to visit areas that have not been recently visited. The updated SAT-PHD density is given by: \[\begin{split}& D_{k}(x|Z_{1:k})=\big{[}1-p_{D}(x\in\mathcal{X}^{1},s_{k})\big{]}D_{k|k-1}(x\in\mathcal{X}^{1}|Z_{1:k-1})\\ &\quad+\left[\sum_{z\in Z_{k}}\frac{p_{D}(x\in\mathcal{X}^{1},s_{k})\cdot g_{k}(z|x\in\mathcal{X}^{1},s_{k})}{\kappa_{k}(z)+\tau(z)}\right]\,\cdot\\ &\quad D_{k|k-1}(x\in\mathcal{X}^{1}|Z_{1:k-1})+\frac{p_{D}(x\in\mathcal{X}^{0},s_{k})}{|\mathcal{A}|}\\ &\quad+\left[1-p_{D}(x\in\mathcal{X}^{0},s_{k})\right]\!D_{k|k-1}(x\in\mathcal{X}^{0}|Z_{1:k-1})\end{split} \tag{7}\] where \(|\mathcal{A}|\) is the total area of the surveillance region and \(\tau(z)=\int_{\mathcal{X}^{1}}p_{D}(x^{\prime},s_{k})g_{k}(z|x^{\prime},s_{k})D_{k|k-1}(x^{\prime}|Z_{1:k-1})dx^{\prime}\). In the above equation the last term was added to the classical PHD filter update step in order to adjust the search density inside the agent's sensing range, accounting for the agent's updated position \(s_{k}\). Finally, the search value \(p_{k}^{\text{search}}(r_{v})\) of a particular region \(r_{v}\subset\mathcal{A}\), \(v\in\mathcal{V}\), can be computed by integrating the SAT-PHD over \(r_{v}\) as follows: \[p_{k}^{\text{search}}(r_{v})=\frac{\int_{r_{v}}D_{k}(x\in\mathcal{X}^{0}|Z_{1:k})dx}{|r_{v}||\mathcal{A}|^{-1}} \tag{8}\] where \(|r_{v}|\) is the area of region \(r_{v}\). Finally, the number of true targets \(\hat{n}_{k}\) inside an area \(R\subseteq\mathcal{A}\) can be computed by integrating the SAT-PHD over \(R\) as \(\hat{n}_{k}(R)=\int_{R}D_{k}(x\in\mathcal{X}^{1}|Z_{1:k})dx\) (rounded to the nearest integer), and the multi-target state \(X_{k}^{1}\) can be estimated by finding the \(\hat{n}_{k}(R)\) highest peaks of the PHD, as in the original PHD filter. ### _Multi-agent Searching_ The search objective is to find the optimal control actions that will move the agent along areas that have not been explored for some time and could potentially reveal new targets. To address this challenge, we first discuss how searching takes into account the search map derived from the SAT-PHD filter and how low-level controls employ the computed paths to steer the agent across the field. **a) Search planning:** Given the search map \(G=(\mathcal{V},\mathcal{E})\), where the set of edges in \(\mathcal{E}\) connects adjacent nodes, the cost \(c_{ij}\) on edge \(i\mapsto j\) is defined as the Euclidean distance between the particular regions in the field. For each node on this graph, a search value \(p^{\text{search}}(r_{v}),\,v\in\mathcal{V}\) is computed using Eqn. (8). This value varies between 0 and 1, where 0 indicates that the particular node (and hence region in the field) has not been searched and 1 indicates that the region has just been visited. The SAT-PHD recursion in Eqns. (6)–(7) determines how the search value decays over time in order to steer agents to revisit particular regions in the field.
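On a particle representation of the search density, Eqn. (8) becomes a ratio of a weight sum to a normalized cell area. A minimal sketch, assuming a partition of \(\mathcal{A}\) into axis-aligned cells:

```python
import numpy as np

def search_values(v_weights, v_positions, regions, total_area):
    """Search value of Eqn. (8) for every region r_v of the search map.

    v_weights / v_positions describe the virtual-target (search) density;
    regions is a list of (xmin, xmax, ymin, ymax) cells partitioning A.
    A value near 1 means 'just visited'; near 0 means 'needs a visit'.
    """
    values = []
    for (xmin, xmax, ymin, ymax) in regions:
        inside = ((v_positions[:, 0] >= xmin) & (v_positions[:, 0] < xmax) &
                  (v_positions[:, 1] >= ymin) & (v_positions[:, 1] < ymax))
        mass = v_weights[inside].sum()           # integral of D_k over r_v
        area = (xmax - xmin) * (ymax - ymin)     # |r_v|
        values.append(mass / (area / total_area))
    return np.array(values)
```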
Using \(p^{\text{search}}\), we then define the set of unvisited nodes \(\bar{\mathcal{V}}\) as the set of nodes for which the search value falls below a certain threshold, i.e., \(p^{\text{search}}(r_{v})\leq\beta,v\in\bar{\mathcal{V}}\), thus indicating that those nodes need to be revisited. Given \(\bar{\mathcal{V}}\) and the initial location \(s_{k}\) of agent \(k\), we would like to compute closed walks starting at \(s_{k}\) that visit the nodes in \(\bar{\mathcal{V}}\) with the least cost \(c_{ij}\) in order to search the field for targets. Note that computing optimal closed walks with the least cost can be achieved by employing variations of the vehicle routing problem. However, due to the high computational complexity of the optimal algorithms, this approach cannot be employed in practice. Hence, an alternative heuristic approach is followed hereafter to devise closed walks efficiently in time. Similar to the path-cheapest-arc heuristic [16], each walk is set to start at \(s_{k}\), and a path is constructed greedily by inserting each new edge with the least cost emanating from the head node of the last edge added (see the sketch at the end of this subsection). The process repeats until all nodes have been visited or no more edges can be added. **b) Control:** Given the computed closed-walk sequence, the objective is then to take a control action \(u_{k}\in\mathbb{U}_{k}\) that will move the agent across the designated path. To achieve this, a list of nodes to-be-visited is maintained, and each node is marked as visited whenever the agent moves to a position where the particular node is within its sensing range. The control objective can simply be expressed as \(u_{k}^{*}=\operatorname*{arg\,min}_{u_{k}\in\mathbb{U}_{k}}\ \xi_{\text{search}}(u_{k},v)\), where \(v\in\bar{\mathcal{V}}\) indicates the location of the next unvisited node in the list and \(\xi_{\text{search}}\) returns the Euclidean distance between the agent's position after applying \(u_{k}\) and the next unvisited node. By iteratively visiting the closed-walk sequence, the envisioned look-ahead search control is achieved by each agent. **c) Cooperation:** Whenever two or more agents are in communication range they exchange their search densities. The agents then merge their copies using a simple max operation of local and received values and compute a fused search map which now contains the search-path histories of the involved agents. The agents can then compute a joint search-plan as follows. Let \(\bar{\mathcal{S}}\) be the subset of agents in communication range and assume that each agent knows the number \(|\bar{\mathcal{S}}|\) and positions \(s_{k}^{j},\,j\in\bar{\mathcal{S}}\) of the cooperating agents in its vicinity. Then each agent iteratively computes \(|\bar{\mathcal{S}}|\) closed walks incrementally by adding one node at a time to each agent's path from the list of all unvisited nodes in \(\bar{\mathcal{V}}\), until there are no more unvisited nodes. A new node is added to an agent's path only if, among all possible edges to traverse (starting from the edge with the least cost), the head node is not flagged as visited and the tail node of that edge is the last node added to the particular walk.
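The following is a minimal nearest-neighbor reading of the greedy walk construction above; treating the edge costs as plain Euclidean distances between region centers is an assumption for illustration.

```python
import numpy as np

def greedy_closed_walk(start, nodes):
    """Path-cheapest-arc style heuristic for one agent.

    start: agent position; nodes: xy coordinates of unvisited regions
    (search value below the threshold beta).  The walk is greedily
    extended with the cheapest edge from the current head, then closed
    by returning to the starting position.
    """
    walk, head = [tuple(start)], np.asarray(start, dtype=float)
    remaining = [np.asarray(n, dtype=float) for n in nodes]
    while remaining:
        dists = [np.linalg.norm(head - n) for n in remaining]
        head = remaining.pop(int(np.argmin(dists)))
        walk.append(tuple(head))
    walk.append(tuple(start))  # close the walk
    return walk
```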
### _Multi-agent Tracking_ In this subsection we discuss: a) how multiple agents cooperate to detect and resolve tracking overlaps, and b) how the agents select control actions in order to accurately track multiple targets. **a) Tracking overlap detection:** This problem arises when a target is being tracked by more than one agent. This is undesirable, since valuable system resources are wasted performing the same task. Consider the scenario where 3 targets, which are being tracked by two different agents, approach each other. Eventually, the targets will be detected and tracked by both agents at the same time. We denote this situation as a tracking overlap event, which we wish to detect and resolve. To do so, and instead of solving the combinatorial problem that arises (which requires the enumeration of joint control actions among agents and future multi-target states over a finite horizon), in this work we propose a computationally cheaper way to tackle the tracking overlap problem. In order to reduce the computational and communication overhead, we allow any two agents to merge and track the same targets, but only for a short period of time. More specifically, we consider that each agent can track multiple targets independently of other agents. When the trajectories of two or more tracking agents converge, the agents exchange information to determine whether or not the exact same targets are being tracked. Once two agents have determined that they track exactly the same targets, one of them generates a search plan and exits tracking. The above procedure begins when two or more tracking agents have overlapping sensing ranges. Let the predicted multi-target states (regarding the true targets) of any two agents with states \(s^{i}_{k-1}\) and \(s^{j}_{k-1}\) be \(\hat{X}^{1,i}_{k|k-1}\) and \(\hat{X}^{1,j}_{k|k-1}\), respectively. The predicted multi-target state \(\hat{X}^{1}_{k|k-1}\) is computed from the predicted SAT-PHD, i.e. Eqn. (6), by selecting the \(\hat{n}_{k|k-1}\) highest peaks, where \(\hat{n}_{k|k-1}=\int D_{k|k-1}(x\in\mathcal{X}^{1}|Z_{1:k-1})dx\). Also, let \(|\hat{X}^{1,i}_{k|k-1}|=m\) and \(|\hat{X}^{1,j}_{k|k-1}|=n\) denote their cardinalities, i.e. the number of predicted targets in each set, with \(n\geq m\) and \(n,m\neq 0\). When \(\mathcal{S}_{a}(s^{i}_{k-1})\cap\mathcal{S}_{a}(s^{j}_{k-1})\neq\emptyset\), the agents exchange their predicted multi-target states to compute the _incremental tracking overlap score_ as: \[\Delta L^{c}_{k}(\hat{X}^{1,i}_{k|k-1},\hat{X}^{1,j}_{k|k-1})=\left[\ \frac{1}{n}\!\left(\min_{\pi\in\Pi_{n}}\sum_{l=1}^{m}d_{c}(x^{i}_{l},x^{j}_{\pi(l)})^{2}+(n-m)\cdot c^{2}\right)\ \right]^{\frac{1}{2}} \tag{9}\] where \(x^{i}\in\hat{X}^{1,i}_{k|k-1}\), \(x^{j}\in\hat{X}^{1,j}_{k|k-1}\) and \(\Pi_{n}\) denotes the set of all permutations of size \(m\) taken from the set \(\{1,2,...,n\}\). Here \(d_{c}(x,y)=\min(c,\left\|x-y\right\|_{2})\), and the parameter \(c>0\) penalizes the cardinality mismatch between the two sets. When \(n<m\), Eq. (9) becomes \(\Delta L^{c}_{k}(\hat{X}^{1,j}_{k|k-1},\hat{X}^{1,i}_{k|k-1})\). The above quantity is the optimal sub-pattern assignment (OSPA) metric [17] of order 2. Then the _cumulative tracking overlap score_ for the time-window \([\kappa:K]\) is defined as: \[Q_{\kappa:K}(s^{i}_{\kappa-1},s^{j}_{\kappa-1})=\sum_{k=\kappa}^{K}\mathcal{I}(\mathcal{S}_{a}(s^{i}_{k-1}),\mathcal{S}_{a}(s^{j}_{k-1}))\cdot\Delta L^{c}_{k}(\hat{X}^{1,i}_{k|k-1},\hat{X}^{1,j}_{k|k-1}) \tag{10}\] where the function \(\mathcal{I}(A,B)\) checks if the intersection of the two regions \(A\) and \(B\) is non-empty and returns \(1\); otherwise it returns \(\infty\). As we can see, the cumulative tracking overlap score will be low if two agents track the exact same targets over a certain period of time. In other words, when two agents have overlapping sensing ranges and they track the same number of targets with small positioning errors, the cumulative tracking overlap score is minimized.
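The permutation minimization in Eqn. (9) is exactly a rectangular assignment problem; a sketch using SciPy's Hungarian solver follows (the cutoff value of `c` is an illustrative assumption).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa2(X, Y, c=10.0):
    """OSPA distance of order 2 with cutoff c, as in Eqn. (9).

    X, Y: (m, d) and (n, d) arrays of predicted target positions; the
    sets are swapped if needed so that m <= n.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    if len(X) > len(Y):
        X, Y = Y, X
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    # pairwise cutoff distances d_c; the optimal assignment gives min over pi
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1), c)
    rows, cols = linear_sum_assignment(D ** 2)
    return float(np.sqrt(((D[rows, cols] ** 2).sum() + (n - m) * c ** 2) / n))
```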
Finally, in order to determine whether there is tracking overlap between two agents over a time-window, the cumulative tracking overlap score is tested against a pre-determined threshold \(Q^{Th}\). If \(Q_{\kappa:K}\leq Q^{Th}\), then with high certainty the two agents track the same targets, and thus one of them is removed from tracking. The removed agent generates a search plan and begins searching the surveillance region at the next time-step. **b) Control:** Finally, the objective of tracking control is to find the optimal control action \(u_{k}\in\mathbb{U}_{k}\) that must be taken at time step \(k\) by each agent in order to maintain tracking of the detected targets. We should point out that the control action \(u_{k}\) affects the received measurements \(Z_{k}\), which in turn affect the multi-target state estimate \(\hat{X}^{1}_{k}\) during the update step. Thus, optimizing the control actions would ideally require knowledge of the future measurements. As a consequence, the objective function to optimize depends on future unknown measurements. Let this objective function be denoted as \(\xi_{\text{track}}(u_{k},Z_{k})\). Since the future measurement set \(Z_{k}\) is not available until the control action \(u_{k}\) is applied, we generate the predicted measurement set \(Z_{k|k-1}\) and use it in place of \(Z_{k}\). The predicted measurement set \(Z_{k|k-1}\) for each control action is generated as follows: \[Z_{k|k-1}=Z_{k|k-1}\ \cup\ \{\arg\max_{z}\ g_{k}(z|x,u_{k})\}\quad\forall x\in\hat{X}^{1}_{k|k-1},\ \forall u_{k}\in\mathbb{U}_{k} \tag{11}\] where \(\hat{X}^{1}_{k|k-1}\) is the predicted multi-target state for the true targets. Thus the problem becomes: \(u^{\star}_{k}=\arg\max_{u_{k}\in\mathbb{U}_{k}}\ \xi_{\text{track}}(u_{k},Z_{k|k-1})\). To optimize the track objective, the following steps are performed. The predicted SAT-PHD \(D_{k|k-1}(x|Z_{1:k-1})\) is first computed from Eqn. (6) without performing any control action. From this, we compute the predicted multi-target state \(\hat{X}^{1}_{k|k-1}\), and for each admissible control action \(u_{k}\in\mathbb{U}_{k}\) we generate the predicted measurement set \(Z_{k|k-1}\) using Eqn. (11). For each pair \((u_{k},Z_{k|k-1})\) we perform a pseudo-correction step using Eqn. (7) to produce the (pseudo) posterior SAT-PHD density \(\hat{D}_{k}\). We then consider the information gain between the predicted \(f_{k|k-1}(X|Z_{1:k-1})\) and the (pseudo) updated \(\hat{f}_{k}(X|Z_{1:k-1},Z_{k|k-1},u_{k})\) multi-target distributions as a measure of the decrease in uncertainty of the estimated multi-target state. The objective is then to maximize the information gain between the two multi-target distributions. To measure the information gain, we use as \(\xi_{\text{track}}(.)\) the Rényi divergence [18, 19, 20], which in our case is given by: \[\xi_{\text{track}}(u_{k},Z_{k|k-1})=\int_{\mathcal{X}^{1}}D_{k|k-1}(x)dx\ +\frac{\alpha}{(1-\alpha)}\int_{\mathcal{X}^{1}}\hat{D}_{k}(x|Z_{k|k-1},u_{k})dx\ -\frac{1}{(1-\alpha)}\int_{\mathcal{X}^{1}}\hat{D}_{k}(x|Z_{k|k-1},u_{k})^{\alpha}D_{k|k-1}(x)^{1-\alpha}dx \tag{12}\] where \(0<\alpha<1\) determines the emphasis given to the tails of the two distributions.
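For intuition, a hedged sketch of evaluating Eqn. (12) on a particle representation follows. It assumes the predicted and pseudo-updated intensities are represented by weights on the same particle set, so the integrals reduce to weighted sums; this is a simplifying illustration, not an exact evaluation of the divergence between continuous densities.

```python
import numpy as np

def renyi_track_reward(w_pred, w_upd, alpha=0.5):
    """Rényi-divergence reward of Eqn. (12) on a shared particle set.

    w_pred: weights approximating the predicted PHD D_{k|k-1},
    w_upd:  weights approximating the pseudo-updated PHD for one
            candidate control action.
    """
    w_pred = np.asarray(w_pred, dtype=float)
    w_upd = np.asarray(w_upd, dtype=float)
    cross = (w_upd ** alpha) * (w_pred ** (1.0 - alpha))
    return (w_pred.sum()
            + alpha / (1.0 - alpha) * w_upd.sum()
            - 1.0 / (1.0 - alpha) * cross.sum())
```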
## VI Evaluation ### _Experimental Setup_ In our experimental setup we assume that the targets maneuver in an area of 100m \(\times\) 100m. The target dynamics are modeled according to the near-constant-velocity model with Gaussian process noise. The single target transitional density is given by \(\pi(x_{k}|x_{k-1})=\mathcal{N}(x_{k};Fx_{k-1},Q)\) where: \[F=\begin{bmatrix}1&T&0&0\\ 0&1&0&0\\ 0&0&1&T\\ 0&0&0&1\end{bmatrix},\ Q=\begin{bmatrix}T^{3}/3&T^{2}/2&0&0\\ T^{2}/2&T&0&0\\ 0&0&T^{3}/3&T^{2}/2\\ 0&0&T^{2}/2&T\end{bmatrix}\] with sampling interval \(T=1\)s. The target survival probability from time \(k-1\) to time \(k\) is constant, \(p_{s,k}(x_{k-1})=0.99\), and does not depend on the target's state. Once an agent detects a target it receives range and bearing measurements; thus the measurement model is given by \(h_{k}(x_{k},s_{k})=\left[\left\|s_{k}-\text{H}x_{k}\right\|_{2},\ \arctan\left(\frac{s_{y}-\mathbf{y}}{s_{x}-\mathbf{x}}\right)\right]\), where H is a matrix which extracts the target position from its state vector. The single target likelihood function is then given by \(g(z_{k}|x_{k},s_{k})=\mathcal{N}(z_{k};h_{k}(x_{k},s_{k}),\Sigma^{\top}\Sigma)\), where \(\Sigma=\text{diag}(\sigma_{\zeta},\sigma_{\phi})\). The standard deviations \((\sigma_{\zeta},\sigma_{\phi})\) are range dependent and given by \(\sigma_{\zeta}=\zeta_{0}+\beta_{\zeta}\left\|s_{k}-\text{H}x_{k}\right\|_{2}^{2}\) and \(\sigma_{\phi}=\phi_{0}+\beta_{\phi}\left\|s_{k}-\text{H}x_{k}\right\|_{2}\) respectively, with \(\zeta_{0}=1\)m, \(\beta_{\zeta}=5\times 10^{-3}\)m\({}^{-1}\), \(\phi_{0}=\pi/180\)rad and \(\beta_{\phi}=10^{-5}\)rad\(/\)m. Moreover, the agent receives spurious measurements (i.e. clutter) with fixed Poisson rate \(\lambda_{k}=10\), uniformly distributed over the measurement space. The agent's sensing model parameter is \(p_{D}^{\text{max}}=0.99\) and the agent's sensing area is \(\mathcal{S}_{10}(s_{k})=10^{2}\) m\({}^{2}\). The agent's dynamical model has radial displacement \(\Delta_{R}=2\)m, \(N_{R}=2\) and \(N_{\theta}=8\), which gives a total of 17 control actions, including the current position of the agent. The function \(J_{k}(x)\) is constant and state independent, equal to \(J_{k}(x)=0.999\ \forall x,k\). The parameter \(\alpha\) in Eqn. (12) is set to 0.5 and, finally, the agent communication range is \(C_{R}=50\)m. In order to handle the non-linear measurement model, we have implemented a Sequential Monte Carlo version of the PHD filter [21].
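The transition matrices above are standard for the near-constant-velocity model; a minimal sketch for constructing them and sampling a target trajectory follows (the unit process-noise intensity is an assumption for illustration).

```python
import numpy as np

def ncv_matrices(T=1.0):
    """Transition matrix F and process-noise covariance Q of the
    near-constant-velocity model used in the experimental setup."""
    F2 = np.array([[1.0, T], [0.0, 1.0]])
    Q2 = np.array([[T**3 / 3.0, T**2 / 2.0],
                   [T**2 / 2.0, T]])
    Z = np.zeros((2, 2))
    return np.block([[F2, Z], [Z, F2]]), np.block([[Q2, Z], [Z, Q2]])

def simulate_target(x0, steps, seed=0):
    """Sample a trajectory x_k ~ N(F x_{k-1}, Q); x0 = [x, vx, y, vy]."""
    rng = np.random.default_rng(seed)
    F, Q = ncv_matrices()
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(rng.multivariate_normal(F @ xs[-1], Q))
    return np.array(xs)
```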
### _Results_ A representative search-and-track scenario with 2 agents and 2 targets, which takes place over 200 time-steps, is shown in Fig. 1(a)–1(h). In this scenario, the 2 agents enter the surveillance area of size \(100\)m \(\times\)\(100\)m at \(k=1\), at the locations marked with \(\square\) in Fig. 1(a)–1(h), with coordinates \((10,60)\) and \((60,10)\) for agents 1 and 2, respectively. The target birth/death times are \(k=104/179\) and \(k=144/182\) for targets 1 and 2, respectively. The target birth locations (marked with \(\triangle\)) are \((50,69)\) and \((41,29)\) for targets 1 and 2 respectively, and their corresponding death locations (marked with \(\times\)) are \((80,31)\) and \((84,33)\), as shown in Fig. 1(h). At each time-step, the agents in communication range cooperate in order to jointly search the surveillance area and track the detected targets. Otherwise, the agents optimize their individual objectives and operate on their own. This is shown in Fig. 1(a)–1(b), where the two agents are not in communication range and no targets are estimated to exist inside their sensing range. More specifically, at \(k=1\) agents 1 and 2 generate search plans that they will use in order to traverse (i.e. search) the surveillance area. Note here that the produced search plans shown in Fig. 1(a)–1(b), if executed, will visit every node (marked with \(\star\) and \(\circ\) for agents 1 and 2, respectively) in the search map, and as a consequence the agents will search the whole surveillance region. Figure 1(c) shows the execution of the aforementioned search plans during time-steps \(k=1{:}16\) and the trajectories of agents 1 and 2 according to their dynamical models.

Fig. 1: The figure shows the maneuvers of 2 agents for the task of search-and-track in a representative simulated scenario.

At time-step \(k=16\) the two agents appear to be in communication range, where they exchange and fuse their search densities and generate a joint search plan as discussed in subsection V-B. As a result, the surveillance area that has not been searched so far is partitioned into two non-overlapping regions, as shown in Fig. 1(d). In essence, the joint search plan assigns the nodes \(v\in\mathcal{V}\), which are associated with regions \(r\subset\mathcal{A}\), to the two agents in such a way that the overall area is searched as efficiently as possible. Thus, during time-steps \(k=17{:}104\), the two agents execute the joint search plan, as shown in Fig. 1(e)–1(g) for time-steps \(k=26\), \(k=50\) and \(k=104\). Fig. 1(e)–1(g) also illustrates the fused search densities for the same time-steps. Next, at time-step \(k=104\), target 1 is born inside agent 2's sensing range. The agent estimates the presence of this target at \(k=105\). As a consequence, agent 2 exits the joint search plan and begins to track target 1. Because at \(k=104\) the two agents happen to be in communication range, this information is transferred to agent 1, which recalculates its search plan to account for the area dropped by agent 2. This is shown in Fig. 1(h). As can be observed, agent 1 recalculates its search plan, which now includes nodes that had initially been assigned to agent 2. In addition, the same figure shows the search trajectory of agent 1 during time-steps \(k=105{:}182\) and the tracking trajectory of agent 2 during the same period. At \(k=144\), target 2 is born, and it is also tracked by agent 2 during time-steps \(k=147{:}182\). Between time-steps \(k=147{:}179\), agent 2 tracks both targets, as shown by the agent's trajectory and estimated target positions. Finally, the combined search density of the two agents for \(k=182\) is shown (in this case, we have manually combined the two search densities in order to show the overall searched area, since the two agents are not in communication range at \(k=182\)). ## VII Conclusion In this work a novel decentralized cooperative multi-agent search-and-track framework has been proposed based on the theory of random finite sets (RFS). The Probability Hypothesis Density (PHD) filter has been extended to recursively propagate in time the search-and-track density, which is used to produce cooperative searching and tracking strategies. The proposed framework is flexible and accounts for many of the challenges present in search and rescue missions, including the unknown and time-varying number of targets, the noisy sensor measurements, the uncertain target dynamics and the limited sensing range of the agents. Future work will focus on the real-world implementation and evaluation of the proposed framework.
## Acknowledgments This work is supported by the European Union Civil Protection under grant agreement No 783299 (SWIFERS), by the European Union's Horizon 2020 research and innovation programme under grant agreement No 739551 (KIOS CoE) and from the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.
2304.13181
Sample-Specific Debiasing for Better Image-Text Models
Self-supervised representation learning on image-text data facilitates crucial medical applications, such as image classification, visual grounding, and cross-modal retrieval. One common approach involves contrasting semantically similar (positive) and dissimilar (negative) pairs of data points. Drawing negative samples uniformly from the training data set introduces false negatives, i.e., samples that are treated as dissimilar but belong to the same class. In healthcare data, the underlying class distribution is nonuniform, implying that false negatives occur at a highly variable rate. To improve the quality of learned representations, we develop a novel approach that corrects for false negatives. Our method can be viewed as a variant of debiased contrastive learning that uses estimated sample-specific class probabilities. We provide theoretical analysis of the objective function and demonstrate the proposed approach on both image and paired image-text data sets. Our experiments illustrate empirical advantages of sample-specific debiasing.
Peiqi Wang, Yingcheng Liu, Ching-Yun Ko, William M. Wells, Seth Berkowitz, Steven Horng, Polina Golland
2023-04-25T22:23:41Z
http://arxiv.org/abs/2304.13181v2
# Sample-Specific Debiasing for Better Image-Text Models ###### Abstract Self-supervised representation learning on image-text data facilitates crucial medical applications, such as image classification, visual grounding, and cross-modal retrieval. One common approach involves contrasting semantically similar (positive) and dissimilar (negative) pairs of data points. Drawing negative samples uniformly from the training data set introduces false negatives, i.e., samples that are treated as dissimilar but belong to the same class. In healthcare data, the underlying class distribution is nonuniform, implying that false negatives occur at a highly variable rate. To improve the quality of learned representations, we develop a novel approach that corrects for false negatives. Our method can be viewed as a variant of debiased contrastive learning that uses estimated sample-specific class probabilities. We provide theoretical analysis of the objective function and demonstrate the proposed approach on both image and paired image-text data sets. Our experiments demonstrate empirical advantages of sample-specific debiasing. ## 1 Introduction In this paper, we propose and demonstrate a novel approach for contrastive learning of image-text representations. Our method achieves state-of-the-art performance by estimating sample-specific class probabilities to compensate for false negative samples in the learning procedure. Self-supervised representation learning on paired image-text data uses text as "labels", requiring no further annotations beyond what has been routinely documented (Radford et al., 2021). By leveraging natural language to reference visual concepts and vice versa, the resulting image-text models trained using self-supervised objectives can perform a diverse set of vision-language tasks. When applied to the medical domain, image-text models can (i) retroactively label images to select relevant patients for a clinical trial, (ii) help physicians verify the accuracy of a report by noting whether the referred location (i.e., visual grounding of the text) is consistent with their impression of the image, and (iii) enable informed interpretation of medical images by retrieving similar previously imaged patients from a database. Moreover, the abundance of paired image-text data (e.g., radiographs and radiology reports, histology slides and pathology reports) suggests the broad applicability of self-supervision using image-text data to improve healthcare. In this paper, we focus on contrastive learning, a self-supervised approach that encourages the representations of semantically similar or _positive_ pairs of data to be close, and those of dissimilar or _negative_ pairs to be distant. Contrastive learning has been applied in the medical domain, demonstrating impressive transfer capabilities on a diverse set of downstream tasks (Chauhan et al., 2020; Huang et al., 2021; Liao et al., 2021; Muller et al., 2022; Zhang et al., 2022b; Boecking et al., 2022; Wang et al., 2023; Bannur et al., 2023). The biggest improvements come from addressing challenges unique to this domain.
Examples include using cross-attention to localize areas of interest to handle the lack of effective pathology detectors (Huang et al., 2021), fine-tuning language models on medical corpora to address linguistic challenges in clinical notes (Boecking et al., 2022), and removing ambiguities that arise from references to previous imaging events by explicitly using these comparisons as a form of self-supervision (Bannur et al., 2023). In a similar spirit, our work aims to address the nonuniform class distribution typical of clinical data. Using text as "labels" implies a nonuniform class distribution with large support. Specifically, natural language descriptions of medical images can represent a vast number of possible classes. Most descriptions belong to only a small number of common classes (e.g., cardiomegaly), while the remaining descriptions are spread across many rare classes (e.g., left apical pneumothorax). For example, CheXpert labels (Irvin et al., 2019) that represent frequently occurring classes in chest radiographs are mentioned in 10-25% of the radiology reports in MIMIC-CXR, a large collection of chest X-ray images and radiology reports (Johnson et al., 2019). The rare classes can be more descriptive and may be associated with high risk, which makes their accurate identification important in clinical applications. Our goal is to handle the highly nonuniform class distribution that presents a challenge for many existing modeling approaches. When training an image-text model using contrastive learning, each text is positively paired with the associated image from the same imaging event and negatively paired with a batch of images uniformly drawn from the training data set (Li et al., 2019; Lu et al., 2019; Chen et al., 2020; Radford et al., 2021; Liao et al., 2021; Zhang et al., 2022; Wang et al., 2023). If a negatively paired image is semantically similar to the text, it is considered a _false negative_ (Saunshi et al., 2019), as shown in Figure 1. It has been shown previously that false negatives result in a substantial decline in downstream classification performance when using image representations trained with contrastive learning (Chuang et al., 2020).

Figure 1: False negatives in paired image-text data. Drawing negative image samples \(x_{n}^{-}\) from the data distribution \(\mathcal{D}\) may result in samples that are semantically similar to the text \(x\) (“Heart is enlarged” and “Cardiomegaly” imply the same pathology). False negative samples occur at an uneven rate (e.g., depending on the pathology type) and degrade the performance of image-text models on downstream tasks.

One approach to alleviating the problem of false negatives is to identify false negative pairs explicitly and to reduce their effect. In some applications, ground truth class labels are available and can be used to ensure that no false negative pairs are generated (Khosla et al., 2020; Dwibedi et al., 2021). Unfortunately, deriving categorical labels from text is challenging because it can be difficult to determine the appropriate level of granularity and ensure sufficient coverage when the class distribution has a large support. Label imputation with nearest neighbor methods (Zheng et al., 2021) or clustering (Chen et al.) introduces non-trivial implementation and computational burden.
In the absence of hard labels, negative samples whose embeddings are close to positive samples are treated as false negatives and are eliminated (Huynh et al., 2022; Zhang et al., 2022; Zhou et al., 2022; Wang et al., 2022). This approach is easy to implement, but it overlooks valuable information captured by the text if only image embeddings are used. Moreover, rejecting negative samples whose embeddings are close to positive samples runs the risk of removing valuable "hard negatives", i.e., visually similar true negative samples. Debiased contrastive learning takes a different approach that uses (possibly) extra positive samples to offset the influence of false negatives without explicitly identifying them (Chuang et al., 2020). It assumes a uniform class distribution and applies a constant correction to each sample. This strategy can be suboptimal for healthcare data where the class distribution is highly nonuniform. We observe that naively applying debiased contrastive learning introduces a performance trade-off between coarse-grained tasks (e.g., classifying pneumonia) and fine-grained tasks (e.g., cross-modal retrieval of rare classes). In this paper, we propose an approach to reducing the effects of false negatives in contrastive representation learning that makes no assumptions about the underlying class distribution. Specifically, we estimate the class probability (or level of correction) for each data point based on the likelihood of the text provided by a language model. Our method (i) does not require or attempt to infer class labels, (ii) requires a few extra lines of code to implement when compared with contrastive objectives typically used in practice, (iii) incurs minimal computational overhead, and (iv) takes advantage of the class information implicitly represented by the text "labels" to mitigate the problem of false negatives in image-text contrastive learning. We study the advantages of using sample-specific class probability estimates on a small-scale image data set constructed with a nonuniform class distribution. We evaluate our approach on a large set of chest X-ray images and associated radiology reports (Johnson et al., 2019), demonstrating superior performance on image classification, visual grounding, and cross-modal retrieval tasks. ### Generalizable Insights about Machine Learning in the Context of Healthcare Our work (i) provides direct value to those interested in using self-supervised representation learning for healthcare applications, (ii) highlights the importance of considering the distinct characteristics of healthcare data that can pose challenges for methods designed for simpler scenarios (in our case, false negatives degrade the performance of image-text models trained on clinical data), and (iii) shows that a language model provides a useful prior that can be employed in other image-text modeling problems (e.g., brain tumor CT scans with imaging reports) or inspire future research to solve related problems in other data modalities. ## 2 Methods In this section, we introduce notation and provide a brief overview of debiased contrastive learning (Chuang et al., 2020), followed by the derivation of our method to compensate for potential false negative samples and an analysis of the relationship between our approach and the original formulation.
### Notation and Problem Setup We assume that the training data is generated from a mixture distribution with \(c\in\mathcal{C}\) denoting the latent class whose distribution \(\rho(\cdot)\) is defined over a (potentially large) finite alphabet \(\mathcal{C}\). Given \(c_{x}\in\mathcal{C}\), a data point \(x\) is generated from a conditional distribution \(\mathcal{D}_{c_{x}}(\cdot)\) over the set of all data points \(\mathcal{X}\). A training set comprises i.i.d. samples from the marginal distribution \(\mathcal{D}(x)=\sum_{c\in\mathcal{C}}\rho(c)\,\mathcal{D}_{c}(x).\) The class \(c_{x}\) is unknown for any data point \(x\) in the data set. In contrastive learning, we generate "positive samples" \((x,x^{+})\) where \(x\) and \(x^{+}\) are guaranteed to belong to the same class by construction. This is achieved, for example, by applying data augmentation that does not change the (unobserved) class \(c\) (e.g., object identity) to generate \(x^{+}\) from \(x\). In image-text alignment, \(x^{+}\) is the image associated with the radiology report \(x\). Generating "negative samples" \((x,x^{-})\) where \(x\) and \(x^{-}\) are guaranteed to belong to different classes is more challenging. Formally, if \(x\) is generated from class \(c_{x}\), \(x^{-}\) should be generated from the distribution \[\mathcal{E}_{c_{x}}(x^{\prime})\triangleq p(x^{\prime}|c\neq c_{x})=\sum_{c \neq c_{x}}\frac{\rho(c)}{1-\rho(c_{x})}\,\mathcal{D}_{c}(x^{\prime}), \tag{1}\] which is infeasible since we don't know the class \(c_{x}\). It is straightforward to show that \(\mathcal{D}(x^{\prime})=\rho(c_{x})\,\mathcal{D}_{c_{x}}(x^{\prime})+\left(1- \rho(c_{x})\right)\mathcal{E}_{c_{x}}(x^{\prime})\) and therefore \[\mathcal{E}_{c_{x}}(x^{\prime})=\frac{1}{1-\rho(c_{x})}\,\mathcal{D}(x^{ \prime})-\frac{\rho(c_{x})}{1-\rho(c_{x})}\,\mathcal{D}_{c_{x}}(x^{\prime}). \tag{2}\] In computer vision applications, the number of classes is large and \(\rho(c_{x})\) is likely to be small for any \(c_{x}\). Thus using \(\mathcal{D}\) instead of \(\mathcal{E}_{c_{x}}\) for sampling negative examples is reasonable. Moreover, it is natural to assume that all classes are equally (un)likely. Unfortunately, neither assumption is true in the healthcare setting. In clinical image sets, the probability of common pathologies in a randomly chosen image is not negligible and, moreover, common pathologies appear an order of magnitude more frequently than uncommon ones. Debiased contrastive learning (Chuang et al., 2020) addresses the former problem by modifying the optimization function to explicitly account for the chance of false negatives uniformly across classes. We build on that formulation to address the latter challenge by introducing a class-specific modified contrastive loss. ### Debiased Contrastive Learning The contrastive learning objective forces representations of positive pairs to be closer than those of negative pairs (van den Oord et al., 2018) by minimizing \[\mathbb{E}_{(x,x^{+})\sim\mathcal{D}_{\text{sim}},\{x_{n}^{-}\}\sim\mathcal{D}^ {N}}\left[-\log\frac{e^{s(x,x^{+})}}{e^{s(x,x^{+})}+\sum_{n=1}^{N}e^{s(x,x_{n}^ {-})}}\right], \tag{3}\] where the bounded function \(s\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) measures the similarity (e.g., a dot-product) between learned representations captured by an encoder function \(f:\mathcal{X}\to S^{d-1}(\gamma)\) that maps data from \(\mathcal{X}\) to a hypersphere of radius \(\gamma\), i.e., \(S^{d-1}(\gamma)=\left\{x\in\mathbb{R}^{d}\mid\left\lVert x\right\rVert_{2}= \gamma\right\}\).
The parameter \(\frac{1}{\gamma^{2}}\) is the typical temperature scaling hyperparameter during learning to avoid gradient saturation. For image-text contrastive learning, \(\mathcal{D}_{\text{sim}}(x,x^{+})\) is a distribution of image-text pairs and \(f\) performs either image or text encoding, depending on the nature of the input data. Without access to class labels, the negative samples are typically uniformly drawn from the data distribution \(\mathcal{D}\). Thus for \(x\) generated from (unknown) class \(c_{x}\), sample \(x_{n}^{-}\) is a false negative with latent probability \(\rho(c_{x})\). The asymptotic debiased contrastive learning objective (Chuang et al., 2020) considers the case where the number \(N\) of negative samples from \(\mathcal{E}_{c_{x}}\) goes to infinity: \[\tilde{\mathcal{L}}\triangleq\mathbb{E}_{(x,x^{+})\sim\mathcal{D}_{\text{sim }}}\left[-\log\frac{e^{s(x,x^{+})}}{e^{s(x,x^{+})}+N\,\mathbb{E}_{x^{-}\sim \mathcal{E}_{c_{x}}}\left[e^{s(x,x^{-})}\right]}\right]. \tag{4}\] Given \(N\) samples \(\{u_{n}\}\) from \(\mathcal{D}\) and \(M\) samples \(\{v_{m}\}\) from \(\mathcal{D}_{c_{x}}\), the expected value in the denominator of Equation 4 can be estimated with \[g(x,\left\{u_{n}\right\},\left\{v_{m}\right\};\eta)\triangleq\frac{1}{1-\eta} \left[\frac{1}{N}\sum_{n=1}^{N}e^{s(x,u_{n})}\right]-\frac{\eta}{1-\eta} \left[\frac{1}{M}\sum_{m=1}^{M}e^{s(x,v_{m})}\right]. \tag{5}\] To avoid numerical issues, the estimator \(g(x,\left\{u_{n}\right\},\left\{v_{m}\right\};\eta)\) is lower bounded by its theoretical minimum \(e^{-\gamma^{2}}\) (Chuang et al., 2020). \(\eta\) is the prior class probability (for the uniform distribution) and is treated as a hyperparameter. The resulting debiased contrastive loss reweights the positive and negative terms in the denominator: \[\mathcal{L}\triangleq\mathbb{E}_{(x,x^{+})\sim\mathcal{D}_{\text{sim}},\{u_{ n}\}\sim\mathcal{D}^{N},\{v_{m}\}\sim\mathcal{D}_{c_{x}}^{M}}\left[-\log\frac{e^{s(x,x^ {+})}}{e^{s(x,x^{+})}+Ng(x,\left\{u_{n}\right\},\left\{v_{m}\right\};\eta)} \right]. \tag{6}\] ### Sample-specific Class Probability Function \(\eta\) The formulation above uses a single hyperparameter \(\eta\) to correct for false negatives uniformly for every data point. We propose to employ an estimate of the class probability \(\rho(c_{x})\) with a sample-specific class probability function \(\eta(\cdot)\). We replace \(\eta\) with \(\eta(x)\) in Equation 5: \[g(x,\left\{u_{n}\right\},\left\{v_{m}\right\};\eta)\triangleq\frac{1}{1-\eta( x)}\left[\frac{1}{N}\sum_{n=1}^{N}e^{s(x,u_{n})}\right]-\frac{\eta(x)}{1-\eta(x)} \left[\frac{1}{M}\sum_{m=1}^{M}e^{s(x,v_{m})}\right] \tag{7}\] and use the objective function defined in Equation 6 with \(g\) in Equation 7. In practice, \(\eta\) may be an imperfect estimator for \(\rho(c_{x})\). The following proposition informs us how \(\eta\) affects the approximation error. **Proposition 1**: _Let \(f(\cdot)\) and \(\eta(\cdot)\) be arbitrary functions, \(N\) and \(M\) be finite. Then,_ \[\left|\tilde{\mathcal{L}}-\mathcal{L}\right| \leq\frac{3e^{2}\sqrt{\pi/2}}{\sqrt{N}}\,\mathbb{E}_{x\sim\mathcal{ D}}\left[\frac{1}{1-\rho(c_{x})}\right]+\frac{2e^{2}}{\sqrt{M}}\,\mathbb{E}_{x \sim\mathcal{D}}\left[\frac{\rho(c_{x})}{1-\rho(c_{x})}\right] \tag{8}\] \[\quad+2e^{2}\mathbb{E}_{x\sim\mathcal{D}}\left[\left|\frac{1}{1- \eta(x)}-\frac{1}{1-\rho(c_{x})}\right|\right]. \tag{9}\] Proof is provided in Appendix A.1.
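To make the modification concrete, the following is a minimal PyTorch sketch of the objective in Equations 6-7. The function name and tensor layout are illustrative rather than the authors' implementation, and the similarity scores are assumed to be precomputed.

```python
import math
import torch

def debiased_contrastive_loss(s_pos, s_neg, s_extra_pos, eta, gamma):
    """Sketch of the debiased loss with a sample-specific eta (Equations 6-7).

    Assumed tensor layout (one row per anchor x in the batch):
      s_pos:       (B,)   similarities s(x, x+)
      s_neg:       (B, N) similarities s(x, u_n) for negatives drawn from D
      s_extra_pos: (B, M) similarities s(x, v_m) for samples from D_{c_x}
      eta:         (B,)   estimates eta(x) of rho(c_x), assumed < 1
      gamma:       float  radius of the embedding hypersphere
    """
    N = s_neg.shape[1]
    # Finite-sample estimator g of E_{x- ~ E_{c_x}}[exp(s(x, x-))], Equation 7
    g = (s_neg.exp().mean(dim=1) - eta * s_extra_pos.exp().mean(dim=1)) / (1.0 - eta)
    # Lower-bound g by its theoretical minimum exp(-gamma^2) to avoid numerical issues
    g = g.clamp(min=math.exp(-gamma ** 2))
    # -log( exp(s+) / (exp(s+) + N * g) ), Equation 6, averaged over the batch
    return (torch.log(s_pos.exp() + N * g) - s_pos).mean()
```

Setting `eta` to a constant recovers the original debiased objective, so the sample-specific variant indeed costs only a few extra lines.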
Proposition 1 bounds the approximation error due to (i) finite sample approximation in Equation 5 (the first two terms, \(\mathcal{O}(\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{M}})\)) and (ii) misspecification of the sample-specific class probability \(\eta(x)\) (the last term). When \(\rho\) is uniform and \(\eta(x)=\rho(c_{x})\) is a constant, Proposition 1 reduces to the result in Chuang et al. (2020), up to a constant factor. Assuming access to \(c_{x}\) and using the correct class distribution \(\rho\), i.e., \(\eta(x)=\rho(c_{x})\), yields a tighter error bound since the last term in the upper bound (Equation 9) vanishes. Therefore, we aim to use \(\eta(x)\) that closely matches the true sample-specific class probability \(\rho(c_{x})\). In Appendix A.2, we show classification generalization bounds on representations trained using the debiased contrastive loss with the sample-specific probability function \(\eta\) (in Equation 7), for an arbitrary class distribution. ### Language Model Estimate of Class Probability We employ the likelihood \(p_{\text{LM}}(x)\) of the text \(x\) provided by a language model (LM) to construct the estimate \(\eta(x)\) of the class probability \(\rho(c_{x})\). Language models naturally provide estimates of token sequence probabilities. The estimates get better if the language model is fine-tuned on the data \(\mathcal{D}\). We assume a log-linear relationship between text and class probabilities, i.e., \(\eta_{\text{LM}}(x)=a\cdot p_{\text{LM}}(x)^{k}\), where \(a\) and \(k\) are hyperparameters. ## 3 Experiments We evaluate the advantages of estimating sample-specific class probabilities for contrastive learning in two different experiments. ### CIFAR10 In this experiment, we evaluate image-only representations on data sets with a controlled class distribution. We do not attempt to estimate the class probabilities (no text information is available, and estimation would be unnecessary since we control the frequency with which each class is included in the data set) but instead use the true class distribution during training and examine the effect of the class distribution of the data set on the resulting representations. Data. For each \(r\in\{0.05,0.1,0.25,0.5,0.75,0.9\}\), we generate a CIFAR10-\(r\) subset of the CIFAR10 data set (Krizhevsky, 2009) as follows. The original data set includes 6,000 images for each class. For each of 5 selected classes (dog, frog, horse, ship, truck), we randomly draw and include a fraction \(r\) of the images. We keep all images for each of the remaining 5 classes. The CIFAR10-\(r\) data set has a class probability of \(\eta_{\text{Low}}=0.2r/(1+r)\) for each of the 5 selected classes and \(\eta_{\text{High}}=0.2/(1+r)\) for the remaining classes. Larger values of \(r\) lead to a more uniform class distribution. Representation Learning. Similar to Chuang et al. (2020), we use SimCLR (Chen et al.) for contrastive learning of image representations. We employ ResNet-18 (He et al., 2016) as the image encoder \(f\) and dot-product as the similarity function \(s\). The encoder is followed by a 2-layer perceptron to create a 128-dimensional embedding. For a reference image \(x\), we employ data augmentation to generate positive samples \(x^{+}\) and use \(\{v_{m}\}=x^{+}\) to avoid needing additional samples (\(M=1\)). We draw samples \(\{u_{n}\}\) and \(\{x_{n}^{-}\}\) randomly from the data set (\(N=254\)). We set \(\gamma=\sqrt{2}\). We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of \(10^{-3}\) and weight decay of \(10^{-6}\).
Each model is trained for 300 epochs. Contrastive Loss Variants. We learn image representations using four different types of contrastive objectives: (i) baseline contrastive learning without correction for false negatives (CL), (ii) debiased contrastive learning that uses a constant \(\eta_{\text{Low}}\) providing the true class probabilities for the 5 subsampled classes (DCL-\(\eta_{\text{Low}}\)) and misspecified class probabilities for the remaining classes, (iii) debiased contrastive learning that uses a constant \(\eta_{\text{High}}\) providing misspecified class probabilities for the subsampled classes and the true class probability for the remaining 5 classes (DCL-\(\eta_{\text{High}}\)), and (iv) debiased contrastive learning with the true sample-specific class probability function for all samples (DCL-\(\eta_{\text{True}}\)). Image Classification. We evaluate the quality of image representations with linear evaluation. We train a linear classifier with cross-entropy loss on top of the fixed pretrained image encoder. We use the Adam optimizer with a learning rate of \(10^{-3}\) and weight decay of \(10^{-6}\). Each model is trained for 100 epochs. We report classification accuracy with a varying number of annotated examples available for training the classifier. Results. Figure 2 reports the effects of the choice of sample-specific class probability function \(\eta\) on classification accuracy. When the class distribution is nonuniform (i.e., smaller \(r\)), we observe that DCL-\(\eta_{\text{True}}\) consistently outperforms CL, DCL-\(\eta_{\text{Low}}\), and DCL-\(\eta_{\text{High}}\). When the class distribution is close to uniform (i.e., \(r\) is close to 1), all contrastive loss variants result in similar accuracy. Figure 2 also shows that the gain from using the true sample-specific class probability is more pronounced when the classifier is trained with fewer labels. Figure 2: Evaluation using the downstream classification task for the CIFAR10 experiment. Left: Classification accuracy as a function of class distribution uniformity \(r\). Right: Classification accuracy as a function of the fraction of images for which labels were provided for classifier training on the CIFAR10-0.5 data set. Fine-grain debiased contrastive learning with true class probabilities (DCL-\(\eta_{\text{True}}\)) consistently outperforms baseline contrastive learning (CL) and debiased contrastive learning with misspecified class probabilities (DCL-\(\eta_{\text{Low}}\) and DCL-\(\eta_{\text{High}}\)). The effect is more pronounced when fewer labels are used for training the classifier. Figure 4 provides t-SNE (van der Maaten and Hinton, 2008) visualizations of the representations learnt by contrastive and debiased contrastive objectives on CIFAR10-0.1. Using the true sample-specific class probability function \(\eta_{\text{True}}\) leads to better class separation, especially for the subsampled classes. ### MIMIC-CXR We learn image and text encoders for frontal chest X-ray images and associated radiology reports, respectively, and evaluate the resulting representations in a set of downstream tasks. Data. We use a subset of 234,073 frontal chest X-ray images and reports from MIMIC-CXR (Johnson et al., 2019). We normalize the images and resize them to 512x512 resolution. We apply random image augmentations, i.e., 480x480 random crops, brightness and contrast variations. We use PySBD (Sadvilkar and Neumann, 2020) for sentence tokenization.
In all experiments, the data used for representation learning and downstream tasks are disjoint. Representation Learning. We employ ResNet-18 (He et al., 2016) as the image encoder and CXR-BERT (Boecking et al., 2022) as the sentence encoder. Each encoder is followed by a linear mapping to a 128-dimensional embedding space. We use LSE+NL (Wang et al., 2023) as the similarity function \(s\). Given a reference text \(x\), we assign \(x^{+}\) to be the associated image and sample \(\{x_{n}^{-}\}\) randomly from the data set. For debiased contrastive learning, we use all unpaired images in a batch as \(\{u_{n}\}\) (\(N=63\)) and use \(v_{1}=x^{+}\) to avoid generating additional positive samples (\(M=1\)). After a grid search, we set \(a=0.2\) and \(k=0.35\). We precompute \(p_{\text{LM}}(x)\) for all sentences \(x\) in the data set using CXR-BERT. Masked language models (e.g., CXR-BERT) cannot estimate sentence probabilities via the chain rule. Instead, we use the pseudo-log-likelihood (Salazar et al., 2020), which scores a sentence \(x\) by summing the predicted log probabilities of every masked token; we use this score as \(\log p_{\text{LM}}(x)\). We use the AdamW optimizer (Loshchilov and Hutter) and decay the initial learning rate of \(10^{-5}\) using a cosine schedule with 2k warmup steps. We initialize \(\gamma\) to 200 and optimize this hyperparameter alongside the encoder parameters. We employ a batch size of 64. Baseline Methods. We compare our method (DCL-\(\eta_{\text{LM}}\)) with the strong baselines BioViL (Boecking et al., 2022) and LSE+NL (Wang et al., 2023), developed specifically for medical vision-language tasks. BioViL is an image-text model trained using symmetric contrastive learning and a masked language modeling objective. LSE+NL is another image-text model that uses log-sum-exp and non-local aggregators for the similarity function \(s\). Neither model corrects for the false negatives explicitly. We also include in the evaluation several methods that explicitly identify false negatives based on the similarity measure between the positive sample \(x^{+}\) and negative samples \(x_{n}^{-}\), such as the intersection of their CheX5 labels (CheX5 Labels) or the similarity of their text embeddings (Text Sim.). Negative samples that are too similar (i.e., above some threshold) are removed (Khosla et al., 2020; Huynh et al., 2022; Zhang et al., 2022; Zhou et al., 2022). Alternatively, we can reweight the negative samples or even reduce the set of negative samples based on their similarity score (Robinson et al., 2022). We perform grid searches to select hyperparameters (e.g., the similarity threshold or the resampling size) and select the best model for each setup. In addition, we include the original debiased contrastive learning objective that uses a constant \(\eta\) (Chuang et al., 2020). #### Downstream Tasks Image Classification. We assess zero-shot image classification performance on 5 CheXpert labels (Cardiomegaly, Edema, Pleural Effusion, Pneumonia, Pneumothorax) on the MIMIC-CXR data set (Johnson et al., 2019), a task we refer to as CheX5. There are roughly 1k images for each binary classification task. We first tokenize and encode class-specific text prompts (e.g., "No signs of pneumonia." or "Findings suggesting pneumonia."). Table 2 provides the prompts for each category. For every image, we assign a binary label that corresponds to the prompt with the higher image-sentence score. We report classification accuracy (ACC) and AUC.
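As a rough illustration of this zero-shot procedure, the sketch below assigns a label by comparing image-prompt scores. The `encode_text` callable and the plain dot-product are hypothetical stand-ins for the trained sentence encoder and the LSE+NL similarity function used in the paper.

```python
import torch

@torch.no_grad()
def zero_shot_label(image_emb, encode_text, neg_prompt, pos_prompt):
    """Pick the prompt (pathology absent vs. present) with the higher score.

    image_emb:   (d,) L2-normalized embedding of a radiograph.
    encode_text: assumed callable mapping a prompt string to a (d,)
                 L2-normalized sentence embedding.
    """
    scores = torch.stack([image_emb @ encode_text(p)
                          for p in (neg_prompt, pos_prompt)])
    return int(scores.argmax())  # 0: pathology absent, 1: pathology present

# Example with the pneumonia prompts quoted above:
# label = zero_shot_label(emb, encode_text,
#                         "No signs of pneumonia.", "Findings suggesting pneumonia.")
```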
Visual Grounding. We evaluate visual grounding performance using the MS-CXR region-sentence annotations (Boecking et al., 2022). This data set consists of 1,448 bounding boxes over 1,162 images, where each bounding box is associated with a sentence that describes its dominant radiological feature. We compute region-sentence scores to quantify how well the sentence is localized in the image. We report a measure of discrepancy between region-sentence scores inside and outside the bounding box, i.e., the contrast-to-noise ratio (CNR) (Boecking et al., 2022), and how well the thresholded region-sentence scores overlap with the bounding box on average, i.e., the mean intersection over union (mIoU). We use thresholds that span \([-1,1]\) in 0.05 increments to compute the mIoU. Figure 3: Our method (DCL-\(\eta_{\mathrm{LM}}\)) outperforms debiased contrastive learning that applies a fixed amount of correction \(\eta\) to all samples (DCL-\(\eta\)) and LSE+NL that does not correct for the false negatives (Baseline). For each value of \(\eta\), an image-text model is trained and subsequently evaluated in all three downstream tasks. Our method achieves consistently better performance than alternative approaches. Cross-Modal Retrieval. We evaluate cross-modal retrieval performance using the MS-CXR data set. To evaluate retrieval, we compute the bounding box features from the region features with RoIAlign (He et al., 2017). We compute box-sentence scores and sort them to retrieve items in one modality given a query from the other modality. The correctly retrieved item is the one that is paired with the query item. We compute the fraction of times the correct item was found in the top \(K\) results (R@K), the median rank of the correct item in the ranked list (MedR), and the average recall over \(K=10,50,100\) and over both the image-to-text and the text-to-image retrieval directions (Recall). Results. Figure 3 illustrates the performance trade-off for debiased contrastive learning that uses a constant class probability function \(\eta\). Increasing the value of \(\eta\) improves image classification but harms cross-modal retrieval performance. Conversely, decreasing the value of \(\eta\) enhances cross-modal retrieval performance but harms image classification performance. DCL-\(\eta_{\text{LM}}\) provides an overall superior solution on all three tasks. Table 1 reports the performance of DCL-\(\eta_{\text{LM}}\) and competing image-text models. Training LSE+NL with DCL-\(\eta_{\text{LM}}\) significantly improves its performance on these tasks compared to BioViL and LSE+NL. Using a sample-specific class probability function \(\eta_{\text{LM}}\) is more effective than using a constant function \(\eta=0.05\) or \(0.1\). The methods that identify and remove, resample, or reweight the false negatives improve image classification performance but do not offer any gain for visual grounding and minimal improvement for cross-modal retrieval. \begin{table} \begin{tabular}{l l c c c c c c} \hline \hline Method & & \multicolumn{2}{c}{Classification} & \multicolumn{2}{c}{Grounding} & \multicolumn{2}{c}{Retrieval} \\ & & AUC\(\uparrow\) & ACC\(\uparrow\) & CNR\(\uparrow\) & mIoU\(\uparrow\) & Recall\(\uparrow\) & MedR\(\downarrow\) \\ \hline No Correction & BioViL & 0.783 & 0.622 & 1.14 & 0.174 & 0.246 & 148 \\ No Correction & LSE+NL & 0.788 & 0.646 & 1.40 & 0.190 & 0.289 & 115 \\ \hline Remove & by CheX5 Labels & 0.856 & 0.708 & 1.37 & 0.190 & 0.293 & 113 \\ Resample & by Text Sim. & 0.816 & 0.705 & 1.37 & 0.190 & 0.302 & 111 \\ Remove & by Text Sim. & 0.840 & 0.720 & 1.39 & 0.191 & 0.298 & 112 \\ Reweight & by Text Sim.
& 0.844 & 0.686 & 1.40 & 0.191 & 0.293 & 113 \\ \hline DCL-\(\eta\) w/ \(\eta=0.05\) & & 0.802 & 0.718 & 1.46 & 0.193 & 0.302 & **104** \\ DCL-\(\eta\) w/ \(\eta=0.1\) & & 0.846 & 0.719 & 1.45 & 0.193 & 0.289 & 111 \\ DCL-\(\eta_{\text{LM}}\) (Ours) & & **0.862** & **0.724** & **1.49** & **0.195** & **0.304** & **104** \\ \hline \hline \end{tabular} \end{table} Table 1: Zero-shot performance of the learned representations on downstream image classification, visual grounding, and cross-modal retrieval tasks. Debiased contrastive learning with the sample-specific class probability \(\eta_{\text{LM}}\) (DCL-\(\eta_{\text{LM}}\) (Ours)) outperforms the state-of-the-art baseline methods BioViL and LSE+NL (no false negative correction) and alternative approaches to false negative correction. While methods that correct for false negatives by removing, resampling, or reweighting are effective in improving image classification results for commonly occurring classes, they do not yield comparable improvements in visual grounding or retrieval results. Tables 3, 4, and 5 in Appendix C provide additional statistics for each task. ## 4 Discussion We introduced a novel, sample-specific approach to correcting the effects of false negative samples on the contrastive loss function. Consistent with prior work (Chuang et al., 2020), this approach offers empirical advantages when the learned representation is used in downstream tasks. We also show that a sample-specific approach to correcting for false negatives improves the performance over the original variant of debiased contrastive learning that applied the same correction to all data points. Our experiments also demonstrate that reducing false negatives for tasks with varying levels of granularity is nuanced. In particular, the methods that attempt to remove or reweight likely false negative examples might improve image classification performance but offer minimal or no improvement for visual grounding and cross-modality retrieval. We hypothesize that this is due to fine-grain classes that must also be handled for the latter two tasks and are harder to identify when selecting false negative samples. Similarly, the performance of representations learned via debiased contrastive learning with a fixed correction factor was sensitive to the value of the assumed class probability and varied substantially across the range of possible values. Moreover, the optimal choice of the correction factor varied across downstream tasks, making it challenging to train universally useful representations. In contrast, the proposed fine-grain approach produces representations that achieve superior performance across all tasks by automatically estimating the sample-specific class probability. Using all CheX5 labels to remove false negatives consistently improves the performance of classifying the CheX5 classes. However, doing so has minimal impact on visual grounding and cross-modal retrieval tasks that require the ability to distinguish rare classes beyond those specified in CheX5. Methods that require a clear definition of the latent classes (Khosla et al., 2020; Dwibedi et al., 2021) are not easily adaptable to situations where there are many classes or when the classes are difficult to define concretely. In contrast, methods that implicitly define the latent classes via clustering (Chen et al.), or do not assume access to the latent classes (Chuang et al., 2020, or our approach),
show promise in improving model performance on vision-language tasks. In addition to the task itself, evaluation metrics are also sensitive to different aspects of the representation. In our experiments, the visual grounding and cross-modal retrieval tasks use the MS-CXR data set that contains infrequently occurring sentences. Thus, one would assume both tasks would benefit from using smaller values of \(\eta\) than the classification of commonly occurring pathologies. However, the observed performance improvement for grounding is less noticeable than for retrieval. One reason is that the overlap-based metrics (e.g., IoU) used to evaluate grounding may be less sensitive to the variations in text embeddings representing different but related classes. In contrast, achieving a high value of ranking-based metrics (e.g., recall) requires the model to rank the correct sentence higher than closely related alternatives. This unexpected result suggests that, in addition to the class distribution, target performance metrics that capture how the model will be used in the clinical application are also important when developing representation learning approaches. To the best of our knowledge, our work is the first to identify these differences in the performance of downstream tasks due to the choices made in compensating for the effect of false negatives during representation learning. Limited evaluation of the downstream performance in prior work led to the omission of the trade-off we observed in our experiments. Our work emphasizes the need to consider the unique characteristics of healthcare data that pose challenges for methods designed for simpler scenarios. When working with paired image-text data, we observe that false negatives occur at an uneven rate, making methods that address false negatives while assuming a uniform class distribution less effective. Furthermore, our work shows the potential of using language models to develop more effective algorithms. Language models are versatile, i.e., capable of processing noisy language inputs, and can serve as powerful priors. #### Limitations While the theory presented in this paper is informative, it has limitations. For example, the generalization bounds provided in Appendix A.2 assume a specific similarity function (e.g., dot product) that may not represent the various similarity functions used in practice. Moreover, the generalization bounds only apply to downstream classification tasks, while we also evaluate on visual grounding and cross-modal retrieval tasks. While these tasks can be interpreted as nearest neighbor classification, the theoretical results do not trivially extend to these scenarios, and more analysis is needed to provide similar generalization guarantees for these tasks. We use a language model to score text as a proxy for the sample-specific class probability. However, it is unclear how to apply this strategy to data that does not include associated text. While we can estimate data density with certain types of models (e.g., flow-based or autoregressive models), it is not well understood whether these estimates correlate well with the underlying class probabilities. Natural language data is unique in that it is created by humans, capturing important variations in data that align well with the types of problems that users typically wish to solve. Moreover, it is uncertain how well our assumption that the class probability is log-linear with respect to the text sequence probability holds in practice.
Verifying this assumption requires defining a concrete set of latent classes and annotating text with the defined classes. However, the latent classes are difficult to define in practice, and we do not have access to the mapping from a data point to its latent class in most clinically important problems, as having access to such a mapping would eliminate the need for self-supervised learning. ## 5 Conclusion We present a debiased contrastive learning framework that accommodates arbitrary class distributions and mitigates the impact of false negatives. We offer theoretical and empirical evidence for using accurate sample-specific class probabilities. We introduce a specific debiased contrastive objective that employs a language model to estimate the sentence likelihood as a proxy for the class probability. When applied to paired image-text data, our method outperforms state-of-the-art models on image classification, visual grounding, and cross-modal retrieval tasks.
2304.10488
Topologically protected Grover's oracle for the partition problem
The Number Partitioning Problem (NPP) is one of the NP-complete computational problems. Its definite exact solution generally requires a check of all $N$ solution candidates, which is exponentially large. Here we describe a path to the fast solution of this problem in $\sqrt{N}$ quasi-adiabatic quantum annealing steps. We argue that the errors due to the finite duration of the quantum annealing can be suppressed if the annealing time scales with $N$ only logarithmically. Moreover, our adiabatic oracle is topologically protected, in the sense that it is robust against small uncertainty and slow time-dependence of the physical parameters or the choice of the annealing protocol.
Nikolai A. Sinitsyn, Bin Yan
2023-04-20T17:31:27Z
http://arxiv.org/abs/2304.10488v2
# Topologically protected Grover's oracle for the Partition Problem ###### Abstract The Number Partitioning Problem (NPP) is one of the NP-complete computational problems. Its definite exact solution generally requires a check of all \(N\) solution candidates, which is exponentially large. Here we describe a path to the fast solution of this problem in \(\sqrt{N}\) quasi-adiabatic quantum annealing steps. We argue that the errors due to the finite duration of the quantum annealing can be suppressed if the annealing time scales with \(N\) only logarithmically. Moreover, our adiabatic oracle is topologically protected, in the sense that it is robust against small uncertainty and slow time-dependence of the physical parameters or the choice of the annealing protocol. ## I Introduction The basic quantum algorithms, such as by Grover [1] and Shor [2], matrix inversion [3], and the solution of the glued-trees problem [4], assume that a part of a targeted problem is pre-solved. That is, such algorithms assume that a certain quantum function that points to the solution indirectly or a Hamiltonian that encodes the original mathematical problem is given almost for free, i.e., can be _called as an oracle_. In practice, the oracle is a quantum operator that is usually hard to construct. For a realistically interesting computational problem to benefit from such quantum algorithms, there must be a separate fast algorithmic and hardware implementation of its oracle, which is usually an unsolved problem. On the other hand, the oracle-free approaches, such as quantum annealing computation of the ground state of interacting Ising spins, have no firm mathematical justification. There is, in fact, theoretical work arguing that there is no quantum speedup if such approaches are not specially tuned [5; 6]. Recently, a physical Grover's oracle implementation was suggested [7] for a solution of the Number Partitioning Problem (NPP) [8]. The idea in [7] was to use resonant interactions of the computational qubits with a central quantum system (a spin or a photon). The state of the qubits that was to be marked by the oracle interacted with the central system at resonance, so that the phase of this special state changed by \(\pi\), while unwanted effects on the amplitudes of the other computational basis states were minimized. However, the resonant interactions with a targeted state are highly sensitive to the precision of the resonance conditions. Any uncontrollable off-resonance mismatch of interactions or a small imperfection of the control pulses produces a proportional effect on the quantum state. On the other hand, the solution of an exponentially hard problem by the Grover algorithm requires an exponentially large number of the oracle calls, so that by the end of the algorithm any uncontrolled error is magnified by a factor \(\sqrt{N}\), where \(N=2^{n}\) and \(n\) is the number of computational qubits. To eliminate such errors, we must set the coupling parameters and control fields in the system with the corresponding exponentially high precision. Thus, the Grover speedup of the computation time in [7] was achieved at the expense of another physical resource, which was the precision of the physical coupling parameters and the control fields. We also note that for NPP such a trade of resources is known even for classical computing.
Thus, there are classical dynamic programming algorithms that achieve the exact solution of NPP in time \(T\sim 2^{n/2}\) - just as with the Grover algorithm but using exponentially large memory space, i.e., \(\sim 2^{n/4}\) classical bits of memory [9]. In the case of a quantum computer, this exponential memory resource is not used, i.e., we deal with \(O(n)\) computation space, but the requirement of exponentially high precision on the physical parameters is undesirable as well. An even more fundamental problem with the approach in Ref. [7] is that its oracle affects the phases of the non-resonant states. Only in the adiabatic limit do these unwanted phases become truly suppressed, according to [7], as \(\sim 2\arctan(E\tau_{O})+\pi\), where \(\tau_{O}\) is the duration of the interaction to generate the oracle and \(E\) is the characteristic energy gap to the states that represent wrong solutions. Indeed, for \(|E|\tau_{O}\gg 1\) such phases become close to either \(0\) or \(2\pi\), which would mean no unwanted error. However, for finite \(E\) and \(\tau_{O}\), the deviation is of the order \(1/(|E|\tau_{O})\). Hence, in order to make this phase error scale as \(\sim 1/\sqrt{N}\), the time to produce the oracle has to scale as \(\tau_{O}\sim\sqrt{N}\) at fixed \(E\). Taking this into account, the entire time of the algorithm in [7] scales as \(\tau_{O}\sqrt{N}\sim N\), which is the same as for the classical algorithm. This raises the question of whether such hidden costs on time and the trade of resources in quantum computing are inevitable. In this article, we propose an approach that essentially eliminates these hidden problems from the solution of NPP by the Grover algorithm during physical time \(\sim\sqrt{N}\). Our approach uses quasi-adiabatic quantum annealing in order to produce useful unitary transformations [10; 11]. Importantly, unlike [7], we do not request knowledge of the precise position of the resonance with the searched state. This makes our approach not only robust against physical parameter uncertainty but also capable of solving a more complex version of NPP. ## II Number partitioning problem The NPP has the goal to split a set \(\mathcal{S}=\{s_{1},s_{2},\ldots,s_{n}\}\) of positive integers \(s_{k}\), \(k=1,\ldots,n\) into two subsets \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) so that the difference between the sum of integers in \(\mathcal{S}_{1}\) and the sum of integers in \(\mathcal{S}_{2}\) is minimized. There are different formulations of this problem. We will restrict ourselves here to its two specific versions that we will call NPP1 and NPP2. (i) In NPP1, the difference may be always nonzero, so the goal is to find the partition that delivers the minimal difference between the two sums. (ii) In NPP2, it is assumed that the difference between the sums in \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) is known to be zero for some partitions, so the goal is to find at least one of them. Both problems can be formalized by introducing \(n\) binary variables \(\sigma_{k}^{z}=\pm 1\) that mark the number \(s_{k}\) as belonging to \(\mathcal{S}_{1}\) if \(\sigma_{k}^{z}=1\) and as belonging to \(\mathcal{S}_{2}\) if \(\sigma_{k}^{z}=-1\). NPP1 then has the goal to find components of an \(n\)-vector \((\sigma_{1}^{z},\ldots,\sigma_{n}^{z})\) that provide the minimum, \[\min\left|H_{I}\right|, \tag{1}\] where \(H_{I}\) is a linear form \[H_{I}\equiv\sum_{k=1}^{n}s_{k}\sigma_{k}^{z}. \tag{2}\]
NPP2 is equivalent to finding the binary variables that satisfy the constraint \[H_{I}\equiv\sum_{k=1}^{n}s_{k}\sigma_{k}^{z}=0. \tag{3}\] We can interpret the linear form \(H_{I}\) as a simple Ising Hamiltonian of \(n\) quantum spins-\(1/2\). So, the goal of NPP1 is to find the eigenstate with the minimal nonnegative eigenvalue of \(H_{I}\), and the goal of NPP2 is to find an eigenstate of \(H_{I}\) that corresponds to zero eigenvalue. Note that for NPP1, the \(H_{I}\)-energy of the searched state is not _a priori_ known. This is why the strategy in [7] cannot be applied to NPP1 directly. Also, NPP2 is a special case of NPP1. However, we will treat NPP2 separately because the knowledge of the energy of the searched state can be used for a simpler strategy. The following facts have been established about NPP previously. First, NPP is NP-hard [8]. Therefore, it is generally exponentially hard to solve exactly. Although Monte-Carlo algorithms in many situations produce the solution in time that scales with \(n\) polynomially, in the worst cases the needed time is exponential: \(T\sim 2^{n}\). Thus, if we are to solve such a problem definitely and exactly, given only polynomial in \(n\) memory resources, there is no better way than to test all \(2^{n-1}\) independent possibilities for different \(n\)-vector solution candidates. In what follows, we will be concerned with the goal to find such an exact solution definitely. NPP is NP-complete [8]. All other NP-hard problems can be solved faster if one finds a fast universal algorithm to solve any of the known NP-complete problems. NPP can be formulated as a Quadratic Unconstrained Binary Optimization (QUBO) problem, whose goal is to find the minimum of a quadratic form of binary variables [12]. Thus, the quantum Ising spin Hamiltonian \[H_{Q}=H_{I}^{2} \tag{4}\] has all nonnegative eigenvalues, so the state with zero eigenvalue in NPP1 can be found by standard means of quantum annealing. However, the price for this strategy would be the requirement to build an all-to-all interacting qubit network, which is difficult in practice. Even then, we have to deal with the lack of a known annealing protocol that would definitely outperform the classical search for the ground state of \(H_{Q}\) with arbitrary free parameters. So, we will discard this strategy. Finally, for any positive eigenvalue of \(H_{I}\) there is the same eigenvalue but with negative sign, with the corresponding eigenstates differing by a flip of all computational spins. The range of possible eigenvalues of \(H_{I}\) is also known: Since all \(s_{k}\) are positive, the highest and lowest eigenvalues are provided by the fully polarized qubit states: \(E_{max}=-E_{min}=\sum_{k=1}^{n}s_{k}\). Since all \(s_{k}\) are integers, we definitely know that there is at least a unit gap between any two different eigenvalues of \(H_{I}\). This also means that there are no energy levels in a finite vicinity of the fractional energy values, e.g., near \(E=1/2\). ## III Solution strategy Consider any superposition of eigenstates of \(H_{I}\), \[|\psi\rangle=\sum_{s=1}^{N}a_{s}|s\rangle,\quad N\equiv 2^{n}. \tag{5}\] We will show that by a single annealing step, whose time scales only as \(\sim\log^{\alpha}N\), where \(\alpha=O(1)\), we can generate an oracle that changes the sign of all state amplitudes with \(H_{I}\)-energy below an arbitrarily prescribed energy level \(E\). The infidelity of this oracle is exponentially small in \(n\).
We then use this oracle to change the sign of the states with eigenvalues of \(H_{I}\) in the range \((-1/2,E)\) by applying the oracle at level \(E\) and then applying it at level \(-1/2\). This flips the sign of the amplitudes of only those basis states in (5) with nonnegative eigenvalues below \(E\). Being able to flip the signs for the states in the range \((-1/2,E)\), one can employ the algorithm of amplitude amplification [13; 14] to find a basis state within this range with nearly unit probability, in \(\sim\sqrt{N}\) steps. Within this range, the relative probabilities for the basis states to be found are determined by their relative weights \(|a_{s}|^{2}\). In Appendix A, we review the basics of the Grover algorithm and amplitude amplification. Let the found eigenstate correspond to an eigenvalue \(E_{k}\). We then reset \[E\to E_{k}+1/2.\] The NPP1 protocol starts with an equal superposition of all the computational basis states, i.e., \(a_{s}=1/\sqrt{N},\forall s\) in (5). With an initial trial value of the energy threshold \(E\), we then repeatedly apply the procedure described above to update its value. The range \((-1/2,E)\) will then be shrunk so that \(E\) becomes the lowest nonnegative eigenvalue of \(H_{I}\), and therefore, the target state is found. Since the initial state of the amplitude amplification is the equal superposition, each eigenstate from the desired energy range can be found with equal probability. On average, each step of resetting \(E\) reduces the number of eigenvalues in the interval \((-1/2,E)\) by a factor of 2. Hence, the algorithm takes only \(\sim\log_{2}N\) cycles. This completes the algorithm up to the procedure that generates the oracles, which is the main result of our work. For NPP2, we will provide a process that for any superposition (5) produces, after a single quantum annealing step, almost the same superposition but with a flipped sign for the amplitudes of all states \(|s_{\alpha}\rangle\) that correspond to the zero eigenvalue, \(H_{I}|s_{\alpha}\rangle=0\). Having this, the desired eigenstate is found by a conventional Grover algorithm in \(\sim\sqrt{N}\) repetitions of the quantum annealing process. ## IV Generating oracles ### Basic hardware requirements As in [7], the most complex part of the hardware that we request is the Ising central spin interaction Hamiltonian of the form \[H_{int}=r\sum_{k=1}^{n}s_{k}\sigma_{k}^{z}I_{z}, \tag{6}\] where \(s_{k}\) are integers and \(\sigma_{k}^{z}\) are the Pauli \(z\)-matrices acting in the space of the computational qubits; \(I_{z}\) is the projection operator for an ancillary spin, and \(r\) sets the energy scale. In what follows, we will set the Planck constant \(\hbar=1\), as well as \(r=1\), which makes both energy and time dimensionless. Our energy and time variables can be reconstructed in physical units by multiplying them by, respectively, \(r\) and \(\hbar/r\). Unlike Ref. [7], our approach specifies that the central spin has size \(I=1\). We will also assume that we have access to high-fidelity quantum gates for rotating all the spins/qubits by a fixed angle (single qubit resolution is not needed). Interactions of the type (6) with a central spin \(I=1\) are encountered in real physical systems. For example, the NV\({}^{-}\)-center in diamond has electronic spin-1, which is coupled to many nuclear spins-1/2 of \({}^{13}\)C isotopes via dipole interactions [15]. The direct interactions between the nuclear spins are negligible due to their small g-factors.
When needed, the nuclear spins can be rotated by RF-pulses, while the electronic spin can be controlled by external magnetic fields or optically. The physical effect on which our oracle generation essentially relies is the Robbins-Berry topological phase [16], which we briefly review in Appendix B. This phase is generated when a unit spin, \(\hat{\mathbf{I}}\), interacts with an adiabatically changing magnetic field, \(\mathbf{b}(t)\), with the Hamiltonian \[H(t)=\mathbf{b}(t)\cdot\hat{\mathbf{I}}, \tag{7}\] so that the spin starts at its zero projection on the initial field direction; the field remains finite during the evolution and ends up pointing in the direction opposite to its initial one. In Fig. 1(a), the black arrow curve shows an example of a trajectory that the field direction leaves on a unit sphere. At the end of the protocol, the spin is in the initial physical state with zero spin projection on the initial axis, but its quantum state acquires a phase \(\pi\) that does not depend on the time-dependent \(\mathbf{b}(t)\). This makes this phase topologically protected, including against weak nonadiabatic transitions. Figure 1: Paths of the adiabatically changing magnetic field direction \(\mathbf{b}(t)/|\mathbf{b}(t)|\). The spin-1 is initially in the zero projection eigenstate along the field. It remains in the instantaneous zero-projection eigenstate during the time of evolution up to an accumulated phase. (a) The geometric phase along path \(\mathcal{C}\), where the magnetic field flips its direction, is \(\pi\). A closed path would generate no Berry phase [16]. Therefore, the phase difference between \(\mathcal{C}_{+}\) and \(\mathcal{C}_{-}\) is \(\pi\). (b) The phases of paths \(\mathcal{C}_{\pm}\) and \(\mathcal{C}_{0}\) are, respectively, \(\pi\) and zero. ### Grover's oracle for NPP1 For NPP1, we do not know _a priori_ the energy of the state that we are searching for. Hence, we start with an arbitrary "guessed" value by generating a random eigenstate of \(H_{I}\) and measuring its eigenvalue \(E_{k}\). If it is negative, we find the corresponding positive energy eigenstate by flipping all qubits. We set the initial threshold to be \(E=E_{k}+1/2\). Then, we mark the amplitudes of all states that have energy \(E_{m}<E\) by performing the quantum annealing
Within this sector, the effective Hamiltonian \(H_{a1}\) has the form \[H_{k}(s)=A(s)(E_{k}-E)I_{z}+B(s)I_{x}. \tag{10}\] The evolution starts with the state that is a direct product of an arbitrary superposition \(|\psi\rangle\) of states of the computational qubits and the zero projection state of spin \(\mathbf{I}\) on the \(x\)-axis: \[|\Psi\rangle=|\psi\rangle\otimes|0_{x}\rangle. \tag{11}\] The spin-1 state \(|0_{x}\rangle\) is the eigenstate of the initial \(H_{1a}\) at \(s=0\). During the adiabatic evolution, in each sector the spin follows the instantaneous zero-projection state \(|0_{\mathbf{b}_{k}(s)}\rangle\) on the direction of the effective field with components \(\mathbf{b}_{k}(s)\equiv(b_{x},b_{y},b_{z})=(B(s),0,A(s)(E_{k}-E))\). The corresponding eigenvalue of \(H_{k}\) in each sector is identically zero: \(H_{k}(t)|0_{\mathbf{b}_{k}(t)}\rangle=0\). Hence, the dynamic phase is not generated. In Fig. 1(a) we show that for \(E_{k}>E\), the direction of \(\mathbf{b}(t)\) changes from the direction of \(x\)-axis to the direction of \(z\)-axis. For \(E_{k}<E\), however, the field ends up pointing in the opposite to the \(z\)-axis direction. In either case, the central spin ends up in the zero projection state, \(|0_{z}\rangle\), on the \(z\)-axis. However, the difference between the geometric phases generated by these two paths [red arrow curves in Fig. 1(a)] is the same as the phase generated by the field that switches from the positive to the negative direction along \(z\)-axis. According to [16] (see also Appendix B), this leads to an acquired topological \(\pi\)-phase difference between the sectors with \(E_{k}-E>0\) and \(E_{k}-E<0\). Summarizing, if the initial state before the annealing is \[|\Psi_{in}\rangle=\left(\sum_{k}a_{k}|k\rangle\right)\otimes|0_{x}\rangle, \tag{12}\] then after the annealing the state is \[|\Psi_{out}\rangle=\left(\sum_{k}(-1)^{\delta(k)}a_{k}|k\rangle\right)\otimes| 0_{x}\rangle, \tag{13}\] where \(\delta(k)=1\) for \(E_{k}<E\) and \(\delta(k)=0\) for \(E_{k}>E\), as it is required for the solution of NPP1 described in Section III. ### Grover's diffusion step In addition to Grover's oracle, the Grover algorithm employs a Grover's diffusion step, which is an application of a unitary operator \[U_{\mathrm{GD}}=2|\Rightarrow\rangle\langle\Rightarrow|-\mathds{1}, \tag{14}\] where \(|\Rightarrow\rangle\) is the state with all computational spins-1/2 rotated to point along \(x\)-axis, and \(\mathds{1}\) is the unit operator. While formally this step can be performed with a polynomial number of gates, as in Ref. [7] we can generate it with a similar annealing step. Note that \(U_{\mathrm{GD}}\) has the same structure as the Grover's oracle in the sense that \(U_{\mathrm{GD}}\) merely changes the relative Figure 2: Top: Annealing schedule in (18), where the constant \(c\) is fixed at 10. Bottom: Simulation of the infidelity of the adiabatic oracle as a function of the total annealing time, for NNP1 with the problem set \(\mathcal{S}=\{1,2\}\). The energy threshold for the oracle is set at \(E=1.5\). Red circles are the numerical data. Black curves are the best fit to an exponential function \(\sim\exp{(ax^{b})}\) with \(b\approx 1.08\). sign of the amplitude of a particular state of the computational qubits. The only problem is that this state, \(|\Rightarrow\rangle\), is not an eigenstate of \(H_{I}\). 
However, if we have access to a unitary operator \[U_{\rm GDz}\equiv 2|\Uparrow\rangle\langle\Uparrow|-\mathds{1}, \tag{15}\] where \(\Uparrow\) is the fully spin-polarized state along the \(z\)-axis, then a simple rotation of all spins from the \(z\)-axis to the \(x\)-axis direction transforms \(U_{\rm GDz}\) into \(U_{\rm GD}\). If all computational spin qubits are identical, this unitary operation is achieved with a simple pulse of a magnetic field: \[U_{rot}=e^{-i(\pi/4)\sum_{k=1}^{n}\sigma_{k}^{y}}, \tag{16}\] so that \[U_{\rm GD}=U_{rot}U_{\rm GDz}U_{rot}^{\dagger}.\] The Hamiltonian \(H_{I}\) has a nondegenerate state with all spins polarized along the \(z\)-axis, which corresponds to the \(H_{I}\) eigenvalue \(E_{max}=\sum_{k=1}^{n}s_{k}\). Since the energy of this state is known, we can mark the amplitudes of all other states with a \(-1\) factor by setting \(E=E_{max}-1/2\) and performing a single annealing step. Thus, we do not have to change the interaction part of the Hamiltonian: the diffusion step is achieved with the same annealing step as for the Grover oracle but in a different field acting on the ancillary spin. The application of the spin rotation before and after this annealing with (8) produces an effect equivalent to the application of the Grover diffusion operator. The fact that no other quantum gates are needed is practically useful because a simple spin rotation can be performed with very high fidelity, e.g., with an error probability of \(\sim 10^{-6}\) [17], whereas the entire universal set of quantum gates cannot usually be produced with fidelity better than \(\approx 99\%\). What is important for our discussion is that such a rotation of spin qubits can be done by rotating the control field quasi-adiabatically. The precision and time-scaling of this process are then not worse than for the oracle generation. ### Fidelity of the oracle In the Grover algorithm, the oracle is called \(\sim\sqrt{N}\) times, so it is required that the error does not accumulate to an \(O(1)\) probability of a wrong state after \(\sqrt{N}\) annealing steps. This imposes a constraint on the tolerance of the nonadiabatic excitations and the running time of the adiabatic oracle. With suitable time-dependent annealing schedules, one can suppress non-adiabatic deviations exponentially in the total running time \(T\) [18; 19; 20]. Generally, the nonadiabatic errors scale as \[P_{\rm ex}\sim\exp{(-\eta\Delta^{2}/\beta)}, \tag{17}\] where \(\eta\) is a numerical factor depending on the specific annealing schedule, \(\Delta\) is the characteristic gap near an avoided crossing point, and \(\beta\) is the rate of the transition through this gap. In our case, the lowest gap is found in the sector with \(\Delta=|E-E_{k}|=1/2\). An example of a protocol with exponential suppression of the errors is \[A(s),B(s)=\frac{1}{2}\left[1\pm\tanh{c(4s-1)}\right], \tag{18}\] where \(c\) is a large constant to ensure that the annealing schedule starts and terminates smoothly (derivatives of the schedules are suppressed [18]). Note that if \(c\) is of the order of \(n=\log_{2}N\), the deviations of the boundary values of \(A(s)\) and \(B(s)\) from (9) are exponentially small. Therefore, we ignore errors caused by the imperfect boundary conditions of the annealing schedules. The shapes of \(A(s)\) and \(B(s)\) are plotted in Fig. 2 (top). To quantify the accuracy of the oracle with the above annealing schedule, we simulated its infidelity as a function of \(T\).
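For readers who want to reproduce this behavior qualitatively, here is a minimal, self-contained numerical sketch (our illustration, not the authors' code): it integrates the single-sector Schrodinger equation with \(H_{k}(s)\) of Eq. (10) under the schedule (18), using example values \(E=0.5\) and \(E_{k}\in\{0,1\}\), and checks both the relative \(\pi\) phase between the sectors and the smallness of the nonadiabatic error.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Spin-1 operators in the Iz eigenbasis (hbar = r = 1)
Ix = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Iz = np.diag([1.0, 0.0, -1.0])

def AB(s, c=10.0):
    """Annealing schedule of Eq. (18)."""
    t = np.tanh(c * (4 * s - 1))
    return 0.5 * (1 + t), 0.5 * (1 - t)

def evolve_sector(Ek, E, T):
    """Integrate i d|psi>/dt = H_k(t/T) |psi> with H_k of Eq. (10)."""
    psi0 = np.array([1, 0, -1], dtype=complex) / np.sqrt(2)  # |0_x>
    def rhs(t, psi):
        A, B = AB(t / T)
        return -1j * ((A * (Ek - E)) * Iz + B * Ix) @ psi
    return solve_ivp(rhs, (0, T), psi0, rtol=1e-10, atol=1e-12).y[:, -1]

E, T = 0.5, 100.0
zero_z = np.array([0, 1, 0], dtype=complex)  # |0_z>, the ideal final state
amps = {Ek: zero_z.conj() @ evolve_sector(Ek, E, T) for Ek in (0.0, 1.0)}
# Relative phase between the sectors below/above the threshold: approx +/- 1 pi
print("relative phase:", np.angle(amps[0.0] / amps[1.0]) / np.pi, "pi")
# Nonadiabatic infidelity in each sector: exponentially small for large T
print("infidelity:", {Ek: 1 - abs(a) ** 2 for Ek, a in amps.items()})
```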
### Fidelity of the oracle

In the Grover algorithm, the oracle is called \(\sim\sqrt{N}\) times, so it is required that the error does not accumulate to an \(O(1)\) probability of a wrong state after \(\sqrt{N}\) annealing steps. This imposes a constraint on the tolerance of the nonadiabatic excitations and the running time of the adiabatic oracle. With suitable time-dependent annealing schedules, one can suppress nonadiabatic deviations exponentially in the total running time \(T\)[18; 19; 20]. Generally, the nonadiabatic errors scale as \[P_{\rm ex}\sim\exp{(-\eta\Delta^{2}/\beta)}, \tag{17}\] where \(\eta\) is a numerical factor depending on the specific annealing schedule, \(\Delta\) is the characteristic gap near an avoided crossing point and \(\beta\) is the rate of the transition through this gap. In our case, the lowest gap is found in the sector with \(\Delta=|E-E_{k}|=1/2\). An example of a protocol with exponential suppression of the errors is \[A(s),B(s)=\frac{1}{2}\left[1\pm\tanh{c(4s-1)}\right], \tag{18}\] where \(c\) is a large constant that ensures the annealing schedule starts and terminates smoothly (derivatives of the schedules are suppressed [18]). Note that if \(c\) is of the order of \(n=\log_{2}N\), the deviations of the boundary values of \(A(s)\) and \(B(s)\) from (9) are exponentially small. Therefore, we ignore errors caused by the imperfect boundary conditions of the annealing schedules. Shapes of \(A(s)\) and \(B(s)\) are plotted in Fig. 2(top).

Figure 2: Top: Annealing schedule in (18), where the constant \(c\) is fixed at 10. Bottom: Simulation of the infidelity of the adiabatic oracle as a function of the total annealing time, for NPP1 with the problem set \(\mathcal{S}=\{1,2\}\). The energy threshold for the oracle is set at \(E=1.5\). Red circles are the numerical data. Black curves are the best fit to an exponential function \(\sim\exp{(ax^{b})}\) with \(b\approx 1.08\).

To quantify the accuracy of the oracle with the above annealing schedule, we simulated its infidelity as a function of \(T\). The infidelity is defined as \(1-F(T)\), where \(F(T)\) is the probability for the final output state of the oracle to be detected in the desired output state of an ideal oracle. In Fig. 2(bottom), the exponential decay of the infidelity is observed. Since the oracle is called \(\sim\sqrt{N}\) times, the error of each oracle call must scale as \[P_{\rm ex}\sim 1/\sqrt{N}. \tag{19}\] For our protocol, the rate of the transition through the gap is \(\beta\sim c/T\). Since \(c\sim n\), the condition (19) is satisfied if \(e^{-\eta T/n}\sim 2^{-n/2}\) for some \(\eta=O(1)\). This condition implies that the running time of the oracle satisfies \[T\sim\log^{2}N, \tag{20}\] which retains the overall quadratic speedup of the Grover algorithm.

## V Simpler approaches

In this section, we discuss possible strategies to simplify experimental verification of our approach. First, one can reduce the number of steps by considering the NPP2 version of the problem, in which the target state of the corresponding Ising Hamiltonian \(H_{I}\) is known to have zero energy. This knowledge can be used to simplify the generation of the oracle. We then discuss a strategy that does not involve time-dependent tuning of the interaction strength between the Ising spins. This may be important for experiments without access to time-dependent interactions.

### Simplified oracle for NPP2

For NPP2, the \(H_{I}\)-energy of the searched state is known: \(E_{0}=0\). Since this state belongs to the energy range \((-1/2,1/2)\), we can generate its Grover's oracle by performing annealing with the Hamiltonian \(H_{a1}\) initially at \(E=1/2\) and then at \(E=-1/2\). Note that, as for NPP1, this approach is topologically protected. Namely, the physical parameters \(s_{k}\) need not be set precisely and can even experience slow time-dependent deviations from the desired integer values. Nevertheless, the topological \(\pi\)-phase is robust as long as the level \(E\) is set in the gap that separates the searched state from the other states. If the zero energy of the searched state is protected by the symmetry of interactions, the oracle for NPP2 can be generated in only a single quantum annealing step with the time-dependent Hamiltonian \[H_{a2}(s)=A(s)\left(\sum_{k=1}^{n}s_{k}\sigma_{k}^{z}\right)I_{z}+B(s)I_{x}, \tag{21}\] where \(A(0)=A(1)=0\), and \(B(0)=-B(1)=1\). For example, such an annealing protocol can be created by combining the schedule (18) with its reverse: \[A(s)=\begin{cases}\frac{1}{2}\left[1+\tanh c(4s-1)\right]&s\leq 1/2,\\ \frac{1}{2}\left[1-\tanh c(4s-3)\right]&s>1/2,\end{cases}\qquad B(s)=\begin{cases}\frac{1}{2}\left[1-\tanh c(4s-1)\right]&s\leq 1/2,\\ -\frac{1}{2}\left[1+\tanh c(4s-3)\right]&s>1/2.\end{cases} \tag{22}\] The shape of this schedule is plotted in Fig. 3(top), in which we also demonstrate that the nonadiabatic errors of this oracle are suppressed exponentially with the total annealing time \(T\). The corresponding effective magnetic field \(\mathbf{b}(s)\) switches direction to the opposite one by the end of the annealing, as we illustrate in Fig. 1(b). According to [16], this leads to the same state \(\left|0_{x}\right\rangle\) at the end of the annealing as at the beginning, but with an acquired topological \(\pi\)-phase in all sectors with \(E_{k}\neq 0\). In contrast, for the eigenstates of \(H_{I}\) with the eigenvalue \(E_{0}=0\), the state \(\left|0_{x}\right\rangle\) remains the exact eigenstate of the time-dependent Hamiltonian \(H_{0}(s)=B(s)I_{x}\) with zero eigenvalue. Hence, during the entire protocol this state does not change and does not even acquire any dynamic or geometric phases.
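A sector-wise check of this single-sweep oracle, analogous to the sketch above, can be written directly from (21) and (22). A minimal sketch, assuming the two-branch form of the schedule written above (the annealing time and step count are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

Ix = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Iz = np.diag([1.0, 0.0, -1.0]).astype(complex)
zero_x = np.array([1, 0, -1], dtype=complex) / np.sqrt(2)   # |0_x>

def schedule(s, c=10.0):
    # Combined schedule (22): the field rotates +x -> z -> -x when E_k != 0.
    if s <= 0.5:
        A = 0.5 * (1 + np.tanh(c * (4 * s - 1)))
        B = 0.5 * (1 - np.tanh(c * (4 * s - 1)))
    else:
        A = 0.5 * (1 - np.tanh(c * (4 * s - 3)))
        B = -0.5 * (1 + np.tanh(c * (4 * s - 3)))
    return A, B

def npp2_oracle_sector(E_k, T=300.0, steps=6000):
    # Evolve sector E_k of (21): H_k(s) = A(s) E_k I_z + B(s) I_x.
    psi = zero_x.copy()
    dt = T / steps
    for j in range(steps):
        A, B = schedule((j + 0.5) / steps)
        psi = expm(-1j * (A * E_k * Iz + B * Ix) * dt) @ psi
    return complex(np.vdot(zero_x, psi))

for E_k in (0.0, 1.0, -2.0):
    print(f"E_k = {E_k:+.0f}:  <0_x|psi_final> = {npp2_oracle_sector(E_k):+.3f}")
# Expected: exactly +1 for E_k = 0 and ~ -1 for every E_k != 0
```

The \(E_{k}=0\) sector is untouched, while every \(E_{k}\neq 0\) sector acquires the \(\pi\)-phase, regardless of the sign or magnitude of \(E_{k}\).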
Summarizing, if the initial state before the annealing is (12), then after the annealing the state is \[\left|\Psi_{out}\right\rangle=\left(\sum_{k}(-1)^{\delta(k)}a_{k}|k\rangle\right)\otimes\left|0_{x}\right\rangle, \tag{23}\] where \(\delta(k)=1\) for \(E_{k}\neq 0\) and \(\delta(k)=0\) for \(E_{k}=0\).

Figure 3: Top: Annealing schedule in (22), where the constant \(c\) is fixed at 10. Bottom: Simulation of the infidelity of the adiabatic oracle as a function of the total annealing time, for NPP2 with the problem set \(\mathcal{S}=\{1,2,3\}\). Red circles are the numerical data. Black curves are the best fit to an exponential function \(\sim\exp{(ax^{b})}\) with \(b\approx 1.36\). Inset shows a zoom-in of the fast Stueckelberg oscillations of the nonadiabatic excitations on top of the overall exponential decay.

### Annealing with time-independent couplings

A caveat of the standard annealing schedule discussed above is that the time-dependent \(A(s)\) appears in front of the coupling terms of the Ising spins. Experimentally, changing the interaction strength could be hard to achieve, e.g., if the computational qubits are nuclear spins. Here, we introduce an annealing protocol with fixed coupling strengths. For NPP1, in contrast to (8), the oracle is realized with the Hamiltonian \[H_{a1}^{\prime}(t)=\left(\sum_{k=1}^{n}s_{k}\sigma_{k}^{z}\right)I_{z}-EI_{z}+g(t)I_{x}, \tag{24}\] where the time changes in the interval \(t\in(T_{\min},T_{\max})\) such that \[g(T_{\min})\gg 1,\quad g(T_{\max})\ll 1.\] An example of such a protocol is \[g(t)=e^{-t/T}, \tag{25}\] where \(T_{\min}\sim-Tn\) and \(T_{\max}\sim Tn\). Assuming no environmental decoherence, the errors for this oracle originate from two sources: (i) the finite time of the evolution, which leads to nonadiabatic transitions over the energy gap, and (ii) the finite interval of the external field values \(g(t)\), which leads to an error \(\sim\left|E_{k}/g(T_{\min})\right|\) due to misalignment of the initial field \(\mathbf{b}\) from the \(x\)-axis. Given that the nonzero eigenvalues \(E_{k}\) of \(H_{I}\) are integer numbers, the adiabatic conditions correspond to \(T\gg 1\) in order to guarantee that, in the worst case with \(\left|E_{k}-E\right|=1/2\), we avoid nonadiabatic transitions during the evolution within each \(E_{k}\) sector. In Appendix C we calculate the nonadiabatic transition probability for the protocol (25) analytically, and thus verify its exponential suppression with \(T\). For the Hamiltonian (24), the boundary-related errors are suppressed if the physical interval for \(g(t)\) is sufficiently large, so that at the beginning and the end of the evolution the deviation of the field from its intended initial (\(x\)-axis) and final (\(z\)-axis) directions is exponentially suppressed, e.g., \[g(T_{\rm min})\sim e^{\eta n},\qquad g(T_{\rm max})\sim e^{-\eta n}, \tag{26}\] where \(\eta>1/2\) is chosen to make sure that the boundary error does not accumulate substantially after \(\sqrt{N}\) calls of the oracle. This guarantees that we are able to prepare the initial state of the spin-1 in all sectors as the zero-projection eigenstate on the \(x\)-axis. Note, however, that due to the exponentially fast changes of \(g(t)\), the entire time of the field sweep depends on \(n\) only linearly, so the entire time of the annealing step still scales logarithmically with \(N\equiv 2^{n}\): \[T_{\rm max}-T_{\rm min}\sim\log^{\alpha}N,\quad\alpha=O(1).\]
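The fixed-coupling sector dynamics can be illustrated with the same tools. The sketch below (the sweep window and step count are illustrative choices, not the authors' simulation parameters) propagates one sector of (24) with \(g(t)=e^{-t/T}\) and shows the exponential suppression of the excitation probability with \(T\):

```python
import numpy as np
from scipy.linalg import expm

Ix = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Iz = np.diag([1.0, 0.0, -1.0]).astype(complex)
zero_x = np.array([1, 0, -1], dtype=complex) / np.sqrt(2)
zero_z = np.array([0, 1, 0], dtype=complex)

def fixed_coupling_sweep(bz, T, span=8.0, steps=20000):
    """Sector evolution of (24): b = (g(t), 0, bz) with g(t) = exp(-t/T),
    swept from t = -span*T (g >> 1) to t = +span*T (g << 1)."""
    psi = zero_x.copy()
    ts = np.linspace(-span * T, span * T, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        g = np.exp(-0.5 * (t0 + t1) / T)
        psi = expm(-1j * (g * Ix + bz * Iz) * (t1 - t0)) @ psi
    return psi

for T in (2.0, 4.0, 6.0):
    amp = complex(np.vdot(zero_z, fixed_coupling_sweep(bz=+0.5, T=T)))
    print(f"T = {T}:  1 - |<0_z|psi>|^2 = {1 - abs(amp)**2:.2e}")
# The excitation probability should fall off roughly like exp(-pi*|E_k - E|*T),
# in line with the Dykhne-formula analysis of Appendix C.
```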
The condition (26) suggests that, if the couplings are time-independent, we still need a large resource in the form of an exponentially large interval for the range of \(g(t)\). Experimentally, allowing no time-dependent control of the interactions may simplify the first demonstrations of our approach. However, we expect that time-dependent interactions will be required as \(n\) grows in order to reduce the range of the accessible external field. Finally, we note that the most complex instances of NPP are very rare unless the largest integer in the set \(\mathcal{S}\) grows exponentially with \(n\)[8]. In such situations, our annealing protocols still keep the annealing time logarithmic, albeit with an extra power of \(\log N\). However, the energy range for both the spin-spin interactions and the external field then has to grow exponentially with \(n\). This resource requirement, however, is inevitable if we are to encode exponentially large input values in physical parameters. A strategy to alleviate this problem can be found in Ref. [7].

## VI Discussion

The NPP is one of the most famous and practically useful computational problems. We showed that quantum mechanics allows its general, exact solution faster than the classical one, essentially without an exponential overhead due to the precision of the control. The computation time \(T_{\rm comp}\sim 2^{n/2}\) still scales exponentially with the number of integers that should be partitioned. However, the speedup is exponential in the sense that the computation time is reduced by an exponentially large factor in comparison to the classical \(T_{\rm comp}\sim 2^{n}\). For modern classical computers, the exact solution of NPP should become generally impossible for \(n\sim 60\), which corresponds to the order of \(2^{60/2}\sim 10^{9}\) calls of the oracle in the Grover algorithm. Thus, we estimate that quantum supremacy for this problem can be achieved if the quantum annealing model with the central spin interactions is implemented for \(n\approx 60\) qubits, with a \(\sim 10^{-9}\) error rate per annealing step. For qubits with a quantum lifetime of order 1 second, these steps should take no more than 1 ns. Altogether, this is still beyond the abilities of modern quantum technology, but the numbers are not too far from what is possible. For example, similar estimates show that our approach is within modern experimental reach for \(n\approx 40\). For such \(n\), we need \(\sim 10^{6}\) oracle calls, with a fidelity that was demonstrated in some systems [17]. Finally, we comment on a recent work [21], claiming that the Grover algorithm provides no quantum advantage. The criticism in [21] was based on the assumption that the Grover's oracle is constructed as a separate quantum circuit. The authors in [21] argued that for the cases when this circuit can be simulated classically, the problem is also solvable quickly by a classical computer. Hence, for a classically complex problem, the Grover's oracle has to be hard to implement as a quantum circuit. Our work does not contradict [21]. Namely, we do not know a short circuit that would simulate our quantum annealing step on a gate-based quantum computer with the desired accuracy.
The only possibility that we see generally is to use the Suzuki-Trotter decomposition, which requires \(\sim\sqrt{N}=2^{n/2}\) quantum gates in order to simulate our annealing step with accuracy \(O(1/\sqrt{N})\), which would be needed to suppress the discretization errors throughout all \(\sim\sqrt{N}\) steps of the Grover algorithm. Hence, our approach does not provide an advantage if it is implemented as a fully gate-based quantum circuit. We showed, however, that this problem can be avoided with a physical quantum annealing evolution, which can be performed in a time that scales with \(N\) only logarithmically and employs only simple interactions between qubits. Thus, we resolved the question in favor of quantum computers.

###### Acknowledgements.

This work was supported in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, through the Quantum Internet to Accelerate Scientific Discovery Program, and in part by the U.S. Department of Energy under the LDRD program at Los Alamos.

## Appendix A Amplitude amplification

Given a quantum state in an equal superposition of \(N\) basis states, i.e., \[|s_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}|i\rangle, \tag{27}\] the Grover algorithm finds the target state \(|\omega\rangle\) in \(\sim\sqrt{N}\) steps. The basic ingredient of the Grover algorithm is the oracle operation: \[\hat{O}\equiv I-2|\omega\rangle\langle\omega|, \tag{28}\] which flips the sign of the target state and keeps the other basis states unchanged. Each oracle call is also supplemented by a diffusion operator, defined as \[\hat{D}\equiv I-2|s_{0}\rangle\langle s_{0}|. \tag{29}\] This operation flips the sign of \(|s_{0}\rangle\) and keeps the component orthogonal to \(|s_{0}\rangle\) unchanged. For large \(N\), after \(\sim\sqrt{N}\) calls of the oracle (followed by the diffusion operation after each oracle call), the state ends up in the target state \(|\omega\rangle\) with nearly unit probability. The Grover algorithm can be generalized to amplify the amplitudes of more than one target state, as described in the following. For an arbitrary state \[|s\rangle=\sum_{i=1}^{N}c_{i}|i\rangle, \tag{30}\] the task is to amplify the amplitudes of all the basis states within a given subspace. Let \(\hat{P}\) be the projector onto the target subspace, and \(a\) be the "weight" of the initial state \(|s\rangle\) in the target subspace, i.e., \[a\equiv\langle s|\hat{P}|s\rangle.\] Similarly to the original Grover algorithm, amplitude amplification implements the oracle and diffusion operators defined as \[\begin{split}\hat{O}&\equiv I-2\hat{P},\\ \hat{D}&\equiv I-2|s\rangle\langle s|.\end{split} \tag{31}\] For large \(N\), after \(\sim 1/\sqrt{a}\) calls of the oracle and diffusion, the initial state \(|s\rangle\) is projected onto the target subspace with nearly unit probability. This approach, however, requires that the weight \(a\) of the initial state in the target subspace is determined. In case \(a\) is not known _a priori_, one can employ the amplitude estimation algorithm [14] first, and then apply the procedure described above. If the task is not to find the projection of the initial state onto the target subspace, but to find a single basis state within the target subspace, as needed in the NPP1 protocol developed in the main text, the amplitude amplification algorithm can achieve this directly. That is, within an expected \(\sim 1/\sqrt{a}\) number of steps, one finds a single basis state within the desired subspace.
The basic procedure of the algorithm is the following. With a fixed constant \(1<c<2\), start with \(l=0\) and compute \(M=\lceil c^{l}\rceil\). Apply the oracle for a number of steps picked uniformly from \([1,M]\), and then measure the system. If a state within the target subspace is found, the algorithm terminates; otherwise, increase \(l\) by 1 and repeat the above. A proof of the algorithm can be found in [14].

## Appendix B Robbins-Berry phase for spin 1

To derive the Robbins-Berry phase [16], we consider a unit spin, \(I=1\), in an external field \(\mathbf{b}(t)\) that changes with time adiabatically, such that the initial and final field directions do not coincide but rather differ by a sign: \(\mathbf{b}(t_{\text{in}})=-\mathbf{b}(t_{\text{fin}})=b\hat{z}\). Here, without loss of generality, we assume that the initial field direction is along the \(z\)-axis. Let the initial spin state \(|0_{z}\rangle\) correspond to the zero spin projection on this axis. Assume that during the adiabatic evolution the magnetic field is always nonzero and the Hamiltonian is \[H(t)=\mathbf{b}(t)\cdot\hat{\mathbf{I}}. \tag{32}\] Let \(\mathbf{b}=(b,\theta,\varphi)\) be the parametrization of the field vector by the time-dependent components in spherical coordinates, and let \[R_{x}\left(\theta\right)=e^{-i\hat{I}_{x}\theta},\quad R_{z}\left(\varphi\right)=e^{-i\hat{I}_{z}\varphi} \tag{33}\] be the spin rotation operators. The instantaneous eigenstates of the Hamiltonian (32) are the spin projection states on the instantaneous field direction. For the zero spin projection on the field axis, this state is \[|0_{\mathbf{b}(t)}\rangle=R_{z}(\varphi)R_{x}(\theta)R_{z}^{-1}(\varphi)\,|0_{z}\rangle\,. \tag{34}\] The eigenvalues of \(H\) are \(-|b(t)|\), 0, and \(|b(t)|\), which are always separated by a finite gap from each other because \(\mathbf{b}(t)\) is nonzero. According to the adiabatic theorem, the solution of the time-dependent Schrodinger equation in the adiabatic limit should coincide with \(|0_{\mathbf{b}(t)}\rangle\) up to a phase factor \(\exp\{i(\phi_{d}+\phi_{\text{geom}})\}\), where \[\phi_{d}=-\int_{T_{min}}^{t}d\tau\,\langle 0_{\mathbf{b}(\tau)}|H|0_{\mathbf{b}(\tau)}\rangle,\] \[\phi_{\text{geom}}(C)=\int_{C}\mathbf{A}(\mathbf{b})\cdot\,d\mathbf{b}.\] Here, \(C\) is the magnetic field trajectory, and \[\mathbf{A}(\mathbf{b})\equiv i\langle 0_{\mathbf{b}}|\frac{\partial}{\partial\mathbf{b}}|0_{\mathbf{b}}\rangle\] is the standard Berry connection along this path. The state \(|0_{\mathbf{b}(t)}\rangle\) corresponds to the 0-eigenvalue of \(H\), so the dynamic phase is identically zero: \(\phi_{d}=0\). Explicit calculation of the Berry connection shows that all its components, \(\mathbf{A}=(A_{b},A_{\theta},A_{\varphi})\), are identically zero, which means that the geometric phase correction to \(|0_{\mathbf{b}(t)}\rangle\) is also identically zero. Thus, \(|0_{\mathbf{b}(t)}\rangle\) is the solution of the time-dependent Schrodinger equation with the Hamiltonian \(H(t)\) in the adiabatic limit. At the end of the evolution, \(\mathbf{b}_{\text{fin}}\) points opposite to the \(z\)-axis. Hence, the final state \(|0_{fin}\rangle\) coincides with the initial state \(|0_{z}\rangle\) up to an unknown phase factor that we now determine. The final state in Eq. (34) corresponds to \(\theta=\pi\). Note also that \(R_{z}^{-1}(\varphi)|0_{z}\rangle=|0_{z}\rangle\).
Hence, the final state of the spin is given by \[|0_{\text{fin}}\rangle=e^{i\pi\hat{I}_{x}}|0_{z}\rangle.\] The phase difference between the initial and the final states is \[e^{i\phi}=\langle 0_{z}|0_{\text{fin}}\rangle=\langle 0_{z}|\sum_{k=0}^{\infty}\frac{(i\pi\hat{I}_{x})^{k}}{k!}|0_{z}\rangle, \tag{35}\] which can be calculated by recalling the matrix form \[\hat{I}_{x}=\left(\begin{array}{ccc}0&1/\sqrt{2}&0\\ 1/\sqrt{2}&0&1/\sqrt{2}\\ 0&1/\sqrt{2}&0\end{array}\right).\] All odd powers of \(\hat{I}_{x}\) have zero expectation values over the state \(|0_{z}\rangle\), whereas \(\langle 0_{z}|\hat{I}_{x}^{2}|0_{z}\rangle=1\) and \(\hat{I}_{x}^{4}=\hat{I}_{x}^{2}\). Then, the series in (35) can be summed as \[e^{i\phi}=\sum_{k=0}^{\infty}\frac{(i\pi)^{2k}}{(2k)!}=\cos(\pi)=-1.\] Thus, the phase accumulated by the end of the field sweep to the opposite direction is \(\phi=\pi\). This is the Robbins-Berry phase, which does not depend on the path of the field \(\mathbf{b}(t)\) between its boundary values.

## Appendix C Nonadiabatic transitions for spin-1 in a time-dependent field

The theory of nonadiabatic transitions for spin-1/2 in a time-dependent magnetic field is well established. Its generalization to problems with more than two interacting states remains obscure, with some exceptions. Thus, in 1932, Majorana showed that any result for a spin-1/2 in a time-dependent magnetic field can be generalized to a spin of arbitrary size [22]. Here, we review this generalization with application to our annealing problem for spin-1. Consider again the Hamiltonian of a spin-1 in a time-dependent magnetic field, \[H=\mathbf{b}(t)\cdot\hat{\mathbf{I}}, \tag{36}\] and associate with it the Hamiltonians, \(h_{1}\) and \(h_{2}\), of two independent spins-1/2 that are placed in the same time-dependent field as in (36), i.e., \[h_{1}=h_{2}=\frac{1}{2}\mathbf{b}(t)\cdot\hat{\mathbf{\sigma}}. \tag{37}\] Note that \(h_{1}\) and \(h_{2}\) act in different spin spaces. Hence, both spins are described simultaneously by the combined Hamiltonian \[H^{\prime}=h_{1}\otimes 1_{2}+1_{1}\otimes h_{2}, \tag{38}\] where \(1_{1,2}\) are unit 2\(\times\)2 matrices acting in, respectively, the first and the second spin sectors. The Hamiltonian (38) acts in the space with four basis vectors: \[|1\rangle\equiv|\uparrow\uparrow\rangle,\quad|-1\rangle\equiv|\downarrow\downarrow\rangle,\quad|0\rangle\equiv\frac{1}{\sqrt{2}}\left(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle\right), \tag{39}\] \[|-\rangle\equiv\frac{1}{\sqrt{2}}\left(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle\right), \tag{40}\] where we use the short notation \(|\uparrow\uparrow\rangle\equiv|\uparrow\rangle\otimes|\uparrow\rangle\), etc. Since \(|-\rangle\) is an eigenstate of \(H^{\prime}\) at all times, it decouples from the triplet (39). Moreover, within the triplet (39), \(H^{\prime}\) has the matrix form (36). Indeed, it is easy to check, e.g., that \(\langle 1|H^{\prime}|1\rangle=-\langle-1|H^{\prime}|-1\rangle=b_{z}\), \(\langle 1|H^{\prime}|0\rangle=b_{x}/\sqrt{2}\), etc. Since the spins-1/2 experience the same time-dependent field, their evolution over the time interval \(t\in(T_{min},T_{max})\) is described by the same evolution matrix, \[U_{1}=U_{2}=\left(\begin{array}{cc}a&b\\ -b^{*}&a^{*}\end{array}\right), \tag{41}\] with complex amplitudes \(a\) and \(b\). The evolution matrix for the Hamiltonian \(H^{\prime}\) factorizes as the direct product:
\[U^{\prime}=U_{1}\otimes U_{2}. \tag{42}\] For example, if the initial state, at \(t=T_{min}\), is \(|1\rangle=|\uparrow\uparrow\rangle\), then the amplitude of the state \(|1\rangle\) at time \(T_{max}\) is \[\langle 1|U^{\prime}|1\rangle=a^{2}.\] Similarly, \(\langle 0|U^{\prime}|1\rangle=-\sqrt{2}ab^{*}\), whereas \(\langle 1|U^{\prime}|-\rangle=0\), etc. Summarizing, if we know the evolution operator (41) for spin-1/2 in a time-dependent magnetic field, then we can also write the evolution matrix for the Hamiltonian that describes spin-1 in the same field: \[U=\left(\begin{array}{ccc}a^{2}&\sqrt{2}ab&b^{2}\\ -\sqrt{2}ab^{*}&|a|^{2}-|b|^{2}&\sqrt{2}a^{*}b\\ (b^{*})^{2}&-\sqrt{2}a^{*}b^{*}&(a^{*})^{2}\end{array}\right). \tag{43}\] The central element, \(U_{00}=|a|^{2}-|b|^{2}=2|a|^{2}-1\), of this matrix is the amplitude to stay in the zero-projection state after the evolution. Note that this element is purely real. For spin-1/2, the adiabatic evolution that flips the spin to the opposite direction corresponds to \(|a|=0\) and \(|b|=1\), which leads to \(U_{00}=-1\), in agreement with the Robbins-Berry phase \(\pi\) in Appendix B. Our result is more general: even in the case of small nonadiabatic transitions, the element \(U_{00}\) remains real, and thus this \(\pi\)-phase is protected by the symmetry. For a quasi-adiabatic sweep of one magnetic field component from large negative to large positive values through an avoided crossing point, the probability of the nonadiabatic transition for spin-1/2 is generally given by the Dykhne formula [23]: \[|a|^{2}=c\,e^{-2\text{Im}\left[\int_{0}^{t_{0}}d\tau\sqrt{b_{z}^{2}+b_{x}(\tau)^{2}}\right]}, \tag{44}\] where \(t_{0}\) is the complex-valued time point that corresponds to the closing of the gap in the spectrum: \(\mathbf{b}(t_{0})=0\). If there are many such points, we should choose the one that minimizes the integral in (44). Generally \(c=1\), with exceptions in cases of rare symmetries. The Dykhne formula predicts an exponentially suppressed probability of a nonadiabatic transition, \(|a|^{2}\sim e^{-\eta\Delta T}\), where \(\Delta\) is the minimal gap during the evolution and \(T\) is a characteristic time of the transition through the avoided crossing; \(\eta\) is a model-specific coefficient of order 1. For our spin-1 models, the probability to make a nonadiabatic transition to the states with nonzero spin polarization on the final field axis is given by \[P_{ex}=1-|U_{00}|^{2}\approx 4|a|^{2}. \tag{45}\] For the model (24) with exponential coupling decay (25), the invariant sectors have the field components \(b_{z}=E_{k}-E\) and \(b_{x}(t)=e^{-t/T}\). The Dykhne formula then predicts \(|a|^{2}\approx e^{-\pi|E_{k}-E|T}\), and for spin-1 we find \[P_{ex}\approx 4e^{-\pi|E_{k}-E|T}.\]
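Both the Majorana construction (43) and the exponential estimate above can be verified directly. A minimal sketch with illustrative sweep parameters (not the authors' simulation code):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
Ix = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Iz = np.diag([1.0, 0.0, -1.0]).astype(complex)

bz, T = 0.5, 3.0                      # b_z = E_k - E and sweep time of (25)
ts = np.linspace(-8 * T, 8 * T, 40001)
U_half = np.eye(2, dtype=complex)     # propagator of h = (1/2) b(t).sigma
U_one = np.eye(3, dtype=complex)      # direct propagator of H = b(t).I
for t0, t1 in zip(ts[:-1], ts[1:]):
    bx, dt = np.exp(-0.5 * (t0 + t1) / T), t1 - t0
    U_half = expm(-0.5j * (bx * sx + bz * sz) * dt) @ U_half
    U_one = expm(-1j * (bx * Ix + bz * Iz) * dt) @ U_one

a, b = U_half[0, 0], U_half[0, 1]     # the amplitudes of (41)
cj = np.conj
U_majorana = np.array([
    [a * a,                    np.sqrt(2) * a * b,      b * b],
    [-np.sqrt(2) * a * cj(b),  abs(a)**2 - abs(b)**2,   np.sqrt(2) * cj(a) * b],
    [cj(b)**2,                 -np.sqrt(2) * cj(a) * cj(b), cj(a)**2],
])
print("Majorana form (43) reproduces the spin-1 propagator:",
      np.allclose(U_majorana, U_one))

zero_x = np.array([1, 0, -1], dtype=complex) / np.sqrt(2)
zero_z = np.array([0, 1, 0], dtype=complex)
P_ex = 1 - abs(np.vdot(zero_z, U_one @ zero_x))**2
print(f"P_ex = {P_ex:.3e},  estimate 4 exp(-pi*bz*T) = {4*np.exp(-np.pi*bz*T):.3e}")
```

The first check is exact (the construction is a group homomorphism, so it holds step by step), while the second reproduces the exponential scale of (45).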
2307.09489
J. B. S. Haldane's Rule of Succession
After Bayes, the oldest Bayesian account of enumerative induction is given by Laplace's so-called rule of succession: if all $n$ observed instances of a phenomenon to date exhibit a given character, the probability that the next instance of that phenomenon will also exhibit the character is $\frac{n+1}{n+2}$. Laplace's rule however has the apparently counterintuitive mathematical consequence that the corresponding "universal generalization" (every future observation of this type will also exhibit that character) has zero probability. In 1932, the British scientist J. B. S. Haldane proposed an alternative rule giving a universal generalization the positive probability $\frac{n+1}{n+2} \times \frac{n+3}{n+2}$. A year later Harold Jeffreys proposed essentially the same rule in the case of a finite population. A related variant rule results in a predictive probability of $\frac{n+1}{n+2} \times \frac{n+4}{n+3}$. These arguably elegant adjustments of the original Laplacean form have the advantage that they give predictions better aligned with intuition and common sense. In this paper we discuss J. B. S. Haldane's rule and its variants, placing them in their historical context, and relating them to subsequent philosophical discussions.
Eric-Jan Wagenmakers, Sandy Zabell, Quentin F. Gronau
2023-07-18T08:22:17Z
http://arxiv.org/abs/2307.09489v1
# J. B. S. Haldane's Rule of Succession ###### Abstract After Bayes, the oldest Bayesian account of enumerative induction is given by Laplace's so-called _rule of succession_: if all \(n\) observed instances of a phenomenon to date exhibit a given character, the probability that the next instance of that phenomenon will also exhibit the character is \(\frac{n+1}{n+2}\). Laplace's rule however has the apparently counterintuitive mathematical consequence that the corresponding "universal generalization" (every future observation of this type will also exhibit that character) has zero probability. In 1932, the British scientist J. B. S. Haldane proposed an alternative rule giving a universal generalization the positive probability \(\frac{n+1}{n+2}\times\frac{n+3}{n+2}\). A year later Harold Jeffreys proposed essentially the same rule in the case of a finite population. A related variant rule results in a predictive probability of \(\frac{n+1}{n+2}\times\frac{n+4}{n+3}\). These arguably elegant adjustments of the original Laplacean form have the advantage that they give predictions better aligned with intuition and common sense. In this paper we discuss J. B. S. Haldane's rule and its variants, placing them in their historical context, and relating them to subsequent philosophical discussions. The problem of enumerative induction has fascinated philosophers from antiquity to modern times. One touchstone is how such studies deal with a general law or so-called _universal generalization_ (UG), in which all instances or observations of a phenomenon share a particular characteristic. For concreteness, consider the Goldbach conjecture, which states that every even integer greater than 2 can be expressed as the sum of two primes (for example, \(8=5+3\); \(50=47+3=37+13=31+19\)). The Goldbach conjecture has not yet been proven, but it has been verified for all consecutive even numbers up to \(4\cdot 10^{18}\), which means that we know of \(2\cdot 10^{18}-1\) confirming instances and zero disconfirming instances (Oliveira e Silva, Herzog, & Pardi, 2014). Each confirming instance presumably makes the
2310.14771
Evaluating the Knowledge Base Completion Potential of GPT
Structured knowledge bases (KBs) are an asset for search engines and other applications, but are inevitably incomplete. Language models (LMs) have been proposed for unsupervised knowledge base completion (KBC), yet their ability to do this at scale and with high accuracy remains an open question. Prior experimental studies mostly fall short because they only evaluate on popular subjects, or sample already existing facts from KBs. In this work, we perform a careful evaluation of GPT's potential to complete the largest public KB: Wikidata. We find that, despite their size and capabilities, models like GPT-3, ChatGPT and GPT-4 do not achieve fully convincing results on this task. Nonetheless, they provide solid improvements over earlier approaches with smaller LMs. In particular, we show that, with proper thresholding, GPT-3 enables extending Wikidata by 27M facts at 90% precision.
Blerta Veseli, Simon Razniewski, Jan-Christoph Kalo, Gerhard Weikum
2023-10-23T10:15:13Z
http://arxiv.org/abs/2310.14771v1
# Evaluating the Knowledge Base Completion Potential of GPT ###### Abstract Structured knowledge bases (KBs) are an asset for search engines and other applications, but are inevitably incomplete. Language models (LMs) have been proposed for unsupervised knowledge base completion (KBC), yet their ability to do this at scale and with high accuracy remains an open question. Prior experimental studies mostly fall short because they only evaluate on popular subjects, or sample already existing facts from KBs. In this work, we perform a careful evaluation of GPT's potential to _complete_ the largest public KB: Wikidata. We find that, despite their size and capabilities, models like GPT-3, ChatGPT and GPT-4 do not achieve fully convincing results on this task. Nonetheless, they provide solid improvements over earlier approaches with smaller LMs. In particular, we show that, with proper thresholding, GPT-3 enables extending Wikidata by 27M facts at 90% precision.

## 1 Introduction

Structured knowledge bases (KBs) like Wikidata (Vrandecic and Krotzsch, 2014), DBpedia (Auer et al., 2007), and Yago (Suchanek et al., 2007) are employed in many knowledge-centric applications like search, question answering and dialogue. Constructing and completing these KBs at high quality and scale is a long-standing research challenge, and multiple benchmarks exist, e.g., FB15k (Bordes et al., 2013), CoDEx (Safavi and Koutra, 2020), and LM-KBC22 (Singhania et al., 2022). Text extraction, knowledge graph embeddings, and LM-based knowledge extraction have continuously moved scores upwards on these tasks, and leaderboard portals like Paperswithcode1 provide evidence for that. Footnote 1: [https://paperswithcode.com/task/knowledge-graph-completion](https://paperswithcode.com/task/knowledge-graph-completion) Recently, LMs have been purported as a promising source of structured knowledge. Starting from the seminal LAMA paper (Petroni et al., 2019), a number of works have explored how to better probe, train, or fine-tune these LMs (Liu et al., 2022). Nonetheless, we observe a certain divide between these late-breaking investigations and _practical KB completion_. While recent LM-based approaches often focus on simple methodologies that produce fast results, practical KBC so far is a highly precision-oriented, extremely laborious process, involving a very high degree of manual labour, either for manually creating statements (Vrandecic and Krotzsch, 2014), or for building comprehensive scraping, cleaning, validation, and normalization pipelines (Auer et al., 2007; Suchanek et al., 2007). For example, part of Yago's success stems from its validated \(>\)95% accuracy, and according to (Weikum et al., 2021), the Google Knowledge Vault was not deployed into production partly because it did not achieve 99% accuracy. Yet, many previous LM analyses balance precision and recall or report precision/hits@k values, implicitly tuning systems towards balanced recall scores resulting in impractical precision. It is also important to keep in mind the scale of KBs: Wikidata currently contains around 100 million entities and 1.2B statements. The cost of producing such KBs is massive. An estimate from 2018 sets the cost per statement at $2 for a manually curated statement, and 1 ct for automatically extracted ones (Paulheim, 2018). Thus, even small additions in relative terms might correspond to massive gains in absolute numbers.
For example, even by the lower estimate of 1 ct/statement, adding one statement to just 1% of Wikidata humans would come at a cost of 100,000 $. In this paper, _we conduct a systematic analysis of the KB completion potential of GPT_, where we focus on _high precision_. We evaluate by employing (i) a recent KB completion benchmark, WD-Known, (Veseli et al., 2023), which randomly samples facts from Wikidata, and (ii) a manual evaluation of subject-relation pairs without object values. Our main results are: 1. For the long-tail entities of WD-Known, GPT models perform considerably worse than what less demanding benchmarks like LAMA Petroni et al. (2019) have indicated. Nonetheless, we can achieve solid results for language-related, socio-demographic relations (e.g., _nativeLanguage_). 2. Despite their fame and size, out of the box, the GPT models, including GPT-4, do not produce statements of a high enough accuracy as typically required for KB completion. 3. With simple thresholding, for the first time, we obtain a method that can extend the Wikidata KB at extremely high quality (\(>\)90% precision), at the scale of millions of statements. Based on our analysis of 41 common relations, we would be able to add a total of 27M high-accuracy statements.

## 2 Background and Related Work

#### KB construction

KB construction has a considerable history. One prominent approach is by human curation, as done e.g., in the seminal Cyc project Lenat (1995), and this is also the backbone of today's most prominent public KB, Wikidata Vrandecic and Krotzsch (2014). Another popular paradigm is the extraction from semi-structured resources, as pursued in Yago and DBpedia Suchanek et al. (2007); Auer et al. (2007). Extraction from free text has also been explored (e.g., NELL Carlson et al. (2010)). A popular paradigm has been embedding-based link prediction, e.g., via tensor factorization like Rescal Nickel et al. (2011), and KG embeddings like TransE Bordes et al. (2013). An inherent design decision in KBC is the P/R trade-off: academic projects are often open to trade these freely (e.g., via F-1 scores), yet production environments are often critically concerned with precision, with, e.g., Wikidata generally discouraging statistical inferences, and industrial players likely relying to a considerable degree on human editing and verification Weikum et al. (2021). For example, in all of Rescal, TransE, and LAMA, the main results focus on metrics like hits@k, MRR, or AUC, which provide no bounds on precision.

#### LMs for KB construction

Knowledge extraction from LMs provides fresh hope for the synergy of automated approaches and high-precision curated KBs. It provides remarkably straightforward access to very large text corpora: the basic idea by Petroni et al. (2019) is to just define one template per relation, then query the LM with subject-instantiated versions, and retain its top prediction(s). A range of follow-up works appeared, focusing, e.g., on investigating entities, improving updates, exploring storage limits, incorporating unique entity identifiers, and others Shin et al. (2020); Poerner et al. (2020); Cao et al. (2021); Roberts et al. (2020); Heinzerling and Inui (2021); Petroni et al. (2020); Elazar et al. (2021); Razinewski et al. (2021); Cohen et al. (2023); Sun et al. (2023). Nonetheless, we observe the same gaps as above: the high-precision regime and the completion of already existing resources are not well investigated. Several works have analyzed the potential of larger LMs, specifically GPT-3 and GPT-4.
They investigate few-shot prompting for extracting factual knowledge for KBC Alivanistos et al. (2023) or for making the factual knowledge in an LM more explicit Cohen et al. (2023). These models can aid in building a knowledge base like Wikidata or in improving the interpretability of LMs. Despite the variance in the precision of extracted facts from GPT-3, it can peak at over 90% for some relations. Recently, GPT-4's capabilities for KBC and reasoning were examined Zhu et al. (2023). This research compared GPT-3, ChatGPT, and GPT-4 on information extraction tasks, KBC, and KG-based question answering. However, these studies focus on popular statements from existing KBs, neglecting the challenge of introducing genuinely new knowledge in the long tail. In Veseli et al. (2023), we analyzed to which degree BERT can complete the Wikidata KB, i.e., provide novel statements. Together with the focus on high precision, this is also the main difference of the present work to the works cited above, which evaluate on knowledge already existing in the KB, and do not estimate how much they could add.

## 3 Analysis Method

#### Dataset

We consider the 41 relations from the LAMA paper Petroni et al. (2019). For automated evaluation and threshold finding, we employ the WD-Known dataset Veseli et al. (2023). Unlike other KBC datasets, this one contains truly long-tail entities, obtained by randomly sampling from Wikidata a total of 4 million statements for 3 million subjects in 41 relations (Petroni et al., 2019). Besides this dataset for automated evaluation, for the main results, we use manual evaluation on Wikidata entities that _do not yet have the relations of interest_. For this purpose, for each relation, we manually define a set of relevant subject types (e.g., \(\mathsf{software}\) for \(\mathsf{developedBy}\)), which allows us to query for subjects that lack a property.

#### Evaluation protocol

In the automated evaluation, we first use a _retain-all_ setting, where we evaluate the most prominent GPT models (GPT-3 text-davinci-003, GPT-4, and ChatGPT gpt-3.5-turbo) by precision, recall, and F1. Table 1 shows that none of the GPT models could achieve precision \(>\)90%. In a second step, the _precision-thresholding_ setting, we therefore sort predictions by confidence and evaluate by recall at precision 95% and 90% (R@P95 and R@P90). To do so, we sort the predictions for all subjects in a relation by the model's probability on the first generated token2, then compute the precision at each point of this list, and return the maximal fraction of the list covered while maintaining precision greater than the desired value. We threshold only GPT-3, because only GPT-3's token probabilities are directly accessible in the API, and because the chat-aligned models do not outperform it in the retain-all setting. Approaches to estimate probabilities post-hoc can be found in (Xiong et al., 2023). Footnote 2: This is a heuristic only, as unbiased probabilities are not easy to assign to multi-token generations in list answers (Singhania et al., 2023). Since automated evaluations are only possible for statements already in the KB, in a second step, we let human annotators evaluate the correctness of 800 samples of _novel_ (out-of-KB) high-accuracy predictions. We hereby use a relation-specific threshold determined from the automated 75%-95% precision range. MTurk annotators could use Web search to verify the correctness of our predictions on a 5-point Likert scale (correct/likely/unknown/implausible/false).
We counted predictions that were rated as correct or likely as true predictions, and all others as false.

#### Prompting setup

To query the GPT models, we utilize instruction-free prompts listed in the appendix. Specifically for GPT-3, we follow the prompt setup of (Cohen et al., 2023), which is based on an instruction-free prompt entirely consisting of 8 randomly sampled and manually checked examples. In the default setting, all example subjects have at least one object. Since none of the GPT models achieved precision \(>\)90% and we can only threshold GPT-3 for high precision, we focus on the largest GPT-3 model (text-davinci-003) in the following. We experimented with three variations for prompting this model:

1. **Examples w/o answer**: Following (Cohen et al., 2023), in this variant we manually selected 50% of the few-shot examples such that GPT-3 did not know the correct answer, to teach the model to output "Don't know". This is supposed to make the model more conservative in cases of uncertainty.
2. **Textual context augmentation**: Following (Petroni et al., 2020), we test whether adding textual context improves model performance. We hereby employ Google Web Search, with the subject and relation of interest as search query. The top-1 result snippet is then included as context to the prompt.
3. **#few-shot examples**: A standard parameter in prompting is the number of few-shot examples. They have a huge impact on monetary costs. We vary this number between 1 and 12.

\begin{table} \begin{tabular}{l|ccc|ccc|ccc} \hline & \multicolumn{3}{c|}{GPT-4} & \multicolumn{3}{c|}{GPT-3} & \multicolumn{3}{c}{ChatGPT} \\ & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{text-davinci-003} & \multicolumn{3}{c}{gpt-3.5-turbo} \\ Relation & \(P\) & \(R\) & \(F1\) & \(P\) & \(R\) & \(F1\) & \(P\) & \(R\) & \(F1\) \\ \hline writtenIn & 0.62 & 0.59 & 0.6 & 0.91 & 0.78 & 0.84 & 0.37 & 0.85 & 0.52 \\ ownedBy & 0.41 & 0.3 & 0.35 & 0.6 & 0.44 & 0.51 & 0.17 & 0.61 & 0.27 \\ marketLanguageOfVirl & 0.85 & 0.85 & 0.69 & 0.84 & 0.77 & 0.53 & 0.88 & 0.65 & 0.55 \\ LanguageOfFilm & 0.77 & 0.86 & 0.7 & 0.58 & 0.52 & 0.55 & 0.48 & 0.65 & 0.62 \\ hasCapital & 0.77 & 0.48 & 0.59 & 0.77 & 0.44 & 0.56 & 0.49 & 0.48 & 0.62 \\ officialLanguage & 0.62 & 0.61 & 0.61 & 0.67 & 0.64 & 0.65 & 0.27 & 0.73 & 0.39 \\ foundedIn & 0.27 & 0.53 & 0.36 & 0.4 & 0.38 & 0.39 & 0.14 & 0.58 & 0.23 \\ playsInstrument & 0.25 & 0.36 & 0.33 & 0.16 & 0.18 & 0.17 & 0.13 & 0.65 & 0.21 \\ partOf & 0.05 & 0.10 & 0.16 & 0.17 & 0.13 & 0.10 & 0.36 & 0.36 & 0.21 \\ citizenOf & 0.72 & 0.62 & 0.67 & 0.67 & 0.6 & 0.63 & 0.47 & 0.68 & 0.56 \\ spokenLanguage & 0.48 & 0.62 & 0.54 & 0.54 & 0.76 & 0.63 & 0.37 & 0.54 & 0.51 \\ playerPosition & 0.4 & 0.4 & 0.4 & 0.18 & 0.24 & 0.21 & 0.23 & 0.77 & 0.35 \\ inContinent & 0.62 & 0.56 & 0.59 & 0.61 & 0.6 & 0.6 & 0.46 & 0.65 & 0.5 \\ namedAfter & 0.5 & 0.49 & 0.49 & 0.53 & 0.54 & 0.48 & 0.12 & 0.36 & 0.18 \\ hostCountry & 0.77 & 0.5 & 0.61 & 0.75 & 0.48 & 0.59 & 0.54 & 0.55 & 0.46 \\ musicLabel & 0.29 & 0.31 & 0.3 & 0.16 & 0.18 & 0.17 & 0.08 & 0.48 & 0.14 \\ hasReligion & 0.44 & 0.36 & 0.4 & 0.47 & 0.38 & 0.42 & 0.16 & 0.39 & 0.23 \\ developedBy & 0.43 & 0.7 & 0.53 & 0.45 & 0.5 & 0.47 & 0.11 & 0.6 & 0.19 \\ countryOfJurisdiction & 0.38 & 0.24 & 0.29 & 0.52 & 0.22 & 0.42 & 0.42 & 0.33 & 0.37 \\ scholarship & 0.21 & 0.4 & 0.28 & 0.73 & 0.85 & 0.79 & 0.16 & 0.62 & 0.25 \\ diplomaticRelation & 0.36 & 0.62 & 0.46 & 0.7 & 0.17 & 0.27 & 0.41 & 0.32 & 0.36 \\ CountryOfOrigin & 0.6 & 0.34 & 0.43 & 0.48 & 0.31 & 0.38 & 0.2 & 0.39 & 0.26 \\
\hline Macro-Average & 0.49 & 0.48 & 0.47 & 0.53 & 0.46 & 0.48 & 0.28 & 0.59 & 0.36 \\ \hline \hline \end{tabular} \end{table} Table 1: Automated evaluation in the _retain-all setting_: GPT-3 (text-davinci-003 with 175B parameters), GPT-4 (#parameters unknown) and ChatGPT (gpt-3.5-turbo with #parameters unknown) on 1000 samples/relation from WD-Known.

## 4 Results and Discussion

#### Can GPT models complete Wikidata at precision AND scale?

In Table 1 we already showed that, without thresholding, none of the GPT models can achieve sufficient precision. Table 2 shows our main results when using precision-oriented thresholding, on the 16 best-performing relations. The fourth column shows the percentage of subjects for which we obtained high-confidence predictions, the fifth shows how this translates into absolute statement numbers, and the sixth shows the percentages that were manually verified as correct (sampled). In the last column, we show how this number relates to the current size of the relation. We find that manual precision surpasses 90% for 5 relations, and 80% for 11. Notably, the best-performing relations are mostly related to socio-demographic properties (languages, citizenship). In absolute terms, we find a massive number of high-accuracy statements that could be added to the _writtenIn_ relation (18M), followed by _spokenLanguage_ and _nativeLanguage_ (4M each). In relative terms, the additions could increase the existing relations by up to 1200%, though there is a surprising divergence (4 relations over 100%, 11 relations below 20%).

#### Does GPT provide a quantum leap?

Generating millions of novel high-precision facts is a significant achievement, though the manually verified precision is still below what industrial KBs aim for. The wide variance in relative gains also shows that GPT only shines in selected areas. In line with previous results Veseli et al. (2023), we find that GPT can do well on relations that exhibit high surface correlations (person names often give away their nationality); otherwise, the task remains hard. In Table 3 we report the automated evaluation of precision-oriented thresholding. We find that on many relations, GPT-3 can reproduce existing statements at over 95% precision, and there are significant gains over the smaller BERT-large model. At the same time, it should be noted that Sun et al. (2023) observed that for large enough models, parameter scaling does not improve performance further, so it is quite possible that these scores represent a ceiling w.r.t. model size.

#### Is this cost-effective?

Previous works have estimated the cost of KB statement construction at 1 ct. (highly automated infobox scraping) to $2 (manual curation) Paulheim (2018). Based on our prompt size (avg. 174 tokens), the cost of one query is about 0.35 ct., with filtering increasing the cost per retained statement to about 0.7 ct. So LM prompting is monetarily competitive with previous infobox scraping works, though with a much higher recall potential. In absolute terms, prompting GPT-3 for all 48M incomplete subject-relation pairs reported in Table 2 would amount to an expense of $168,000, and yield approximately 27M novel statements.

#### Does "Don't know" prompting help?

In Table 4 (middle) we show the impact of using examples without an answer. The result is unsystematic, with notable gains in several relations, but some losses in others. Further research on calibrating model confidences seems important Jiang et al. (2021); Singhania et al. (2023).
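The relation-specific thresholding used throughout (Section 3) amounts to ranking predictions by the model's token probability and retaining the longest prefix whose precision stays above the target. A minimal sketch of that routine (hypothetical data layout, not the authors' code):

```python
def recall_at_precision(preds, min_precision=0.90):
    """Rank predictions by model confidence and keep the longest prefix whose
    precision stays at or above `min_precision`. `preds` is a list of
    (confidence, is_correct) pairs; returns the retained fraction of the list
    and the confidence cut-off, or (0.0, None) if no prefix qualifies."""
    ranked = sorted(preds, key=lambda p: p[0], reverse=True)
    best_k, correct = 0, 0
    for k, (_, ok) in enumerate(ranked, start=1):
        correct += int(ok)
        if correct / k >= min_precision:
            best_k = k
    if best_k == 0:
        return 0.0, None
    return best_k / len(ranked), ranked[best_k - 1][0]

# e.g. recall_at_precision([(0.97, True), (0.91, True), (0.80, False), (0.65, True)])
# returns (0.5, 0.91): half the list can be retained at >= 90% precision.
```

The returned cut-off is then applied to out-of-KB predictions, as in the manual evaluation above.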
\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline Relation & \#current stmts. in Wikidata & \#subj. missing stmt. & fraction for which GPT-3 can give high-confidence predictions & \#addable stmts. & manual accuracy & relative growth \\ \hline foundedIn & 43,254 & 225,578 & 9\% & 20,302 & **92\%** & 43\% \\ citizenOf & 4,206,684 & 4,616,601 & 59\% & 20,830 & 82\% & 5\% \\ countryOfJurisdiction & 901,066 & 24,966 & 76\% & 18,974 & 88\% & 2\% \\ namedAfter & 340,234 & 477,845 & 22\% & 105,125 & 64\% & 20\% \\ inContinent & 71,101 & 889,134 & 62\% & 551,263 & 88\% & 682\% \\ ownedBy & 449,140 & 416,437 & 6\% & 24,986 & 24\% & 1\% \\ hostCountry & 14,275,996 & 35,214 & 53\% & 18,663 & 88\% & 0\% \\ spokenLanguage & 2,148,775 & 7,134,543 & 57\% & 4,066,689 & **92\%** & 174\% \\ writtenIn & 14,140,453 & 24,990,161 & 73\% & 18,242,817 & **92\%** & 119\% \\ officialLanguage & 19,678 & 6,776 & 42\% & 2,846 & **100\%** & 14\% \\ developedBy & 42,379 & 29,349 & 6\% & 1,761 & **94\%** & 4\% \\ CountryOfOrigin & 129,260,813 & 135,196 & 49\% & 66,246 & 30\% & 2\% \\ hasCapital & 111,171 & 973 & 11\% & 107 & 14\% & 0\% \\ LanguageOfFilm & 337,682 & 70,669 & 24\% & 16,961 & 82\% & 4\% \\ nativeLanguage & 264,778 & 7,871,085 & 49\% & 3,856,831 & 82\% & 1195\% \\ shareholders & 6,946 & 222 & 14\% & 31 & 27\% & 0\% \\ \hline Overall & 38,654,975 & 46,924,749 & 66\% & 27,224,432 & 90\% & 70\% \\ \hline \hline \end{tabular} \end{table} Table 2: Manual evaluation: Wikidata KB completion potential of GPT-3 text-davinci-003 with precision-oriented thresholding.

#### Does textual context help?

Table 4 (right) shows the results for prompting with context. Surprisingly, this consistently made performance worse, with hardly any recall beyond 90% precision. This is contrary to earlier findings like Petroni et al. (2020) (for BERT) or Mallen et al. (2023) (for QA), who found that context helps, especially in the long tail. Our analysis indicates that, in the high-precision bracket, misleading contexts cause more damage (leading to high confidence in incorrect answers) than helpful contexts do good (boosting correct answers).

#### How many few-shot examples should one use?

Few-shot learning for KBC works with remarkably few examples. While our default experiments, following Cohen et al. (2023), used 8 examples, we found no substantial difference down to example sizes as low as 4.
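For concreteness, the instruction-free few-shot prompting described in Section 3 can be sketched as follows; the template and example pairs are illustrative placeholders (the paper's actual prompts are listed in its appendix):

```python
FEW_SHOT = [  # hypothetical example pairs for one relation
    ("Marie Curie", "Poland"),
    ("Alan Turing", "United Kingdom"),
    ("Niels Bohr", "Denmark"),
    ("A. N. Other", "Don't know"),   # the 'Examples w/o answer' variant replaces
]                                    # some answers with "Don't know"

def build_prompt(examples, subject,
                 template="Q: Which country is {} a citizen of?"):
    """Assemble an instruction-free few-shot prompt: k solved examples for one
    relation, followed by the query subject. The template is an illustrative
    guess, not the paper's wording."""
    blocks = [f"{template.format(s)}\nA: {o}" for s, o in examples]
    blocks.append(f"{template.format(subject)}\nA:")
    return "\n\n".join(blocks)

print(build_prompt(FEW_SHOT, "Edsger Dijkstra"))
```

The completion returned after the final "A:" is then taken as the predicted object, and its first-token probability is used for thresholding.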
## 5 Conclusion

We provided the first analysis of the real KB completion potential of GPT. Our findings indicate that GPT-3 could add _novel_ knowledge to Wikidata, at unprecedented scale and quality (27M statements at 90% precision). Compared with other approaches, the estimated cost of $168,000 is surprisingly low, and well within the reach of industrial players. We also find that, in the high-precision bracket, GPT-3 distills web content to a degree that context augmentation does not easily help. Open issues remain in particular around identifying high-confidence predictions within an LM's generations Jiang et al. (2021); Singhania et al. (2023); Xiong et al. (2023), and the choice of examples.

## Limitations

Using LMs for automated knowledge generation comes with the standard risk of exacerbating demographic biases. For example, many of the best-performing relations are language-related, where the model presumably often estimates a person's native language entirely from their name. In terms of reproducibility, it should be noted that our results are tied to a closed-source commercial API. Although GPT-3/4/ChatGPT are widely used in research and industry, and OpenAI has announced plans to keep stable model versions online, long-term reproducibility is not ensured, and the internal workings of GPT are not publicly known. Although statement generation is at the core of KB completion, for a complete KBC pipeline we are still missing critical components. This concerns in particular entity disambiguation, which is essential for relations with more than a few hundred possible object values. Similarly, Wikidata and other KBs give critical importance to scrutable referencing of statements. This is not easily possible with LMs.
2303.14649
Variable range hopping in a non-equilibrium steady state
We propose a Monte Carlo simulation to understand electron transport in a non-equilibrium steady state (\textit{NESS}) for the lattice Coulomb Glass model, created by continuous excitation of single electrons to high energies followed by relaxation of the system. Around the Fermi level, the \textit{NESS} state approximately obeys the Fermi-Dirac statistics, with an effective temperature ($T_{eff}$) greater than the system's bath temperature ($T$). $T_{eff}$ is a function of $T$ and the rate of photon absorption by the system. Furthermore, we find that the change in conductivity is only a function of relaxation times and is almost independent of the bath temperature. Our results indicate that the conductivity of the \textit{NESS} state can still be characterized by the Efros-Shklovskii law with an effective temperature $T_{eff}>T$. Additionally, the dominance of phonon-less hopping over phonon-assisted hopping is used to explain the relevance of the hot-electron model to the conductivity of the \textit{NESS} state.
Preeti Bhandari, Vikas Malik, Moshe Schechter
2023-03-26T07:43:38Z
http://arxiv.org/abs/2303.14649v1
# Variable range hopping in a non-equilibrium steady state ###### Abstract We propose a Monte Carlo simulation to understand electron transport in a non-equilibrium steady state (_NESS_) for the lattice Coulomb Glass model, created by continuous excitation of single electrons to high energies followed by relaxation of the system. Around the Fermi level, the _NESS_ state approximately obeys the Fermi-Dirac statistics, with an effective temperature (\(T_{eff}\)) greater than the system's bath temperature (\(T\)). \(T_{eff}\) is a function of \(T\) and the rate of photon absorption by the system. Furthermore, we find that the change in conductivity is only a function of relaxation times and is almost independent of the bath temperature. Our results indicate that the conductivity of the _NESS_ state can still be characterized by the Efros-Shklovskii law with an effective temperature \(T_{eff}>T\). Additionally, the dominance of phonon-less hopping over phonon-assisted hopping is used to explain the relevance of the hot-electron model to the conductivity of the _NESS_ state.

## I Introduction

The conventional charge transport mechanism at low temperatures in insulators is based on phonon-assisted hopping between localized states. This transport phenomenon is termed variable-range hopping (VRH) [1; 2; 3] and is characterized by a conductivity of the form \[\sigma\propto exp\bigg{[}-\bigg{(}\frac{T_{0}}{T}\bigg{)}^{y}\bigg{]}, \tag{1}\] where \(T\) denotes temperature and \(T_{0}\) denotes a characteristic temperature. For a two-dimensional (2D) system with a constant density of states (DOS), one would expect to see Mott's law of conductivity, i.e., \(y=1/3\). Efros and Shklovskii [4] have argued that, due to the presence of Coulomb interactions, the DOS has a so-called Coulomb gap around the Fermi level and follows the relation \[g(\varepsilon)\propto|\varepsilon|^{d-1}, \tag{2}\] where \(d\) is the spatial dimension and \(\varepsilon\) is the Hartree energy. Efros and Shklovskii (ES) further argued that for an interacting system, the exponent \(y\) in Eq. (1) is equal to \(1/2\) for both two and three dimensions. The conductivity dominated by the VRH mechanism has been observed experimentally [3] and numerically [5; 6]. Some interesting physics arises when an electric field (\(F\)) is applied to the system. Depending on the applied field's strength, the conductivity can be in an Ohmic or a non-Ohmic regime. When the applied field is small (low electric field regime), we expect the conductivity to be approximately independent of \(F\). This is the "Ohmic regime". In this regime, the hopping of electrons is still phonon-assisted (i.e., depends on the bath temperature only). However, at a high enough applied field, energy cannot be fully dissipated to the phonons, and the system then reaches the "non-Ohmic regime". Within the non-Ohmic regime, for a moderate electric field (\(F<kT/e\xi\)), the conductivity depends on the field, the temperature, and the localization length (\(\xi\)) [7; 8; 9; 10; 11]. For systems with small localization length, the conductivity in this regime can be given by the following relation \[\sigma(T,F)=\sigma(T,0)exp\bigg{(}\frac{eFl_{0}}{kT}\bigg{)}, \tag{3}\] where \(l_{0}\) is the typical hopping length and \(k\) is the Boltzmann constant.
The models satisfying Eq. (3) are called field-effect models [12; 13; 14; 15], where the non-linearity in the conductivity is associated with the field-dependent tilt of the energy landscape of electron hopping. However, systems with large localization lengths are not well characterized by Eq. (3), but rather by what is denoted the "hot-electron model" (HEM) [16]. Within the HEM, it is assumed that the conductivity of the system at some temperature (\(T\)) and electric field (\(F\)) is equal to the linear conductivity at an effective temperature (\(T_{eff}\)), \[\sigma(T,F)=\sigma(T_{eff},0)=\sigma_{0}\,exp\bigg{[}-\bigg{(}\frac{T_{0}}{T_{eff}}\bigg{)}^{1/2}\bigg{]}. \tag{4}\] Eq. (4) represents the so-called hot-electron model, where \(\sigma_{0}\) is the proportionality constant and \(T_{0}\) is the characteristic temperature of the system when the ES law is obeyed in the Ohmic regime at temperature \(T\). The assumption here is that the conductivity of the system for a given electric field and temperature depends only on the effective temperature of the electrons (\(T_{eff}\)) and not on the bath temperature (\(T\)). Several experiments [17; 18; 19; 20] and numerical simulations [6] on systems that in equilibrium obey VRH have been interpreted in terms of the HEM. One observes activationless hopping when the electric field increases even further, \(F>kT/e\xi\) (strong electric field regime) [21; 22; 23; 24; 25; 26; 9; 27]. In this regime, the field plays a role similar to that played by the bath temperature in the Ohmic regime, and the conductivity is given by \[\sigma\propto exp\bigg{[}-\bigg{(}\frac{F_{0}}{F}\bigg{)}^{1/2}\bigg{]}\, \tag{5}\] where \(F_{0}=bkT_{0}/(e\xi)\). Here \(T_{0}\) is the characteristic temperature of the ES law, and \(b\) is a constant of order unity. The variation of the applied electric field has attracted much attention, as it is the formal way of formulating the non-linearity in the conductivity. This method has provided much helpful and reliable information, as discussed before. The present work presents an alternative approach, obtained by creating a non-equilibrium steady state (NESS) by irradiating the sample continuously with high-frequency photons. The conductivity of the NESS state is calculated in the Ohmic regime. The conceptual advantage of this approach is that it decouples the formation of a non-equilibrium state from conductivity calculations. In this paper, we study the Coulomb glass model (see details in Sec. II), with localization length twice the lattice spacing; the equilibrium conductance is in agreement with the ES law. Following the HEM, for each NESS state, dictated by the temperature and the excitation rate, we calculate the effective temperature of the system (\(T_{eff}\)) related to the occupation of the single electron states near the Fermi level [28]. We then analyze the conductivity (\(\sigma\)) and the density of states at the Fermi level (\(g(0)\)) through their dependence on \(T_{eff}\). Our results are in agreement with HEM results for the field-driven NESS [6]. For large enough \(T_{eff}\), we find deviations from the form of Eq. (4). We explain these deviations within the picture of the HEM, their appearance signaling the initiation of the regime where phonon-less conductivity becomes dominant. As a second approach, and in an attempt to relate to experimental protocol [29], we also analyze the various observables by fitting the equilibrium equations for each observable directly, assigning the temperature as a free parameter. We find that different effective temperatures can be assigned to each observable within this approach. Specifically, we find that the effective temperature for the conductivity is smaller than the effective temperature for the density of states at the Fermi energy. This result is in qualitative agreement with a recent experiment on Indium oxide films where the NESS state under the influence of IR radiation was studied. However, the experimental results show larger deviations between the effective temperatures of the conductivity and of the memory dip; see the discussion below. Our paper is organized as follows. In Sec. II, the Coulomb Glass lattice model is introduced. We present the numerical techniques used here in Sec. III. In Sec. IV, we present our numerical results, and finally, in Sec. V, we conclude the paper with a summary of our results.

## II Model

We consider the standard two-dimensional (2d) Coulomb Glass (CG) lattice model with the Hamiltonian \[H=\sum_{i}\phi_{i}S_{i}+\frac{1}{2}\sum_{i\neq j}\frac{S_{i}S_{j}}{r_{ij}}, \tag{6}\] where \(S_{i}=n_{i}-1/2\) is the pseudo-spin variable at site \(i\), \(n_{i}\in\{0,1\}\) is the electron occupation number, \(\phi_{i}\) are the random on-site energies chosen from a box distribution with the interval \(\bigg{[}-W/2,W/2\bigg{]}\), and \(r_{ij}\) is the distance between sites \(i\) and \(j\) under periodic boundary conditions. All energies and temperatures are calculated in units of \(e^{2}/a\), where \(a\) is the lattice constant. We have used a square lattice of size \(64\times 64\) and disorder \(W=2\).

## III Numerical details

Experimentally, the NESS state is created by irradiating a well-equilibrated sample with high-frequency radiation (\(\nu\)), having energy greater than the Coulomb gap width (\(\delta\)), \(h\nu>>\delta\). Numerically, to create a NESS state, we first use the simulated annealing technique [30] to thermalize the Hamiltonian (6). All the observables were averaged over time (i.e., logarithmic binning of Monte Carlo steps) to test the equilibration. The system is in thermal equilibrium once the last three bins agree within errors.

Figure 1: Energy of the non-equilibrium steady state as a function of the number of excitation steps at different relaxation steps. Different graphs correspond to a different number of relaxation steps (\(x\)) following each excitation step. The initial state here is the annealed state at \(\beta=80\).

A two-step approach is required to create a NESS condition at a specific temperature. The first step involves inducing excitation in the system (i.e., \(\Delta E=\varepsilon_{i}-\varepsilon_{j}-1/r_{ij}>0\)), followed by a relaxation procedure. These two steps are then repeated continuously. _Excitation process:_ We start with an annealed state (we have considered the annealed states at \(\beta=1/T=20,40\) and \(80\)) and choose a site '\(i\)' at random. Then we pick another site '\(j\)', now with probability \(e^{-2r_{ij}/\xi}\), where \(\xi=2\) is the localization length. In addition, we make certain that \(S_{i}\neq S_{j}\). Finally, we swap the two spins, creating an excitation in the system. This corresponds to an electron making a \(\Delta E>0\) transition between sites \(i\) and \(j\) by absorbing a photon. _Relaxation process:_ Experimentally, the time between two photons striking the system (the relaxation time, \(x\)) varies as a function of the power: \[\text{Power}\propto\frac{1}{\text{relaxation time}}. \tag{7}\]
Our paper is organized as follows. In Sec. II, the Coulomb Glass lattice model is introduced. We present the numerical techniques used here in Sec. III. In Sec. IV, we present our numerical results, and finally, in Sec. V, we conclude the paper with a summary of our results. ## II Model We consider the standard two-dimensional (2d) Coulomb Glass (CG) lattice model with the Hamiltonian \[H=\sum_{i}\phi_{i}S_{i}+\frac{1}{2}\sum_{i\neq j}\frac{S_{i}S_{j}}{r_{ij}}, \tag{6}\] where \(S_{i}=n_{i}-1/2\) is the pseudo spin variable at site \(i\), \(n_{i}\in\{0,1\}\) is the electron occupation number, \(\phi_{i}\) are the random on-site energies chosen from a box distribution on the interval \(\bigg{[}-W/2,W/2\bigg{]}\), and \(r_{ij}\) is the distance between sites \(i\) and \(j\) under periodic boundary conditions. All energies and temperatures are calculated in units of \(e^{2}/a\), where \(a\) is the lattice constant. We have used a square lattice of size \(64\times 64\) and disorder \(W=2\). ## III Numerical details Experimentally, the NESS state is created by irradiating a well-equilibrated sample with high-frequency radiation (\(\nu\)), having energy greater than the Coulomb gap width (\(\delta\)), \(h\nu>>\delta\).

Figure 1: Energy of the non-equilibrium steady state as a function of the number of excitation steps at different relaxation steps. Different graphs correspond to a different number of relaxation steps (\(x\)) following each excitation step. The initial state here is the annealed state at \(\beta=80\).

Numerically, to create a NESS state, we first use the simulated annealing technique [30] to thermalize the Hamiltonian (6). All the observables were averaged over time (i.e., logarithmic binning of Monte Carlo steps) to test the equilibration. The system is in thermal equilibrium once the last three bins agree within errors. A two-step approach is required to create a NESS condition at a specific temperature. The first step involves inducing an excitation in the system (i.e., \(\Delta E=\varepsilon_{i}-\varepsilon_{j}-1/r_{ij}>0\)), followed by a relaxation procedure. These two steps are then repeated continuously. _Excitation process:_ We start with an annealed state (we have considered the annealed states at \(\beta=1/T=20,40\) and \(80\)) and choose a site '\(i\)' at random. Then we pick another site '\(j\)' with probability \(e^{-2r_{ij}/\xi}\), where \(\xi=2\) is the localization length. In addition, we make certain that \(S_{i}\neq S_{j}\). Finally, we swap the two spins, creating an excitation in the system. This corresponds to an electron making a \(\Delta E>0\) transition between sites \(i\) and \(j\) by absorbing a photon. _Relaxation process:_ Experimentally, the time between two photons striking the system (relaxation time, \(x\)) varies as a function of the power, \[Power\propto\frac{1}{relaxation\,time}\quad. \tag{7}\]
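A minimal sketch of the excitation move described above (our own illustration, assuming a flat occupation array and on-the-fly periodic distances; the \(\Delta E>0\) acceptance check, which requires the Coulomb sums, is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
L, xi = 64, 2.0                             # lattice size and localization length
spins = rng.integers(0, 2, size=(L, L))     # occupation numbers n_i in {0, 1}

def periodic_distance(a, b):
    """Distance between lattice sites a and b under periodic boundary conditions."""
    d = np.abs(np.array(a) - np.array(b))
    d = np.minimum(d, L - d)
    return np.hypot(*d)

def excitation_step(spins):
    """One photon-absorption move: pick site i at random, pick j with weight
    exp(-2 r_ij / xi) among sites with a different occupation, and swap them.
    Note: the Delta E > 0 check is omitted here."""
    i = tuple(rng.integers(0, L, size=2))
    while True:
        j = tuple(rng.integers(0, L, size=2))
        if spins[i] != spins[j] and rng.random() < np.exp(-2 * periodic_distance(i, j) / xi):
            spins[i], spins[j] = spins[j], spins[i]
            return i, j

print(excitation_step(spins))
```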
Between two-photon absorptions, the system may relax toward the equilibrium state. To simulate the relaxation process, we allow the system to relax via the kinetic Monte Carlo algorithm [5] between two subsequent excitations. In our simulation, we vary the number of relaxation steps (\(x=1,3,7,10,30,70,100\)) after each excitation process (\(ex\)) and trace its effect on the energy, DOS, occupation probabilities, and conductivity of the NESS state. A single relaxation step corresponds to \(L^{2}\) Monte Carlo steps. A NESS state is reached once the rate of energy gained by the excitations becomes equal to the energy lost in the relaxation process, and the system achieves steady-state energy. The energy of the NESS state increases as the relaxation time decreases, as seen in Fig.(1). To simulate the conductivity of the NESS state, we apply a small electric field (\(F\)) in the x-direction by adding a term \(\sum_{i}Fx_{i}\) (where \(F=T/10\)) to the Hamiltonian (6), and perform kinetic Monte Carlo simulations as described in Ref. [5]. The NESS state is maintained during the conductivity calculations, and results were obtained after averaging over 500 disorder realizations. ## IV Results The perturbation leading to the NESS state drives the system out of equilibrium. Specifically, the relaxation of the electrons near the Fermi energy is limited by the scarcity of available states to relax to. This leads to a change of the occupation probability of electronic states near the Fermi energy [28], which is well approximated by a Fermi function with \(T_{eff}\neq T\), \[f_{i}=\frac{1}{[exp(\varepsilon_{i}/T_{eff})+1]}\quad. \tag{8}\] The supplemental material contains more information on calculating \(T_{eff}\) using Eq.(8). This effective temperature is used to express the various quantities below. Figure 2 shows the behavior of the DOS at the Fermi level (\(g(0)_{NESS}\)) and the conductivity (\(\sigma_{NESS}\)) of the NESS state as a function of \(x\) at different temperatures. Let us first consider the NESS state's conductivity in more detail. In Fig.(3), we plot \(\sigma_{NESS}\) as a function of \(x\). At long relaxation times, we find \[\sigma_{NESS}\approx\sigma(T)+\frac{c}{x}\, \tag{9}\] i.e., \(\sigma_{NESS}-\sigma(T)\) is nearly independent of \(T\), as shown in the inset. Here \(c\) is a constant, and \(\sigma(T)\) is the conductivity of the thermal state. We now analyze our results for the conductivity with regard to the hot-electron model. Our analysis of the HEM is presented in Fig.(4), where we have used \(T_{eff}\) computed using Eq.(8) and plotted \(\sigma_{NESS}\) vs \(T_{eff}^{-1/2}\) for different temperatures and relaxation times. The solid line in Fig.(4) corresponds to the conductivity as given by Eq.(4). For \(\beta=20\) and small enough intensity (large relaxation times, \(x>7\)), Eq.(4) is approximately satisfied. For \(\beta=40\) and \(80\), the deviation from the solid line in Fig.(4) leads us to the conclusion that for low temperatures (and for cases where \(\Delta\sigma>>\sigma\)), the conductivity of the NESS state can be explained using the following relation \[\sigma_{NESS}=\sigma_{0}^{\prime}\,exp\bigg{[}-\bigg{(}\frac{T_{0}^{\prime}}{T_{eff}}\bigg{)}^{1/2}\bigg{]}\, \tag{10}\] where \(T_{0}^{\prime}\approx T_{0}\) and \(\sigma_{0}^{\prime}\geq\sigma_{0}/2\), approaching \(\sigma_{0}/2\) for \(T_{eff}>T\) [3]. This variation of \(\sigma_{0}\) from the constant value given in Eq.(4) provides a generalization of the HEM.
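As an illustration of how \(T_{eff}\) can be extracted from the steady-state occupations via Eq.(8), here is a minimal least-squares sketch (illustrative synthetic data; not the authors' code):

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi(eps, T_eff):
    """Fermi-Dirac occupation with the Fermi level at eps = 0, Eq. (8)."""
    return 1.0 / (np.exp(eps / T_eff) + 1.0)

def fit_teff(eps, occupation, T_guess=0.05):
    """Least-squares fit of measured NESS occupations near the Fermi level
    to Eq. (8); returns the effective temperature T_eff^FD."""
    (T_eff,), _ = curve_fit(fermi, eps, occupation, p0=[T_guess])
    return T_eff

# Illustrative synthetic data: occupations at T_eff = 0.08 with small noise.
eps = np.linspace(-0.5, 0.5, 101)
occ = fermi(eps, 0.08) + 0.01 * np.random.default_rng(1).normal(size=eps.size)
print(fit_teff(eps, occ))
```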
Let us now discuss its physical origin. In the case of a well-equilibrated thermal state, up transitions (\(\Delta E>0\)) and down transitions (\(\Delta E<0\)) contribute equally to the conductivity of the system, and the transition rate from a site \(i\) to \(j\) is given by \[\Gamma_{ij}\sim\gamma_{0}\,exp\bigg{[}-\frac{2r_{ij}}{\xi}-\frac{\Delta E_{ij}}{kT}\bigg{]}\quad. \tag{11}\] Minimizing the exponent in Eq.(11) leads to the ES law of conductivity (Eq.(1)). Note that \(\Gamma_{ij}=\Gamma_{ji}\) for a system at a steady state under the influence of a small electric field. Over the limited range of temperatures available in our simulation, the temperature dependence of the conductivity in equilibrium is in agreement with the ES law (as shown by the solid line in Fig.(4)). Now for a NESS state, where the change in conductivity is mainly due to the downward energy transitions, \(\Gamma_{ij}\neq\Gamma_{ji}\). Here, the downward energy transition between two sites \(i\) and \(j\) is given by \[\Gamma_{ij}=\gamma_{0}\,exp\bigg{[}-\frac{2r_{ij}}{\xi}-\frac{\Delta E_{ij}}{k\,T_{eff}}\bigg{]}. \tag{12}\] The reverse upward energy transition from \(j\) to \(i\) is given by Eq.(11). When \(T_{eff}>>T\), the energy-lowering transitions dominate over energy-gaining ones, and the system's conductivity is described by Eq.(10). When the downward transitions completely dominate the system (we see this at very low temperature, \(\beta=80\)), then \[\sigma_{NESS}(T_{eff},x)\approx\frac{1}{2}\sigma(T_{eff}). \tag{13}\] In Fig.(4), the deviation of the NESS conductivity (dashed line) from the equilibrium conductivity (solid line) is a feature of the HEM and not its failure. The change in Fig.(4) from \(\sigma_{0}\) being similar to its thermal value to being about half its thermal value marks the relaxation time (the value of \(x\) in our case) at which the phonon-less conductivity becomes dominant. We note that for each temperature, there is a one-to-one correspondence between the effective temperature and the relaxation time (\(x\)). We can now also explain the results shown in Fig.(3). The system gains excess energy (excitations) by photon absorption and attempts to attain equilibrium by energy-lowering transitions. The number of excitations created by photon absorption is proportional to the radiation power. Thus, the excess conductivity, i.e., \(\Delta\sigma\), dominated by energy-lowering transitions facilitated by excited electrons, depends only on the radiation power, not on the bath temperature (see the supplementary material for more details). Another experimentally relevant quantity is the DOS at the Fermi level, \(g(0)\), which, theoretically for 2D systems, is proportional to the phonon-bath temperature (\(T\)) [31]. Numerical simulations claim that \(g(0,T)\propto T^{\alpha}\) with \(\alpha\neq 1\) [32; 33]. As shown in Fig.(5) (red circles), our data show that the density of states of the thermal states at the Fermi level, \(g(0)\), follows the relation \[g(0)=cT^{\alpha}\, \tag{14}\] where \(\alpha=1.29\) and \(c\) is the proportionality constant. For the NESS state, we find (see Fig.(5)) that the density of states at the Fermi level, \(g_{0}^{NESS}(x,T)\), as a function of relaxation time (\(x\)) and bath temperature \(T\), follows the relation \[g_{0}^{NESS}(x,T)=c^{\prime}(T_{eff})^{\alpha}\, \tag{15}\]
where \(c\approx c^{\prime}\) at \(\beta=20\) and \(c\neq c^{\prime}\) as the temperature and the relaxation time decrease. Also here, we find a decrease in the proportionality constant in the regime where phonon-less hopping dominates, albeit this effect is smaller here than for the conductivity.

Figure 2: (a) The behavior of the density of states at the Fermi level (for the NESS state) as a function of relaxation steps (\(x\)) at different bath temperatures (\(T=1/\beta\)). (b) The conductivity of the NESS state as a function of \(x\) at different bath temperatures.

Figure 3: The conductivity of the NESS state at various bath temperatures as a function of intensity (\(1/x\)). The inset shows the behavior of the change in conductivity as a function of intensity for the NESS state at different bath temperatures.

Finally, we now make an attempt to connect our numerical simulation to recent experimental results on the NESS state of amorphous indium oxide [29]. Here, following the analysis in the experiment, we adopt a different method to determine the effective temperature for the conductivity (\(T_{eff}^{\sigma}\)) and the DOS at the Fermi level (\(T_{eff}^{g}\)). (A detailed explanation of the procedure used to calculate the effective temperature is provided in the supplementary material.) We calculate \(T_{eff}^{\sigma}\) and \(T_{eff}^{g}\) of the NESS state by using Eq.(4) and Eq.(14), respectively, at different temperatures and relaxation times. The values of the parameters \(c\), \(\sigma_{0}\), \(T_{0}\) are kept equal to their equilibrium values in this calculation. For clarity, we denote the \(T_{eff}\) calculated using the FD distribution in Eq.(8) as \(T_{eff}^{FD}\) for the rest of the discussion. We observe that at short relaxation times, the effective temperatures describing each physical quantity are different. In Fig.(6), we plot the effective temperatures for the different observables as a function of relaxation time at \(T=0.025\) (data for \(T=0.05\) and \(T=0.0125\) are shown in the supplementary material). Note that there is a threshold degree of non-equilibrium, where \(\Delta\sigma>\sigma\) (which depends on the bath temperature and relaxation time of the system), only beyond which distinct effective temperatures for the different observables appear. Unlike the situation in equilibrium, when the system is pushed far enough out of equilibrium, it acquires multiple time scales that affect the various measurable quantities differently and consequently dictate observable-specific effective temperatures. Let us note that in the regime where the effective temperatures are different, \(T_{eff}^{\sigma}\) is always less than \(T_{eff}^{FD}\). This can be explained as follows: comparing the expressions for the conductivity within the two approaches discussed above, we find \[\sigma_{0}\,exp\bigg{[}-\bigg{(}\frac{T_{0}}{T_{eff}^{\sigma}}\bigg{)}^{1/2}\bigg{]}=\sigma_{0}^{\prime}\,exp\bigg{[}-\bigg{(}\frac{T_{0}}{T_{eff}^{FD}}\bigg{)}^{1/2}\bigg{]}. \tag{16}\] At low temperatures, one finds that \(\sigma_{0}^{\prime}\approx\sigma_{0}/2\) (this is true in our case at \(\beta=80\), where the phonon-less hopping completely dominates). Using this relation in Eq.(16), we get \[\bigg{(}\frac{1}{T_{eff}^{\sigma}}\bigg{)}^{1/2}\approx\bigg{(}\frac{1}{T_{eff}^{FD}}\bigg{)}^{1/2}+\bigg{(}\frac{ln(2)}{\sqrt{T_{0}}}\bigg{)}\, \tag{17}\] i.e., \(T_{eff}^{\sigma}<T_{eff}^{FD}\). We now compare our findings with the experiment [29], where it was found that far from equilibrium, the effective temperature of the conductivity is much smaller than the effective temperature of the memory dip.
Following this experimental finding, we concentrate on the effective temperatures of the NESS state that we determined using \(\sigma\) and \(g(0)\); the latter is believed to be proportional to the memory dip. We compare the two effective temperatures at \(T=0.05\), \(0.025\), and \(0.0125\) for different relaxation times, as shown in Fig.(7). The dashed line represents the bath temperature and indicates how far the NESS state is from the equilibrium state. When the system is further from equilibrium, we do observe that the effective temperature computed using the conductivity (\(T_{eff}^{\sigma}\)) is lower than the one computed using the density of states at the Fermi level (\(T_{eff}^{g}\)). However, this difference is much smaller than the difference between the experimentally obtained effective temperatures for the conductivity and the DOS. Fig.(7) also shows that as the system moves away from equilibrium, the two effective temperatures converge at a larger value of the relaxation time.

Figure 4: Conductivity (\(\sigma^{*}\)) as a function of temperature (\(T^{*}\)) on a semi-logarithmic plot. Here \(\sigma^{*}=\sigma\) and \(T^{*}=T\) for the equilibrium data, and \(\sigma^{*}=\sigma_{NESS}\) and \(T^{*}=T_{eff}\) for the NESS data. The red dots correspond to the conductivity of the thermal state. This data has been fitted using the relation \(\sigma=\sigma_{0}\,exp\{-(T_{0}/T)^{1/2}\}\) (solid line). Blue crosses (\(\beta=20\)), green squares (\(\beta=40\)), and orange triangles (\(\beta=80\)) correspond to the conductivity of the NESS state as a function of \(T_{eff}^{-1/2}\). The orange triangles are fitted (dashed line) using the relation \(\sigma_{NESS}=\sigma_{0}^{\prime}\,exp\{-(T_{0}/T_{eff})^{1/2}\}\), where \(T_{eff}\) is calculated from the Fermi-Dirac distribution.

Figure 5: Density of states at the Fermi level (\(g^{*}(0,T^{*})\)) as a function of temperature (\(T^{*}\)). Here \(g^{*}(0,T^{*})=g(0,T)\) and \(T^{*}=T\) for the equilibrium data, and \(g^{*}(0,T^{*})=g^{NESS}(0,T_{eff})\) and \(T^{*}=T_{eff}\) for the NESS data. The red dots correspond to the single-particle density of states (DOS) at the Fermi level of the thermal states, \(g(0,T)\), at disorder \(W=2\). This data has been fitted using the relation \(g(0,T)=c\,T^{\alpha}\) (solid line). Blue crosses (\(\beta=20\)), green squares (\(\beta=40\)), and orange triangles (\(\beta=80\)) correspond to the DOS of the NESS state as a function of \(T_{eff}\). The orange triangles are fitted (dashed line) using the relation \(g^{NESS}(0,T_{eff})=c^{\prime}\,T_{eff}^{\alpha}\), where \(T_{eff}\) is calculated from the Fermi-Dirac distribution.

## V Discussion We demonstrate electron transport in a non-equilibrium steady state in the context of understanding the general theory of the hot-electron model. The non-equilibrium steady state is created by irradiating the system with high-frequency photons. Experiments of this kind have been recently conducted on indium oxide films [29]. We have calculated the conductivity (\(\sigma\)) and the single-particle density of states at the Fermi level (\(g(0)\)) of the NESS state. At relatively high temperatures, our results are remarkably similar to the non-equilibrium simulation results by Caravaca _et al._ [6] done in high electric fields at \(\xi=2\). At lower temperatures, at first glance (in our work), and as reported by Caravaca _et al._ [6] for a smaller localization length (the \(\xi=1\) case), the conductivity of the NESS state deviates from the HEM.
The deviation happens because of slow relaxation, due to the small \(\xi\) in Ref. [6] and the small \(T\) in our case. We explain this deviation as a feature of the HEM in terms of the dominance in this regime of phonon-less hopping over phonon-assisted hopping. We show that both \(g(0)\) and \(\sigma\) of the non-equilibrium steady state obey the HEM. Our results, therefore, provide a robust way to understand the crossover from phonon-less hopping to phonon-assisted hopping, which can be tested experimentally by varying the intensity of radiation on the target sample or by decreasing the temperature of the system. Our results are also qualitatively in agreement with the experimental finding in [29] that \(T_{eff}^{\sigma}<T_{eff}^{g}\) when the NESS state is far from equilibrium. One of the possible reasons for the quantitative difference could be that the measurement in the experiment (the memory dip) is not directly a measurement of the DOS (as in the simulations) but is believed to be proportional to it. Another possibility is quantum effects, which are beyond the scope of the present study. In the future, it will be interesting to explore how our results change if we move from the ES to the Mott regime. It would also be interesting to see how positional disorder affects the system.

Figure 6: Effective temperature (\(T_{eff}^{*}\)) of the NESS state as a function of relaxation time (\(x\)) calculated using: the conductivity (\(\sigma\)), the Fermi-Dirac distribution around the Fermi level (FD), and the DOS at the Fermi level (\(g(0)\)) at the bath temperature \(T=0.025\). \(T_{eff}^{*}\) here corresponds to \(T_{eff}^{\sigma}\) for the blue line, \(T_{eff}^{g}\) for the orange line, and \(T_{eff}^{FD}\) for the green line.

Figure 7: Effective temperature for a NESS state at \(\beta=20\), \(40\), and \(80\) as a function of relaxation time (\(x\)). \(T_{eff}^{g}\) and \(T_{eff}^{\sigma}\) are different when the system is out of equilibrium and gradually merge into a single temperature (\(T\)) as the relaxation time increases (the system approaching equilibrium). The lower the temperature, the longer the relaxation times at which the two effective temperatures merge.

## Acknowledgement P.B. acknowledges the Kreitman School of Advanced Graduate Studies for financial support. M.S. acknowledges support from the Israel Science Foundation (Grant No. 2300/19). Illuminating discussions with Z. Ovadyahu are gratefully acknowledged.
2305.01874
Tensor Network Message Passing
When studying interacting systems, computing their statistical properties is a fundamental problem in various fields such as physics, applied mathematics, and machine learning. However, this task can be quite challenging due to the exponential growth of the state space as the system size increases. Many standard methods have significant weaknesses. For instance, message-passing algorithms can be inaccurate and even fail to converge due to short loops. At the same time, tensor network methods can have exponential computational complexity in large graphs due to long loops. This work proposes a new method called ``tensor network message passing.'' This approach allows us to compute local observables like marginal probabilities and correlations by combining the strengths of tensor networks in contracting small sub-graphs with many short loops and the strengths of message-passing methods in globally sparse graphs, thus addressing the crucial weaknesses of both approaches. Our algorithm is exact for systems that are globally tree-like and locally dense-connected when the dense local graphs have limited treewidth. We have conducted numerical experiments on synthetic and real-world graphs to compute magnetizations of Ising models and spin glasses, to demonstrate the superiority of our approach over standard belief propagation and the recently proposed loopy message-passing algorithm. In addition, we discuss the potential applications of our method in inference problems in networks, combinatorial optimization problems, and decoding problems in quantum error correction.
Yijia Wang, Yuwen Ebony Zhang, Feng Pan, Pan Zhang
2023-05-03T03:27:12Z
http://arxiv.org/abs/2305.01874v1
# Tensor Network Message Passing ###### Abstract When studying interacting systems, computing their statistical properties is a fundamental problem in various fields such as physics, applied mathematics, and machine learning. However, this task can be quite challenging due to the exponential growth of the state space as the system size increases. Many standard methods have significant weaknesses. For instance, message-passing algorithms can be inaccurate and even fail to converge due to short loops. At the same time, tensor network methods can have exponential computational complexity in large graphs due to long loops. This work proposes a new method called "tensor network message passing." This approach allows us to compute local observables like marginal probabilities and correlations by combining the strengths of tensor networks in contracting small sub-graphs with many short loops and the strengths of message-passing methods in globally sparse graphs, thus addressing the crucial weaknesses of both approaches. Our algorithm is exact for systems that are globally tree-like and locally dense-connected when the dense local graphs have limited treewidth. We have conducted numerical experiments on synthetic and real-world graphs to compute magnetizations of Ising models and spin glasses, to demonstrate the superiority of our approach over standard belief propagation and the recently proposed loopy message-passing algorithm. In addition, we discuss the potential applications of our method in inference problems in networks, combinatorial optimization problems, and decoding problems in quantum error correction. Consider a _statistical mechanics_ problem defined on a graph \(\mathcal{G}\) with \(n\) vertices and a set of \(m\) edges \(\mathcal{E}\), and binary configurations \(\mathbf{s}\in\{+1,-1\}^{n}\), which follow the Boltzmann distribution \[P(\mathbf{s})=\frac{1}{Z}e^{-\beta E(\mathbf{s})}, \tag{1}\] where \(\beta\) is the inverse temperature, the energy function with external field \(\{\theta_{i}\}\) is \(E(\mathbf{s})=\sum_{(ij)\in\mathcal{E}}E_{ij}(s_{i},s_{j})+\sum_{i}\theta_{i}(s_{i})\), and \(Z=\sum_{\mathbf{s}}e^{-\beta E(\mathbf{s})}\) is the partition function. Computing macroscopic observables of the system, such as magnetizations and correlations, is an important problem in statistical physics, applied mathematics, and machine learning. It finds applications in inference and learning problems, where the Boltzmann distribution naturally appears as the posterior distribution of Bayesian inference; in decoding error correction codes, where signals can be reconstructed using marginals of the Boltzmann distribution; in solving combinatorial optimization problems, where the solutions map to typical samples of the Boltzmann distribution at zero temperature; and in many other settings. The computation of the local observables suffers from the large computational space, which grows exponentially with the system size. The problem has the same complexity as computing the partition function and falls into the class of #P problems in mathematics, so there is no polynomial algorithm that solves the problem exactly in general. Many methods have been proposed, including Markov Chain Monte Carlo (MCMC), message-passing algorithms, tensor networks, etc. MCMC [1] is a general method for computing observables using unbiased samples, but the precision grows slowly with the number of samples, hence it is computationally expensive in large systems. Moreover, for systems with a complicated landscape, MCMC has the autocorrelation issue.
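For systems small enough to enumerate, the marginals of the Boltzmann distribution in Eq.(1) can be computed exactly by brute force; the following minimal sketch (a random fully connected Ising instance, our own illustration) is the exponential-cost baseline that the methods discussed below approximate:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 10
J = np.triu(rng.normal(size=(n, n)), k=1)   # couplings J_ij, i < j
h = rng.normal(size=n)                      # external fields h_i
beta = 0.5

def energy(s):
    """E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i."""
    return -s @ J @ s - h @ s

# Enumerate all 2^n configurations to get Z and the magnetizations <s_i>.
configs = np.array(list(itertools.product([-1, 1], repeat=n)))
weights = np.exp([-beta * energy(s) for s in configs])
Z = weights.sum()
magnetizations = (configs * weights[:, None]).sum(axis=0) / Z
print(magnetizations)
```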
Message passing algorithms such as belief propagation [2] and survey propagation [3] are celebrated methods in statistical physics, and also play an important role in decoding low-density parity-check codes [4], solving random combinatorial optimization problems [3], detecting structures and signals in large networks [5], etc. They are closely related to the Bethe mean-field approximation [6], also known as the cavity method in statistical physics [7; 8]. Message-passing algorithms usually have low computational costs, but the performance heavily relies on the topology of the system, working well only on locally tree-like graphs without many short loops. Many efforts have been devoted to extending message-passing algorithms to systems with short loops [9; 10; 11; 12; 13; 14; 15; 16; 17]; however, so far the extensions have had very limited success, only dealing with very short loops inside a small region of the graph. Tensor network methods [18; 19] are powerful on graphs full of short loops, particularly on lattices, especially in two dimensions, because tensor contractions can eliminate short loops of various sizes efficiently [20]. However, for systems without translational invariance, tensor networks only apply to small systems, due to the fast growth of the computational complexity with system size when there are long loops. In this work we propose the tensor network message passing (TNMP) method to combine the advantage of tensor networks in contracting short loops and the advantage of message passing in iterating over long loops, addressing the issues for both of them. Our method relies on the arbitrary tensor network contraction methods developed recently in the context of classical simulation of quantum computers [20; 21; 22; 23], with computational complexity depending on the treewidth of the neighborhood graph rather than the number of nodes. Thus TNMP can work with neighborhoods that are much larger than those of existing loopy message-passing methods. Using Ising and spin glass models on synthetic and real-world networks, we demonstrate the superiority of our method over belief propagation, MCMC, and the recently proposed loopy message passing algorithm. In what follows, using a concrete example illustrated in Fig. 1, we first review message passing and tensor networks, then introduce our method. Message passing algorithm--As shown in Fig. 1(a), in belief propagation the marginal distribution \(p_{i}(s_{i})=\sum_{\mathbf{s}\backslash s_{i}}P(\mathbf{s})\) is computed using information passed from neighbors, involving a small neighborhood subgraph composed of \(i\) and its direct neighbors. Without loss of generality, let us consider the Ising model, with \(E_{ij}(s_{i},s_{j})=-J_{ij}s_{i}s_{j}\) and \(\theta_{i}(s_{i})=-h_{i}s_{i}\), where \(h_{i}\) is the external field. Then the detailed computation can be written as \[p_{i}(s_{i})=\frac{e^{\beta h_{i}s_{i}}}{Z_{i}}\prod_{k\in\partial i}\,\sum_{s_{k}}e^{\beta J_{ik}s_{i}s_{k}}p_{k\to i}(s_{k}),\] where \(p_{k\to i}(s_{k})\) is the cavity message indicating the marginal probability of node \(k\) taking value \(s_{k}\) when node \(i\) is removed from the graph; it can be determined using cavity messages sent from the neighbors of \(k\), but without \(i\), in the same manner as the marginal. The key assumption of BP is conditional independence, i.e., that \(p_{j\to i}(s_{j})\), \(p_{k\to i}(s_{k})\), and \(p_{l\to i}(s_{l})\) are independent. This assumption is correct only when the neighbors are not connected to each other via other nodes in the graph.
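A minimal numpy sketch of this BP update (our own illustration with synchronous updates on an adjacency-list graph, not the authors' implementation):

```python
import numpy as np

def bp_magnetizations(J, h, beta, n_iter=200):
    """Belief propagation for an Ising model. J: dict {(i, j): J_ij} over edges,
    h: array of fields. Messages p_{k->i}(s) are stored per directed edge."""
    n = len(h)
    neighbors = {i: [] for i in range(n)}
    for (i, j) in J:
        neighbors[i].append(j)
        neighbors[j].append(i)
    coupling = {**J, **{(j, i): Jij for (i, j), Jij in J.items()}}
    msg = {(k, i): np.array([0.5, 0.5]) for k in range(n) for i in neighbors[k]}

    s = np.array([1.0, -1.0])
    for _ in range(n_iter):
        new = {}
        for (k, i) in msg:
            # Product over neighbors of k except i, each summed over its spin.
            m = np.exp(beta * h[k] * s)
            for l in neighbors[k]:
                if l != i:
                    W = np.exp(beta * coupling[(l, k)] * np.outer(s, s))
                    m *= W @ msg[(l, k)]
            new[(k, i)] = m / m.sum()
        msg = new

    mags = np.empty(n)
    for i in range(n):
        b = np.exp(beta * h[i] * s)
        for k in neighbors[i]:
            W = np.exp(beta * coupling[(k, i)] * np.outer(s, s))
            b *= W @ msg[(k, i)]
        b /= b.sum()
        mags[i] = b @ s
    return mags

# Illustrative: a 4-node chain with uniform couplings.
print(bp_magnetizations({(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0},
                        h=np.zeros(4), beta=0.5))
```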
However, when the graph contains many short loops, belief propagation has a long-standing problem of poor accuracy, because when the neighbors are connected by short loops (e.g., as shown in Fig. 1(a)), the conditional independence clearly does not hold, resulting in an inaccurate marginal computation. Tensor networks--In contrast, tensor networks are particularly good at eliminating short loops using tensor contractions. The approach maps the computation of the partition function \(Z=\sum_{\mathbf{s}}e^{-\beta E(\mathbf{s})}\) to the contraction of a tensor network with the same shape (see e.g. [20; 24]). Local observables such as the marginal distribution of a node \(i\) can be computed in the same way, e.g. \(p_{i}(s_{i})=Z(s_{i})/\sum_{s=\pm 1}Z(s)\), where \(Z(s_{i})\) is the partition function with the configuration of node \(i\) fixed to \(s_{i}\). The picture is depicted in Fig. 1(b), where the circles represent diagonal tensors whose only two non-zero elements are \(e^{\beta h_{i}}\) (all indices equal to \(+1\)) and \(e^{-\beta h_{i}}\) (all indices equal to \(-1\)), and the squares are \(2\times 2\) matrices encoding the energy terms, \(\left(\begin{array}[]{cc}e^{\beta J_{ij}}&e^{-\beta J_{ij}}\\ e^{-\beta J_{ij}}&e^{\beta J_{ij}}\end{array}\right)\). It is clear from Fig. 1(b) that the computation of one node's marginal involves all the nodes in the graph. This raises the issue that the approach only works for small systems in general, as the exact contraction of the whole tensor network has computational complexity exponential in the treewidth of the graph [25]. For large graphs, one cannot even store the intermediate tensors during the contraction. To illustrate this limitation, consider a concrete example on an infinite Cayley tree (Bethe lattice), where BP is asymptotically exact. However, due to the long loops that cannot be contracted immediately, contraction of any two tensors would result in a tensor with a larger dimension, and the space complexity of exact tensor network contraction will inevitably grow to infinity. Another limitation of tensor networks in computing local observables is that directly contracting the overall tensor network only gives one local observable (e.g., the magnetization of one node), so one has to repeat the whole-tensor-network contraction for each node. Tensor Network Message Passing (TNMP)--In this work we propose to combine message passing and tensor networks in such a way that tensor network contractions are responsible for contracting short loops while message passing is responsible for long loops. We use an example to introduce our method. The marginal computation of the proposed method is depicted in Fig. 1(c): a shaded area \(\mathcal{N}_{i}\), which we term the _neighborhood_ of \(i\), is involved in computing the marginal probability \(p_{i}(s_{i})\). The tensors inside \(\mathcal{N}_{i}\), such as \(j\), \(k\), and \(l\), and the cavity tensors on the boundary, such as \(a\), \(b\), and \(c\), are contracted in computing \(p_{i}\), while the tensors (the gray ones in the figure) outside \(\mathcal{N}_{i}\) do not contribute to the computation of \(p_{i}\). The tensors on the boundary provide an environment for the contraction of \(\mathcal{N}_{i}\), i.e., the contraction result of all the tensors of the tensor network excluding those in \(\mathcal{N}_{i}\). However, computing the exact environment is equivalent to contracting exactly the whole tensor network.
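To make this mapping concrete, the following sketch (our own illustration) builds the diagonal node tensors and \(2\times 2\) bond matrices described above for a tiny three-node Ising chain and contracts them exactly with einsum:

```python
import numpy as np

beta, J, h = 0.5, 1.0, 0.2

def node_tensor(h_i, degree):
    """Diagonal node tensor: e^{beta h_i} on the all-(+1) entry,
    e^{-beta h_i} on the all-(-1) entry, zero elsewhere."""
    T = np.zeros((2,) * degree)
    T[(0,) * degree] = np.exp(beta * h_i)
    T[(1,) * degree] = np.exp(-beta * h_i)
    return T

# 2x2 bond matrix encoding e^{beta J s_i s_j}.
W = np.exp(beta * J * np.array([[1, -1], [-1, 1]]))

# Three-node chain 0-1-2: Z(s_0) obtained by contracting everything but s_0.
A, B, C = node_tensor(h, 1), node_tensor(h, 2), node_tensor(h, 1)
Z_s0 = np.einsum('a,ab,bc,cd,d->a', A, W, B, W, C)
p0 = Z_s0 / Z_s0.sum()          # marginal p_0(s_0) = Z(s_0) / sum_s Z(s)
print(p0)
```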
In this work, we introduce a rank-one approximation of the environment as a tensor product of all _cavity tensors_ \(p_{a\to\mathcal{N}_{i}}\), \(p_{b\to\mathcal{N}_{i}}\), and \(p_{c\to\mathcal{N}_{i}}\), which assumes independence between the cavity tensors. The approximation is exact if \(a\), \(b\), and \(c\) are not connected by tensors outside \(\mathcal{N}_{i}\), and is a good approximation if they are connected only by long loops via tensors outside \(\mathcal{N}_{i}\). We compute the cavity tensors iteratively, analogous to computing cavity messages in message-passing algorithms. An example is given in Fig. 1(d), where the cavity tensor \(p_{i\to\mathcal{N}_{l}}\) is computed using the tensor contraction of the tensors in \(\mathcal{N}_{i}\) excluding the tensors inside \(\mathcal{N}_{l}\); the tensors on the boundary enter as cavity tensors. To determine the neighborhood, we propose to generate \(\mathcal{N}_{i}\) by progressively including neighboring tensors, subject to the constraint that the minimum distance \(\min_{(ab)}d_{ab}(\partial\mathcal{N}_{i})\) between all pairs of tensors \((a,b)\) on the boundary \(\partial\mathcal{N}_{i}\) is not smaller than a given value \(R\). Here the distance \(d_{ab}(\partial\mathcal{N}_{i})\) between tensors \(a\) and \(b\) is defined as the length of the shortest path connecting \(a\) and \(b\) only via the tensors outside \(\mathcal{N}_{i}\) (e.g., via the gray part of the tensor network in Fig. 1(c) and (d)). The idea behind the neighborhood generation is intuitive: the longer \(d_{ab}\), the weaker the correlation between cavity tensors \(a\) and \(b\) when \(\mathcal{N}_{i}\) is removed from the graph. In the special case when the tensor network is a tree, for any choice of \(\mathcal{N}_{i}\), the tensors on the boundary are not connected without passing node \(i\), i.e., \(d_{ab}(\partial\mathcal{N}_{i})=\infty\), so we can generate \(\mathcal{N}_{i}\) simply using the direct neighbors of \(i\), and TNMP reduces to belief propagation in this case. For a general graph with many short loops and long loops, as depicted in Fig. 1(a), a larger minimum distance results in a more accurate computation of local variables, because the cavity tensors on the boundary have weaker correlations, while the generated \(\mathcal{N}_{i}\) is larger and the tensor contraction involves more tensors. If \(\mathcal{N}_{i}\) spans all the tensors in the network (i.e., \(|\mathcal{N}_{i}|=n\)), then \(d(\partial\mathcal{N}_{i})=\infty\) and the marginal computation is exact, which is actually the conventional tensor network method. In this sense, our algorithm generalizes both the tensor network method and belief propagation. The computational complexity of computing a local variable by contracting the neighborhood depends on the treewidth of the neighborhood graph. If the treewidth is small, we use exact tensor network contraction by finding a good contraction order [26; 27; 21; 22; 28]. If the treewidth is large, we employ an approximate contraction algorithm that works for arbitrary tensor networks [20]. We note that in computing local expectations, e.g., marginals (or magnetizations) for all variables, TNMP only needs to converge once, giving all consistent environments, which can be further used to compute local observables by contracting local tensor networks.
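The neighborhood construction can be sketched as follows (an illustration of the stated distance criterion; the greedy one-shell-at-a-time expansion rule is our own assumption, not a detail given in the text):

```python
import itertools
import networkx as nx

def boundary_distance(G, region):
    """Min length of a path between two boundary nodes of `region` whose
    intermediate nodes all lie outside the region (d_ab in the text)."""
    boundary = {u for u in region if any(v not in region for v in G[u])}
    outside = set(G) - set(region)
    best = float('inf')
    for a, b in itertools.combinations(boundary, 2):
        H = nx.Graph(G.subgraph(outside | {a, b}))
        if H.has_edge(a, b):
            H.remove_edge(a, b)            # a direct edge is not "via outside"
        if nx.has_path(H, a, b):
            best = min(best, nx.shortest_path_length(H, a, b))
    return best

def grow_neighborhood(G, i, R):
    """Grow N_i one BFS shell at a time until
    min_{(a,b)} d_ab(boundary of N_i) >= R."""
    region = {i} | set(G[i])
    while boundary_distance(G, region) < R and len(region) < len(G):
        region |= {v for u in region for v in G[u]}
    return region

G = nx.random_regular_graph(3, 30, seed=0)
print(sorted(grow_neighborhood(G, 0, R=3)))
```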
Numerical experiments--We evaluate our method using two examples. The first one is an Ising model on a synthetic graph containing both random links and cliques, as proposed in [29]. The generation process creates a random graph with average degree \(d\), then draws cliques with different sizes following a given distribution. The graph generated in this way is sparse globally and dense locally, hence it is challenging to both the canonical tensor network method and the belief propagation algorithm. In our evaluation, we randomly generated a network with \(n=1000\) nodes, random links with \(d=2\), and cliques with sizes ranging from \(2\) to \(9\). A sketch of the graph is shown in the inset of Fig. 2 (top). The graph is generated in such a way that it is still possible to compute the exact magnetizations \(M^{\text{exact}}\) using the contraction of the whole tensor network, with the help of dynamic slicing and multi-GPU computation, which are modern techniques developed very recently in large-scale quantum circuit simulations [21; 22]. In Fig. 2 (top), based on the exact results, we show the error of each node's magnetization given by our method, compared with the error given by the recent loopy message passing method of Cantwell and Newman [16], and by MCMC. In the figure, \(R\) on the x-axis controls the size of the neighborhood involved in the magnetization computation. In our method, \(R\) is the minimum distance \(\min_{(ab)}d_{ab}(\partial\mathcal{N}_{i})\) between all pairs of tensors on the boundary of the neighborhood. In Cantwell and Newman's method, \(R\) is the maximum length of the path under consideration between the neighbors of a node. From the figure, we can see that the error of our method decreases monotonically with \(R\). With \(R=0\), our method reduces to belief propagation and gives the same result. With \(R=2\), although the maximum neighborhood size of our method and that of [16] are the same, our method gives better results. The difference becomes larger with \(R=3\), \(4\), and \(5\). We note that the computational cost of Cantwell and Newman's method [16] (without employing Monte Carlo sampling) is exponential in the neighborhood size (as labeled in the figure), which increases rapidly with \(R\), so in the evaluation we restrict it to \(5\). For our method, the computational cost is related to the treewidth of the neighborhood, rather than directly to its size, and hence it works for a large \(R\). For contracting the neighborhood sub-tensor-networks, we use exact tensor network contraction for \(R<9\). For \(R\geq 9\) the neighborhood sub-tensor-networks are very large, so we use the CATN method [20] to contract them. We can see that our method works up to a maximum neighborhood size as large as \(666\), giving an error smaller than \(10^{-7}\). We also include the MCMC results with different numbers of update steps.

Figure 1: Pictorial illustration of belief propagation (BP) (a), tensor network contraction (b), and tensor network message passing (TNMP) (c) for computing the marginal probability of node \(i\) in an Ising model on a graph. In (b), (c), and (d), circles and squares are tensors converted from the Ising model. The shaded area in each figure denotes the part of the graph that is involved in computing the marginal of \(i\). In BP, it is computed using information from the direct neighbors; in exact tensor network contraction, it includes all tensors in the network; and in TNMP, it involves a pre-defined neighborhood \(\mathcal{N}_{i}\). Panel (d) illustrates the update of \(p_{i\rightarrow\mathcal{N}_{l}}\) in TNMP by contracting the tensors inside \(\mathcal{N}_{i}\backslash\mathcal{N}_{l}\) and the cavity tensors (the purple ones).
In each step, \(n\) spins are randomly chosen one by one and updated sequentially according to the Metropolis-Hastings algorithm. We see that the error decreases slowly with a larger number of steps, obtaining \(10^{-4}\) with \(10^{7}\) steps. In Fig. 2 (bottom) we evaluate our method using a spin glass model (with random \(\pm 1\) couplings and random fields) on the real-world electric power grid network [30] containing \(n=494\) nodes. For this graph, we obtain exact magnetizations and evaluate the error of the magnetization. For all values of \(R\), in TNMP we can always contract the sub-tensor networks exactly, even when the largest neighborhood contains \(250\) nodes. In the figure, we can see that TNMP gives much smaller errors than belief propagation, the method in [16], and MCMC with a large number of updating steps. Moreover, when \(R\) is larger than \(16\), the error of our method drops to the rounding error, indicating that all the loops have been included in the neighborhood, and TNMP is exact even when the maximum neighborhood size is smaller than the number of nodes, \(494\). _Discussions--_ We have introduced the TNMP method that combines tensor networks and message passing. Although we demonstrate our method using models of statistical mechanics, it immediately finds potential applications in a broad range of different fields of science. It is straightforward to apply our method to computing spectra of sparse matrices and to percolation problems [16]; it also applies to inference problems and community detection [31; 32] in real-world networks, and is particularly suitable for networks that are globally sparse but contain local motifs. Since our method does not restrict the local tensor contractions to real numbers, we can easily extend TNMP to the complex field for computing expectations in quantum systems [33], or even to other kinds of algebra, e.g., the tropical semiring [34]. We can also extend survey propagation (SP) [3; 35] with tensor networks for solving constraint satisfaction problems. Similar to BP, we can map SP to a local tensor network contraction and extend the local contraction to a large neighborhood. This approach would bring SP and one-step replica-symmetry-breaking methods to constraint satisfaction problems on loopy, large real-world graphs. Another interesting application is the decoding of quantum error correction codes [36]. Due to the degeneracies, decoding requires summing over all elements of the stabilizer group, and due to the commutation relations of stabilizers, the factor graph of a quantum code naturally contains many short loops, in contrast with classical error correction codes (such as low-density parity-check codes), which can be designed to contain almost no short loops. The BP decoder and its variants so far do not perform well on quantum error correction codes. We expect that our TNMP approach can alleviate the problem of short loops for message-passing decoders in surface codes and quantum low-density parity-check codes. A Python implementation and a Jupyter notebook tutorial of our algorithm are available at [37]. We thank Federico Ricci Tersenghi, Chuang Wang, and Haijun Zhou for helpful discussions on the manuscript, and Alec Kirkley for discussions and for sharing the code and data of Ref. [17]. P.Z. acknowledges the WIUCASICTP2022 grant and Projects 11747601 and 11975294 of NSFC.
2308.01888
FROD: Robust Object Detection for Free
Object detection is a vital task in computer vision and has become an integral component of numerous critical systems. However, state-of-the-art object detectors, similar to their classification counterparts, are susceptible to small adversarial perturbations that can significantly alter their normal behavior. Unlike classification, the robustness of object detectors has not been thoroughly explored. In this work, we take the initial step towards bridging the gap between the robustness of classification and object detection by leveraging adversarially trained classification models. Merely utilizing adversarially trained models as backbones for object detection does not result in robustness. We propose effective modifications to the classification-based backbone to instill robustness in object detection without incurring any computational overhead. To further enhance the robustness achieved by the proposed modified backbone, we introduce two lightweight components: imitation loss and delayed adversarial training. Extensive experiments on the MS-COCO and Pascal VOC datasets are conducted to demonstrate the effectiveness of our proposed approach.
Muhammad Awais, Weiming Zhuang, Lingjuan Lyu, Sung-Ho Bae
2023-08-03T17:31:22Z
http://arxiv.org/abs/2308.01888v1
# FROD: Robust Object Detection for Free ###### Abstract Object detection is a critical task in computer vision and has become an integral component of numerous critical systems. However, state-of-the-art object detectors, similar to their classification counterparts, are susceptible to small adversarial perturbations that can significantly alter their normal behavior. Unlike classification, the robustness of object detectors has not been thoroughly explored. In this work, we take the initial step towards bridging the gap between the robustness of classification and object detection by leveraging adversarially trained classification models. Merely utilizing adversarially trained models as backbones for object detection does not result in robustness. We propose effective modifications to the classification-based backbone to instill robustness in object detection without incurring any computational overhead. To further enhance the robustness achieved by the proposed modified backbone, we introduce two lightweight components: imitation loss and delayed adversarial training. Extensive experiments on the MS-COCO and Pascal VOC datasets are conducted to demonstrate the effectiveness of our proposed approach. ## 1 Introduction Deep learning models have demonstrated remarkable performance in various computer vision tasks, including image recognition (He et al. (2016); Krizhevsky et al. (2017); Xie et al. (2020)), object detection (Girshick et al. (2015); Ren et al. (2015); Redmon et al. (2016); Kang et al. (2022)), and semantic segmentation (Chen et al. (2017); Minaee et al. (2022)). Despite their achievements, deep learning models are susceptible to adversarial attacks, which involve subtle and imperceptible alterations in the input space (Biggio et al. (2013); Szegedy et al. (2013); Carlini and Wagner (2017)). These attacks have raised significant concerns regarding security and robustness (Brown et al. (2017); Ma et al. (2021)). While considerable efforts have been made to counteract these attacks, the majority of research has primarily focused on defending image classification models. Object detection plays a vital role in numerous real-world applications, including autonomous vehicles and tracking systems (Zou et al. (2019)). Its objective is to accurately locate and classify multiple objects of various scales within an input image. Extensive research efforts have been dedicated to enhancing object detection models, resulting in notable advancements (Ren et al. (2015); Redmon et al. (2016); Lin et al. (2017); Tan et al. (2020); Carion et al. (2020)). However, despite their state-of-the-art performance, these models are susceptible to adversarial attacks that can undermine their reliability (Xie et al. (2017); Song et al. (2018)). These attacks have the potential to cause erroneous object localization and recognition, leading to significant challenges when deploying such models in real-world scenarios (Song et al. (2018); Liu et al. (2018); Wu et al. (2020)). However, in contrast to the plethora of defense methods developed to counter attacks on image classification models (Madry et al. (2017); Uesato et al. (2019); Wong et al. (2020); Shafahi et al. (2019)), the research on defending against adversarial attacks in object detection remains relatively limited (Zhang and Wang (2019); Chen et al. (2021)). These existing works propose modified versions of adversarial training specifically tailored for object detection. For example, Chen et al. 
(2021) introduced improved calibration techniques for class-wise and object-wise losses in adversarial training. However, these methods require training robust object detectors through computationally intensive adversarial training, acquiring robustness from scratch. Additionally, these approaches do not directly leverage the advancements made in enhancing the robustness of classification models. In this work, our objective is to leverage pre-trained adversarially robust models from image classification to enhance the robustness of object detection, while maintaining a minimal computational overhead compared to standard training. Specifically, we propose replacing the standard backbone of object detection with a robust backbone obtained from an adversarially trained classification model. However, a straightforward switch of backbones does not yield the desired robustness, as standard training leads to catastrophic forgetting of the backbone's robustness. To address this issue, we introduce Free Robust Object Detection (FROD), a method that incorporates simple modifications based on robust backbones. We refer to our approach as "free" because it instills robustness without incurring any additional computational cost compared to standard training. To further enhance the robustness of our method, we introduce two new components during the training process: imitation loss and delayed adversarial training. This approach, called FROD-DAT, is computationally efficient, as the adversarial training is performed only at the end of the training process using a single-step adversary. We conducted extensive qualitative and quantitative experiments to evaluate the effectiveness of our method in instilling robustness and improving clean performance. The results demonstrate that our method achieves SOTA-comparable robustness while achieving significantly higher clean mAP. ## 2 Related Work **Adversarial Attacks and Robustness**. Since the seminal work on the adversarial vulnerability of neural networks by Szegedy et al. (2013), extensive research has focused on constructing adversarial attacks (Goodfellow et al. (2014); Moosavi-Dezfooli et al. (2016); Papernot et al. (2017)) and developing defenses against them (Madry et al. (2017); Katz et al. (2017); Weng et al. (2018); Muhammad and Bae (2022)). However, most of these efforts have primarily targeted general classification models. Notable attack methods include the Fast Gradient Sign Method (FGSM) by Goodfellow et al. (2014) and Projected Gradient Descent (PGD) by Madry et al. (2017), widely used for evaluating white-box robustness. Adversarial training by Madry et al. (2017), on the other hand, represents a generic approach that offers effectiveness against various adversarial attacks (Athalye et al. (2018)).

Figure 1: Overview of our proposed method. We leverage classification-based pre-trained models as backbones (blue) and modify them for effective robustness transfer. The fixed robust model provides robust features for imitation during training. The selection module (sel-mod) determines the training modes based on the chosen method (FROD or FROD-DAT).

**Adversarial Robustness for Object Detection**. Significant advancements have been made in object detection (Ren et al. (2015); Redmon et al. (2016); Lin et al. (2017); Tan et al. (2020); Carion et al. (2020)), with notable architectures such as single-stage (Lin et al. (2017)) and two-stage (Ren et al. (2015)) detectors.
However, similar to classification models, state-of-the-art object detection models have been found to be vulnerable to adversarial perturbations (Xie et al. (2017); Song et al. (2018); Liu et al. (2018); Wu et al. (2020)). In fact, slight modifications in physical environments can deceive deployed vision systems (Song et al. (2018)). Our approach addresses this vulnerability and is applicable to both single-stage and two-stage object detection architectures. Works focusing on adversarial robustness for object detection are relatively scarce. Some recent papers have introduced modified versions of adversarial training specifically tailored for object detection (Zhang and Wang (2019); Chen et al. (2021)). Zhang and Wang (2019) formulated object detection as a multi-task learning problem, emphasizing the misalignment between classification and localization loss gradients. They proposed MTD-A, which incorporated a task-oriented domain constraint based on the maximization of either loss. To expedite the training, they employed a FastAT-like strategy. On the other hand, Chen et al. (2021) identified limitations in MTD, particularly its failure to account for the multi-class and multi-object nature of object detection. They addressed the issue by decomposing the loss based on objects, applying clipping, and normalizing the loss of each class for balanced influence, known as Class-Wise Adversarial Training (CWAT). In contrast, our approach significantly differs from these prior works, as we build upon adversarially pre-trained classifiers. Nonetheless, our approach shares some similarities with these approaches, as we also employ adversarial training within our two-phase framework. **Robust Features in Classification.** In the realm of classification, recent studies have highlighted the divergent nature of features learned by adversarially trained models compared to their standard counterparts (Zhu et al. (2021); Ilyas et al. (2019); Santurkar et al. (2019); Tsipras et al. (2018); Engstrom et al. (2019); Dong et al. (2022)), thereby demonstrating the potential for feature transferability. Existing approaches for feature transfer include transfer-learning-based methods (Shafahi et al. (2019); Hendrycks et al. (2019)) and distillation-based techniques (Awais et al. (2021); Shafahi et al. (2019); Zhang et al. (2019); Wong et al. (2020); Awais et al. (2021)). While our work differentiates itself by focusing on leveraging classification models to enhance object detection robustness, it draws inspiration from these prior works. Our method is particularly useful as countless general classification-based pre-trained models are readily available through numerous open-source projects such as HuggingFace. ## 3 Problem Formulation An object detector maps an input image \(x\) to a set of \(K\) objects, each represented by a bounding box \(b_{k}\) and a probability vector \(p_{k}\) spanning \(C\) classes: \(f(x){\rightarrow}\{p_{k},b_{k}\}_{k=1}^{K}\). After that, it uses Non-Maximum Suppression (NMS) (Rosenfeld and Thurston (1971)) to remove redundant detection boxes.

| **Method** | **Clean** | FGSM \(A_{cls}\) | FGSM \(A_{reg}\) | PGD-10 \(A_{cls}\) | PGD-10 \(A_{reg}\) | Add. Fwd | Add. Bkwd | **Comp. Overhead** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **STD** | **0.752** | 0.162 | 0.25 | 0.012 | 0.043 | 0 | 0 | 1\(\times\) |
| **MTD-fast** (Zhu et al. (2021)) | 0.466 | 0.311 | 0.418 | 0.221 | 0.351 | 1 | 1 | 2.78\(\times\) |
| **TOAT-6** (Chen et al. (2021)) | 0.430 | 0.300 | 0.397 | 0.218 | 0.334 | 7 | 7 | 13.47\(\times\) |
| **CWAT** (Chen et al. (2021)) | 0.513 | 0.325 | 0.433 | 0.224 | 0.367 | 7 | 7 | 13.47\(\times\) |
| **FROD** (Ours) | 0.671 | 0.498 | 0.581 | 0.202 | 0.358 | 0 | 0 | 1\(\times\) |
| **FROD-DAT** (Ours) | 0.648 | **0.534** | **0.593** | **0.252** | **0.419** | 1 | 1 | 1.91\(\times\) |

Table 1: A comparison of adversarial robustness and clean mAP on Pascal VOC for RetinaNet trained with our method and previous approaches. The computational overhead compared to standard training is also shown, indicated by the additional forward (Fwd) and backward (Bkwd) steps required by each method and a training-time comparison. The computational overhead is estimated empirically.
An object detector contains a backbone parameterized by \(\theta\) and classification and localization heads\({}^{3}\) parameterized by \(\omega\). Object detection aims to estimate \(\{\theta,\omega\}\) by minimizing a loss: \[\min_{\theta,\omega}\mathcal{L}(f_{\theta,\omega}(x),\{y_{k},b_{k}\}), \tag{1}\] where \(y_{k}\) is the class label for object \(k\) and \(\mathcal{L}(\cdot)\) is the loss function. Footnote 3: Two-stage detectors contain only a classification head, but single-stage detectors contain an additional localization head. We formulate the problem based on single-stage detectors, but our method works on both single-stage and two-stage detectors. The backbone is for extracting features, and its parameters \(\theta\) are commonly initialized from a model pre-trained on a large-scale classification dataset like ImageNet (Deng et al. (2009)). The loss function \(\mathcal{L}(\cdot)\) is a combination of a classification loss \(\mathcal{L}_{cls}\) and a localization loss \(\mathcal{L}_{loc}\); we then further formulate the objective as follows: \[\min_{\theta,\omega}\mathcal{L}_{cls}(f_{\theta,\omega}(x),y_{k})+\mathcal{L}_{loc}(f_{\theta,\omega}(x),b_{k}). \tag{2}\] We further consider the robustness of object detectors. The robustness of an object detector is measured by the performance of the model on a perturbed test set. For adversarial robustness, the perturbation \(\delta\) is found by iteratively solving the following objective: \(\delta=\arg\max_{\|\delta\|_{p}\leq\epsilon}\mathcal{L}(x+\delta,y,b)\), where \(\epsilon\) is the perturbation budget and \(\mathcal{L}\) can be the classification loss, the localization loss, or a combination of both. The Fast Gradient Sign Method (FGSM) (Goodfellow et al. (2014)) approximated it for \(\ell_{\infty}\) with the following closed form: \(\delta_{FGSM}=\epsilon\cdot\text{sign}(\nabla_{x}\mathcal{L})\). A stronger, standard evaluation attack, the Projected Gradient Descent (PGD) attack (Madry et al. (2017)), is based on an iterative solution of this objective. ## 4 Methodology In this section, we begin by discussing effective strategies for utilizing the robust model in the object detection framework. Next, we introduce an imitation loss mechanism designed to preserve the robustness of the object detector when using a fixed robust backbone. Finally, we present a lightweight technique to enhance the overall robustness of the object detector through the incorporation of delayed adversarial training.
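For reference, a minimal PyTorch-style sketch of the two attacks defined above (our own illustration; `loss_fn` is assumed to return the scalar detection loss, and the random start and pixel-range clipping used in practice are omitted):

```python
import torch

def fgsm(x, loss_fn, eps):
    """FGSM: delta = eps * sign(grad_x L), a single-step l_inf attack."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(x).backward()
    return (eps * x.grad.sign()).detach()

def pgd(x, loss_fn, eps, alpha, steps=10):
    """PGD: iterate small FGSM-like steps of size alpha,
    projecting back into the eps-ball after each step."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta = delta + fgsm(x + delta, loss_fn, alpha)
        delta = delta.clamp(-eps, eps)             # l_inf projection
    return delta
```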
Finally, we present a lightweight technique to enhance the overall robustness of the object detector through the incorporation of delayed adversarial training.

### The Case of Catastrophic Forgetting of Robustness

Most modern object detectors employ features extracted from a backbone pre-trained on classification datasets. These features are then fed to small classification and regression networks. The backbone plays a crucial role, as the classification and localization networks share the same backbone and use its extracted features. Existing object detection methods mostly initialize the backbone with a pre-trained model (on a dataset like ImageNet; Deng et al. (2009)) and then train the model on object detection datasets. We hypothesize that adopting a pre-trained _robust_ backbone could enhance the robustness of object detection models. To test this hypothesis, a basic question is: can we achieve robustness by simply switching a normally trained backbone with a robust counterpart? To answer this question, we perform an experiment with RetinaNet: we switch the standard backbone with a pre-trained robust counterpart while keeping all other settings intact (three blocks retrained and BatchNorm frozen 4). The results, shown in Table 3(a), demonstrate an interesting case of catastrophic forgetting of robustness.

Footnote 4: Reference code: [https://github.com/pytorch/vision/tree/main/](https://github.com/pytorch/vision/tree/main/) references/detection

### Effective Utilization of Robust Backbone

To efficiently leverage robust pre-trained backbones, we introduce two light modifications: retraining fewer layers, and updating the batch normalization (BatchNorm) layers of the backbone on the new dataset. Importantly, these two modifications do not increase the training or inference time of a model. The details of these two modifications are as follows.

**Retraining of Layers.** This problem has been studied for classification (Li and Hoiem (2017); Shafahi et al. (2019)), and previous results show that the backbone forgets its robustness when retrained on standard examples. The number of backbone layers retrained is vital for this problem (Shafahi et al. (2019)). To understand the role of layers in the preservation of robustness, we divided the backbone layers into four blocks, following the original ResNet (He et al. (2016)) configuration. Then, we performed an experiment where we progressively increased the number of retrained blocks in the backbone, from no block retrained (0) to all blocks retrained (4). The details of the settings are in Section 5.1. These experiments are performed with a RetinaNet trained on the Pascal VOC dataset. The results, shown in Figure 2(a), suggest a clear trend for both robustness and clean mAP: the more blocks retrained, the lower the robustness and the higher the clean accuracy. Based on this empirical study, we conclude that retraining zero or one block is the optimal configuration for preserving robustness, and that the number of retrained blocks acts as a trade-off knob between robustness and clean mAP.

**Updating BatchNorm.** Batch Normalization (BatchNorm) has been shown to play a significant role in adversarial robustness (Benz et al. (2020); Xie et al. (2020); Muhammad et al. (2023); Awais et al. (2020, 2020)).
BatchNorm keeps track of batch statistics during training to estimate the population statistics. These estimates are employed during inference. It has been shown that these statistics play a crucial role in the overall robustness of a model. To understand the role of batch statistics of the backbone in object detection, we perform an experiment with frozen and non-frozen BatchNorm layers. For this experiment, we selected the default settings stated in Section 5.1. Table 3(b) shows that updating BatchNorm improves the robustness of object detection significantly. Based on these results, we propose to update batch statistics since they are data-dependent. This is a crucial insight for the preservation of robustness as most object detection methods freeze BatchNorm in the backbone(Wu et al. (2019)). ### Imitation of Robust Features The preceding section demonstrates that replacing a standard backbone with a robust one along with our proposed approach can enhance the overall robustness of object detectors. However, preserving robustness entirely by freezing certain backbone layers poses a challenge, as it restricts the ability of the backbone (trained on classification) to adopt new concepts and knowledge from the new object detection dataset. To address this issue, we propose a new approach using an imitation loss, as illustrated in Figure 1. Our approach allows more flexibility in the backbone. \begin{table} \end{table} Table 2: A comparison of adversarial robustness and clean mAP on MS-COCO for object detectors trained with various robustness algorithms. The adversarial training is performed with a perturbation budget of \(\epsilon=8\). \begin{table} \end{table} Table 3: (a) Does naively switching a standard backbone with a robust backbone yield any robustness? The table compares the normal mAP and PGD10-cls mAP robustness of a model trained with a standard and robust backbone. (b) The impact of updating the BatchNorm layer on the robustness of an object detector. By maintaining a frozen copy of the pre-trained backbone, we can leverage its robust features to regularize the updates of the backbone model. The formulation of the imitation loss is defined as follows, \(\mathcal{L}_{imi}=\sum_{l\in L}\|f_{\theta}^{l}(x)-f_{\theta^{\prime}}^{l}(x)\|_{p}\), where \(l\) is a block in the backbone, \(f_{\theta}\) is the backbone, \(f_{\theta^{\prime}}\) is the fixed backbone, and \(p\) is the norm used. ### Efficient Delayed Adversarial Training The previous sections present approaches for the utilization of the pre-trained backbone for free robustness. However, free robustness is limited as robust backbones are pre-trained on a different dataset for a different task to solve an entirely different problem. In this section, we further study how to utilize a pre-trained robust backbone more effectively with adversarial training while maintaining the efficiency of our method. To this end, we propose a two-phase approach consisting of regular training and a single-step, delayed adversarial training. Our method first trains object detectors on normal examples for \(t_{1}\) epochs. Second, the training is switched to the single-step-based adversarial examples (Wong et al. (2020)) for \(t_{2}\) epochs. This mechanism results in a more robust model at the cost of significantly less computation compared with other adversarial training methods. 
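To make this regularizer concrete before stating the training objectives, here is a minimal PyTorch sketch of \(\mathcal{L}_{imi}\); the per-block feature-list interface, the function name, and the default choice \(p=2\) are illustrative assumptions of the sketch rather than details fixed by the paper.

```python
import torch

def imitation_loss(backbone, frozen_backbone, x, p=2):
    # L_imi = sum over blocks l of || f_theta^l(x) - f_theta'^l(x) ||_p,
    # where theta' is a fixed copy of the robust pre-trained backbone.
    # Both callables are assumed (in this sketch) to return a list of
    # per-block feature maps for the input batch x.
    feats = backbone(x)              # trainable pass: gradients reach theta
    with torch.no_grad():            # frozen reference: theta' is not updated
        ref_feats = frozen_backbone(x)
    return sum(torch.norm(f - r, p=p) for f, r in zip(feats, ref_feats))
```

Because the frozen copy is evaluated under `torch.no_grad()`, the penalty only constrains the trainable backbone, matching the role of \(f_{\theta^{\prime}}\) in \(\mathcal{L}_{imi}\).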
The objective for the standard training phase is as follows,

\[\arg\min_{\theta,\omega}\left[\mathcal{L}_{cls}(x,y;\theta,\omega)+\mathcal{L}_{loc}(x,b;\theta,\omega)+\lambda\cdot\mathcal{L}_{imi}(x,\theta,\theta^{\prime})\right]. \tag{3}\]

Similarly, the objective function for the adversarial training phase is as follows,

\[\arg\min_{\theta,\omega}\left[\max_{\|\delta\|_{p}\leq\epsilon}\mathcal{L}_{cls}(x+\delta,y;\theta,\omega)+\mathcal{L}_{loc}(x+\delta,b;\theta,\omega)+\lambda\cdot\mathcal{L}_{imi}(x+\delta,\theta,\theta^{\prime})\right], \tag{4}\]

where \(\delta\) is the adversarial perturbation found by maximizing the classification loss and \(\epsilon\) is the perturbation budget. We utilized the single-step method proposed by Wong et al. (2020).

## 5 Experiments

### Experimental Settings

We evaluate our approach using two widely used object detection datasets: Pascal VOC (Everingham et al. (2010)) and MS-COCO (Lin et al. (2014)). For Pascal VOC, we adopt the standard "07+12" protocol for training. This protocol involves using approximately 16,000 images from the combined trainval sets of the 2007 and 2012 datasets, covering 20 different object classes. The test set consists of 4,952 images from the 2007 dataset. As for the MS-COCO dataset, we utilize the train + valminusminival 2014 dataset, which contains around 120,000 images spanning 80 diverse object classes. For the test set, we employ the minival2014 subset, comprising approximately 5,000 images. Our evaluation metric for Pascal VOC is the mean average precision (mAP) with an Intersection-over-Union (IoU) threshold of 0.5. For MS-COCO, we report the mAP at IoU thresholds ranging from 0.5 to 0.95, as per the convention established by Lin et al. (2014). To represent single-stage detectors, we employ RetinaNet (Lin et al. (2017)) with a ResNet50 backbone, while for two-stage detectors, we utilize Faster R-CNN (Ren et al. (2015)) with a ResNet50 backbone.

Figure 2: (a) A comparison of robustness and clean mAP as the number of retrained blocks increases. Robustness decreases as the number of retrained blocks in the backbone increases. (b) The impact of starting adversarial training at different epochs.

All models in our experiments are trained using Stochastic Gradient Descent (SGD) with a momentum of 0.9, weight decay of 1e-4, and a batch size of 4. The initial learning rate is set to 0.04, and it is adjusted based on the batch size and the number of GPUs following the approach by Goyal et al. (2017). For the Pascal VOC dataset, the models are trained for 50 epochs, while for the MS-COCO dataset, the training is performed for 26 epochs. We utilize the training script provided by Torchvision, keeping the original settings unchanged. To evaluate the robustness of the models, we conduct experiments with an adversarial perturbation budget of \(\epsilon=8/255\). We employ a PGD10-based attack constructed from the classification loss for general robustness evaluation. Additionally, we report the robustness based on classification and regression losses. The following methods are included for comparison. **STD**: Object detector trained with standard training on clean images. **MTD**: Object detector trained using the robustness algorithm proposed by Zhang and Wang (2019). **TOAT**: Object detector trained using the robustness algorithm proposed by Chen et al. (2021). **CWAT**: Object detector trained using the robustness algorithm proposed by Chen et al. (2021). **FROD**: Free version of our proposed method.
**FROD-DAT**: Our method with imitation loss and delayed adversarial training.

### Main Results

**Comparison with State-of-the-Art.** To demonstrate the effectiveness of our method, we conducted a comparison with previous State-of-the-Art (SOTA) robust object detection methods on the Pascal VOC and MS-COCO datasets. We first present the results for the Pascal VOC dataset. Table 1 provides a comprehensive comparison between our method and the previous SOTA methods. It is evident from the table that our free method (FROD) achieves comparable robustness and clean performance to the SOTA methods, without incurring any additional computational cost. Specifically, our method achieves a robustness of 0.202 (PGD-10 \(A_{cls}\)), comparable to CWAT (0.224) and TOAT (0.218), at no extra training cost. Furthermore, our method demonstrates a significantly higher clean mAP of 0.671, surpassing the values of 0.513 and 0.430 obtained by the previous methods. Moreover, our FROD-DAT method delivers even stronger results in terms of clean and robust mAP. Notably, FROD-DAT achieves a clean mAP of 0.648, surpassing the previous SOTA value of 0.513, while simultaneously achieving a robust mAP of 0.252, outperforming the previous SOTA value of 0.224. This significant improvement in performance highlights the efficacy of our FROD-DAT method. We further evaluate and compare our method on the challenging MS-COCO dataset (Lin et al., 2014), which provides a more realistic representation of real-world scenarios. The results of our method are presented in Table 2. Unlike previous works, we report the results of our approach at IoU thresholds of 0.5 and 0.5:0.95 to provide a comprehensive evaluation.

Figure 3: Emergence of human-aligned patterns. Comparison of adversarial distortions crafted for a normal model versus our proposed approach. The distortions generated for the normally trained model are barely perceptible. In contrast, the adversarial attack applied to our method reveals the emergence of human-aligned patterns.

Similar to the observations on the Pascal VOC dataset, our free method (FROD) demonstrates robustness that is comparable to the previous SOTA methods on MS-COCO. For instance, FROD achieves a robustness of 0.122, comparable to CWAT (0.142) and MTD (0.130). Furthermore, FROD exhibits better clean performance, with a clean mAP of 0.249 compared to 0.237 and 0.190 for CWAT and MTD, respectively. This demonstrates that our method achieves comparable performance to the SOTA methods without any additional computational cost. Additionally, our FROD-DAT method achieves SOTA-level robustness, with a robust mAP of 0.153 compared to 0.142 and 0.130 for CWAT and MTD, respectively. Remarkably, our FROD-DAT method maintains computational efficiency compared to full adversarial training, making it an attractive choice for robust object detection tasks.

**Computational Complexity.** We evaluate the computational effectiveness of our method by comparing its complexity and time-per-epoch with previous approaches, as shown in Table 1. First, our FROD method does not introduce any additional steps compared to standard training (STD). This ensures that the computational overhead is minimal, allowing for efficient training without compromising performance. Second, FROD-DAT, our improved method, incorporates adversarial training for less than half of the total training epochs. By reducing the frequency of adversarial training, we strike a balance between robustness and computational efficiency.
Therefore, our method achieves competitive performance while maintaining computational effectiveness, making it a practical and efficient choice for robust object detection tasks.

**Single-Stage vs Two-Stage Detectors.** Our method is versatile and can be applied to any object detector that utilizes a pre-trained backbone. To demonstrate this, we conducted experiments using both a single-stage detector (RetinaNet) and a two-stage detector (Faster R-CNN) on the Pascal VOC dataset. The results are summarized in Table 5(b). The table clearly illustrates that our method effectively enhances the robustness of both single-stage and two-stage detectors.

**Defense Against Transferred Attacks.** We further test the effectiveness of our method against transferred attacks. Transferred attacks are a type of black-box attack constructed by utilizing another model as a proxy (Goodfellow et al. (2014); Liu et al. (2016)). For this purpose, we constructed attacks on Faster R-CNN and tested them against RetinaNet. The results are shown in Table 4(b). Our method is effective against these attacks.

### Qualitative Results: Visualizing Adversarial Hallucinations

**Emergence of Human-Aligned Patterns.** Previous research has demonstrated the presence of human-aligned and geometric patterns in adversarial examples crafted for robust models (Shafahi et al. (2019); Akhtar et al. (2021)). We observe similar intriguing behavior for our proposed method. In Figure 3, we provide a visual comparison of attacked images between a model trained with the standard method and one trained with our proposed approach. The adversarial perturbation crafted for the normally trained model is scarcely perceptible, aligning with the objective of staying hidden. However, when targeting a model trained with our method, the resulting perturbations exhibit visible patterns reminiscent of objects found in the training data, such as televisions and humans. Despite these discernible patterns, our method successfully defends against such attacks, showcasing its robustness and effectiveness.

Figure 4: Visual comparison of our method with standard training against a PGD-based adversarial attack. The adversarial attack on the normal model leads to hallucinations of non-existent objects. In contrast, our method demonstrates enhanced robustness by mitigating this hallucination effect.

**Visual Comparison.** In order to gain deeper insights into the detection results, we conduct a visual comparison between the outcomes of our proposed method and standard training, as depicted in Figure 4. As illustrated in the figure, the standard model performs well for normal inputs. However, when confronted with adversarial attacks, the standard model exhibits a susceptibility to hallucinating non-existent objects. In stark contrast, our method effectively rectifies these hallucinations and restores the model's sanity, thereby showcasing its ability to defend against adversarial attacks.

### Ablation Studies

**Sensitivity of Imitation Hyper-parameter.** In this section, we empirically investigate the role of our proposed imitation loss in preserving robustness while allowing flexibility in the backbone. An essential factor of our imitation loss is the hyperparameter \(\lambda\), as defined in Equation 3, which controls the weight of the imitation term in the overall loss. To understand its sensitivity, we experiment with different values. The results are shown in Table 5(a).
The table shows the relative stability of our method across a range of hyperparameter values.

**Switching to Adversarial Training.** To set the switch point for adversarial training, we performed an experiment where we switched to adversarial training at several different points in FROD-based training. As shown in Figure 2(b), robustness first improves steadily as the start of adversarial training is delayed: it improves for starting epochs from 2 up to 35 and decreases beyond that point, while the gains after epoch 30 are relatively marginal. Therefore, we select epoch 30 as the starting point for adversarial training.

**Misalignment of Tasks in Object Detection.** Adversarial training crafts adversarial examples using backpropagation with respect to a loss function. Since object detection has a multi-task learning objective, we need to understand the effect of different loss terms. Previous work (Zhang and Wang (2019)) has shown a misalignment between the loss gradients used to craft adversarial examples. Hence, we performed experiments with different loss terms to craft FGSM perturbations, in order to understand the role of each loss function. The results of these experiments are shown in Table 4(a). The results show that FGSM perturbation with only the classification loss term is better than using the regression loss or both the classification and regression loss terms. This could be because the backbone is already adversarially pre-trained with examples crafted using only a classification loss.

\begin{table} \end{table} Table 4: (a) The impact of different loss terms used to craft FGSM perturbations. (b) The effectiveness of our method against transferred attacks.

\begin{table} \end{table} Table 5: (a) The impact of increasing the imitation loss hyperparameter \(\lambda\) on the robustness and clean mAP. (b) Evaluation of our methods for single-stage and two-stage detectors.

## 6 Conclusion

In this work, we have presented an approach that leverages robust pre-trained classification models to instill adversarial robustness in object detection models. We have found that simply utilizing a classification-based robust pre-trained backbone does not inherently confer robustness in object detection. To overcome this limitation, we have proposed effective modifications to these backbones to harness their robustness for object detection, resulting in robust object detection without incurring extra overhead. To further improve robustness, we have introduced two key enhancements, namely the imitation loss and delayed adversarial training. Through extensive experiments on popular datasets like PASCAL-VOC and MS-COCO, we have demonstrated the efficacy of our method in achieving robust object detection.
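To make the two-phase procedure of Section 4.4 concrete, the following hedged PyTorch sketch combines standard training with the imitation loss (Equation 3) and the delayed single-step adversarial phase (Equation 4). Here `detection_loss` (standing for \(\mathcal{L}_{cls}+\mathcal{L}_{loc}\)), `imi_loss`, and `loader` are placeholders for the reader's own model and data pipeline; the defaults \(t_{1}=30\), \(t_{1}+t_{2}=50\), and \(\epsilon=8/255\) mirror the Pascal VOC settings reported above, and the one-step perturbation follows the fast FGSM-style update of Wong et al. (2020). This is a sketch under those assumptions, not the authors' released implementation.

```python
import torch

def fgsm_perturbation(model, detection_loss, x, targets, eps):
    # Single-step l_inf perturbation: delta = eps * sign(grad_x L).
    x_adv = x.clone().detach().requires_grad_(True)
    detection_loss(model(x_adv), targets).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def train_frod_dat(model, loader, detection_loss, imi_loss, optimizer,
                   t1=30, t2=20, eps=8 / 255, lam=1.0):
    # Phase 1 (epoch < t1): standard training with imitation loss, Eq. (3).
    # Phase 2 (epoch >= t1): delayed single-step adversarial training, Eq. (4).
    for epoch in range(t1 + t2):
        for x, targets in loader:
            if epoch >= t1:
                x = fgsm_perturbation(model, detection_loss, x, targets, eps)
            loss = detection_loss(model(x), targets) + lam * imi_loss(x)
            optimizer.zero_grad()   # also clears gradients from the attack pass
            loss.backward()
            optimizer.step()
```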
2303.12238
DG-Trans: Dual-level Graph Transformer for Spatiotemporal Incident Impact Prediction on Traffic Networks
The prompt estimation of traffic incident impacts can guide commuters in their trip planning and improve the resilience of transportation agencies' decision-making. However, it is more challenging than node-level and graph-level forecasting tasks, as it requires extracting the anomalous subgraph or sub-time-series from dynamic graphs. In this paper, we propose DG-Trans, a novel traffic incident impact prediction framework, to foresee the impact of traffic incidents through dynamic graph learning. The proposed framework contains a dual-level spatial transformer and an importance-score-based temporal transformer, and its performance is validated on two newly constructed benchmark datasets. The dual-level spatial transformer removes unnecessary edges between nodes to isolate the affected subgraph from the other nodes. Meanwhile, the importance-score-based temporal transformer identifies abnormal changes in node features, causing the predictions to rely more on measurement changes after the incident occurs. Therefore, DG-Trans is equipped with the dual abilities of extracting spatiotemporal dependencies and identifying anomalous nodes affected by incidents, while removing noise introduced by benign nodes. Extensive experiments on real-world datasets verify that DG-Trans outperforms the existing state-of-the-art methods, especially in extracting spatiotemporal dependency patterns and predicting traffic incident impacts. It offers promising potential for traffic incident management systems.
Yanshen Sun, Kaiqun Fu, Chang-Tien Lu
2023-03-21T23:44:09Z
http://arxiv.org/abs/2303.12238v1
DG-Trans: Dual-level Graph Transformer for Spatiotemporal Incident Impact Prediction on Traffic Networks

###### Abstract

The prompt estimation of traffic incident impacts can guide commuters in their trip planning and improve the resilience of transportation agencies' decision-making. However, it is more challenging than node-level and graph-level forecasting tasks, as it requires extracting the anomalous subgraph or sub-time-series from dynamic graphs. In this paper, we propose DG-Trans, a novel traffic incident impact prediction framework, to foresee the impact of traffic incidents through dynamic graph learning. The proposed framework contains a dual-level spatial transformer and an importance-score-based temporal transformer, and its performance is validated on two newly constructed benchmark datasets. The dual-level spatial transformer removes unnecessary edges between nodes to isolate the affected subgraph from the other nodes. Meanwhile, the importance-score-based temporal transformer identifies abnormal changes in node features, causing the predictions to rely more on measurement changes after the incident occurs. Therefore, DG-Trans is equipped with the dual abilities of extracting spatiotemporal dependencies and identifying anomalous nodes affected by incidents, while removing noise introduced by benign nodes. Extensive experiments on real-world datasets verify that DG-Trans outperforms the existing state-of-the-art methods, especially in extracting spatiotemporal dependency patterns and predicting traffic incident impacts. It offers promising potential for traffic incident management systems.

spatiotemporal data mining, intelligent transportation systems, incident impact forecasting, transformer, anomaly detection.

## I Introduction

The efficient detection of non-recurring congestion caused by traffic incidents is of great importance for modern intelligent transportation systems (ITS). Furthermore, the estimation of the impact of such incidents is crucial due to the potential social and economic loss [1]. However, effectively pinpointing traffic incidents is challenging due to high data volatility, the lack of incident labels, and the demands of ultra-low inference times in modern ITS. Despite this difficulty, traffic incident detection and diagnosis is an active research discipline, including approaches related to distributed computing, transportation management, and urban resource management. The widespread deployment of traffic sensors and traffic incident management systems (TIMS) has promoted such research by generating enormous amounts of high-dimensional traffic sensor records and making them ubiquitously accessible. With the abundance of traffic data sources, we can now present two datasets and an efficient neural network model for accurate traffic incident impact prediction from both the spatial and temporal perspectives. The impact of traffic incidents can be quantified in two dimensions: time and space. As illustrated in Figure 1, traffic along a road can be plotted over time and location as a heatmap of speed. The impact of an incident usually presents itself as a drop in speed that starts at the time and location of the incident, spreads upstream, and finally peaks. Conventionally, the incident duration is defined as the time between the validation and restoration of an incident, while the impact length is the number of cars blocked by the incident.
However, we observe that cars are forced to slow down even if they are not near the incident location. Therefore, for the spatial dimension, we redefine the impact length on the road as the maximum continuous congestion distance immediately upstream of the incident. Recent research has primarily focused on the temporal impact of incidents [2, 3, 4], with limited attention paid to their spatial impact. Traffic forecasting studies have developed models for learning spatiotemporal representations. These could be leveraged to assess incident effects, though their abilities in sub-graph extraction and summarization remain untested. Based on the previous work above, we can conclude that the current research has the following drawbacks. **The spatiotemporal quantification of traffic incident impacts is rarely considered, and the publicly available supporting data is limited.** Unlike traditional transportation research, detailed traffic data - such as the number of cars in the congestion queue and traffic signal states - are usually unavailable to the public. In this case, it is essential to define extensive "traffic incident impact" criteria and construct open-source datasets in the context of dynamic graph data mining. **Relations between the static road network and the dynamic traffic correlation network are not properly modeled.** In traffic forecasting scenarios, in particular, distance-based adjacency matrices are not always accurate, and sensor measurements may not provide sufficient information for similarity estimation. Therefore, it is necessary to develop a new approach that takes full advantage of the flexibility of attention and emphasizes the importance of road network structure. **The capabilities of dynamic graph learning models are not evaluated for task-oriented subgraph/sub-time-series extraction.** In a traffic loop sensor network, an incident can dramatically affect a subset of sensors for a limited period. Therefore, it is necessary to develop dynamic graph learning models that can accurately predict the impacts of
2306.09978
Numerical Approximations of a Class of Nonlinear Second-Order Boundary Value Problems using Galerkin-Compact Finite Difference Method
In this study, we examine numerical approximations for second-order linear and nonlinear differential equations with diverse boundary conditions, followed by residual corrections of the first approximations. We first obtain numerical results using the Galerkin weighted residual approach with Bernstein polynomials. Residuals are generated because our first approximation is computed using numerical methods. To minimize these residuals, we use a compact finite difference scheme of fourth-order convergence to solve the error differential equations in accordance with the error boundary conditions. We also introduce the formulation of the fourth-order compact finite difference method for nonlinear BVPs. The improved approximations are produced by adding the error values, derived from the approximation of the error differential equation, to the weighted residual values. Numerical results are compared to the exact solutions and to solutions available in the published literature to validate the proposed scheme, and high accuracy is achieved in all cases.
Shovan Sourav Datta Pranta, Md. Shafiqul Islam
2023-06-16T17:18:43Z
http://arxiv.org/abs/2306.09978v1
Numerical Approximations of a Class of Nonlinear Second-Order Boundary Value Problems using Galerkin-Compact Finite Difference Method

###### Abstract

In this study, we examine numerical approximations for second-order linear and nonlinear differential equations with diverse boundary conditions, followed by residual corrections of the first approximations. We first obtain numerical results using the Galerkin weighted residual approach with Bernstein polynomials. Residuals are generated because our first approximation is computed using numerical methods. To minimize these residuals, we use a compact finite difference scheme of fourth-order convergence to solve the error differential equations in accordance with the error boundary conditions. We also introduce the formulation of the fourth-order compact finite difference method for nonlinear BVPs. The improved approximations are produced by adding the error values, derived from the approximation of the error differential equation, to the weighted residual values. Numerical results are compared to the exact solutions and to solutions available in the published literature to validate the proposed scheme, and high accuracy is achieved in all cases.

_Keywords_ -- Compact Finite Difference Method; Galerkin Method; Nonlinear BVPs; Residual Correction.

## I Introduction

A key advantage of numerical approaches is their capacity to describe the nature of the solution of a physical problem even when analytical solutions are not available. In addition, a numerical method requires only the evaluation of standard functions and the four operations of addition, subtraction, multiplication, and division. Interest in the study of boundary value problems for second- and higher-order differential equations has increased significantly, because both linear and nonlinear differential equations can model a wide variety of natural processes and are employed in many scientific and engineering applications. In this study, we consider the following generic form of a second-order linear or nonlinear boundary value problem:

\[\frac{d^{2}f}{dx^{2}}=g\left(x,f,\frac{df}{dx}\right),\qquad a\leq x\leq b \tag{1}\]

in accordance with the boundary conditions,

\[a_{1}f(a)+b_{1}\left.\frac{df}{dx}\right|_{x=a}=\lambda_{1},\qquad a_{2}f(b)+b_{2}\left.\frac{df}{dx}\right|_{x=b}=\lambda_{2} \tag{2}\]

where \(a_{1},a_{2},b_{1},b_{2},\lambda_{1},\lambda_{2}\) are some constants.

1. If \(b_{1}=b_{2}=0\) and \(a_{1}\neq 0\), \(a_{2}\neq 0\), (2) is referred to as Dirichlet boundary conditions.
2. If \(a_{1}=a_{2}=0\) and \(b_{1}\neq 0\), \(b_{2}\neq 0\), (2) is referred to as Neumann boundary conditions.
3. If all of the parameters \(a_{1},a_{2},b_{1},b_{2}\) are nonzero, the Robin boundary conditions are established.

In the field of numerical analysis, there are many different ways to deal with boundary value problems (BVPs). In their respective books, Bender _et al._ [1] and Collatz [2] provided a full theory and some basic numerical treatments of boundary value problems. Kanth and Reddy [3] employed the cubic spline technique for solving two-point boundary value problems numerically. Using the same technique, Kanth and Bhattacharya [4] analyzed and numerically solved a class of nonlinear boundary value problems appearing in physiology. Sibana _et al._ [5] used a new spectral-homotopy model to approximate the solution of second-order nonlinear BVPs.
A class of second-order differential equations was solved numerically by Islam and Shirin [6] using a weighted residual technique, the Galerkin method, via Bernoulli polynomials. In certain instances, however, a significant number of Bernoulli polynomials are necessary to achieve the needed precision, requiring a substantial amount of computational time. Burden and Faires [7] explored several numerical techniques for handling boundary value problems in their book on numerical analysis, such as the shooting method and the finite difference method. Pandey [8] solved two-point boundary value problems using parametric difference approaches. Ramos and Rufai [9] implemented a third-derivative, two-step block Falkner approach to solve linear and nonlinear BVPs. Some stochastic nonlinear second-order boundary value problems driven by additive noise have been solved by Baccouch [10] using the finite difference method. Numerous physical phenomena are modeled using strongly nonlinear BVPs with specific parameter values. In particular, Bratu's problem is a special type of nonlinear BVP of order two based on eigenvalues. Complex physical and chemical models are frequently described using Bratu's problem in both science and engineering. This problem is employed in a wide range of applications, such as the fuel ignition model of combustion theory, the thermal reaction mechanism, the Chandrasekhar model of universe expansion and chemical reaction theory, radiative heat transmission, and nanotechnology. It has been solved and analysed by different authors using different methods [11]-[16] in the literature. The Galerkin method is a very well-known method for the numerical approximation of various types of problems, and numerous authors have posed various types of problems and solved them numerically using the Galerkin method [17]-[21]. In recent years, the implicit finite difference methodology, also known as the Compact Finite Difference method, has gained popularity. Lele [22] and Mehra & Patel [23] discussed a variety of such schemes, including a set of compact finite difference approaches. Malele _et al._ [24] used the highly precise compact finite difference approach to solve BVPs with Robin-type boundary conditions. In this study, an algorithm is used to improve the accuracy of computation for a class of second-order linear and nonlinear BVPs. For the proposed scheme, we employ the well-known Galerkin and Compact Finite Difference methods. We also introduce the mathematical formulation of the fourth-order compact finite difference scheme for general nonlinear BVPs. We use the Galerkin method to find our first approximation. Then, we use the compact finite difference method to solve the error differential equation in order to improve the accuracy of our first approximation.

## II Galerkin Method for BVPs

The modified Galerkin method is a remarkably potent numerical technique for approximating boundary value problems. The fundamental method employs a trial solution that satisfies all of the problem's boundary conditions. Let us consider the trial approximation [25] of such a function \(f(x)\) of the differential equation (1), given by,

\[\tilde{f}(x)=N_{0}(x)+\sum_{j=1}^{n}\alpha_{j}N_{j}(x) \tag{3}\]

where the \(\alpha_{j}\) denote the undetermined parameters and the \(N_{j}(x)\) denote the basis functions (here, Bernstein polynomials).
The following is the generic form of Bernstein polynomials of degree \(p\) over the interval \([x_{0},x_{n}]\):

\[B_{p,i}(x)=\binom{p}{i}\frac{(x_{n}-x)^{p-i}(x-x_{0})^{i}}{(x_{n}-x_{0})^{p}},\qquad i=0,1,2,\ldots,p \tag{4}\]

Thus, over the interval [0,1] the Bernstein polynomials of degree \(p\) can be represented as,

\[B_{p,i}(x)=\binom{p}{i}(1-x)^{p-i}x^{i},\qquad i=0,1,2,\ldots,p \tag{5}\]

These Bernstein polynomials of degree \(p\) over the interval [0,1] have some special properties. One of them is \(B_{p,i}(0)=0=B_{p,i}(1)\) for \(i=1,2,3,\ldots,p-1\). So, they can be used as the basis of the Galerkin method in the case of Dirichlet boundary conditions. Now, the Galerkin weighted residual equation becomes [25],

\[\int_{a}^{b}\left[\frac{d^{2}\tilde{f}}{dx^{2}}-g\left(x,\tilde{f},\frac{d\tilde{f}}{dx}\right)\right]N_{i}(x)\,dx=0 \tag{6}\]

After applying integration by parts to the second-derivative part of (6) and then substituting the trial solution (3) into it, we get,

\[\int_{a}^{b}\left[\left(\frac{dN_{0}}{dx}+\sum_{j=1}^{n}\alpha_{j}\frac{dN_{j}}{dx}\right)\frac{dN_{i}}{dx}+g\left(x,\,N_{0}+\sum_{j=1}^{n}\alpha_{j}N_{j},\,\frac{dN_{0}}{dx}+\sum_{j=1}^{n}\alpha_{j}\frac{dN_{j}}{dx}\right)N_{i}\right]dx=\left[\frac{d\tilde{f}}{dx}N_{i}\right]_{a}^{b} \tag{7}\]

Or equivalently, this can be written in matrix form as,

\[\sum_{j=1}^{n}\alpha_{j}K_{ij}=F_{i}\qquad\text{or}\qquad\mathbf{K}\boldsymbol{\alpha}=\mathbf{F} \tag{8}\]

where,

\[K_{ij}=\int_{a}^{b}\left[\frac{dN_{i}}{dx}\frac{dN_{j}}{dx}+Q_{1}\left(x,N_{j},\frac{dN_{j}}{dx}\right)N_{i}\right]dx \tag{9}\]

\[F_{i}=\left[\frac{d\tilde{f}}{dx}N_{i}\right]_{a}^{b}-\int_{a}^{b}\left[\frac{dN_{0}}{dx}\frac{dN_{i}}{dx}+Q_{0}\left(x,N_{0},\frac{dN_{0}}{dx}\right)N_{i}\right]dx \tag{10}\]

and \(Q_{1}\) and \(Q_{0}\) denote the parts of \(g\) contributed by the unknown expansion \(\sum_{j}\alpha_{j}N_{j}\) and by the known function \(N_{0}\), respectively. Solving this system of equations gives us the values of the unknowns \(\alpha\). In the case of nonlinear BVPs, iteration is required to find these unknown coefficients. Then, the first approximate solution of the BVP is obtained by substituting the values of the \(\alpha\)'s into (3).

## III Compact Finite Difference Method for BVPs

Firstly, consider the generic form of the differential equation (1) in accordance with the boundary conditions (2). The domain in which the boundary value problem is valid is \([a,b]\). To simplify matters, let us divide the domain \([a,b]\) into \(n\) sub-intervals, with the length of each sub-interval being \(h\). It is possible to obtain an implicit numerical approximation of the first derivative \(f^{\prime}(x)\) at the mesh points by writing

\[Af^{\prime}_{i-1}+f^{\prime}_{i}+Af^{\prime}_{i+1}=\frac{a_{1}}{2h}\left(f_{i+1}-f_{i-1}\right) \tag{11}\]

Here, \(A\) and \(a_{1}\) are arbitrarily chosen constants. We now determine these constants so that (11) gives a fourth-order implicit compact finite difference estimate.
We obtain the following from the Taylor series expansion:

\[f_{i+1}=f_{i}+hf^{\prime}_{i}+\frac{h^{2}}{2!}f^{\prime\prime}_{i}+\frac{h^{3}}{3!}f^{\prime\prime\prime}_{i}+\frac{h^{4}}{4!}f^{(4)}_{i}+\frac{h^{5}}{5!}f^{(5)}_{i}+\mathrm{O}\left(h^{6}\right) \tag{12}\]

\[f_{i-1}=f_{i}-hf^{\prime}_{i}+\frac{h^{2}}{2!}f^{\prime\prime}_{i}-\frac{h^{3}}{3!}f^{\prime\prime\prime}_{i}+\frac{h^{4}}{4!}f^{(4)}_{i}-\frac{h^{5}}{5!}f^{(5)}_{i}+\mathrm{O}\left(h^{6}\right) \tag{13}\]

Now, subtraction of (13) from (12) gives,

\[f_{i+1}-f_{i-1}=2hf^{\prime}_{i}+\frac{h^{3}}{3}f^{\prime\prime\prime}_{i}+\frac{h^{5}}{60}f^{(5)}_{i}+\mathrm{O}\left(h^{7}\right) \tag{14}\]

Again, from the Taylor series expansion:

\[f^{\prime}_{i+1}=f^{\prime}_{i}+hf^{\prime\prime}_{i}+\frac{h^{2}}{2!}f^{\prime\prime\prime}_{i}+\frac{h^{3}}{3!}f^{(4)}_{i}+\frac{h^{4}}{4!}f^{(5)}_{i}+\mathrm{O}\left(h^{5}\right) \tag{15}\]

\[f^{\prime}_{i-1}=f^{\prime}_{i}-hf^{\prime\prime}_{i}+\frac{h^{2}}{2!}f^{\prime\prime\prime}_{i}-\frac{h^{3}}{3!}f^{(4)}_{i}+\frac{h^{4}}{4!}f^{(5)}_{i}+\mathrm{O}\left(h^{5}\right) \tag{16}\]

Now, adding (15) and (16) we get,

\[f^{\prime}_{i+1}+f^{\prime}_{i-1}=2f^{\prime}_{i}+h^{2}f^{\prime\prime\prime}_{i}+\frac{h^{4}}{12}f^{(5)}_{i}+\mathrm{O}\left(h^{6}\right) \tag{17}\]

Now,

\[\begin{aligned} Af^{\prime}_{i-1}+f^{\prime}_{i}+Af^{\prime}_{i+1}-\frac{a_{1}}{2h}\left(f_{i+1}-f_{i-1}\right)&=A\left[2f^{\prime}_{i}+h^{2}f^{\prime\prime\prime}_{i}+\frac{h^{4}}{12}f^{(5)}_{i}+\mathrm{O}(h^{6})\right]+f^{\prime}_{i}-\frac{a_{1}}{2h}\left[2hf^{\prime}_{i}+\frac{h^{3}}{3}f^{\prime\prime\prime}_{i}+\frac{h^{5}}{60}f^{(5)}_{i}+\mathrm{O}(h^{7})\right]\\ &=\left(2A+1-a_{1}\right)f^{\prime}_{i}+\left(A-\frac{a_{1}}{6}\right)h^{2}f^{\prime\prime\prime}_{i}+\left(\frac{A}{12}-\frac{a_{1}}{120}\right)h^{4}f^{(5)}_{i}+\mathrm{O}(h^{6})\end{aligned}\]

Setting the coefficient of the first term to zero yields a second-order scheme, whereas setting the coefficients of the first two terms to zero yields a fourth-order scheme. Setting the first two coefficients equal to zero, we get \(A=\frac{1}{4}\), \(a_{1}=\frac{3}{2}\); then, for these values, (11) is a fourth-order scheme with the truncation error

\[\varepsilon=\left(\frac{A}{12}-\frac{a_{1}}{120}\right)h^{4}f^{(5)}(\xi)=\frac{1}{120}h^{4}f^{(5)}(\xi),\qquad\xi\in(x_{i-1},x_{i+1})\]

So, the first difference equation is,

\[\frac{3}{4h}f_{i-1}-\frac{3}{4h}f_{i+1}+\frac{1}{4}f^{\prime}_{i-1}+f^{\prime}_{i}+\frac{1}{4}f^{\prime}_{i+1}=0,\qquad i=1,\ldots,n-1 \tag{18}\]

and this difference equation is of convergence order \(\mathrm{O}(h^{4})\), which is equivalent to [26]. For the second difference equation we need the given differential equation. At the interior nodes \(x_{i}\), (1) becomes,

\[f^{\prime\prime}_{i}=g\left(x_{i},f_{i},f^{\prime}_{i}\right) \tag{19}\]

In order to use this equation, we must find a fourth-order approximation of \(f^{\prime\prime}_{i}\).
Now, using the Taylor series expansion, we get,

\[f_{i+1}=f_{i}+hf^{\prime}_{i}+\frac{h^{2}}{2}f^{\prime\prime}_{i}+\frac{h^{3}}{6}f^{\prime\prime\prime}_{i}+\frac{h^{4}}{24}f^{(4)}_{i}+\frac{h^{5}}{120}f^{(5)}_{i}+\frac{h^{6}}{720}f^{(6)}_{i}+\mathrm{O}\left(h^{7}\right) \tag{20}\]

\[f_{i-1}=f_{i}-hf^{\prime}_{i}+\frac{h^{2}}{2}f^{\prime\prime}_{i}-\frac{h^{3}}{6}f^{\prime\prime\prime}_{i}+\frac{h^{4}}{24}f^{(4)}_{i}-\frac{h^{5}}{120}f^{(5)}_{i}+\frac{h^{6}}{720}f^{(6)}_{i}+\mathrm{O}\left(h^{7}\right) \tag{21}\]

Adding (20) and (21) we get,

\[f_{i+1}+f_{i-1}=2f_{i}+h^{2}f^{\prime\prime}_{i}+\frac{h^{4}}{12}f^{(4)}_{i}+\frac{h^{6}}{360}f^{(6)}_{i}+\mathrm{O}\left(h^{8}\right) \tag{22}\]

Again, from the Taylor series expansion we get,

\[f^{\prime}_{i+1}=f^{\prime}_{i}+hf^{\prime\prime}_{i}+\frac{h^{2}}{2}f^{\prime\prime\prime}_{i}+\frac{h^{3}}{6}f^{(4)}_{i}+\frac{h^{4}}{24}f^{(5)}_{i}+\frac{h^{5}}{120}f^{(6)}_{i}+\mathrm{O}\left(h^{6}\right) \tag{23}\]

\[f^{\prime}_{i-1}=f^{\prime}_{i}-hf^{\prime\prime}_{i}+\frac{h^{2}}{2}f^{\prime\prime\prime}_{i}-\frac{h^{3}}{6}f^{(4)}_{i}+\frac{h^{4}}{24}f^{(5)}_{i}-\frac{h^{5}}{120}f^{(6)}_{i}+\mathrm{O}\left(h^{6}\right) \tag{24}\]

Subtracting (24) from (23) we get,

\[f^{\prime}_{i+1}-f^{\prime}_{i-1}=2hf^{\prime\prime}_{i}+\frac{h^{3}}{3}f^{(4)}_{i}+\frac{h^{5}}{60}f^{(6)}_{i}+\mathrm{O}\left(h^{7}\right) \tag{25}\]

Multiplying (22) by 4 and (25) by \(h\), and then subtracting the latter from the former, we get,

\[4\left(f_{i+1}+f_{i-1}\right)-h\left(f^{\prime}_{i+1}-f^{\prime}_{i-1}\right)=8f_{i}+2h^{2}f^{\prime\prime}_{i}-\frac{h^{6}}{180}f^{(6)}_{i}+\mathrm{O}\left(h^{8}\right) \tag{26}\]

After simplification we get,

\[2h^{2}f^{\prime\prime}_{i}=4\left(f_{i+1}-2f_{i}+f_{i-1}\right)-h\left(f^{\prime}_{i+1}-f^{\prime}_{i-1}\right)+\mathrm{O}\left(h^{6}\right) \tag{27}\]

Or equivalently,

\[f^{\prime\prime}_{i}=\frac{2}{h^{2}}\left(f_{i+1}-2f_{i}+f_{i-1}\right)-\frac{1}{2h}\left(f^{\prime}_{i+1}-f^{\prime}_{i-1}\right)+\mathrm{O}\left(h^{4}\right) \tag{28}\]

which is an approximation of the second derivative with convergence order \(\mathrm{O}(h^{4})\). Now, substituting (28) into (19) we get,

\[\frac{2}{h^{2}}\left(f_{i+1}-2f_{i}+f_{i-1}\right)-\frac{1}{2h}\left(f^{\prime}_{i+1}-f^{\prime}_{i-1}\right)=g\left(x_{i},f_{i},f^{\prime}_{i}\right) \tag{29}\]

Or equivalently,

\[\frac{2}{h^{2}}\left(f_{i+1}-2f_{i}+f_{i-1}\right)-\frac{1}{2h}\left(f^{\prime}_{i+1}-f^{\prime}_{i-1}\right)-g\left(x_{i},f_{i},f^{\prime}_{i}\right)=0,\qquad i=1,\ldots,n-1 \tag{30}\]

This is our second difference equation, of convergence order \(\mathrm{O}(h^{4})\), for the nonlinear differential equation (1). We obtain \((n-1)\) equations at the \((n-1)\) interior nodes from each of the two difference equations (18) and (30); together, they provide a total of \((2n-2)\) equations. To satisfy the criterion for a unique solution, the number of equations and the number of unknowns must be equal.
In addition, we need two additional equations for the Dirichlet and Neumann boundary conditions, and four additional equations for the Robin boundary conditions. These extra equations are obtained from the boundary conditions provided. At the node \(x=x_{0}=a\), we get from (19),

\[f^{\prime\prime}_{0}=g\left(x_{0},f_{0},f^{\prime}_{0}\right) \tag{31}\]

and at the node \(x=x_{n}=b\), we get from (19),

\[f^{\prime\prime}_{n}=g\left(x_{n},f_{n},f^{\prime}_{n}\right) \tag{32}\]

The next step is to establish fourth-order approximations of \(f^{\prime\prime}_{0}\) and \(f^{\prime\prime}_{n}\) in terms of the variables \(f_{0},f_{1},f_{2}\) and \(f^{\prime}_{0},f^{\prime}_{1},f^{\prime}_{2}\), so that they can be substituted into (31) and (32) to obtain our desired equations. To accomplish this goal, we need to expand \(f_{1},f_{2},f^{\prime}_{1}\), and \(f^{\prime}_{2}\) in Taylor series. From the Taylor series expressions of these terms we obtain,

\[\begin{aligned} f_{1}&=f_{0}+hf^{\prime}_{0}+\frac{h^{2}}{2}f^{\prime\prime}_{0}+\frac{h^{3}}{6}f^{\prime\prime\prime}_{0}+\frac{h^{4}}{24}f^{(4)}_{0}+\frac{h^{5}}{120}f^{(5)}_{0}+\mathrm{O}\left(h^{6}\right)\\ f_{2}&=f_{0}+2hf^{\prime}_{0}+2h^{2}f^{\prime\prime}_{0}+\frac{8h^{3}}{6}f^{\prime\prime\prime}_{0}+\frac{16h^{4}}{24}f^{(4)}_{0}+\frac{32h^{5}}{120}f^{(5)}_{0}+\mathrm{O}\left(h^{6}\right)\\ f^{\prime}_{1}&=f^{\prime}_{0}+hf^{\prime\prime}_{0}+\frac{h^{2}}{2}f^{\prime\prime\prime}_{0}+\frac{h^{3}}{6}f^{(4)}_{0}+\frac{h^{4}}{24}f^{(5)}_{0}+\mathrm{O}\left(h^{5}\right)\\ f^{\prime}_{2}&=f^{\prime}_{0}+2hf^{\prime\prime}_{0}+2h^{2}f^{\prime\prime\prime}_{0}+\frac{8h^{3}}{6}f^{(4)}_{0}+\frac{16h^{4}}{24}f^{(5)}_{0}+\mathrm{O}\left(h^{5}\right)\end{aligned} \tag{33}\]

Solving this system for the four unknowns \(f^{\prime\prime}_{0},f^{\prime\prime\prime}_{0},f^{(4)}_{0}\), and \(f^{(5)}_{0}\), we get the value of \(f^{\prime\prime}_{0}\) as,

\[f^{\prime\prime}_{0}=\frac{1}{2h^{2}}\left(-23f_{0}+16f_{1}+7f_{2}\right)-\frac{1}{h}\left(6f^{\prime}_{0}+8f^{\prime}_{1}+f^{\prime}_{2}\right)+\mathrm{O}\left(h^{4}\right) \tag{34}\]

which is the fourth-order estimate of the second derivative at the boundary node \(x=x_{0}=a\).
Again, from the Taylor series expansion we get,

\[\begin{aligned} f_{n-1}&=f_{n}-hf^{\prime}_{n}+\frac{h^{2}}{2}f^{\prime\prime}_{n}-\frac{h^{3}}{6}f^{\prime\prime\prime}_{n}+\frac{h^{4}}{24}f^{(4)}_{n}-\frac{h^{5}}{120}f^{(5)}_{n}+\mathrm{O}\left(h^{6}\right)\\ f_{n-2}&=f_{n}-2hf^{\prime}_{n}+2h^{2}f^{\prime\prime}_{n}-\frac{8h^{3}}{6}f^{\prime\prime\prime}_{n}+\frac{16h^{4}}{24}f^{(4)}_{n}-\frac{32h^{5}}{120}f^{(5)}_{n}+\mathrm{O}\left(h^{6}\right)\\ f^{\prime}_{n-1}&=f^{\prime}_{n}-hf^{\prime\prime}_{n}+\frac{h^{2}}{2}f^{\prime\prime\prime}_{n}-\frac{h^{3}}{6}f^{(4)}_{n}+\frac{h^{4}}{24}f^{(5)}_{n}+\mathrm{O}\left(h^{5}\right)\\ f^{\prime}_{n-2}&=f^{\prime}_{n}-2hf^{\prime\prime}_{n}+2h^{2}f^{\prime\prime\prime}_{n}-\frac{8h^{3}}{6}f^{(4)}_{n}+\frac{16h^{4}}{24}f^{(5)}_{n}+\mathrm{O}\left(h^{5}\right)\end{aligned} \tag{35}\]

Solving this system for the four unknowns \(f^{\prime\prime}_{n},f^{\prime\prime\prime}_{n},f^{(4)}_{n}\), and \(f^{(5)}_{n}\), we get the value of \(f^{\prime\prime}_{n}\) as,

\[f^{\prime\prime}_{n}=\frac{1}{2h^{2}}\left(7f_{n-2}+16f_{n-1}-23f_{n}\right)+\frac{1}{h}\left(f^{\prime}_{n-2}+8f^{\prime}_{n-1}+6f^{\prime}_{n}\right)+\mathrm{O}\left(h^{4}\right) \tag{36}\]

which is the fourth-order estimate of the second derivative at the boundary \(x=x_{n}=b\). Substituting \(f^{\prime\prime}_{0}\) into (31) we get,

\[\frac{1}{2h^{2}}\left(-23f_{0}+16f_{1}+7f_{2}\right)-\frac{1}{h}\left(6f^{\prime}_{0}+8f^{\prime}_{1}+f^{\prime}_{2}\right)-g\left(x_{0},f_{0},f^{\prime}_{0}\right)=0 \tag{37}\]

This is our first additional equation, derived from the left boundary, with convergence order \(\mathrm{O}(h^{4})\) for the nonlinear differential equation. Again, substituting \(f^{\prime\prime}_{n}\) into (32) we get,

\[\frac{1}{2h^{2}}\left(7f_{n-2}+16f_{n-1}-23f_{n}\right)+\frac{1}{h}\left(f^{\prime}_{n-2}+8f^{\prime}_{n-1}+6f^{\prime}_{n}\right)-g\left(x_{n},f_{n},f^{\prime}_{n}\right)=0 \tag{38}\]

This is our second additional equation, derived from the right boundary, with convergence order \(\mathrm{O}(h^{4})\). For Dirichlet and Neumann boundary conditions, the required \(2n\) equations come from the two difference equations (18) and (30) and from the two additional equations (37) and (38) at the boundary of the domain. For Robin boundary conditions, we require two more additional equations. To reduce the number of unknowns, we substitute the boundary conditions into both the difference equations and the equations that come from the boundary, so that we get a reduced system of \(2n\) equations with \(2n\) unknowns. Evaluating these equations simultaneously, we can find the approximate solutions of differential equations with all of these kinds of boundary conditions.

## IV Residual Correction

Residual refers to the amount of error remaining after comparing an approximation to the real value of a solution. Boundary value problems can be approximated using a variety of numerical techniques, but each leaves a residual. Residual correction is a technique for minimizing residuals by solving the error differential equation. The residual correction methodology was first presented by Oliveira [27] using the explicit finite difference method.
Celik [28] later utilized the Chebyshev series for residual correction. In this section, we employ the implicit compact finite difference approach for residual correction. Consider the differential equation (1) in accordance with its boundary conditions (2) again. As we intend to apply the compact finite difference approach for residual correction, we must first discretize the domain into subdomains. Following this, we derive the error differential equation, together with its boundary conditions, from the given BVP. If \(\tilde{f}\) is the approximate solution and \(f\) is the exact solution, the error can be obtained by,

\[\theta=f-\tilde{f}\;\Rightarrow\;f=\tilde{f}+\theta \tag{39}\]

Substituting this result into (1) we get,

\[\frac{d^{2}}{dx^{2}}\left(\tilde{f}+\theta\right)=g\left(x,\tilde{f}+\theta,\frac{d\tilde{f}}{dx}+\frac{d\theta}{dx}\right) \tag{40}\]

Or equivalently, this can be written as,

\[\frac{d^{2}\theta}{dx^{2}}-M_{1}\left[x,\theta,\frac{d\theta}{dx}\right]=-\frac{d^{2}\tilde{f}}{dx^{2}}+M_{2}\left[x,\tilde{f},\frac{d\tilde{f}}{dx}\right] \tag{41}\]

where \(M_{1}\) and \(M_{2}\) collect the parts of \(g\) that depend on \(\theta\) and on \(\tilde{f}\), respectively. This is the differential equation of the error, which follows the boundary conditions

\[\begin{aligned} a_{1}\tilde{f}(a)+a_{1}\theta(a)+b_{1}\left.\frac{d\tilde{f}}{dx}\right|_{x=a}+b_{1}\left.\frac{d\theta}{dx}\right|_{x=a}&=\lambda_{1}\\ a_{2}\tilde{f}(b)+a_{2}\theta(b)+b_{2}\left.\frac{d\tilde{f}}{dx}\right|_{x=b}+b_{2}\left.\frac{d\theta}{dx}\right|_{x=b}&=\lambda_{2}\end{aligned} \tag{42}\]

But the approximate solution satisfies the given boundary conditions, so the boundary conditions for the error differential equation become

\[a_{1}\theta(a)+b_{1}\left.\frac{d\theta}{dx}\right|_{x=a}=0,\qquad a_{2}\theta(b)+b_{2}\left.\frac{d\theta}{dx}\right|_{x=b}=0 \tag{43}\]

We solve the BVP (41) in accordance with the boundary conditions (43) by the fourth-order compact finite difference method. Then, the updated approximate value becomes,

\[\left(\textit{Updated Approximation}\right)=\left(\textit{Weighted Residual Value}\right)+\left(\textit{Error Value}\right) \tag{44}\]

## V Convergence Analysis

In this section, we discuss the convergence of our proposed scheme for solving the linear and nonlinear BVPs (1)-(2). Let \(\Omega=C^{l}([x_{0},x_{n}],\mathbb{R})\) be the linear space of real-valued functions that are \(l\) times differentiable on \(D=[x_{0},x_{n}]\). Suppose that

\[\langle\zeta_{1},\zeta_{2}\rangle=\int_{D}w_{0}(x)\,\zeta_{1}(x)\,\zeta_{2}(x)\,dx \tag{45}\]

is the \(L^{2}\) inner product on \(\Omega\), for some sufficiently smooth weight function \(w_{0}\), which induces the \(L^{2}\) norm

\[\|\zeta_{1}\|^{2}=\int_{D}w_{0}(x)\,\zeta_{1}^{2}(x)\,dx \tag{46}\]

for which \(\Omega\) is an infinite-dimensional Hilbert space. Assume \(V=\{B_{i}\,|\,i=1,2,3,\ldots\}\) is a Schauder basis of \(\Omega\), where the \(B_{i}\) are Bernstein polynomials on \(D=[x_{0},x_{n}]\). Let us begin with an approximation subspace \(\Omega^{N}\) spanned by \(\{\psi_{1},\psi_{2},\psi_{3},\ldots,\psi_{N}\}\) that satisfies the appropriate boundary conditions. The Galerkin weighted residual equation is given by \(\langle R\left(x,\tilde{f}(x)\right),\psi_{j}\rangle=0\), where \(\tilde{f}(x)=N_{0}(x)+\sum_{j=1}^{n}\alpha_{j}N_{j}(x)\) is a trial solution of the differential equation.
In particular, the residual function \(R\left(x,\tilde{f}(x)\right)\) is orthogonal to each function of the basis of the approximation subspace \(\Omega^{N}\). It is well known that Bernstein polynomials can approximate any continuous function with arbitrary precision. If the dimension of the subspace \(\Omega^{N}\) grows without bound, the residual function \(R\left(x,\tilde{f}(x)\right)\) is orthogonal to every Bernstein polynomial, which immediately implies that the residual \(R\left(x,\tilde{f}(x)\right)\) is orthogonal to any continuous function in \(\Omega\). A function that is orthogonal to every other function in the space is necessarily the zero function. For the error differential equation in accordance with the error boundary conditions, we use the fourth-order compact finite difference method.

**Definition 1**: [29] The solution \(\Psi_{h}\) of the discretized equation \(\mathcal{L}_{h}\Psi_{h}=b_{h}\) converges to the solution \(\psi\) of the given differential equation \(\mathcal{L}\psi=b\) if \(\|\psi_{h}-\Psi_{h}\|\to 0\) as \(h\to 0\), where \(\psi_{h}\) denotes the restriction of \(\psi\) to the grid. Moreover, if for a positive constant \(k\) there exists a constant \(M_{0}>0\), independent of \(h\), such that \(\|\psi_{h}-\Psi_{h}\|\leq M_{0}h^{k}\), it is stated that the discretized equation has \(k\)-th order accuracy, with convergence order \(h^{k}\).

**Definition 2**: [29] The discretized equation is stable if there exist \(h_{0}>0\) and \(\delta>0\) such that, for any \(h<h_{0}\) and any perturbation \(\epsilon_{h}\) satisfying \(\|\epsilon_{h}\|<\delta\), the perturbed difference equation \(\mathcal{L}_{h}w_{h}=b_{h}+\epsilon_{h}\) has a unique solution \(w_{h}\) satisfying \(\|w_{h}-\Psi_{h}\|\leq M\|\epsilon_{h}\|\), where \(\Psi_{h}\) is the solution of the unperturbed difference equation and \(M>0\) is independent of \(h\).

**Theorem 1**: [29] _If the discretized equation is stable and also consistent (with order \(h^{k}\)) with the given differential equation, then the solution \(\Psi_{h}\) of the discretized equation converges to the solution \(\psi\), and it satisfies \(\|\psi_{h}-\Psi_{h}\|\leq MM_{1}h^{k}\), where \(M\) and \(M_{1}\) are certain constants. In other words, the order at which the difference scheme approaches the continuous problem corresponds to the accuracy order of the difference scheme._

**Proof:**[29] Since the difference scheme is consistent, it is established that \(\|\delta b_{h}\|\leq M_{1}h^{k}\to 0\) as \(h\to 0\), where \(\delta b_{h}\) is the truncation error, i.e., the restriction of the exact solution to the grid satisfies

\[\mathcal{L}_{h}\psi_{h}=b_{h}+\delta b_{h} \tag{47}\]

This means that a grid can be chosen with \(h<h_{0}\) and \(\|\delta b_{h}\|<\delta\) as in Definition 2, so that \(\psi_{h}\) satisfies the conditions for stability. Consequently,

\[\|\psi_{h}-\Psi_{h}\|\leq M\|\delta b_{h}\|\leq M\left(M_{1}h^{k}\right) \tag{48}\]

Hence, the discrete solution \(\Psi_{h}\) converges to the continuous solution \(\psi\) with order \(h^{k}\).

## VI Results and Discussions

In this section, we apply our proposed scheme to some second-order linear and nonlinear problems and examine the accuracy and convergence of the proposed scheme.
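As a concrete illustration of the first stage, the following minimal Python sketch assembles and solves the Galerkin system \(\mathbf{K}\boldsymbol{\alpha}=\mathbf{F}\) of (8) with interior Bernstein polynomials of degree 4. To keep it self-contained, it uses a linear Dirichlet model problem, \(f''+f=-1\), \(f(0)=f(1)=0\), a Dirichlet variant of Problem 1 below chosen here only because its weak form is linear; the helper names are our own, not the paper's.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import comb

def bernstein(p, i, x):
    # Bernstein polynomial B_{p,i}(x) on [0, 1], Eq. (5).
    return comb(p, i) * (1 - x) ** (p - i) * x ** i

def bernstein_d(p, i, x):
    # d/dx B_{p,i}(x) = p * (B_{p-1,i-1}(x) - B_{p-1,i}(x)).
    def lower(k):
        return bernstein(p - 1, k, x) if 0 <= k <= p - 1 else 0.0
    return p * (lower(i - 1) - lower(i))

# Model problem: f'' + f = -1, f(0) = f(1) = 0.  With homogeneous Dirichlet
# data the boundary term in (7) vanishes, so the weak form of the residual
# gives K_ij = int(B_i' B_j' - B_i B_j) dx and F_i = int(B_i) dx on [0, 1].
p = 4
basis = range(1, p)       # interior Bernstein polynomials vanish at 0 and 1
K = np.array([[quad(lambda x: bernstein_d(p, i, x) * bernstein_d(p, j, x)
                    - bernstein(p, i, x) * bernstein(p, j, x), 0, 1)[0]
               for j in basis] for i in basis])
F = np.array([quad(lambda x: bernstein(p, i, x), 0, 1)[0] for i in basis])
alpha = np.linalg.solve(K, F)    # Eq. (8): K alpha = F

def f_tilde(x):
    return sum(a * bernstein(p, j, x) for a, j in zip(alpha, basis))

def exact(x):
    return np.cos(x) + (1 - np.cos(1.0)) / np.sin(1.0) * np.sin(x) - 1.0

grid = np.linspace(0.0, 1.0, 101)
print(max(abs(f_tilde(t) - exact(t)) for t in grid))  # first-stage L_inf error
```

For the nonlinear problems below, the same assembly applies, but \(\mathbf{K}\) depends on \(\boldsymbol{\alpha}\), so the system is solved iteratively, as noted in Section II.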
We compute the \(L_{\infty}\) norm by comparing the approximate solution with the exact solution as follows,

\[L_{\infty}=\max_{1\leq i\leq n}\big|\tilde{f}_{i}-f_{i}\big| \tag{49}\]

We also compute the convergence rate (\(\mathcal{CR}\)) as follows [24],

\[\mathcal{CR}=\frac{\log\big(\epsilon_{1}/\epsilon_{2}\big)}{\log\big(h_{1}/h_{2}\big)} \tag{50}\]

where \(\epsilon_{1}\) and \(\epsilon_{2}\) are the \(L_{\infty}\) errors for grid sizes \(h_{1}\) and \(h_{2}\), respectively.

**Problem 1** Let us consider the linear DE in conjunction with Neumann boundary conditions [6],[30],

\[\frac{d^{2}f}{dx^{2}}+f=-1\quad\text{with}\quad f^{\prime}(0)=\frac{1-\cos(1)}{\sin(1)},\ f^{\prime}(1)=-\frac{1-\cos(1)}{\sin(1)} \tag{51}\]

with the exact solution,

\[f(x)=\cos(x)+\frac{1-\cos(1)}{\sin(1)}\sin(x)-1 \tag{52}\]

In the first stage, the BVP is solved numerically using the modified Galerkin approach with the trial solution (3). For residual correction we need the error differential equation with the error BCs. The error BVP for the given differential equation is,

\[\frac{d^{2}\theta}{dx^{2}}+\theta=-\frac{d^{2}\tilde{f}}{dx^{2}}-\tilde{f}-1\quad\text{with}\quad\theta^{\prime}(0)=0,\ \theta^{\prime}(1)=0 \tag{53}\]

Table I shows the \(L_{\infty}\) error and \(\mathcal{CR}\) generated using the proposed method for different grid sizes \(h\) for residual correction. From Table I we can observe that we obtain high accuracy by using Bernstein polynomials of degree 4 only in our prescribed scheme. Whereas, in [6], Islam & Shirin implemented the modified Galerkin approach and the \(L_{\infty}\) obtained using Bernoulli polynomials of degree 10 is \(10^{-14}\). So, we can say that, in our proposed scheme, we can attain better accuracy by using fewer polynomials and then correcting the residuals of the approximation.

**Problem 2** Let us consider the following nonlinear DE in conjunction with Dirichlet boundary conditions [6],

\[\frac{d^{2}f}{dx^{2}}+\frac{1}{8}f\frac{df}{dx}=4+\frac{1}{4}x^{3}\quad\text{with}\quad f(1)=17,\ f(3)=\frac{43}{3} \tag{54}\]

with the exact solution,

\[f(x)=x^{2}+\frac{16}{x} \tag{55}\]

The equivalent BVP on \([0,1]\) becomes,

\[\frac{d^{2}f}{dx^{2}}+\frac{1}{4}f\frac{df}{dx}=16+(2x+1)^{3}\quad\text{with}\quad f(0)=17,\ f(1)=\frac{43}{3} \tag{56}\]

In the first stage, the BVP is solved numerically using the modified Galerkin approach with the trial solution (3). For residual correction we need the error differential equation with the error BCs. The error BVP for the given differential equation is,

\[\frac{d^{2}\theta}{dx^{2}}+\frac{1}{4}\tilde{f}\frac{d\theta}{dx}+\frac{1}{4}\theta\frac{d\tilde{f}}{dx}+\frac{1}{4}\theta\frac{d\theta}{dx}=16+(2x+1)^{3}-\frac{d^{2}\tilde{f}}{dx^{2}}-\frac{1}{4}\tilde{f}\frac{d\tilde{f}}{dx}\quad\text{with}\quad\theta(0)=\theta(1)=0 \tag{57}\]

Table II shows the \(L_{\infty}\) error and \(\mathcal{CR}\) generated using the proposed method for different grid sizes \(h\) for residual correction. From Table II we can observe that we obtain high accuracy by using Bernstein polynomials of degree 4 only in our prescribed scheme.
Whereas, in [6], Islam & Shirin implemented the modified Galerkin approach and the \(L_{\infty}\) obtained using Bernoulli polynomials of degree 15 is \(10^{-08}\). So, we can say that, in our proposed scheme, we can attain better accuracy by using fewer polynomials and then correcting the residuals of the approximation.

**Problem 3** Let us consider the following nonlinear DE in conjunction with Robin boundary conditions [6],[31],

\[\frac{d^{2}f}{dx^{2}}=\frac{1}{2}\left(1+x+f\right)^{3}\quad\text{with}\quad f^{\prime}(0)-f(0)=-1/2,\ f^{\prime}(1)+f(1)=1 \tag{58}\]

with the exact solution,

\[f(x)=\frac{2}{2-x}-x-1 \tag{59}\]

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
 & \(h\) & \(L_{\infty}\) & \(\mathcal{CR}\) & **Reference Results (\(L_{\infty}\))** \\
\hline
 & 0.1 & \(5.1458\times 10^{-04}\) & & **[6], 2011:** \\
Bernstein & 0.05 & \(5.1111\times 10^{-05}\) & 3.3315 & Bernoulli Pol. (10th degree) \\
Polynomials & 0.025 & \(4.2499\times 10^{-06}\) & 3.5883 & \(1.4449\times 10^{-05}\) \\
of degree 4 & 0.0125 & \(3.1017\times 10^{-07}\) & 3.7763 & Bernoulli Pol. (15th degree) \\
 & 0.01 & \(7.6237\times 10^{-08}\) & 3.8548 & \(6.3806\times 10^{-08}\) \\
 & 0.005 & \(8.7611\times 10^{-09}\) & 3.9048 & \\
 & 0.0025 & \(5.6623\times 10^{-10}\) & 3.9516 & \\
\hline \hline
\end{tabular}
\end{table}
TABLE II: \(L_{\infty}\) AND CONVERGENCE RATE (\(\mathcal{CR}\)) FOR PROBLEM 2

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
 & \(h\) & \(L_{\infty}\) & \(\mathcal{CR}\) & **Reference Results (\(L_{\infty}\))** \\
\hline
 & 0.1 & \(2.0797\times 10^{-08}\) & & **[6], 2011:** \\
Bernstein & 0.05 & \(1.5311\times 10^{-09}\) & 3.7637 & Bernoulli Pol. (5th degree) \\
Polynomials & 0.025 & \(1.3140\times 10^{-10}\) & 3.5425 & \(1.173569\times 10^{-06}\) \\
of degree 4 & 0.0125 & \(9.3694\times 10^{-12}\) & 3.8099 & Bernoulli Pol. (7th degree) \\
 & 0.01 & \(3.9338\times 10^{-12}\) & 3.8892 & \(1.2808\times 10^{-09}\) \\
 & 0.005 & \(2.5785\times 10^{-13}\) & 3.9313 & Bernoulli Pol. (10th degree) \\
 & 0.0025 & \(1.6570\times 10^{-14}\) & 3.9599 & \(1.0131\times 10^{-14}\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE I: \(L_{\infty}\) AND CONVERGENCE RATE (\(\mathcal{CR}\)) FOR PROBLEM 1

In the first stage, the BVP is solved numerically using the modified Galerkin approach with the trial solution (3). For residual correction we need the error differential equation with the error BCs. The error BVP for the given differential equation is,

\[\frac{d^{2}\theta}{dx^{2}}-\left(\frac{3}{2}\big(1+x\big)^{2}+3\big(1+x\big)\tilde{f}+\frac{3}{2}\tilde{f}^{2}\right)\theta-\left(\frac{3}{2}\big(1+x\big)+\frac{3}{2}\tilde{f}\right)\theta^{2}-\frac{1}{2}\theta^{3}=-\frac{d^{2}\tilde{f}}{dx^{2}}+\frac{1}{2}\big(1+x+\tilde{f}\big)^{3} \tag{60}\]

with the homogeneous boundary conditions, \(\theta^{\prime}(0)-\theta(0)=0\), \(\theta^{\prime}(1)+\theta(1)=0\). Table III shows the \(L_{\infty}\) error and \(\mathcal{CR}\) generated using the proposed method for different grid sizes \(h\) for residual correction. From Table III we can observe that we obtain high accuracy by using Bernstein polynomials of degree 4 in our proposed scheme. Whereas, in [6], Islam & Shirin implemented the modified Galerkin approach using Bernoulli polynomials of degree 10 and the obtained \(L_{\infty}\) is \(10^{-9}\).
In [31], Sohel _et al._ implemented the residual correction approach and the \(L_{\infty}\) using Legendre polynomials of degree 11 is \(10^{-12}\). So, we can say that, with our present approach, we can attain better accuracy by using fewer polynomials and then correcting the residuals of the approximation.

**Problem 4**: Let us consider the following strongly nonlinear Bratu problem [12],[15],[32]-[35],

\[\frac{d^{2}f}{dx^{2}}+\lambda e^{f(x)}=0,\quad\text{with}\quad f(0)=0,\ f(1)=0 \tag{61}\]

The term strongly nonlinear is employed because the nonlinearity is generated by the exponential term. This BVP has the exact solution,

\[f(x)=-2\ln\left[\frac{\cosh\Big(\big(x-\frac{1}{2}\big)\frac{\beta}{2}\Big)}{\cosh\big(\frac{\beta}{4}\big)}\right] \tag{62}\]

where \(\beta\) satisfies \(\beta=\sqrt{2\lambda}\cosh\big(\frac{\beta}{4}\big)\). Bratu's problem has zero, one, and two solutions when \(\lambda>\lambda_{c}\), \(\lambda=\lambda_{c}\), and \(\lambda<\lambda_{c}\), respectively, where \(\lambda_{c}=3.513830719\). In the first stage, the BVP is solved numerically using the modified Galerkin approach with the trial solution (3). For residual correction we need the error differential equation with the error BCs. The error BVP for the given differential equation is,

\[\frac{d^{2}\theta}{dx^{2}}+\lambda e^{\tilde{f}}e^{\theta}=-\frac{d^{2}\tilde{f}}{dx^{2}}\quad\text{with}\quad\theta(0)=0,\ \theta(1)=0 \tag{63}\]

Table IV shows the \(L_{\infty}\) error and \(\mathcal{CR}\) generated using the proposed method for different grid sizes \(h\) for residual correction. Firstly, we present the results for \(\lambda=1\).

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
 & \(h\) & \(L_{\infty}\) & \(\mathcal{CR}\) & **Reference Results (\(L_{\infty}\))** \\
\hline
 & \(0.1\) & \(8.2287\times 10^{-08}\) & & **[12], 2012:** \(6.187\times 10^{-05}\) \\
Bernstein & \(0.05\) & \(7.6868\times 10^{-09}\) & 3.4202 & **[15], 2004:** \(1.348\times 10^{-05}\) \\
Polynomials & \(0.025\) & \(6.2340\times 10^{-10}\) & 3.6242 & **[32], 2011:** \(9.048\times 10^{-07}\) \\
of & \(0.01\) & \(1.8176\times 10^{-11}\) & 3.8580 & **[34], 2019:** \(9.90\times 10^{-08}\) \\
degree 4 & \(0.005\) & \(1.1832\times 10^{-12}\) & 3.9413 & **[35], 2019:** \(6.41\times 10^{-13}\) \\
 & \(0.0025\) & \(7.6439\times 10^{-14}\) & 3.9522 & \\
\hline \hline
\end{tabular}
\end{table}
Table IV: \(L_{\infty}\) AND CONVERGENCE RATE (\(\mathcal{CR}\)) FOR PROBLEM 4 FOR \(\lambda=1\)

From Table IV we can observe that we obtain an accuracy of \(10^{-14}\) by using Bernstein polynomials of degree 4 in our present scheme for \(\lambda=1\). Whereas, for \(\lambda=1\), in [12] Mustafa _et al._ implemented the subdivision collocation method and in [15] Khuri implemented the Laplace transformation method, and the obtained \(L_{\infty}\) in both cases is \(10^{-05}\); in [32], Abbasbandy _et al._ implemented the Lie Group Shooting method and the obtained \(L_{\infty}\) is \(10^{-07}\); in [34], Ala O _et al._ implemented the quintic B-spline approach and the obtained \(L_{\infty}\) is \(10^{-08}\); and in [35] Roul & Thula implemented the B-spline collocation method and the obtained \(L_{\infty}\) is \(10^{-13}\). Then, we present the results for \(\lambda=2\). From Table V we can observe that we obtain an accuracy of \(10^{-13}\) by using Bernstein polynomials of degree 4 in our present scheme for \(\lambda=2\).
Whereas, for \(\lambda=2\), in [12] Mustafa _et al._ implemented the subdivision collocation method and the obtained \(L_{\infty}\) is \(10^{-04}\); in [32], Abbasbandy _et al._ implemented the Lie Group Shooting method and the obtained \(L_{\infty}\) is \(10^{-06}\); in [33], Deeba _et al._ implemented the decomposition algorithm and the obtained \(L_{\infty}\) is \(10^{-02}\); in [34], Ala O _et al._ implemented the quintic B-spline method and the obtained \(L_{\infty}\) is \(10^{-06}\); and in [36], Farzana & Islam implemented the Chebyshev-Legendre collocation method and the obtained \(L_{\infty}\) is \(10^{-0}\). So, we can say that, in our proposed scheme, we can attain better accuracy by using fewer polynomials and then correcting the residuals.

## VII Conclusion

In this research, we have obtained numerical solutions for both linear and nonlinear BVPs, followed by residual corrections. Using the weighted residual technique, a numerical solution is generated in the first stage. Then, updated approximations are established using the implicit compact finite difference technique. We have developed the formulation of the compact finite difference scheme for nonlinear differential equations, both at the interior nodes and at the boundary, with Dirichlet, Neumann, and Robin boundary conditions, and established the convergence and stability of our numerical solutions. Our research demonstrates the effectiveness of the residual correction technique in improving the accuracy of the approximations. We have compared our results with those published in the literature and demonstrated the superiority of our approximations in terms of accuracy. Overall, this research offers valuable insights into the numerical solution of boundary value problems and provides a practical method for achieving high-precision results for various applications. The proposed method can be applied to higher-order boundary value problems as well as to partial differential equations.
2310.02113
FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks
Federated learning (FL) is a distributed learning process that uses a trusted aggregation server to allow multiple parties (or clients) to collaboratively train a machine learning model without having them share their private data. Recent research, however, has demonstrated the effectiveness of inference and poisoning attacks on FL. Mitigating both attacks simultaneously is very challenging. State-of-the-art solutions have proposed the use of poisoning defenses with Secure Multi-Party Computation (SMPC) and/or Differential Privacy (DP). However, these techniques are not efficient and fail to address the malicious intent behind the attacks, i.e., adversaries (curious servers and/or compromised clients) seek to exploit a system for monetization purposes. To overcome these limitations, we present a ledger-based FL framework known as FLEDGE that allows making parties accountable for their behavior and achieve reasonable efficiency for mitigating inference and poisoning attacks. Our solution leverages crypto-currency to increase party accountability by penalizing malicious behavior and rewarding benign conduct. We conduct an extensive evaluation on four public datasets: Reddit, MNIST, Fashion-MNIST, and CIFAR-10. Our experimental results demonstrate that (1) FLEDGE provides strong privacy guarantees for model updates without sacrificing model utility; (2) FLEDGE can successfully mitigate different poisoning attacks without degrading the performance of the global model; and (3) FLEDGE offers unique reward mechanisms to promote benign behavior during model training and/or model aggregation.
Jorge Castillo, Phillip Rieger, Hossein Fereidooni, Qian Chen, Ahmad Sadeghi
2023-10-03T14:55:30Z
http://arxiv.org/abs/2310.02113v1
# FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks

###### Abstract.

Federated learning (FL) is a distributed learning process that uses a trusted aggregation server to allow multiple parties (or clients) to collaboratively train a machine learning model without having them share their private data. Recent research, however, has demonstrated the effectiveness of inference and poisoning attacks on FL. Mitigating both attacks simultaneously is very challenging. State-of-the-art solutions have proposed the use of poisoning defenses with Secure Multi-Party Computation (SMPC) and/or Differential Privacy (DP). However, these techniques are not efficient and fail to address the malicious intent behind the attacks, i.e., adversaries (curious servers and/or compromised clients) seek to exploit a system for monetization purposes. To overcome these limitations, we present a ledger-based FL framework known as FLEDGE that allows making parties accountable for their behavior and achieve reasonable efficiency for mitigating inference and poisoning attacks. Our solution leverages crypto-currency to increase party accountability by penalizing malicious behavior and rewarding benign conduct. We conduct an extensive evaluation on four public datasets: Reddit, MNIST, Fashion-MNIST, and CIFAR-10. Our experimental results demonstrate that (1) FLEDGE provides strong privacy guarantees for model updates without sacrificing model utility; (2) FLEDGE can successfully mitigate different poisoning attacks without degrading the performance of the global model; and (3) FLEDGE offers unique reward mechanisms to promote benign behavior during model training and/or model aggregation.

blockchain, federated learning, homomorphic encryption, security and privacy

Inference attacks allow an adversary to determine whether a specific data sample was used for training [33; 59], negating the privacy gains of FL. The most prominent example of this attack is the Membership Inference Attack [59]. FL is particularly vulnerable to inference attacks executed by a curious server, as it has access to the local models before aggregation. While the aggregation anonymizes the individual clients' contributions and makes inference attacks significantly harder [17], the server's access to individual local models poses a significant threat to the clients' privacy and raises concerns for applications with privacy-sensitive data [57]. To mitigate inference attacks, state-of-the-art defenses use one of the following approaches: Secure Multi-party Computation (SMPC) [2; 17; 54], Differential Privacy (DP) [38; 42], or Multi-Key Homomorphic Encryption (MKHE) [55]. In terms of security, recent work [5; 58] has demonstrated the high vulnerability of FL systems against poisoning attacks. In this type of attack, adversaries leverage the distributed nature of FL to take control of one or more training clients. Malicious clients can alter their behavior and skew the convergence of the global model. Poisoning attacks can be divided into two categories: untargeted and targeted attacks. Untargeted attacks aim to degrade the utility of the global model [25]. In targeted (or backdoor) attacks [5; 66], however, an adversary guides the global model to a well-defined outcome.
In one example of an effective backdoor attack, a malicious model is carefully trained to maintain high accuracy on the main task but to trigger the backdoor behavior if specific patterns are detected during inference. Examples of such backdoor behavior include injecting advertisements into an FL-based word suggestion system or making an FL-based network intrusion detection system fail to detect the network traffic of certain malware. A major threat of these attacks is that models remain black boxes: in practice, determining whether a model contains a hidden backdoor is still an unsolved problem. This stealthy behavior makes backdoor attacks a greater concern than other poisoning attacks. State-of-the-art defenses aim to minimize the threat of poisoning attacks using techniques such as model filtering [6; 58] to detect and exclude poisoned models from aggregation, and/or model clipping [48; 53] to limit the impact of poisoned updates. These approaches, however, are limited by the underlying assumptions imposed by SMPC, e.g., availability during computations, and, more importantly, they fail to address the motivation behind client misbehavior, i.e., accountability. Without crediting the contributions of individual clients, a malicious client may continuously try to poison the model until it succeeds in some round. Further, mitigating poisoning and inference attacks at the same time is a complex task, and existing approaches [27; 48] are not efficient. To overcome the limitations of existing solutions, we tackle the following questions: i) how to achieve a privacy-preserving aggregation framework that penalizes malicious intent during the aggregation process; ii) how to discriminate poisoned from benign updates to dynamically reward or punish clients' behavioral patterns.

**Goals and Contributions.** In this paper, we present the design, implementation, and evaluation of FLEDGE, a fully-decentralized crypto-system that provides resiliency to inference and poisoning attacks. FLEDGE is a 3-layer blockchain FL framework powered by smart contracts, where each layer operates specific components, i.e., training clients (client layer), smart contracts (computation layer), and the ledger (data layer). Our primary motivation for using smart contracts is to provide a decentralized and immutable environment that protects the security and privacy of models. By using smart contracts, we achieve reasonable efficiency and are able to mitigate poisoning and inference attacks. FLEDGE leverages blockchain's decentralization to yield high computation availability, and ledger immutability (i.e., committed data cannot be changed) to prevent data alterations that could lead to unexpected results such as inaccurate model filtering or incorrect distributions of rewards. To address the first question and perform privacy-preserving aggregation, we have to consider the following factors: a private computation framework and aggregator compensation. To design a privacy-preserving computation platform, FLEDGE introduces the concept of Blockchain Two Contract Computation (BT2C). BT2C is defined as a semi-honest relationship3 between two smart contracts using Homomorphic Encryption (HE); in particular, we rely on the CKKS4 encryption scheme [11].

Footnote 3: The semi-honest setting is a well-established security model that dictates how involved parties must adhere to the pre-established protocol.

Footnote 4: Note that we use the term HE to refer to CKKS in the rest of this paper.
Compared to SMPC and MKHE, BT2C is a decentralized cryptosystem based on HE that leverages the blockchain ledger to improve trust among smart contracts, where one contract (Defender) acts as a decryption service and the other contract (Gateway) acts as a computation hub. For our implementation, we develop a _secure decryption_ method that includes a compensation algorithm to evaluate a reward for the aggregation service based on its behavior. Our approach operates as a semi-honest cryptographic service such that the Gateway contract receives encrypted models from training clients and performs computations, while the Defender contract evaluates model characteristics (e.g., cosine distance) and provides incentives (i.e., crypto-currency tokens) for benign behavior. To address the second question and discriminate poisoned models, we first separate it into the following components: poisoning detection and client compensation. To implement the poisoning detection, FLEDGE calculates the cosine distances between local and global models, and utilizes the Gaussian Kernel Density Estimation (G-KDE) function to divide them into different clusters. Here, a cluster is identified by the locations of local minima5, which are leveraged as breaking points to separate the distances into different clusters. After benign and malicious clusters have been correctly identified, FLEDGE implements a round-based client compensation algorithm to provide additional incentives for benign training behavior, and to penalize those who attempt model poisoning.

Footnote 5: A local minimum is a point on the associated function (e.g., G-KDE) whose value is less than every other point in its vicinity.

In summary, FLEDGE's contributions are threefold: (1) FLEDGE offers strong privacy guarantees by operating on models in ciphertext using the proposed BT2C protocol. Our approach is shown to be resilient against white-box inference attacks, which succeed only with probability \(\frac{1}{m!}\), where \(m\) represents the number of ciphers generated per model (Sect. 6.1). (2) FLEDGE mitigates poisoning attacks using the proposed G-KDE clustering method to analyze the distribution of cosine distances and remove poisoned models. Our extensive evaluation on four public datasets (i.e., Reddit, MNIST, Fashion-MNIST, and CIFAR-10) indicates that FLEDGE is resilient against untargeted and targeted poisoning attacks (Sect. 6.2). (3) FLEDGE relies on our proposed aggregation and training compensation algorithms to offer incentives to benign aggregation services and benign training clients. Our results indicate that the proposed compensation algorithms automatically adjust the rewards to deter malicious intent during the training process (Sect. 6.3).

## 2. Requirements and Challenges

This section presents the security and privacy requirements that FLEDGE fulfills and the challenges to be tackled in achieving them.

### Privacy for FL

During model submission, clients upload trained models to the aggregation server such that the server generates a new global model (see App. A for details on the FL process). At this point, the server has complete access to each model (e.g., model weights, structure, and hyperparameters), which increases the threat of white-box inference attacks.
To mitigate the attack, the defender has to satisfy the following requirements:

**P1: Utility Retention.** The defense must provide resiliency against inference attacks executed by curious servers while maintaining the utility of the model, i.e., the main task accuracy (MA) remains the same with or without the defense. Therefore, the performance of the new global model must not be compromised as privacy levels increase.

**P2: Computation Availability.** The defense must remain available to process and analyze encrypted models6. Therefore, no model computation may fail due to limited resource availability.

Footnote 6: An encrypted model is a collection of ciphers that represent encrypted weights.

To the best of our knowledge, existing solutions for inference attacks that also preserve model utility rely on fragile computation infrastructures, e.g., SMPC (Krishnan et al., 2017; Wang et al., 2018) or MKHE (Wang et al., 2018). Thus, they require high availability of all system components for privacy-preserving computations (SMPC and MKHE) (Wang et al., 2018), and they incur high computational complexity when detecting poisoned models under such privacy-preserving computations. Our scheme combines blockchain (see App. B) and HE, in particular the scheme of Cheon-Kim-Kim-Song (CKKS) (Chen et al., 2018) (see App. C), to introduce a unique privacy-preserving computation framework that overcomes these limitations. However, the use of blockchain brings additional concerns. Thus, FLEDGE addresses the following challenges:

**C1:** How to leverage blockchain to improve trust between computation parties.

**C2:** How to effectively combine HE and blockchain to limit the ledger's transparency effect and increase the privacy of model updates.

### Security for FL

FL is a distributed learning approach that allows numerous clients to participate in the training process through model submissions. An adversary who controls a fraction of the clients can then use their influence to poison the new global model. To mitigate poisoning attacks, the defender has to fulfill the additional requirements:

**S1: Effective Poisoning Mitigation.** The defense must detect poisoning attempts, e.g., untargeted and targeted attacks, minimize their impact on the global model, and preserve model utility. For example, for targeted (backdoor) attacks, a defense should keep the backdoor accuracy (BA) at the same level as without the attack. In addition, similar to **P1**, the defense must not negatively affect the training process, e.g., decrease MA by removing benign models.

**S2: Autonomous Behavior.** The defense must adjust automatically to different attack strategies without manual configuration.

Like existing solutions, FLEDGE leverages the cosine distance between the local models and the global model to cluster their scores dynamically. Our approach, however, also leverages this information to apply a deterrent to malicious clients, which adds another layer of security. Therefore, FLEDGE addresses the additional challenges:

**C3:** How to solve the dilemma of preventing the server from analyzing the local models against inference attacks while the server has to inspect the local models to detect/mitigate poisoned models.

**C4:** How to discriminate poisoned models s.t. malicious clients can be correctly identified to receive disciplinary actions.

**C5:** How to credit the clients over multiple training rounds to make malicious clients accountable for their attacks.

## 3. Adversary Model and Assumptions
In this section, we describe the threat model and assumptions used in the rest of the paper. We highlight the adversary's capabilities and main objectives.

### Privacy Threat

Classic FL implementations rely on an aggregation server to compute a new global model every training round (see App. A). However, a malicious aggregation server can extract private information from each of the local models, thus raising privacy concerns.

**White-box Inference Attack Goal.** Aligned with previous research (Krishnan et al., 2017; Wang et al., 2018), the _honest-but-curious_ aggregator instantiates the attack on local models \(W_{i}\) before aggregation. In other words, an adversary \(\mathcal{A}^{p}\) is aware of any process happening in the aggregator but remains _honest_, i.e., continues to perform the aggregator's benign tasks, to avoid detection. However, \(\mathcal{A}^{p}\) is also _curious_, having the ability to infer private information about the training data \(D_{i}\) while processing \(W_{i}\). Formally, \(\mathcal{A}^{p}\) leverages \(W_{i}\) to learn whether a given \(x\) was used as part of \(D_{i}\), allowing \(\mathcal{A}^{p}\) to extract sensitive information from every local model. Aligned with previous work (Chen et al., 2018; Krishnan et al., 2017; Wang et al., 2018), we focus on inference attacks on the local models, as the aggregation anonymizes the individual contributions.

**\(\mathcal{A}^{p}\) Capabilities.** We assume \(\mathcal{A}^{p}\) is in full control of the aggregation server s.t. \(\mathcal{A}^{p}\) has access to every local model submitted by the clients. We also assume \(\mathcal{A}^{p}\) cannot compromise clients directly or affect any of the training processes.

### Security Threat

Multiple clients are selected to improve model accuracy. This collaboration allows one or more clients to conduct malicious activities in any training round.

**Targeted Poisoning Attack Goals.** In a targeted poisoning attack, the adversary \(\mathcal{A}^{s}\) has the following goals: poisoning injection and defense evasion. For poisoning injection, \(\mathcal{A}^{s}\) manipulates the local model \(W_{i}\) to produce a poisoned local model \(W^{\prime}_{i}\). \(W^{\prime}_{i}\) is then used to alter the behavior of the global model \(G_{t}\). In state-of-the-art targeted poisoning attacks (also called backdoor attacks), \(\mathcal{A}^{s}\) guides the poisoned global model \(G^{\prime}_{t}\) to behave normally at all times except when a specific set of conditions or triggers is present in the input. To achieve its secondary goal, \(\mathcal{A}^{s}\) manipulates \(W^{\prime}_{i}\) s.t. it remains as close as possible to \(W_{i}\), e.g., by adapting the loss function (Beng et al., 2017).

**\(\mathcal{A}^{s}\) Capabilities.** Similar to recent studies (Beng et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019), we assume \(\mathcal{A}^{s}\) maliciously controls \(f\) compromised clients, less than half of the \(n\) total clients (\(f<\frac{n}{2}\)). We also assume \(\mathcal{A}^{s}\) cannot observe benign clients' local data or their submitted local updates. To introduce a backdoor into the global model, \(\mathcal{A}^{s}\) can launch a combination of data poisoning (Zhu et al., 2019) and model manipulation (Beng et al., 2017) attacks.
Data poisoning is when \(\mathcal{A}^{s}\) adds _poisoned_ data to the existing training sets during model training; e.g., for an image classification task, \(\mathcal{A}^{s}\) can poison an image by drawing a shape into a specific corner. This attack allows \(\mathcal{A}^{s}\) to change model predictions to its desired outputs every time a trigger is identified during inference. In contrast, the model manipulation attack lets \(\mathcal{A}^{s}\) control the training algorithm to alter the convergence point of a model. This attack can be implemented through model scaling, modifying the loss function, and/or adapting dedicated hyperparameters.

### Assumptions

FLEDGE provides numerous security and privacy benefits under the following assumptions.

**A1: Consensus Protocol is not Compromised.** Since blockchain is the underlying platform used to exchange information and execute smart contracts, we assume the consensus process is not compromised.

**A2: Non-colluding Servers (Smart Contracts).** During the manipulation of encrypted data, the deployed smart contracts engage in a semi-honest relationship to enable a privacy-preserving aggregation infrastructure. Therefore, to preserve the privacy guarantees, we assume an adversary cannot control both contracts and their storage components simultaneously.

**A3: Clients Perform Encryption.** Training clients affiliated with FLEDGE are assumed to have sufficient computational resources to perform encryption.

## 4. Design

This section first provides a high-level overview of FLEDGE and then describes its components in detail.

### High-level Overview

FLEDGE is designed to fulfill the privacy requirements (**P1**, **P2**) and security requirements (**S1**, **S2**). This is achieved by a layered framework that we detail below. To detect poisoned models and satisfy **S1** and **S2**, FLEDGE uses a Gaussian Kernel Density Estimation (G-KDE) function to partition the received model updates into distinct clusters based on their distance to the global model. Since the cosine distance captures the angle between two models, it reveals the changes that were applied to a local model; compared to other metrics such as the Euclidean distance, it is more stable against manipulations. To satisfy **P1** and **S1**, FLEDGE uses HE to encrypt models and perform privacy-preserving computations, i.e., private aggregation and/or private distances between models. In addition, FLEDGE leverages blockchain, in particular smart contracts, to meet **P2** and **S2**. Informally, FLEDGE is a 3-layer blockchain framework regulated by smart contracts that provides FL services to train models on arbitrary learning tasks, e.g., image classification and/or word prediction. For every new learning task, a session reward is set by the owners of the task s.t. interested parties (clients and/or contracts) who join can be rewarded for their benign efforts. In other words, FLEDGE operates a crediting system to encourage participants to avoid malicious attempts to break the system, e.g., white-box inference or poisoning attacks. To manage every reward, FLEDGE registers training clients by generating unique cryptographic identities via its Membership Service Provider (MSP).7 Fig. 1 presents the different layers of FLEDGE.

Footnote 7: In Fabric, an MSP provides verifiable identities to members of the blockchain network.

**Client Layer.** This is the base layer of FLEDGE, and it is where the training clients reside. As discussed in Sect. 3.2, some of the clients can be controlled by the poisoning attacker \(\mathcal{A}^{s}\).
**Computation Layer.** This layer illustrates the logical components that enable FLEDGE to operate autonomously. It is formed by two smart contracts, namely the Gateway and Defender contracts. The Gateway contract acts as the access gateway where clients submit their local models. Its core functions, e.g., model process, model analysis, and model aggregate, provide privacy-preserving computations for encrypted models, addressing **C1**. The Defender contract, on the other hand, provides support to the Gateway in the form of security and privacy mechanisms, e.g., model privacy and model security, thus allowing FLEDGE to defend against multiple threats. We define this blockchain infrastructure as Blockchain Two Contract Computation (BT2C). The goal of our BT2C implementation is to enable HE-based computations and secure decryption functions tailored to provide privacy-preserving FL (as defined in Sect. 3.1). Further details about the internal methods of both smart contracts can be found in Steps 2-5 of Sect. 4.2.

**Data Layer.** This layer represents the storage components of FLEDGE, which constitute two storage oracles, namely A and B, and a blockchain ledger. The storage oracles (i.e., external databases) are used to manage encrypted models and decryption keys for the Gateway and Defender contracts, respectively, and the blockchain ledger stores information about FLEDGE (e.g., model information, session information) using the following transaction types (TT1 - TT7).

_Init Transactions (TT1)_ are generated for each learning task to determine the owner of the task, the encryption keys to be used (i.e., HE public key \(P_{k}\)), the number of rounds \(T\) required, and the reward amount \(R\) for the full training session. _Storage Transactions (TT2)_ are generated for every encrypted local model \(W_{i}^{*}\) submitted by the \(K\) clients to save the client ID (e.g., wallet address to receive payments), model ID, and the encrypted offset value \(\delta_{i}^{*}\), where \(i\in[1,K]\). The offset \(\delta\) is a random value generated based on the standard deviation of the local model \(W_{i}\) that is injected before model encryption to further obfuscate \(W_{i}^{*}\). Further information about the use of \(\delta\) is found in Step 1 of Sect. 4.2. _Analysis Transactions (TT3)_ are created for every TT2 to compute the cosine distance between \(W_{i}^{*}\) and the current encrypted global model \(G_{t}^{*}\), where \(t\in[1,T]\). TT3 is used to store the model ID and its respective score \(c_{i}\), i.e., cosine distance. _Privacy Transactions (TT4)_ are generated when a malicious contract (i.e., Gateway) is attempting to break the privacy of \(W_{i}^{*}\). TT4 includes the computed aggregation reward \(R_{C}\) for a given training session. Further information for \(R_{C}\) is found in Step A of Sect. 4.2. _Security Transactions (TT5)_ are created after FLEDGE evaluates every \(c_{i}\) from TT3 to classify clients into two categories: benign or malicious. This is represented in TT5 as a list of benign IDs and a list of malicious IDs. _Appraisal Transactions (TT6)_ are determined after TT5 to calculate the round reward \(R_{r}\) for benign clients. Additional details for \(R_{r}\) can be found in Step 4 of Sect. 4.2. _Global Transactions (TT7)_ are determined after model aggregation has occurred. TT7 includes the global ID, the decrypted weights of the new global model \(G_{t+1}\), and the corresponding encrypted global weights \(G_{t+1}^{*}\).
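To make the ledger's contents concrete, the following is a minimal sketch of what these records might look like; the field names are illustrative assumptions rather than FLEDGE's actual schema, and TT4, TT6, and TT7 follow the same pattern.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InitTx:            # TT1: one per learning task
    owner: str
    public_key: bytes    # HE public key P_k (shipped with its encryption context)
    rounds: int          # number of training rounds T
    reward: float        # session reward R

@dataclass
class StorageTx:         # TT2: one per submitted encrypted local model
    client_id: str       # wallet address for payments
    model_id: str
    enc_offset: bytes    # encrypted offset delta_i*

@dataclass
class AnalysisTx:        # TT3: cosine distance of a local model to G_t*
    model_id: str
    score: float         # c_i

@dataclass
class SecurityTx:        # TT5: clustering outcome for the round
    benign_ids: List[str] = field(default_factory=list)
    malicious_ids: List[str] = field(default_factory=list)
```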
### FLEDGE Details

To initialize FLEDGE, the smart contracts seen in the computation layer are deployed to the blockchain network and initialized. The initialization process for the smart contracts is completed after generating TT1. To avoid the possibility of data modification (or forks) at run-time, we refer back to assumption **A1**. Similarly, each smart contract takes into consideration assumption **A1** to protect the integrity of the contracts before/after deployment into the blockchain, and assumption **A2** to prevent an adversary from gaining full control over the system. Finally, to achieve privacy-preserving computations (as defined in Sect. 3.1), clients are bound to assumption **A3** to protect the privacy of local updates. The annotated steps seen in Fig. 1 illustrate the learning process of FLEDGE during a training round \(t\). Here, we separate the learning process into 6 steps, i.e., 5 main steps (Step 1 - Step 5) and 1 intermediary step (Step A). After multiple clients have joined a learning task, they first download the previous global model \(G_{t-1}\) and the corresponding encryption key \(P_{k}\) from TT1 at the Gateway contract. Note that for every \(P_{k}\), a corresponding secret key \(S_{k}\) is generated and maintained by the Defender contract. Furthermore, every \(P_{k}\) includes an encryption context, which is provided to the clients. The encryption context contains the degree of the polynomial (PolyDeg) used to generate \(P_{k}\) and \(S_{k}\). This value determines the size of \(P_{k}\) and the size of the produced ciphers in terms of bytes. Clients may continue to use the same \(P_{k}\) unless a new public key is required by the system.

**Model Encryption (Step 1).** Each client \(i\) starts to train the model using local data \(D_{i}\) for a predefined number of epochs. After training, clients generate and inject an offset constant \(\delta_{i}\) such that \(W_{i}^{\prime}=W_{i}+\delta_{i}\). More specifically, \(\delta_{i}\) is generated from the multiplication of two random elements: the model's standard deviation \(\sigma_{W_{i}}\) after training, and a scaling factor \(f_{s}\in[-100,100]\) s.t. \(f_{s}\neq 0\). Note that \(f_{s}\) is bounded to \([-100,100]\) to avoid exceedingly large numbers (positive or negative) as model weights. This is primarily because we are interested in shifting (left or right) the distribution of \(W_{i}\) using the inherently random properties of each local model. Contrary to DP, \(\delta_{i}\) is recorded to be used in Step 3. The offset is applied to mask \(W_{i}\) and obfuscate the model during private computations. At this stage, an attacker would need to brute-force every \(\delta_{i}\) in order to break the privacy of even a single local update. The clients then encrypt \(W_{i}^{\prime}\) and \(\delta_{i}\). Once a client has generated its encrypted local model \(W_{i}^{*}\) and encrypted offset \(\delta_{i}^{*}\), it submits them to the Gateway contract for further analysis. By this, FLEDGE addresses **C2**. Due to the limitations of HE, a client is required to first separate \(W_{i}^{\prime}\) into multiple chunks of data s.t. \(W_{i}^{\prime}=w_{1}^{\prime}\ldots w_{n}^{\prime}\), where \(n\) is the number of chunks per model. To calculate \(n\), we first determine the capacity of every cipher \(z\), i.e., the maximum number of elements each cipher is able to contain. We define the capacity to be PolyDeg / 2, e.g., a PolyDeg of 2048 yields a capacity of 1024 elements per cipher.
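A minimal sketch of Step 1, assuming the open-source TenSEAL Python bindings for CKKS (the paper's implementation uses Microsoft SEAL directly); the parameter choices and helper names are illustrative, not FLEDGE's actual code.

```python
import numpy as np
import tenseal as ts

POLY_DEG = 4096                                     # minimum degree used by FLEDGE
# Illustrative CKKS parameters for this polynomial degree.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=POLY_DEG,
                 coeff_mod_bit_sizes=[40, 20, 40])
ctx.global_scale = 2 ** 20

def encrypt_model(weights: np.ndarray, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # offset delta_i = sigma(W_i) * f_s with f_s in [-100, 100], f_s != 0
    f_s = 0.0
    while f_s == 0.0:
        f_s = rng.uniform(-100, 100)
    delta = float(np.std(weights)) * f_s
    masked = weights + delta                        # W_i' = W_i + delta_i
    cap = POLY_DEG // 2                             # slots per cipher
    chunks = [masked[i:i + cap] for i in range(0, len(masked), cap)]
    ciphers = [ts.ckks_vector(ctx, c.tolist()) for c in chunks]  # cf. Eq. (1)
    enc_delta = ts.ckks_vector(ctx, [delta])        # delta_i*
    return ciphers, enc_delta
```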
The number of ciphers required to encrypt \(W_{i}^{\prime}\) is thus directly proportional to the number of trainable parameters for a given PolyDeg, as illustrated by Eq. 1:

\[z_{1}\ldots z_{n}=\mathrm{Encrypt}(w_{1}^{\prime}\ldots w_{n}^{\prime},P_{k}),\quad n=\frac{\mathrm{len}(W_{i}^{\prime})}{\mathrm{PolyDeg}/2}+1 \tag{1}\]

For FLEDGE, we have determined that a minimum PolyDeg of 4096 is required to successfully compute the desired private functions.

Figure 1. FLEDGE System Overview. Annotated steps illustrate the operation of FLEDGE during a training round \(t\).

**Model Process (Step 2)** is the initial function that receives every \(W_{i}^{*}\) provided by the clients. In this step, the Gateway contract stores \(W_{i}^{*}\) into storage oracle A to avoid public visibility to any other contract deployed in the network. The storing process saves the ciphers as encoded text into a single document. Every pair of \(W_{i}^{*}\) and \(\delta_{i}^{*}\) is used to generate and submit a new TT2 to the ledger.

**Model Analysis (Step 3)** uses the previously submitted TT2 to retrieve the encrypted model from storage together with its corresponding encrypted offset. \(\delta_{i}^{*}\) is used to offset the encrypted global model \(G_{t-1}^{*}\), which can be downloaded from TT7 of the previous round. This process aligns the encrypted models so that an accurate cosine distance can be computed, as given by Alg. 1.

```
Input: \(\delta^{*}\) (encrypted offset), \(G^{*}\) (encrypted global model), \(W^{*}\) (encrypted local model)
1  \(Z_{D}\leftarrow\) PrivateDotProduct(\(G^{*}+\delta^{*}\), \(W^{*}\))
2  \(X_{D}\leftarrow\) SecureDecryption(\(Z_{D}\))              * defender function
3  \(Z_{G}\leftarrow\) PrivateMagnitudesSquared(\(G^{*}+\delta^{*}\))
4  \(X_{G}\leftarrow\) SecureDecryption(\(Z_{G}\))
5  \(Z_{L}\leftarrow\) PrivateMagnitudesSquared(\(W^{*}\))
6  \(X_{L}\leftarrow\) SecureDecryption(\(Z_{L}\))
7  \(c\leftarrow 1-\frac{\sum_{i=1}^{n}X_{D}}{\sqrt{\sum_{i=1}^{n}X_{G}}\sqrt{\sum_{i=1}^{n}X_{L}}}\)
8  UpdateScoreToLedger(\(c\))                                * new TT3
```
**Algorithm 1** Private Cosine Distance

Formally, the private cosine distance function seen in Alg. 1 requires as inputs the encrypted global model \(G^{*}\), the encrypted local model \(W^{*}\), and the corresponding encrypted offset \(\delta^{*}\). Its goal is to compute the cosine distance score \(c\) between \(G^{*}\) and \(W^{*}\). To calculate the distance, the computation process is segmented into three BT2C rounds. This is to overcome the practical limitations of HE, e.g., the inability to compute roots. The first round (lines 1-2) starts by computing the encrypted dot product \(Z_{D}\) between \(G^{*}+\delta^{*}\) and \(W^{*}\), where \(Z_{D}\) is a collection of ciphers \(z_{1},\ldots,z_{n}\) that represents the encrypted value of the dot product operation. \(Z_{D}\) is then delivered to the Defender contract to perform _secure decryption_. Note that \(Z_{D}\) might be in any order to add randomness to the decryption process. This process returns \(X_{D}\), a collection of numbers \(x_{1},\ldots,x_{n}\) that represents the result of \(\sum G\cdot W\). Similarly, the two remaining rounds (lines 3-4 and 5-6) are used to generate \(X_{G}\) (\(\sum G^{2}\)) and \(X_{L}\) (\(\sum L^{2}\)), respectively. To finalize the computation process, the values of all three rounds are combined to calculate \(c\), which is submitted to the ledger (TT3), as seen in lines 7-8.
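Stripped of encryption, Alg. 1 computes the ordinary cosine distance; the following plaintext mirror of its three securely decrypted quantities may help fix the arithmetic. Here `W_masked` already carries the client's offset, so the global model is shifted by the same \(\delta\) to align the two; this is an illustration, not FLEDGE's contract code.

```python
import numpy as np

def cosine_distance(G, W_masked, delta):
    """Plaintext mirror of Alg. 1's three BT2C rounds."""
    G_shifted = G + delta                      # align with the masked local model
    dot = float(np.dot(G_shifted, W_masked))   # round 1: sum(G . W)
    g2 = float(np.dot(G_shifted, G_shifted))   # round 2: sum(G^2)
    l2 = float(np.dot(W_masked, W_masked))     # round 3: sum(W^2)
    return 1.0 - dot / (np.sqrt(g2) * np.sqrt(l2))
```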
**Model Privacy (Step A)** relies on TT1, TT2, and previous TT4 to enable the _secure decryption_ function seen in Alg. 2. This function includes two unique operations: limitation of data decryption (lines 1-13) and reward adjustment (lines 15-20). The former analyzes the information in every cipher to either return a collection of numbers \(X\) (see Step 3) or a collection of model chunks \(\rho\) (see Step 5). The latter regulates the contract reward \(R_{C}\) using the information stored in the ledger. The use of \(R_{C}\) enables FLEDGE to incentivize benign aggregation behavior in the framework while penalizing malicious conduct such as attempting to access local models. Note that in FLEDGE, \(R_{C}\) is set to be 10% (max) of the session reward \(R\) by default.

Alg. 2 requires as inputs only the computation ciphers \(z_{1},\ldots,z_{m}\), where \(m\) is the number of submitted ciphers in the BT2C round. Note that for secure decryption, \(m\) is independent of the number of ciphers \(n\) in an encrypted model such that \(m\leq n\).8 Our approach returns the decrypted data, represented by an array of numbers \(X\) or an array of model chunks \(\rho\). To initiate the decryption process and address **C3**, the Defender contract retrieves (lines 1-2) every encrypted offset \(\delta_{1}^{*},\ldots,\delta_{K}^{*}\) from TT2, and the corresponding secret key \(S_{k}\) from storage oracle B. The variation tolerance value in line 3 is set to \(t=0.05\) (or \(5\%\)), as it is required to discriminate summation operations, e.g., \(\sum G\cdot W\), from model operations, e.g., model aggregation. More specifically, summation operations are characterized by a low array variation, which indicates that all elements are the same or closely related9, whereas model operations exhibit high variation, as they are represented by distinct values within the decrypted array. After decryption (line 5), the data is analyzed w.r.t. the array variation factor \(v\) (line 6), where \(v\) is defined as the absolute percent difference between the maximum and minimum elements within \(\rho_{i}\). At this step, if \(v\leq t\), the elements inside \(\rho_{i}\) are considered to be the result of a summation operation, thus generating \(X_{i}\) to represent their average as shown in line 8. Otherwise, the function proceeds to treat \(\rho_{i}\) as a model operation, where \(\rho_{i}\) is adjusted by every \(\delta_{i}\) and divided by the number of available models \(K\) (from TT2) to complete the aggregation process \(\sum_{i=1}^{K}\frac{W_{i}}{K}\), as defined by lines 9-13.

Footnote 8: A practical example is when the aggregation contract (Gateway) divides its computations into multiple rounds s.t. \(Z_{m}\in Z_{n}\) to prevent the Defender from potentially accessing the full data.

To penalize decryption attempts on individual models, the number of registered access attempts increases every time \(K\leq 1\). To finalize the penalization process, the new \(R_{C}\) is generated and added to the ledger (TT4), as seen in lines 18-19, and \(\rho\) is set to empty to avoid leaking information (line 20).

**Model Security (Step 4)** collects the cosine distance scores \(c_{i}\) from TT3, applies the proposed clustering technique to filter poisoned models, and determines the client reward \(R_{r}\) to promote benign training behavior. To remove poisoned updates and address **C4**, our clustering method uses the Gaussian Kernel Density Estimation (G-KDE) function to identify the number of data distributions within the distance scores \(c_{i}\), and selects models according to their assigned distribution. If models are determined to be malicious, they are removed from the aggregation process and penalized by receiving a reward of 0 for that round. The leftover rewards are divided evenly among the other clients, thus addressing **C5**. Additional information related to the use of G-KDE to generate clusters is discussed in App. D.
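A plaintext sketch of this clustering step, assuming SciPy's Gaussian KDE as the density estimator; the group holding the smallest distances corresponds to \(g_{1}\), the cluster closest to the previous global model. This is an illustration of the technique, not FLEDGE's contract code.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmin

def split_scores(scores, f=2000):
    """Split cosine-distance scores at the local minima of a Gaussian KDE."""
    xs = np.linspace(min(scores), max(scores), f)
    density = gaussian_kde(scores)(xs)
    cuts = xs[argrelmin(density)[0]]             # breaking points between clusters
    groups = [[] for _ in range(len(cuts) + 1)]
    for idx, s in enumerate(scores):
        groups[int(np.searchsorted(cuts, s))].append(idx)
    return groups        # groups[0] holds the indices closest to the global model

# With benign scores near 0.1 and poisoned scores near 0.6, two groups typically emerge:
print(split_scores([0.10, 0.12, 0.11, 0.58, 0.61, 0.60]))
```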
The poisoning defense is presented in Alg. 3. The defense requires the distance scores \(c_{1},\ldots,c_{K}\) as input to generate model clusters (or groups). To produce an accurate representation of the distribution of scores, we set a resolution factor \(f\) of 2000 (line 1), where \(f\) denotes the number of data points used to fit the G-KDE function. Note that we empirically found that \(f=2000\) provides the necessary resolution to find local minima. In line 2, \(c_{1},\ldots,c_{K}\) and \(f\) are used to compute the G-KDE, thus generating 2000 \((x,y)\) data points, where \(x\) is bounded between \(\min(c_{1},\ldots,c_{K})\) and \(\max(c_{1},\ldots,c_{K})\), and \(y\) is the density estimation obtained from the process. The density values \(y_{1},\ldots,y_{f}\) are used in line 3 to calculate the location (or index) of every local minimum \(l_{1},\ldots,l_{N}\) found in the distribution of scores. These locations are used to generate a group set \(G\) that contains \(M=N+1\) groups of models (line 4). At this stage, each score is allocated to a specific group \(g\) to separate benign models from malicious ones (lines 6-12). These groups are committed to the ledger as part of TT5 in line 13. Finally, to calculate \(R_{r}\), the defense combines \(R\) and the number of training rounds \(T\) from TT1, the current \(R_{C}\) from TT4, and the group \(g_{1}\) closest to \(G_{t-1}\), as seen in lines 14-17. The updated \(R_{r}\) is committed to the ledger as a new TT6 (line 18).

**Model Aggregate (Step 5)** selects the models defined by TT5 from storage oracle A to compute a new global model \(G_{t}\). The private aggregation (Alg. 4) uses a single BT2C computation round to create \(G_{t}\).

```
Input: \(W_{1}^{*},\ldots,W_{N}^{*}\) (selected models)
1  \(Z\leftarrow W_{1}^{*}\)                                  * encrypted base model
2  for each update \(i\) in \([2,N]\) do
3      \(Z\leftarrow\) Add(\(Z\), \(W_{i}^{*}\))
4  end for
5  \(G_{t}\leftarrow\) SecureDecryption(\(Z\))                  * defender function
6  \(P_{k}\leftarrow\) ReadKeyFromLedger()                     * from TT1
7  \(G_{t}^{*}\leftarrow\) Encrypt(\(G_{t}\), \(P_{k}\))
8  UpdateGlobalToLedger(\(G_{t}^{*}\), \(G_{t}\))               * new TT7
```
**Algorithm 4** Private Aggregation

Formally, Alg. 4 requires as inputs every selected local model \(W_{1}^{*},\ldots,W_{N}^{*}\), where \(N\) is the number of admitted models selected in Step 4. The models are simply added (lines 1-4) into a single encrypted model \(Z\), where \(Z=(z_{1},\ldots,z_{n})\). In line 5, \(Z\) is submitted to the Defender contract to complete the aggregation process, which returns a collection of arrays denoted as \(G_{t}\). The new model is then encrypted (lines 6-7) using the available \(P_{k}\) to produce the new encrypted global model \(G_{t}^{*}\). \(G_{t}\) and \(G_{t}^{*}\) are compiled into a new TT7 and committed into the ledger (line 8) to prepare FLEDGE for the next training round.
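At the cipher level, Step 5 amounts to slot-wise addition followed by a single secure decryption. A minimal sketch under the same hypothetical TenSEAL setup as in Step 1, assuming for brevity that each model fits into one cipher:

```python
import numpy as np

def aggregate(enc_models, deltas, K):
    # Gateway side, Alg. 4 lines 1-4: add the admitted encrypted models
    Z = enc_models[0]
    for enc in enc_models[1:]:
        Z = Z + enc
    # Defender side (secure decryption, cf. Alg. 2 lines 9-13):
    # remove the clients' offsets and divide by the number of models K
    summed = np.array(Z.decrypt())   # requires the Defender-held secret key
    return (summed - sum(deltas)) / K
```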
## 5. Experimental Setup

The following sections illustrate our testbed and describe the datasets and models used during evaluation. Note that a detailed list of evaluation metrics is provided in App. E.

**Experimental Testbed.** We simulate a generic blockchain using Hyperledger Fabric (HLF) to illustrate the practicality of our approach for other blockchain implementations. For the experimental setup, we abstract away the complexities introduced by the consensus protocol and instead focus on the computational entanglements added by FLEDGE. This is because FLEDGE relies solely on smart contracts rather than the underlying blockchain platform. To instantiate the simulation environment, we deploy docker containers on a Windows PC with an Intel Core i7-9750H and 32 GB RAM. The blockchain test network is formed by a single ordering node operated by a single organization, with two peers transacting under a single communication channel. To fit multiple encrypted models within a single block, we increase the block size to 100 MB. Note that this is 100 times larger than a Bitcoin block (1 MB). We implement HE with Microsoft SEAL (Krizhevsky et al., 2017), an efficient and open-source HE library available in C++. Our HE setup uses a PolyDeg of 4096 to encrypt local models. To evaluate models, we use PyTorch on an Ubuntu 20 server with 2 AMD EPYC 7302 CPUs, 480 GB RAM, and 6 NVIDIA A100 GPUs (40 GB RAM each).

**Datasets and Models.** To assess FLEDGE, we use two popular FL applications: word prediction (WP) (Srivastava et al., 2017; Srivastava et al., 2017) and image classification (IC) (Krizhevsky et al., 2017; Srivastava et al., 2017; Srivastava et al., 2017). Note that every model used for evaluation has been pre-trained to reach an acceptable accuracy level. Tab. 1 describes the dataset types (Dataset), the rounded number of records per dataset (#Records), the AI models used for training (Model), the number of trainable parameters (#params), and the number of ciphers (#ciphers) per model. We use smaller models with fewer trainable parameters for the IC datasets, compared with WP, to evaluate how model complexity impacts FLEDGE.

_Word Prediction (WP)._ We use the Reddit dataset as an example of WP for Natural Language Processing (NLP) applications, e.g., the real-world FL application G-Board (Srivastava et al., 2017). The dataset contains over 20M records of Reddit users' posts from November 2017. Following previous work (Beng et al., 2017; Chen et al., 2017), we use a 2-layer LSTM model.

_Image Classification (IC)._ We selected three popular IC datasets of different image complexity: MNIST, Fashion-MNIST (or Fashion for short), and CIFAR-10. They all consist of 10 evenly divided categories, where MNIST contains 70K handwritten digits, Fashion has 70K images of articles of clothing (i.e., shoe, dress, shirt), and CIFAR-10 has 60K pictures of objects (i.e., frog, airplane, car). For MNIST and Fashion, we use simple CNN models comprised of 1 and 2 CNN layers, respectively. We customized the ConvMixer model (Zhu et al., 2017) to train CIFAR-10 with a width of 256.

**Backdoor Attacks.** Aligned with earlier work (Beng et al., 2017; Chen et al., 2017; Chen et al., 2017), we use the constrain-and-scale attack of Bagdasaryan _et al._ (Beng et al., 2017). Note that we focus on adaptive attacks, where the adversary adapts the loss function using the same metric as the defensive strategy. In other words, our adversary model leverages the _cosine distance_ in an attempt to evade our defense. For the Reddit dataset, the adversary aims to make the model predict the word "delicious" after the trigger "pasta from astoria tastes" (Beng et al., 2017). The CIFAR-10 backdoor shall cause all cars in front of a striped background to be classified as birds (Beng et al., 2017). For MNIST and Fashion-MNIST, the backdoor forces the models to predict the number 0 and t-shirt/top, respectively, when the image trigger is detected. The trigger is simply a white rectangle located at the bottom left corner of each poisoned image.
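For the IC tasks, the data-poisoning component can be pictured as the following sketch; the trigger size and pixel scale are assumptions, as the paper does not specify them.

```python
import numpy as np

def poison(img: np.ndarray, target_label: int, size: int = 4):
    """Stamp a white rectangle into the bottom-left corner and relabel."""
    out = img.copy()
    out[-size:, :size] = 1.0   # assumes an HxW image with values in [0, 1]
    return out, target_label

# e.g., for MNIST the adversary relabels poisoned digits as 0:
# poisoned_img, label = poison(mnist_img, target_label=0)
```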
## 6. Experimental Results

The following sections evaluate the privacy of FLEDGE under a naive setup, the security aspect of FLEDGE against poisoning attacks, and the behavior of the reward system in FLEDGE for aggregation services and clients. We also provide a run-time performance analysis of FLEDGE in App. F, which illustrates the increased complexity of our learning process.

\begin{table}
\begin{tabular}{c|c c c|c}
\hline \hline
**Application** & \multicolumn{3}{c|}{**IC**} & **WP** \\
\hline
**Datasets** & MNIST & Fashion & CIFAR-10 & Reddit \\
**\#Records** & 70K & 70K & 60K & 20.6M \\
**Model** & CNN & CNN & ConvMixer\({}_{256/3}\) & LSTM \\
**\#params** & \(\sim\) 23K & \(\sim\) 29K & \(\sim\) 234K & \(\sim\) 20M \\
**\#ciphers** & 12 & 15 & 115 & \(\sim\) 10.1K \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Dataset description for different learning tasks.

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
\multirow{2}{*}{**Poisoning Attack**} & \multirow{2}{*}{**Dataset**} & \multicolumn{2}{c|}{**No Defense**} & \multicolumn{2}{c}{**FLEDGE**} \\
\cline{3-6}
 & & BA & MA & BA & MA \\
\hline
\multirow{4}{*}{Untargeted (Krizhevsky et al., 2017)} & Reddit & – & 15.8 & – & 22.7 \\
 & MNIST & – & 91.5 & – & 98.3 \\
 & Fashion & – & 41.1 & – & 90.0 \\
 & CIFAR-10 & – & 28.9 & – & 83.0 \\
\hline
\multirow{4}{*}{Constrain-and-Scale (Beng et al., 2017)} & Reddit & 100 & 22.6 & 0.0 & 22.7 \\
 & MNIST & 98.0 & 87.7 & 0.4 & 98.3 \\
 & Fashion & 100.0 & 69.3 & 2.4 & 90.6 \\
 & CIFAR-10 & 100.0 & 66.1 & 0.0 & 83.8 \\
\hline
\multirow{4}{*}{DBA (Srivastava et al., 2017)} & Reddit & 100.0 & 22.6 & 0.0 & 22.7 \\
 & MNIST & 82.6 & 77.2 & 0.1 & 98.3 \\
 & Fashion & 99.7 & 36.7 & 1.0 & 98.3 \\
 & CIFAR-10 & 85.2 & 67.4 & 2.1 & 83.8 \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Effectiveness of FLEDGE against multiple poisoning attacks in terms of Backdoor Accuracy % (BA) and Main Task Accuracy % (MA).

Figure 2. Probability of success for white-box inference attack w.r.t. model complexity.

### White-box Inference Attack Resiliency

**Evaluation Baseline.** To evaluate the privacy of FLEDGE, we step outside **A2** to explore a limited collaboration between the Gateway and Defender contracts. In this scenario, the Gateway is in full control of the attacker and attempts aggregation when there is only one model in FLEDGE (\(K=1\)), disregarding its potential reward \(R_{C}\). The Defender, however, is only partially compromised, allowing the attacker to observe \(\rho_{i}\) during secure decryption such that the attacker can reverse the offset of local model \(W_{1}\).

**Effectiveness of FLEDGE.** We evaluate the effectiveness of the obfuscation techniques implemented in FLEDGE to prevent white-box inference attacks. To breach the privacy of \(W_{1}\), the attacker must find the correct order of the ciphers, since this order is randomized for every BT2C computation round. We define such a brute-force process to be equivalent to \(m!\), where \(m\) is the number of ciphers for the BT2C round.
### Poisoning Mitigation **Evaluation Baseline.** We set PMR=0.5, non-IID=0.7, PDR=0.5 and \(\alpha\)=0.7 as baseline parameters for untargeted and targeted attacks (unless otherwise indicated). PMR (or Poisoned Model Rate) indicates the influence level of an attacker on the system, i.e., a PMR of 0.5 denotes that an attacker maliciously controls 50% of training clients. Non-IID data (or non-Independent and Identically Distributed data) describes the fraction of a client's training samples that belong to its assigned class within a pre-defined group, i.e., a non-IID degree of 0.7 means that clients draw 70% of their training data from their given class while the remaining 30% comes from the other classes (a partitioning sketch is given at the end of this subsection). We follow the approach in (Han et al., 2018) to prepare each dataset according to the number of output classes. PDR (or Poisoned Data Rate) determines the fraction of poisoned samples with respect to benign samples during training, i.e., a PDR of 0.5 means that 50% of the training data for a target class are poisoned samples. A higher PDR increases the success rate of the attacks. Similarly, the regularization term \(\alpha\), as defined by Bagdasaryan _et al._ (Bagdasaryan et al., 2016), balances the loss function of client models, aiming to limit the distance between local and global models. A high value of \(\alpha\) allows the attacker to increase its success rate at the cost of visibility. **Effectiveness of FLEDGE.** We evaluate the resiliency of FLEDGE against different poisoning attacks such as untargeted poisoning (Bagdasaryan et al., 2016), constrain-and-scale (Bagdasaryan et al., 2016) and DBA (Wang et al., 2017). The experimental results illustrated in Tab. 2 show that FLEDGE successfully mitigates these attacks without sacrificing benign performance (MA). For untargeted poisoning, the adversary successfully degrades model performance when no defenses are in place, reaching as low as 15.8% (22.7% original) for Reddit, and 28.9% (83.9% original) for CIFAR-10. During constrain-and-scale attacks, the adversary is able to inject a backdoor into the model with almost 100% accuracy. Similarly, for DBA, the backdoor is injected into the global model with more than 80% accuracy. These attacks, however, are not effective against FLEDGE as BA\(\simeq\)0 for every evaluated dataset.10 Moreover, FLEDGE maintains or even increases MA. Note that for the rest of the evaluation, we focus on targeted (or backdoor) attacks since they are the most sophisticated type of poisoning attacks. Footnote 10: In some applications, misclassifications are counted in favor of the BA if MA\(<100\%\). For this reason, the BA is greater than 0%, e.g., 2.4% for Fashion, although the aggregated model does not contain the backdoor.
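As referenced in the baseline above, the non-IID notion can be made concrete with a short sketch. The function below is our own illustrative implementation of the described 70/30 split, not the exact sampler of (Han et al., 2018):

```python
import numpy as np

def non_iid_split(labels, n_clients, degree=0.7, seed=0):
    """Illustrative non-IID partition: each client is assigned a main class
    and draws `degree` of its samples from it, the rest from other classes.
    (A sketch of the notion described above; shards of different clients
    may overlap here, unlike in a careful disjoint partition.)"""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    n_per_client = len(labels) // n_clients
    shards = []
    for k in range(n_clients):
        main = classes[k % len(classes)]
        n_main = int(degree * n_per_client)
        main_idx = rng.choice(np.where(labels == main)[0], n_main, replace=False)
        rest_idx = rng.choice(np.where(labels != main)[0],
                              n_per_client - n_main, replace=False)
        shards.append(np.concatenate([main_idx, rest_idx]))
    return shards
```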
**Comparison to Existing Work.** Tab. 3 compares the effectiveness of FLEDGE with five state-of-the-art defense approaches (Kumar et al., 2017; Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Notably, several defenses such as Krum (Kumar et al., 2017) cannot handle non-IID scenarios. FoolsGold is the most resilient of these defenses, mitigating backdoors for all four datasets with BA rates very close to 0, though still slightly higher than FLEDGE's. As with FoolsGold, the other four defenses' BA rates are higher than or equal to FLEDGE's, while their MA rates are lower than or equal to ours. Auror (Auro, 2018) works well for the image datasets, where the clients' local datasets overlap and show similar distributions, but fails for the intrinsically non-IID Reddit dataset. Therefore, FLEDGE is shown to provide the most resilient defense against state-of-the-art backdoors. Appendices H and I provide further experimental results for the WP and IC tasks respectively, showing FLEDGE's performance for different PDRs, PMRs, further attack strategies, and IID rates. ### Reward Analysis The reward systems implemented in FLEDGE are an additional layer of security designed to discourage malicious actions during the learning process. However, a defensive strategy is only as good as its weakest component. In other words, FLEDGE's reward mechanisms are constrained by the efficiency of its white-box inference resiliency (see Sect. 6.1) and its poisoning defense (see Sect. 6.2). Thus, the following section investigates the rewarding mechanism behind FLEDGE. \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \multirow{2}{*}{**Defenses**} & \multicolumn{2}{c|}{**Reddit**} & \multicolumn{2}{c|}{**MNIST**} & \multicolumn{2}{c|}{**Fashion**} & \multicolumn{2}{c}{**CIFAR-10**} \\ \cline{2-9} & BA & MA & BA & MA & BA & MA & BA & MA \\ \hline Benign Setting & 0.0 & 22.7 & 0.5 & 98.3 & 3.7 & 90.9 & 0 & 83.9 \\ No Defense & 100.0 & 22.7 & 98.0 & 87.7 & 100.0 & 69.2 & 100.0 & 66.1 \\ \hline Krum (Kumar et al., 2017) & 100.0 & 22.6 & 0.6 & **98.3** & 2.8 & 90.1 & **0.0** & 83.0 \\ FoolsGold (Kumar et al., 2017) & **0.0** & **22.7** & 0.5 & **98.3** & 3.0 & 90.7 & **0.0** & 83.6 \\ Auror (Auro, 2018) & 100.0 & 22.5 & 0.5 & **98.3** & 2.5 & **90.9** & **0.0** & **83.9** \\ AFA (Bagdasaryan et al., 2016) & 100.0 & 22.6 & 83.1 & 94.2 & 97.9 & 87.3 & 100.0 & 66.5 \\ DP (Wang et al., 2017) & 77.0 & 22.0 & 26.5 & 97.3 & 52.2 & 88.6 & 60.0 & 76.6 \\ \hline FLEDGE & **0.0** & **22.7** & **0.4** & **98.3** & **2.4** & 90.6 & **0.0** & 83.8 \\ \hline \end{tabular} \end{table} Table 3. Comparison of FLEDGE and five state-of-the-art defenses’ efficiency to mitigate backdoors. BA refers to Backdoor Accuracy % and MA refers to Main Task Accuracy %. Figure 3. Behavior of rewards (a) \(R_{C}\), (b) \(R_{\tau}\) in FLEDGE. Fig. 3 shows the behavior of the contract reward \(R_{C}\) and the training reward \(R_{\tau}\). **Effect of \(\phi\) in \(R_{C}\).** We assume FLEDGE has received the first local model \(W_{1}\) (\(K=1\)), and that the adversary has control over the Gateway contract. Fig. 3a illustrates how \(R_{C}\) is adjusted by the Defender every time an attempt to access private models (\(\phi\)) is registered during _secure decryption_. In other words, \(\phi\) represents the number of TT4 in FLEDGE, where a new TT4 is generated every time _secure decryption_ is attempted for \(K\leq 1\), as seen in Step A of Sect. 4.1; a toy illustration of this adjustment is sketched below.
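The exact update rule for \(R_{C}\) is defined in Sect. 4.1 and is not reproduced in this excerpt; the following toy function (the functional form, the name `contract_reward`, and the `penalty` constant are all our own assumptions) only mimics the qualitative behavior described here, namely that the reward collapses quickly with \(\phi\) and recovers only as \(\phi/s\) approaches 0:

```python
def contract_reward(base_reward, sessions, phi, penalty=4.0):
    """Toy stand-in for the Defender's contract-reward adjustment (the real
    rule lives in Sect. 4.1): R_C shrinks sharply with the number of
    suspicious decryption attempts phi relative to training sessions s."""
    ratio = phi / max(sessions, 1)
    return base_reward * max(0.0, 1.0 - penalty * ratio)

# Attempts at sessions 3, 5 and 7 (as in Fig. 3a) keep phi/s high and R_C low
for s, phi in [(2, 0), (3, 1), (5, 2), (7, 3), (20, 3)]:
    print(s, phi, round(contract_reward(1.0, s, phi), 2))
```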
Note that we have simulated three attempts (Sessions 3, 5 & 7) to access \(W_{1}\), which forces the Defender to adjust \(R_{C}\). In particular, we observe that \(R_{C}\) is severely affected by \(\phi\) in comparison to the number of training sessions (\(s\)). This indicates that to increase \(R_{C}\), the contract must behave normally for a large number of training sessions such that the ratio \(\phi/s\) approximates 0. **Effect of PMR in \(R_{\tau}\).** For this experiment, we vary the PMR over \(\{0.1,0.3,0.5\}\) to observe the behavior of the training reward \(R_{\tau}\). Fig. 3b shows the benign \(R_{\tau}\) and malicious \(R_{\tau}\) under different PMR levels. Note that the process to determine \(R_{\tau}\) for each client is based on the number of malicious models found during a training round. In other words, the Defender forces malicious clients to transfer their potential rewards to benign ones, i.e., benign clients get 140% (Round 8), 160% (Round 4) and 200% (Round 3) for PMR of 0.1, 0.3 and 0.5, respectively. This process promotes benign training behavior since \(R_{\tau}\) is reduced to 0 for every client detected as malicious during a training round. ## 7. Security and Privacy Analysis The following sections provide a security and privacy analysis to further explore the resiliency against white-box inference and poisoning attacks under different adversary configurations. We also discuss the robustness of FLEDGE against clients randomly dropping from the learning process in App. G. ### FLEDGE Privacy Analysis FLEDGE uses a decentralized crypto-system maintained by the Gateway and Defender contracts. This allows FLEDGE to manage and analyze ciphers. FLEDGE adopts a semi-honest security model such that only one (out of \(2\)) entity could be compromised at a time, as discussed in assumption **A2**. Therefore, to undermine the privacy of FLEDGE according to Sect. 3.1, an adversary \(\mathcal{A}^{p}\) may formulate one of the following scenarios. \(\mathcal{A}^{p}\) **Compromises Gateway Contract.** If \(\mathcal{A}^{p}\) maliciously controls the Gateway contract, \(\mathcal{A}^{p}\) would have access to every encrypted model. However, \(\mathcal{A}^{p}\) cannot decrypt any model directly as the private key is only held by the Defender contract. To access local updates, \(\mathcal{A}^{p}\) may try the following strategies. _Direct Decryption Request._ \(\mathcal{A}^{p}\) directly requests decryption of encrypted models by submitting the appropriate ciphers to the Defender. This initial approach yields ineffective results as the decryption process follows _secure decryption_ (Step A in Sect. 4.2), and this process can identify the type of computation (i.e., summations or model operations) by analyzing data composition after decryption. To mitigate this attack, our approach identifies each cipher as a model operation and returns decrypted arrays with injected random noises \(\delta\). Consequently, attempts to decrypt local updates result in random shifts to the distribution of each model. Therefore, this defense can effectively obfuscate local updates when an adversary attempts to decrypt them directly. _Reverse Engineer from Computations._ A sophisticated \(\mathcal{A}^{p}\) may attempt to reverse engineer encrypted local updates from the results of specific computations, i.e., \(G^{*}_{i}+\delta^{*}_{i}-W^{*}_{i}\), \(W^{*2}_{i}\). 
However, this approach is also found ineffective since any type of model operation is constrained by _secure decryption_, resulting in data arrays being indirectly affected by \(\delta\). _FLEDGE-Aware Decryption Request._ \(\mathcal{A}^{p}\) attempts decryption of the first (\(K=1\)) encrypted model committed to FLEDGE during a training round to bypass the security measures imposed by _secure decryption_. Put differently, \(\mathcal{A}^{p}\) aims to extract the first local model before other \(\delta\) values skew the results. However, this approach is also ineffective because the Defender contract is aware of the number of models currently present in FLEDGE. As a consequence, the Defender contract leverages that information to adjust the contract reward \(R_{C}\) to penalize the attempt and address curious behavior, as illustrated in Sect. 6.3. \(\mathcal{A}^{p}\) **Compromises Defender Contract.** In this scenario, \(\mathcal{A}^{p}\) attempts to view local weights during _secure decryption_ as the Defender holds the private key. To achieve this goal, \(\mathcal{A}^{p}\) requires the assistance of the Gateway contract as the latter is the one that holds every encrypted model. Put differently, \(\mathcal{A}^{p}\) needs the Gateway to send encrypted models rather than encrypted computations, which breaks assumption **A2**. Therefore, FLEDGE is resistant to \(\mathcal{A}^{p}\) given that assumption **A2** holds. \(\mathcal{A}^{p}\) **Under Limited Collaboration.** In this scenario, \(\mathcal{A}^{p}\) is aware of FLEDGE's limitation and aims to retrieve the first local model (\(K=1\)) as described in Sect. 6.1. However, our evaluation showed that FLEDGE is also resilient to this scenario, since \(\mathcal{A}^{p}\)'s probability of success reaches \(\sim\)0% as defined by \(1/m!\). ### FLEDGE Security Analysis FLEDGE efficiently filters state-of-the-art backdoor injections with the defense deployed in the Defender contract. To bypass FLEDGE, an adversary \(\mathcal{A}^{s}\) should inject a poisoned model such that FLEDGE cannot distinguish benign models from poisoned ones. The following elaborates on the methodologies that could be used by \(\mathcal{A}^{s}\) to achieve this goal. \(\mathcal{A}^{s}\) **manipulates \(\alpha\)**. \(\mathcal{A}^{s}\) continuously monitors and adjusts the client's loss function to reduce the distance (i.e., cosine or L2 norm) between the client model and the current global model; a larger value of \(\alpha\) means a more aggressive attack. Sect. 6.2 demonstrates that FLEDGE can eliminate poisoning attempts efficiently under different values of \(\alpha\) for WP and IC applications. \(\mathcal{A}^{s}\) **manipulates PMR**. \(\mathcal{A}^{s}\) would minimize its visibility by increasing the level of control over (or the number of) malicious clients. However, this approach cannot defeat FLEDGE as we empirically demonstrated that FLEDGE maintains high performance w.r.t. changes in PMR for WP (App. H) and IC learning tasks (App. I). \(\mathcal{A}^{s}\) **manipulates PDR**. \(\mathcal{A}^{s}\) can also limit the number of poisoned samples by decreasing the PDR value during training to generate less suspicious models. As a result, the backdoor accuracy (BA) will also be reduced. Additionally, regardless of PDR, FLEDGE continuously filters poisoned models efficiently as shown in App. I. ## 8. Related Work **Privacy-Preserving Defenses.** Multiple approaches have been proposed to protect the privacy of the clients' training data. 
Passerat _et al._ use a blockchain for privately aggregating the individual models (Ryffel et al., 2017). Ryffel _et al._ propose a framework that eases the use of Secure Multi-Party Computation (SMPC) for secure aggregation (Ryffel et al., 2017), while Fereidooni _et al._ rely only on SMPC (Ryffel et al., 2017). Sav _et al._ use Multiparty Homomorphic Encryption (HE) for collaboratively training a model (Sav et al., 2018). Bonawitz _et al._ propose a multi-party-computation scheme based on Shamir's secret sharing (Shamir, 2017). However, as this approach prevents the server from analyzing the local models, it also prevents analyzing them to identify poisoned models. FLEDGE uses Blockchain Two Contract Computation (BT2C) to engage in decentralized privacy-preserving computations based on smart contracts and HE (see Step A in Sect. 4.2). FLEDGE raises accountability by including a reward system that promotes benign contract behavior (see the evaluation in Sect. 6.1 and Sect. 6.3). **Poisoning Defenses.** Existing defenses against data and model poisoning attacks aim at distinguishing malicious and benign model updates (Yin et al., 2017; Li et al., 2018) by utilizing filter-based approaches (i.e., clustering techniques). However, all of these defenses make specific assumptions about the distribution of benign and malicious data or the characteristics of injected models, causing them to fail if any of these assumptions do not hold. Moreover, such defenses cannot detect stealthy attacks, e.g., where the adversary constrains their poisoned updates within the benign update distribution, such as the constrain-and-scale attack (Yin et al., 2017). Yin _et al._ (Yin et al., 2017) and Guerraoui _et al._ (Guerraoui et al., 2017) propose to change aggregation rules to mitigate the effect of malicious model updates. They utilize median parameters from all local models as the global model parameters. Other approaches validate the local models or the aggregated model (Ryffel et al., 2017; Li et al., 2018; Li et al., 2018). However, they cannot detect stealthy backdoors that have only minimal impact on the Main Task Accuracy (MA). Weak Differential Privacy (DP) techniques (Liu et al., 2018; Li et al., 2018; Li et al., 2018) have also been used to mitigate the effects of poisoning attacks. DP-based defenses dilute the impact of poisoned models by clipping model weights and adding noise to individual clients' model updates. DeepSight provides efficient filtering that works even in non-IID data scenarios (Li et al., 2018) but does not credit the individual contributions and requires a central server that can inspect the model updates. Kalapaaking _et al._ use smart contracts to verify the client-side training process (Kalapaaking et al., 2018) but cannot prevent attackers from manipulating the training data (Kalapaak et al., 2018). Blockchain-based implementations (Kalapaak et al., 2018; Li et al., 2018; Li et al., 2018) have managed to provide defenses against poisoning attacks; however, these solutions only consider untargeted attacks and/or rely on specific assumptions about the distribution of training data. In contrast, FLEDGE effectively removes poisoned models by instantiating a filtering approach based on a Gaussian Kernel Density Estimation (G-KDE) function (see Step 4 in Sect. 4.2). The poisoning resiliency of our solution is empirically evaluated in Sect. 6.2, which demonstrates that FLEDGE is an effective solution to mitigate poisoning attacks. 
In addition, our solution allows us to treat the underlying problem of poisoning attacks, i.e., client training misbehavior, by imposing monetary deterrents for detected poisoning attempts. This is illustrated in Sect. 6.3. **Hybrid Defenses.** A number of existing works have recently focused on poisoning attacks while preserving the privacy of the individual model updates. Two works implemented Krum (Krum, 2017) using SMPC (Ryffel et al., 2017; Li et al., 2018). However, Krum focuses on untargeted poisoning attacks and fails to effectively mitigate backdoor attacks (see Sect. 6.2), and SMPC is vulnerable to attacks that limit availability (i.e., DoS attacks). Similarly, FLAME (Kalapaak et al., 2018) leverages DP with model filtering to provide an efficient defense against backdoors. However, FLAME also relies on SMPC to provide privacy-preserving computations. Several approaches utilize Trusted Execution Environments (TEE) to realize privacy-preserving poisoning detection (Li et al., 2018; Li et al., 2018; Li et al., 2018). However, requiring TEEs restricts their application to a few scenarios as mobile devices often do not have a TEE, while FLEDGE does not make any assumption about the hardware. Biscotti (Biscotti, 2018), BEAS (Li et al., 2018) and the work of Lu _et al._ (Lu et al., 2018) are blockchain-based frameworks that target secure and private FL. The first two approaches use Multi-Krum (Krum, 2017) (a Krum variant) to reduce the impact of backdoors in the system, where Multi-Krum also focuses on untargeted poisoning. In particular, Biscotti uses DP and Shamir secrets to preserve the privacy of local updates during aggregation. BEAS and the work of Lu _et al._, on the other hand, leverage DP to obfuscate model weights and provide resiliency against inference attacks. In comparison, FLEDGE provides (1) strong privacy guarantees to client models by obfuscating them using our BT2C protocol (see Sect. 6.1), (2) an effective defense using G-KDE functions to filter different poisoning attacks (see Sect. 6.2), and (3) compensation algorithms to automatically adjust the reward and deter malicious behavior (see Sect. 6.3). Liu _et al._ proposed a smart-contract-based FL framework that utilizes a server-side validation dataset to detect untargeted poisoning attacks (Liu et al., 2018). However, assuming a server-side dataset is not practical (Li et al., 2018), while FLEDGE detects poisoning attacks without making such an assumption and also includes a reward mechanism to penalize malicious clients. ## 9. Conclusion and Future Work In this paper, we illustrated the current research gaps facing FL systems in terms of security and privacy. To fill those gaps, we propose FLEDGE, a 3-layer blockchain FL framework powered by smart contracts and HE. Our proposed HE infrastructure, BT2C, enables the analysis and operation of model weights in ciphertext through the use of a semi-honest collaboration between smart contracts. BT2C is shown to provide resiliency against white-box inference attacks with a decreased adversarial probability of success of \(\frac{1}{m!}\), where \(m\) is the number of ciphers inside an encrypted model. Furthermore, our extensive evaluation shows that FLEDGE also offers high resiliency against numerous poisoning strategies (BA\(\approx\)0) with minimal impact on benign accuracy. 
Our solution to both white-box inference and poisoning attacks allows us to effectively embed incentives or penalizations for both aggregation contracts (e.g., Gateway) and training clients in an effort to minimize adversarial intent via behavior accountability. **Limitations.** Although blockchain technology and HE enable the security properties in FLEDGE, they also contribute to the growth in computation costs of its learning process. For instance, training round latency increases from 15.86s (MNIST model) to 76.6s (CIFAR-10 model). This indicates that FLEDGE incurs an increase in latency of approximately five-fold (4.82\(\times\)) for a model that is roughly ten times larger (10.17\(\times\)). Put differently, FLEDGE offers limited scalability given that its learning process slows down as both model complexity and the number of clients/models increase in the system. Furthermore, the reward mechanism embedded in FLEDGE is directly related to the performance of its defensive strategies. For example, an attacker capable of avoiding FLEDGE's poison defense (i.e., the G-KDE defense) may continue to receive credit even though its model negatively contributes to the learning process. **Future Work**. A formal in-depth analysis aimed at the scalability of FLEDGE, e.g., transaction fees, storage costs, computation costs and communication costs, is needed to generate additional insights into the limitations and efficiency of FLEDGE, specifically those imposed by the use of different blockchain platforms. ## Acknowledgment This work was supported in part by the U.S. Department of Energy/National Nuclear Security Administration (DOE/NNSA) #DE-NA0003985, Intel through the Private AI Collaborate Research Institute ([https://www.private-ai.org/](https://www.private-ai.org/)), BMBF and HMWK within ATHENE, as well as from the OpenS3 Lab, the Hessian Ministry of Interior and Sport as part of the F-LION project, following the funding guidelines for cyber security research, the Horizon Europe framework program of the European Union under grant agreement No. 101093126 (ACES). We extend our appreciation to KOBIL GmbH for their support and collaboration throughout the course of this project. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any of these funding agencies.
2307.08797
Network quantum steering enables randomness certification without seed randomness
Quantum networks with multiple sources allow the observation of quantum nonlocality without inputs. Consequently, the incompatibility of measurements is not a necessity for observing quantum nonlocality when one has access to multiple quantum sources. Here we investigate the minimal scenario without inputs where one can observe any form of quantum nonlocality. We show that even two parties with two sources that might be classically correlated can witness a form of quantum nonlocality, in particular quantum steering, in networks without inputs if one of the parties is trusted, that is, performs a fixed known measurement. We term this effect as swap-steering. The scenario presented in this work is minimal to observe such an effect. Consequently, a scenario exists where one can observe quantum steering but not Bell non-locality. We further construct a linear witness to observe swap-steering. Interestingly, this witness enables self-testing of the quantum states generated by the sources and the local measurement of the untrusted party. This in turn allows certifying two bits of randomness that can be obtained from the measurement outcomes of the untrusted device without the requirement of initially feeding the device with randomness.
Shubhayan Sarkar
2023-07-17T19:32:17Z
http://arxiv.org/abs/2307.08797v4
# Quantum steering without free will ###### Abstract Quantum networks with independent sources allow the observation of quantum nonlocality without inputs. Consequently, the incompatibility of measurements is not a necessity for observing quantum nonlocality when one has access to independent sources. Here we investigate the minimal scenario without inputs where one can observe any form of quantum nonlocality. We show that even two parties with two sources that might be classically correlated can witness a form of quantum nonlocality, in particular quantum steering, in networks without inputs if one of the parties is trusted, that is, performs a fixed known measurement. We term this effect as swap-steering. The scenario presented in this work is minimal to observe such an effect. Consequently, a scenario exists where one can observe quantum steering but not Bell non-locality. We further construct a linear witness to observe swap-steering. _Introduction--_ Quantum nonlocality is one of the most remarkable features of quantum mechanics that defy our classical intuitions about the world. It refers to the property of quantum particles to exhibit correlations that seem to occur instantaneously even when they are separated by large distances. This quantum property was first conceptualized in the celebrated work of Einstein, Podolsky and Rosen [1]. Based on it, Bell in 1964 [2; 3] proposed a theoretical test, known as Bell's inequality, that could distinguish between classical and quantum correlations. It was then experimentally verified [4; 5; 6; 7] and is now recognized as a fundamental aspect of quantum mechanics. The implications of quantum nonlocality are far-reaching, with potential applications in fields such as cryptography, quantum teleportation, quantum communication, and quantum computing (refer to [8] for a review). Another form of quantum nonlocality, known as quantum steering, allows for one observer to remotely influence the state of another observer's quantum system, even if the two observers are separated by large distances. Quantum steering was first conceptualized by Schrodinger [9] and was then rigorously introduced in [10]. The major difference between the scenarios to observe Bell nonlocality and quantum steering is that one of the parties is assumed to be trusted in the latter one, that is, known to perform fixed measurements. To observe quantum nonlocality or quantum steering, any party involved in the experiment must have at least two inputs as incompatible measurements are necessary to witness any of these phenomena. Interestingly, quantum networks allow for witnessing such non-classical features without the requirement of incompatibility of measurements. The framework to witness quantum nonlocality in networks was introduced in [11; 12; 13; 14]. However, it was first noted in [12] and then in [14] that considering independent sources shared between non-communicating parties allows one to observe quantum nonlocality with a single fixed measurement for every party. Recently, the authors in [15; 16; 17; 18] explored this phenomenon to construct scenarios where one can observe genuine quantum network nonlocality. One of the intriguing problems in this regard concerns the minimal scenario in which any form of quantum nonlocality can be observed without any inputs. It was shown in [15] that genuine network nonlocality can be observed without inputs if there are three parties with three independent sources. 
Inspired by entanglement swapping [19], we show here that if one of the parties is assumed to be trusted then one can observe a form of quantum nonlocality, which we term as swap-steering, using only two parties and two sources that might be classically correlated. This is in fact the minimal scenario where one can observe a form of quantum nonlocality without inputs. Further on, there is a lack of witnesses when observing quantum nonlocality without inputs in networks. This restricts the possibility of testing these phenomena at the operational level. Interestingly, we find a linear witness to observe swap-steering, thus making our notion of nonlocality experimentally testable. We further show that states that are unsteerable in the standard quantum steering scenario are swap-steerable, which can be interpreted as an entanglement-assisted activation of quantum steering. Finally, we find the necessary conditions to observe swap-steering. In a related work [20], we show that swap-steering can be utilised for one-sided device-independent tasks like certification of entangled states and measurements and generating secured randomness without the requirement to initially feed the devices with random numbers. _The scenario--_ In this work, we consider the simplest scenario consisting of two parties, namely Alice and Bob, in two different labs far away from each other. Both of them receive two subsystems from two different sources \(S_{1},S_{2}\) that might be classically correlated to each other. Now they perform a single four-outcome measurement on their respective subsystems where the outcomes are denoted as \(a,b=0,1,2,3\) respectively for Alice and Bob. Alice is trusted here, implying that the measurement performed by her on her subsystems is known (see Fig. 1). We consider here that she performs the measurement corresponding to the Bell basis given by \(M_{A}=\{|\phi_{+}\rangle\!\langle\phi_{+}|,|\phi_{-}\rangle\!\langle\phi_{-}|,|\psi_{+}\rangle\!\langle\psi_{+}|,|\psi_{-}\rangle\!\langle\psi_{-}|\}_{A_{1}A_{2}}\) where \[|\phi_{\pm}\rangle_{A_{1}A_{2}} = \frac{1}{\sqrt{2}}\left(|0\rangle_{A_{1}}|0\rangle_{A_{2}}\pm|1\rangle_{A_{1}}|1\rangle_{A_{2}}\right)\] \[|\psi_{\pm}\rangle_{A_{1}A_{2}} = \frac{1}{\sqrt{2}}\left(|0\rangle_{A_{1}}|1\rangle_{A_{2}}\pm|1\rangle_{A_{1}}|0\rangle_{A_{2}}\right). \tag{1}\] Here \(A_{1}/A_{2},B_{1}/B_{2}\) denote the two different subsystems of Alice and Bob respectively. Now, Alice and Bob repeat the experiment enough times to construct the joint probability distribution (correlations) \(\vec{p}=\{p(a,b)\}\) where \(p(a,b)\) denotes the probability of obtaining outcome \(a,b\) with Alice and Bob respectively. These probabilities can be computed in quantum theory as \[p(a,b)=\sum_{j}p_{j}\mathrm{Tr}\left[(M^{a}\otimes N^{b})\rho_{A_{1}B_{1}}^{j}\otimes\rho_{A_{2}B_{2}}^{j}\right] \tag{2}\] where \(M^{a},N^{b}\) denote the measurement elements of Alice and Bob which are positive and \(\sum_{a}M^{a}=\sum_{b}N^{b}=1\) and \(\sum_{j}p_{j}=1\). It is important to recall here that Alice and Bob can not communicate with each other during the experiment. _Swap-steering--_ Let us now suppose that there are some variables \(\lambda_{i}\) that are being sent by the sources \(S_{i}\) as depicted in Fig. 2. These variables in general might be hidden and have non-classical features, with the parties not having access to them. 
Further on, as Alice is known to perform quantum measurements, the variable she receives is some quantum state \(\rho_{\lambda_{1},\lambda_{2}}\); however, there is no such restriction on Bob. Let us now state the two assumptions, namely outcome-independence and separable quantum sources, that must be satisfied if Bob is classical, or equivalently if the correlations are not swap-steerable from Bob to Alice. **Assumption 1** (Outcome-independence).: _The outcomes of two parties are independent of each other if one has access to the hidden variables \(\lambda_{i}\)._ In the scenario considered in this work, Bob's outcome \(b\) being independent of Alice's outcome \(a\) means that for any \(a,b,\lambda_{1},\lambda_{2}\), \[p(b|\lambda_{1},\lambda_{2},a)=p(b|\lambda_{1},\lambda_{2}). \tag{3}\] This is a weaker definition of locality when compared to Bell's assumption of locality, or the notion of locality in the standard quantum steering scenario. **Assumption 2** (Separable quantum sources).: _Two sources \(S_{i}\)\((i=1,2)\) generating a joint quantum state \(\rho_{\lambda_{1},\lambda_{2}}\) are separable if the state \(\rho_{\lambda_{1},\lambda_{2}}\) is separable for any \(\lambda_{1},\lambda_{2}\)._ Notice that the above assumption 2 that we impose on the sources is weaker when compared to independent quantum sources. As a matter of fact, the above assumption allows the sources to communicate classically with each other, or equivalently the sources might generate classically correlated states. Now, given two sources \(S_{i}\) for \(i=1,2\) that generate some (for now hidden) states \(\lambda_{i}\), we can always express the probability \(p(a,b)\) as \[p(a,b)=\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1},\lambda_{2})p(a,b|\lambda_{1},\lambda_{2}). \tag{4}\] Using Bayes' rule and the fact that Alice is known to be performing quantum measurements, we can express the above expression as \[p(a,b)=\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1},\lambda_{2})p(a|\rho_{\lambda_{1},\lambda_{2}})p(b|\lambda_{1},\lambda_{2},a). \tag{5}\] Assuming outcome-independence, we arrive at \[p(a,b)=\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1},\lambda_{2})p(a|\rho_{\lambda_{1},\lambda_{2}})p(b|\lambda_{1},\lambda_{2}). \tag{6}\] Now, assuming separable quantum sources, we express \(\rho_{\lambda_{1},\lambda_{2}}\) using pure-state decompositions to arrive at the following expression for \(p(a,b)\): \[p(a,b)=\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1},\lambda_{2})\sum_{j}p_{\lambda_{1},\lambda_{2}}^{j}\,p\!\left(a\,\middle|\,|\psi_{\lambda_{1}}^{j}\rangle,|\psi_{\lambda_{2}}^{j}\rangle\right)p(b|\lambda_{1},\lambda_{2}). \tag{7}\] If correlations \(\vec{p}\) admit the form (7), then they are describable using a separable outcome-independent hidden state (SOHS) model. To witness swap-steering, a functional \(W\) can be constructed which depends on \(\vec{p}\) as \[W(\vec{p})=\sum_{a,b}c_{a,b}p(a,b)\leq\beta_{SOHS} \tag{8}\] where \(c_{a,b}\) are real coefficients and \(\beta_{SOHS}\) denotes the maximum value attainable using assemblages admitting a SOHS model (7). Figure 1: Swap-steering scenario. Alice and Bob are spatially separated and each of them receives two subsystems from the sources \(S_{1},S_{2}\). On the received subsystems they perform a single four-outcome measurement. Alice is trusted here, meaning that she is known to perform the Bell-basis measurement. They are not allowed to communicate during the experiment. Once it is complete, they construct the joint probability distribution \(\{p(a,b)\}\).
For the purpose of this article, we consider only functionals that are linear over \(\vec{p}\). Now, consider the following functional \[W=p(0,0)+p(1,1)+p(2,2)+p(3,3)\leq\beta_{SOHS} \tag{9}\] Recall here that Alice is trusted and performs the measurement with elements given in (1). Let us now find the maximum value that can be achieved using correlations that admit a SOHS model (7). **Fact 1**.: _Consider the steering functional \(W\) (9). The maximum value \(\beta_{SOHS}\) of \(W\) that can be achieved using correlations that admit a SOHS model (7) is \(\beta_{SOHS}=\frac{1}{2}\)._ Proof.: The proof follows the exact same lines as presented in [21; 22; 23]. Let us now consider the steering functional \(W\) in Eq. (9) and express it in terms of the SOHS model (7) as \[\sum_{a=0}^{3}\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1},\lambda_{2})p(a|\rho_{\lambda_{1},\lambda_{2}})p(a|\lambda_{1},\lambda_{2})\] \[\leq\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1},\lambda_{2})\max_{a}\{p(a|\rho_{\lambda_{1},\lambda_{2}})\} \tag{10}\] where we used the fact that \(\sum_{a}p(a|\lambda_{1},\lambda_{2})=1\) for any \(\lambda_{1},\lambda_{2}\). Now, maximising over \(\rho_{\lambda_{1},\lambda_{2}}\) gives us \[\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1},\lambda_{2})\max_{a}\{p(a|\rho_{\lambda_{1},\lambda_{2}})\}\] \[\leq\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1},\lambda_{2})\max_{\rho_{\lambda_{1},\lambda_{2}}}\max_{a}\{p(a|\rho_{\lambda_{1},\lambda_{2}})\}. \tag{11}\] Now, using the fact that \(\sum_{\lambda_{i}}p(\lambda_{i})=1\) for \(i=1,2\) allows us to conclude that \[\beta_{SOHS}\leq\max_{|\psi\rangle_{A_{1}},|\psi\rangle_{A_{2}}}\max_{a}\{p(a|\ |\psi\rangle_{A_{1}},|\psi\rangle_{A_{2}})\}. \tag{12}\] As the steering functional \(W\) is linear, without loss of generality we consider the maximization only over pure states. Now, putting in the measurement of the trusted Alice (1), which locally acts on qubit Hilbert spaces, and optimizing over pure states \(|\psi\rangle_{A_{1}},|\psi\rangle_{A_{2}}\in\mathbb{C}^{2}\) gives us \(\beta_{SOHS}\leq\frac{1}{2}\). This bound can be saturated when the sources prepare the classically correlated separable state \(\rho_{i}=\frac{1}{2}\left(|00\rangle\!\langle 00|+|11\rangle\!\langle 11|\right)_{A_{i}B_{i}}\) and the measurement with Bob is \(M_{B}=\{|00\rangle\!\langle 00|,|01\rangle\!\langle 01|,|10\rangle\!\langle 10|,|11\rangle\!\langle 11|\}_{B_{1}B_{2}}\). This state clearly has a SOHS model and thus we get the desired SOHS bound. Now, consider that the sources prepare the state \(|\psi_{i}\rangle=|\psi_{+}\rangle_{A_{i}B_{i}}\) and Bob performs the same measurement as Alice, that is, the Bell-basis measurement \(M_{B}=\{|\phi_{+}\rangle\!\langle\phi_{+}|,|\phi_{-}\rangle\!\langle\phi_{-}|,|\psi_{+}\rangle\!\langle\psi_{+}|,|\psi_{-}\rangle\!\langle\psi_{-}|\}_{B_{1}B_{2}}\) where the corresponding states are given in (1). Using these states and Bob's measurement one can simply evaluate the steering functional \(W\) in (9) to get the value \(1\), which is the quantum bound of \(W\). Notice that this is also the algebraic value of \(W\).
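As a quick numerical sanity check (our own illustration, not part of the original derivation), the following numpy sketch evaluates \(W\) for this strategy: both sources emit \(|\psi_{+}\rangle\) and both parties measure in the Bell basis of Eq. (1).

```python
import numpy as np

e = np.eye(2)

def kron(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

# Bell basis |phi+>, |phi->, |psi+>, |psi-> of Eq. (1)
bell = [(kron(e[0], e[0]) + kron(e[1], e[1])) / np.sqrt(2),
        (kron(e[0], e[0]) - kron(e[1], e[1])) / np.sqrt(2),
        (kron(e[0], e[1]) + kron(e[1], e[0])) / np.sqrt(2),
        (kron(e[0], e[1]) - kron(e[1], e[0])) / np.sqrt(2)]

# |psi+>_{A1B1} |psi+>_{A2B2}, written directly in the qubit order A1 A2 B1 B2
psi = sum(0.5 * kron(e[x], e[y], e[1 - x], e[1 - y])
          for x in (0, 1) for y in (0, 1))

# Born rule: W = sum_a |<Bell_a (A1A2)| <Bell_a (B1B2)| psi>|^2
W = sum(abs(kron(bell[a], bell[a]) @ psi) ** 2 for a in range(4))
print(W)  # -> 1.0, well above the SOHS bound of 1/2
```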
Let us also show here that one can not observe Bell-type non-locality with only two parties without inputs. Without loss of generality, we consider here a scenario similar to the one depicted in Fig. 1 such that Alice and Bob perform a measurement with an arbitrary number of outcomes on subsystems sent by two independent or classically correlated sources. However, unlike the previous scenario, Alice is untrusted. If the correlations \(\vec{p}=\{p(a,b)\}\) admit a network-local hidden variable (NLHV) model [15; 18], then they can be represented as \[p(a,b)=\sum_{\lambda_{1},\lambda_{2}}p(\lambda_{1})p(\lambda_{2})p(a|\lambda_{1},\lambda_{2})p(b|\lambda_{1},\lambda_{2}) \tag{13}\] for any \(a,b\). Let us state the following fact which is simple to prove. **Fact 2**.: _Consider the scenario depicted in Fig. 2. The correlations \(\vec{p}=\{p(a,b)\}\) obtained by Alice and Bob can always be described by an NLHV model_ (13). Figure 2: Difference between SOHS and NLHV model in the minimal scenario. (left) Alice and Bob can explain the observed correlations \(p(a,b)\) using a SOHS model. Alice is trusted and thus receives quantum states from the sources but there is no restriction over Bob. The grey box denotes an unknown source of classical random variables that might correlate the sources \(S_{1},S_{2}\). (right) Alice and Bob can explain the observed correlations \(p(a,b)\) using a NLHV model. The above fact can be straightforwardly generalized to the scenario with an arbitrary number of sources between Alice and Bob. It is then well-known that one can not observe any non-locality without inputs when there is a single source distributing subsystems to Alice and Bob. Thus, to observe any form of quantum non-locality in the minimal possible scenario, in the sense that there are no inputs and only two parties, one has to trust either of the parties. Consequently, quantum steering can also be observed in scenarios where one can not observe Bell non-locality. Let us now show that states that are unsteerable in the standard quantum steering scenario can nevertheless be swap-steerable. _Entanglement assisted activation of steerability--_ Let us now consider the Werner state given by \[\rho^{W}(\alpha)=\alpha|\phi_{+}\rangle\!\langle\phi_{+}|+(1-\alpha)\frac{\mathbb{1}}{4}. \tag{14}\] The above state is separable iff \(\alpha\leq\frac{1}{3}\)[24]. As proven in [25, 10], the above state is steerable in the standard quantum steering scenario iff \(\alpha>\frac{1}{2}\). Thus, in the range of \(\frac{1}{3}<\alpha\leq\frac{1}{2}\), the Werner state is unsteerable but entangled. We show here that the Werner state when coupled with the maximally entangled state is swap-steerable. Thus, when assisted with entanglement, unsteerable states can be activated to display steerability without inputs. **Fact 3**.: _The Werner state \(\rho^{W}(\alpha)\) (14) with the maximally entangled state is swap-steerable for any \(\alpha>\frac{1}{3}\)._ Proof.: Consider the scenario presented in Fig. 1. Now, suppose that the source \(S_{i}\) generates the state \(\rho^{W}_{A_{i}B_{i}}(\alpha_{i})\) for \(i=1,2\). Bob again performs the Bell basis measurement \(M_{B}\). Given these states and measurements, let us again evaluate the steering functional \(W\) in (9) to obtain \[W=\frac{3\alpha_{1}\alpha_{2}+1}{4}. \tag{15}\] As proven above in Fact 1, if \(W>\frac{1}{2}\) then the state is swap-steerable from Bob to Alice. Thus, we have from (15) that the Werner state (14) is steerable if \(\frac{3\alpha_{1}\alpha_{2}+1}{4}>\frac{1}{2}\). Consequently, for any value of \(\alpha_{1}\alpha_{2}>\frac{1}{3}\), the Werner states are swap-steerable. Let us now observe that if \(\alpha_{1}=1\), that is, the source \(S_{1}\) generates the maximally entangled state, then for any \(\alpha_{2}>\frac{1}{3}\) the Werner state becomes swap-steerable.
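Eq. (15) can likewise be checked numerically. The sketch below (again our own illustration) builds \(\rho^{W}(\alpha_{1})\otimes\rho^{W}(\alpha_{2})\), reorders the qubits so that Alice holds \((A_{1},A_{2})\) and Bob holds \((B_{1},B_{2})\), and evaluates the witness under Bell-basis measurements on both sides:

```python
import numpy as np

e = np.eye(2)
phi_p = (np.kron(e[0], e[0]) + np.kron(e[1], e[1])) / np.sqrt(2)
bell = [phi_p,
        (np.kron(e[0], e[0]) - np.kron(e[1], e[1])) / np.sqrt(2),
        (np.kron(e[0], e[1]) + np.kron(e[1], e[0])) / np.sqrt(2),
        (np.kron(e[0], e[1]) - np.kron(e[1], e[0])) / np.sqrt(2)]

def werner(alpha):
    return alpha * np.outer(phi_p, phi_p) + (1 - alpha) * np.eye(4) / 4

def witness(a1, a2):
    rho = np.kron(werner(a1), werner(a2))  # qubit order A1 B1 A2 B2
    # reorder to A1 A2 B1 B2: swap the two middle qubits on rows and columns
    rho = rho.reshape([2] * 8).transpose(0, 2, 1, 3, 4, 6, 5, 7).reshape(16, 16)
    return sum(np.kron(bell[a], bell[a]) @ rho @ np.kron(bell[a], bell[a])
               for a in range(4))

for a1, a2 in [(1.0, 1.0), (1.0, 0.34), (1.0, 1 / 3), (0.5, 0.5)]:
    # numerical value vs. the closed form (3*a1*a2 + 1)/4 of Eq. (15)
    print(a1, a2, round(witness(a1, a2), 6), (3 * a1 * a2 + 1) / 4)
```

For \(\alpha_{1}=1\), the output crosses the SOHS bound \(1/2\) exactly as \(\alpha_{2}\) exceeds \(1/3\).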
Thus, states that are unsteerable in the standard quantum steering scenario can be activated using the maximally entangled state and shown to be swap-steerable. However, we also notice that to observe swap-steering, the states generated by both sources can not be unsteerable simultaneously. Let us now find some necessary conditions to observe swap-steering. _Necessary conditions for swap-steering._ Consider again the scenario depicted in Fig. 1. Notice that one of the trivial necessary conditions to observe swap-steering is that the trusted party, here Alice, needs to perform an entangled measurement. Let us now restrict to the case when the number of outcomes on Bob's side is a composite number, that is, \(b=b_{0}b_{1}\) where \(b_{0},b_{1}\) are positive integers. Now, Bob's measurement \(\{N^{b}\}\) with \(b=b_{0}b_{1}\) prepares a set of positive operators on the trusted Alice's side, known as an assemblage, denoted as \(\{\sigma_{b}\}\) where \(\sigma_{b}=\sum_{j}p_{j}\text{Tr}_{B}(1_{A}\otimes N^{b}\rho^{j}_{A_{1}B_{1}}\otimes\rho^{j}_{A_{2}B_{2}})\). Now, we show that if the assemblage is of a particular form, one can never observe swap-steering. **Fact 4**.: _Consider the swap-steering scenario depicted in Fig. 1 where Alice and Bob share the states \(\rho_{A_{1}B_{1}},\rho_{A_{2}B_{2}}\). Let us assume that Bob performs an \(n\)-outcome measurement which prepares the assemblage \(\{\sigma_{b_{0}b_{1}}\}\) on the trusted Alice's side. If \(\sigma_{b_{0}b_{1}}\) is separable for \(b_{0}=0,1,\ldots,n_{1}-1\), \(b_{1}=0,1,\ldots,n_{2}-1\), then there exists a SOHS model for both the states \(\rho_{A_{1}B_{1}},\rho_{A_{2}B_{2}}\)._ Proof.: Let us first notice that \[\sum_{b_{0},b_{1}}\sigma_{b_{0}b_{1}} = \sum_{b_{0},b_{1},j}p_{j}\text{Tr}_{B}(1_{A}\otimes N^{b}\rho^{j}_{A_{1}B_{1}}\otimes\rho^{j}_{A_{2}B_{2}}) \tag{16}\] \[= \sum_{j}p_{j}\rho^{j}_{A_{1}}\otimes\rho^{j}_{A_{2}}\] which also allows us to conclude that \(\sum_{b_{0},b_{1}}\text{Tr}(\sigma_{b_{0}b_{1}})=1\). Consider now that the assemblage \(\{\sigma_{b_{0}b_{1}}\}\) is separable, that is, the operators \(\sigma_{b_{0}b_{1}}=\sum_{j}\sigma^{j}_{b_{0}}\otimes\sigma^{j}_{b_{1}}\). Notice that the following states \[\tilde{\rho}^{j}_{A_{i}B_{i}}=\frac{1}{\mathcal{N}_{i,j}}\sum_{b_{i}=0}^{n_{i}-1}\sigma^{j}_{b_{i},A_{i}}\otimes|b_{i}\rangle\!\langle b_{i}|_{B_{i}} \tag{17}\] where \(\mathcal{N}_{i,j}=\sum_{b_{i}}\text{Tr}(\sigma^{j}_{b_{i}})\), together with Bob performing a measurement of the form \[\tilde{M}_{b_{0}b_{1}}=|b_{0}\rangle\!\langle b_{0}|_{B_{1}}\otimes|b_{1}\rangle\!\langle b_{1}|_{B_{2}}\qquad b_{i}=0,1\ldots,n_{i}-1, \tag{18}\] give the same assemblage on Alice's side as the states \(\sum_{j}p_{j}\rho^{j}_{A_{1}B_{1}}\otimes\rho^{j}_{A_{2}B_{2}}\) and the measurement \(M_{b}=\{N^{b}\}\). It is straightforward to observe that the states \(\tilde{\rho}_{A_{i}B_{i}}\) are separable and thus the \(\rho_{A_{i}B_{i}}\) admit a SOHS model. Consequently, one can observe from Fact 4 that if Bob performs a product measurement, then the states are not swap-steerable from Bob to Alice. Further on, both states prepared by the sources need to be entangled to observe swap-steering. Thus, to observe swap-steering both the states and measurements must be entangled. _Discussions--_ As shown above, the swap-steering inequality is violated in quantum theory, which implies that the conjunction of outcome-independence and separable quantum sources is violated in quantum theory. 
It would be extremely counter-intuitive if the assumption of separable quantum sources is violated in quantum theory, as it would imply that every source will be correlated to another source via some quantum process. Thus, outcome-independence seems to be violated in quantum theory. However, a superdeterministic nature might allow violation of the assumption of separable quantum sources. The idea of quantum steering in networks was introduced recently in [26]. However, the scenario considered in this work was not dealt with in Ref. [26]. Further on, the notion of quantum steering in networks [26] required the trusted party to perform full tomography, which implied that the trusted party has inputs. Contrary to this, in swap-steering even the trusted party performs a single fixed measurement. This also makes our scheme experimentally friendly as one has to consider fewer correlations in order to witness quantum steering in networks. However, the measurement elements of the trusted party are maximally entangled and thus it would be beneficial to explore the possibilities of observing swap-steering with less entangled measurements. Constructing witnesses to observe quantum nonlocality in networks has been extremely difficult, mainly due to the fact that the set of network-local correlations might not be convex, as shown in [11; 13]. In this work, we find that assuming one of the parties to be trusted allows constructing linear witnesses to observe a form of quantum nonlocality in networks. One of the interesting follow-up directions would be to explore the structure of the set of correlations admitting the SOHS model. We showed in this work that any entangled Werner state can be used to witness swap-steering. An interesting follow-up question is whether every entangled state violates the notion of swap-steering. Another direction to explore is the generalization of the notion of swap-steering to more parties and outcomes. It is known that quantum steering is asymmetric, that is, there are quantum states that are steerable from Alice to Bob but not the other way around. It will be interesting to find similar properties of quantum states when considering the notion of swap-steering. ###### Acknowledgements. We would like to thank Stefano Pironio for reviewing the manuscript and providing critical comments that considerably improved the manuscript. This project was funded within the QuantERA II Programme (VERIqTAS project) that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733.
2303.15557
Extended Line Emission in the BCG of Abell 2390
We report CFHT/SITELLE imaging Fourier Transform Spectrograph observations of the Brightest Cluster Galaxy (BCG) of galaxy cluster Abell 2390 at z=0.228. The BCG displays a prominent cone of emission in H$\alpha$, H$\beta$, [NII], and [OII] to the North-West with PA = 42$^o$, 4.4 arcsec in length (15.9 kpc), which is associated with elongated and asymmetric Chandra soft X-ray emission. The H$\alpha$ flux map also contains a "hook" of H$\alpha$ and [NII] emission resulting in a broadened northern edge to the cone. Using SITELLE/LUCI software we extract emission line flux, velocity, velocity dispersion, and continuum maps, and utilize them to derive flux ratio maps to determine ionization mechanisms and dynamical information in the BCG's emission line region. The Baldwin-Phillips-Terlevich diagnostics on the BCG cone indicate a composite ionization origin of photoionization due to star formation and shock. Strong LINER-like emission is seen in the nuclear region which hosts an AGN. As Abell 2390 is a cool-core cluster, we suggest that the cooling flow is falling onto the central BCG and interacting with the central AGN. The AGN produces jets that inflate "bubbles" of plasma in the ICM, as is often observed in local galaxy clusters. Furthermore, combining signs of AGN activities from radio, optical emission line and X-ray data over a large range of physical scale, we find evidence for three possible episodes of AGN activity in different epochs associated with the Abell 2390 BCG.
Leo Y. Alcorn, H. K. C Yee, Laurent Drissen, Carter Rhea, Suresh Sivanandam, Julie Hlavacek-Larrondo, Bau-Ching Hsieh, Lihwai Lin, Yen-Ting Lin, Qing Liu, Adam Muzzin, Allison Noble, Irene Pintos-Castro
2023-03-27T19:15:30Z
http://arxiv.org/abs/2303.15557v1
# Extended Line Emission in the BCG of Abell 2390 ###### Abstract We report CFHT/SITELLE imaging Fourier Transform Spectrograph observations of the Brightest Cluster Galaxy (BCG) of galaxy cluster Abell 2390 at \(z=0.228\). The BCG displays a prominent cone of emission in H\(\alpha\), H\(\beta\), [N ii], and [O ii] to the North-West with PA = 42\({}^{o}\), 4.4\({}^{\prime\prime}\) in length (15.9 kpc), which is associated with elongated and asymmetric Chandra soft X-ray emission. The H\(\alpha\) flux map also contains a "hook" of H\(\alpha\) and [N ii] emission resulting in a broadened northern edge to the cone. Using SITELLE/LUCI software we extract emission line flux, velocity, velocity dispersion, and continuum maps, and utilize them to derive flux ratio maps to determine ionization mechanisms and dynamical information in the BCG's emission line region. The Baldwin-Phillips-Terlevich diagnostics on the BCG cone indicate a composite ionization origin of photoionization due to star formation and shock. Strong LINER-like emission is seen in the nuclear region which hosts an AGN. As Abell 2390 is a cool-core cluster, we suggest that the cooling flow is falling onto the central BCG and interacting with the central AGN. The AGN produces jets that inflate "bubbles" of plasma in the ICM, as is often observed in local galaxy clusters. Furthermore, combining signs of AGN activities from radio, optical emission line and X-ray data over a large range of physical scale, we find evidence for three possible episodes of AGN activity in different epochs associated with the Abell 2390 BCG. keywords: galaxies: clusters: Abell 2390 - galaxies: elliptical and lenticular, cD - galaxies: individual: J21536+1741 ## 1 Introduction The Brightest Cluster Galaxies (BCGs) of galaxy clusters are a unique population of galaxies, with marked differences from typical cluster galaxies (e.g. Lin et al., 2010). They are the brightest and most massive galaxies observed in our universe, and are located near the galaxy cluster central potential (Oegerle and Hoessel, 1991; Lin and Mohr, 2004; Lauer et al., 2014). BCG dark matter and stellar halos transition smoothly into the dark matter halo of the entire cluster, and into the diffuse intra-cluster light (see Montes, 2022, and references therein). The majority of BCGs are slow-rotating elliptical galaxies (Fisher et al., 1995; Tran et al., 2008; Brough et al., 2011; Jimmy et al., 2013), but in contrast to typical slow-rotating ellipticals in clusters, they are more likely to have strong optical nebular emission (McNamara and O'Connell, 1993; Crawford et al., 1999; Fogarty et al., 2015; Hamer et al., 2016) in certain environmental conditions such as cool-core clusters (Heckman et al., 1989; Cavagnolo et al., 2008; McDonald et al., 2010; Tremblay et al., 2015; Hogan et al., 2017; Calzadilla et al., 2022). In cool-core clusters, i.e., clusters which have a cooling flow, the density of the intra-cluster medium (ICM) is inversely proportional to the cooling time, so cooling flows occur when the radiative cooling time of the cluster gas is significantly shorter than the age of the cluster. The density of the ICM at the centers of clusters increases because cool ICM gas is compressed by the ICM outside the cluster center. In order to maintain the hydrostatic equilibrium of the cluster, hot ICM gas begins to flow inward, toward the peak of the cluster dark matter halo and the BCG. 
This flow of hot gas inward toward the center of the cluster is the "cooling flow" in cool-core clusters (Fabian, 1994; Hudson et al., 2010; Blanton et al., 2010). The BCGs of cool-core clusters are associated with peaks in X-ray emission and the previously mentioned optical nebular emission (Crawford et al., 1999; McNamara and Nulsen, 2007; Cavagnolo et al., 2010; Hudson et al., 2010; Fabian, 2012). They are also often hosts of radio-loud active galactic nuclei (AGN), which create radio lobes and jets (McNamara et al., 2000; Egami et al., 2006; Lin and Mohr, 2007; Ea et al., 2010). When the in-falling gas interacts with the supermassive black hole (SMBH) in a BCG, this can transition the SMBH to its active (kinetic) phase (Fabian, 2012; Hamer et al., 2016), although in contrast accretion can also result in a radiative phase (Churazov et al., 2005; Russell et al., 2013; Hlavacek-Larrondo et al., 2013). The jet activity of a SMBH (AGN feedback) can force pristine gas out of the galaxy and quench star formation (negative feedback), or compress the gas and cause a starburst (McNamara et al., 2006; Ea et al., 2010). The AGN of cool-core BCGs have been observed to inflate X-ray cavities from rising buoyant bubbles of plasma fuelled by the jets, which entrain cool gas from the galaxy behind them (e.g. Pope et al., 2010; David et al., 2011; Fabian et al., 2016; McNamara et al., 2016; Su et al., 2017; Tremblay et al., 2018; Duan & Guo, 2018; Russell et al., 2019; Chen et al., 2019; Smith & Donohoe, 2021; Zhang et al., 2022), drawing them outward from the BCG, creating X-ray profiles which deviate from circular symmetry of the X-ray surface brightness profile and optical emission line nebulae that extend out from the BCG. Bubbles and entrained gas can be disrupted via cooling instabilities, merger activity, and gas sloshing, which create bent or horseshoe-shaped emission-line nebulae observed in H\(\alpha\) (Fabian et al., 2003; Ueda et al., 2018). Examples of this process in the local universe can be seen in NGC1275 of the Perseus cluster (Churazov et al., 2000; Fabian et al., 2003; Aharonia et al., 2018), and NGC4696 of the Centaurus cluster (Fabian et al., 2005), among many others. Abell 2390 (hereafter, A2390) is a massive cluster at \(z=0.228\), with \(M_{200}\sim 1.3\times 10^{15}M_{\odot}\) and \(\sigma_{V,cluster}\sim 1100~{}km/s\) (Carlberg et al., 1996). It hosts a BCG with a prominent extended nebular emission line region and an asymmetric X-ray profile. A2390 was observed as part of the CNOC1 cluster survey on CFHT using the MOS multi-object spectrograph (Yee et al., 1996a,b; Abraham et al., 1996), providing a catalog of redshifts, photometry, spectral feature measurements and morphological indices for 371 cluster galaxies. The cluster contains several prominent strong gravitational lensing arcs, making it a popular target for high-redshift astronomy in addition to previously discussed dense evolved cluster studies (e.g. Pierre et al., 1996; Bezecourt & Soucail, 1997; Frye & Broadhurst, 1998; Pello et al., 1999; Balogh & Morris, 2000; Li et al., 2009; Stroe et al., 2017). X-ray data of A2390 indicate the cluster has a cool core and anisotropic X-ray morphology (Allen et al., 2001; Hlavacek-Larrondo & Fabian, 2011; Sonkamble et al., 2014), and it is a dynamically relaxed, non-merging cluster with a strong radio source (Abraham et al., 1996; Augusto et al., 2006; Egami et al., 2006). 
Footnote 1: The cluster center is located at a Right Ascension of 21:53:36 and a Declination of +17:41:43 in the J2000 epoch (Haines et al., 2013). The BCG of A2390 (named J21536+1741 in Smail et al. (2002) and a radio source centred at Right Ascension 21:53:36.82760, Declination +17:41:43.7260 in Patnaik et al. (1992)) was observed in multi-wavelength studies (e.g. Hutchings & Balogh, 2000; Egami et al., 2006; Augusto et al., 2006). The BCG contains several interesting and unusual properties: an active nucleus with a flat-spectral-index compact radio source in the core, and a strong H\(\alpha\), H\(\beta\), [Nii], and Ly\(\alpha\) emission line extended region with position angle 42.7\({}^{o}\) northwest of the nuclear region, referred to as a "cone". This interplay between the BCG, the SMBH, the surrounding hot ICM, and the cooling gas makes this galaxy and its cone an ideal laboratory for studying the extreme physics of AGN jets, the most massive galaxies in the universe, and their effect on the cluster environment, as well as the effect of cold gas accretion onto a central galaxy AGN from the cluster's cooling gas flow. In this paper we present nebular emission line spectral imaging of the central cD-type BCG of A2390. The data are obtained in three band-limiting filters designed for the detection of \(z\sim 0.25\) diagnostic optical nebular emission lines (e.g. H\(\alpha\), [Nii], H\(\beta\), [Oiii], [Oii]). This project is part of an ongoing \(z\sim 0.25\) cluster survey reported in Liu et al. (2021) using SITELLE, an imaging Fourier Transform Spectrograph on the Canada-France-Hawaii Telescope (CFHT) (Drissen et al., 2019). The \(11^{\prime}\times 11^{\prime}\) SITELLE field of view makes it ideal for studying galaxy clusters at low to intermediate redshifts, which can extend out to tens of arcminutes on the sky. By combining the imaging emission-line data from SITELLE with multi-wavelength data from radio to X-ray, we present a possible AGN activity scenario that connects these data. This paper is structured as follows: Section 2 discusses our CFHT/SITELLE data and the processing of these data cubes. Observing conditions and data are reported in Section 2.1, internal data processing and galaxy cataloguing are the subjects of Section 2.2, and the emission line characterization software LUCI and its use in this project is introduced in Section 2.3. In Section 3 we describe existing multi-wavelength data in Chandra X-ray, HST optical, Spitzer infrared, and multi-instrument, multi-band radio interferometry of the BCG that are used to support our work. We present the results from our SITELLE data in Section 4, and discuss their analysis and the possible interpretations along with the multi-wavelength data in Section 5. Our data, methods, and findings are summarized in Section 6. Throughout this work, we assume a \(\Lambda\)CDM cosmology, with \(\Omega_{m}=0.3\), \(\Omega_{\Lambda}=0.7\), and H\({}_{0}\)=70 km/s/Mpc. At the redshift of A2390, one arc second is equivalent to 3.65 kpc. ## 2 SITELLE observations of A2390 ### Observations and Data SITELLE was originally intended for the study of emission line objects such as local nebulae, supernova remnants, and the HII regions of nearby galaxies as well as star-formation and emission line galaxies in galaxy clusters (Drissen et al., 2019; Rousseau-Nepton et al., 2019). The wide \(11^{\prime}\times 11^{\prime}\) field of view of SITELLE/CFHT makes it an ideal instrument for studying nearby galaxy clusters (see Figure 1). 
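The angular scale quoted in Section 1 (3.65 kpc per arcsecond at \(z=0.228\) for the adopted cosmology) can be reproduced with astropy; this short check is our own addition, not part of the original analysis:

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Adopted cosmology: Omega_m = 0.3, Omega_L = 0.7, H0 = 70 km/s/Mpc
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
scale = cosmo.kpc_proper_per_arcmin(0.228).to(u.kpc / u.arcsec)
print(scale)                # ~3.65 kpc / arcsec
print(0.322 * scale.value)  # ~1.17 kpc per 0.322" SITELLE pixel
```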
By observing rich clusters such as A2390 with SITELLE, one can obtain imaging spectroscopy of an unbiased, luminosity-weighted sample of cluster galaxies including the cD BCG, identified as Obj1757C in the catalog generated in Liu et al. (2021). The proper name for Obj1757C is J21536+1741 (from SIMBAD), but we use the catalog name for the BCG for brevity.

We obtained SITELLE imaging Fourier Transform Spectrograph (iFTS) data cubes in the C1, C2, and C4 band-limiting filters, which at the redshift of the cluster will detect the [Oii] doublet from \(z=0.03-0.31\) (albeit blended), [Oiii] and H\(\beta\) from \(z=0.15-0.25\), and H\(\alpha\) and [Nii] from \(z=0.21-0.25\), respectively. These filters cover the primary optical diagnostic nebular emission lines at the redshift of Obj1757C. Images from SITELLE have a spatial scale of \(0.322^{\prime\prime}\)/pix, which at the cluster redshift of \(z\sim 0.228\) corresponds to a physical scale of 3.65 kpc/\({}^{\prime\prime}\), or 1.17 kpc per pixel. These data are available on the Canadian Astronomical Data Centre website at the observation IDs listed in Table 1. C1 was observed on August 27th, 2016 (PI: Loubser) as part of the SITELLE commissioning dataset. C2 and C4 were observed on September 27th, 2019 (PI: Hsieh) and July 6th, 2017 (PI: Yee), respectively. Exposure times (including overhead between wavenumber steps), resolution, seeing, and filter band widths are listed in Table 1.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline Proposal ID & Filter & Wavelength Range (nm) & Seeing (\({}^{\prime\prime}\)) & N\({}_{steps}\) & \(\Delta\lambda\) (Å) & R & Emission Lines (Å) & Exposure (s) \\ \hline 16BC03 & C1 & 385-490 & 1.3 & 304 & 5.8 & 600 & [O ii]3727 (blended) & 12,160 \\ 19BT07 & C2 & 562-625 & 1.2 & 297 & 3.6 & 1300 & H\(\beta\), [O iii]4959,5007 & 13,365 \\ 17AD94 & C4 & 798-823 & 1.3 & 124 & 5.2 & 1080 & H\(\alpha\), [Nii]6548,6583 & 11,408 \\ \end{tabular} \end{table} Table 1: SITELLE Observation Details of A2390

Figure 1: SITELLE deep image of the central pointing of A2390 (\(z=0.228\)). This image shows a single SITELLE pointing \(11^{\prime}\times 11^{\prime}\) in size. The inset and zoomed image is the location of the brightest cluster galaxy (BCG), Object 1757C (\(z=0.231\)) from the catalog Liu et al. (2021). The inset is a \(10^{\prime\prime}\times 10^{\prime\prime}\) cutout under a different stretch to emphasize the nuclear region in contrast to the cone region of the BCG.

The initial data products are data cubes, with a spectrum at each spatial element (Figure 2). Each spectrum is made up of multiple "channels", or wavelength steps. The data are reduced and calibrated through the ORB software (Martin et al., 2016)3. The C4 data are processed through a software pipeline4 described in Liu et al. (2021), which identifies H\(\alpha\)-emitting galaxies and removes sky contamination.

Footnote 3: [https://github.com/thomasorb/orbits](https://github.com/thomasorb/orbits)

Footnote 4: [https://github.com/NGC4676/STTELLE_ELG_finder](https://github.com/NGC4676/STTELLE_ELG_finder)

### Emission-line Galaxy Identification and Cataloguing

Initial measurements of Obj1757C are generated from the cataloging algorithm described in Liu et al. (2021). The software pipeline identifies emission line objects in SITELLE data cubes by fitting emission line templates for the H\(\alpha\) and [N ii]6548,6583Å doublet, the H\(\beta\) and [O iii]4959,5007Å doublet, and a single undefined emission line over the spectral coverage (channels). The Liu et al.
(2021) catalogs contain the centroid, ellipticity, radius, and position angle (PA) of the H\(\alpha\) emission-line region derived from the specific spectral channels that contain the H\(\alpha\) line feature. We summarize the processing steps of Liu et al. (2021) here, which resulted in the catalog used for this work. The sky background for each channel is evaluated via the mode of a \(128\times 128\) pix\({}^{2}\) box, and this value is subtracted from the channel. A low-pass filter is applied to the data cube to minimize fringing, which is produced by the combination of the interferometric nature of SITELLE and the presence of some bright sky lines. A stacked image is created by summing along a restricted set of wavenumber channels which have good transmission, and the stack is used to detect sources with the photutils Python package. Emission line galaxies are identified by subtracting smoothly varying continua, and then matched to emission line templates using a cross-correlation technique (see Section 3.6 of Liu et al. 2021 for details). The cross-correlation yields the best-fit \(z_{spec}\) and emission line identities of each measured galaxy.

### Spatially Resolved Emission Line Mapping

Using LUCI5 (Rhea et al., 2020, 2021), a set of programs specifically designed to analyze iFTS spectral imaging data such as those from SITELLE, we can extract maps of the redshifted emission line flux. These steps are performed for each observed filter of Obj1757C (C1, C2, and C4). From the Liu et al. (2021) catalog, we input the redshift and spatial position of the estimated H\(\alpha\) emission profile centroid, and fit emission line profiles and continuum values over individual pixels covering a square region 4\(\times\) the galaxy's H\(\alpha\) equivalent radius6 on each side. Pixels in this region are fit independently by LUCI. We subtract background sky levels measured in an empty circular aperture of radius 5.9\({}^{\prime\prime}\) near the galaxy, which in the case of Obj1757C was located 18\({}^{\prime\prime}\) southwest of the H\(\alpha\) centroid.

Footnote 5: [https://crhea93.github.io/LUCI/index.html](https://crhea93.github.io/LUCI/index.html)

Footnote 6: From the Python package photutils (Bradley et al., 2020), the radius of a circle with an area equal to the source segment on the segmentation map.

LUCI fits the velocity-shifted position of the emission line, which provides a velocity measurement relative to the input object redshift derived from an integrated spectral fit. LUCI measures the width, or broadening, of the emission line via a SincGauss function, which is the convolution of a sinc function with a Gaussian function (broadening the sinc function) and describes the instrumental line shape for extended objects as observed by SITELLE (Martin et al., 2016). The extracted flux maps are fit over all of the spectral channels to find the location of an emission line, and therefore any detected emission will be included in the map despite a velocity shift or kinematic broadening. Additionally, LUCI measures the continuum of the galaxy spectrum simultaneously with the line flux in the measured pixel, therefore providing a continuum-subtracted emission line flux. Continuum values are obtained by fitting a flat continuum with zero slope, simultaneously with the SincGauss function of the emission line, to the spectrum at each pixel.
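The per-pixel fitting strategy can be sketched as follows. This is a minimal stand-in rather than LUCI itself: the numerical sinc-Gaussian convolution, the toy spectrum, and all parameter values (including the 5.2 Å sinc width, borrowed loosely from the C4 channel spacing in Table 1) are illustrative assumptions.

```python
# Minimal sketch: fit one pixel's spectrum with a "SincGauss"-style profile
# (sinc instrumental line shape convolved with a Gaussian broadening kernel)
# plus a flat continuum, in the spirit of the per-pixel fits described above.
import numpy as np
from scipy.optimize import curve_fit

def sincgauss(x, amp, x0, sigma, cont, fwhm_sinc=5.2):
    """Numerical sinc (*) Gaussian line profile plus a flat continuum."""
    dx = x[1] - x[0]
    grid = np.arange(-50 * dx, 50 * dx, dx)      # kernel grid around the line
    sinc = np.sinc(grid / fwhm_sinc)             # instrumental line shape
    gauss = np.exp(-0.5 * (grid / sigma) ** 2)   # kinematic broadening
    kernel = np.convolve(sinc, gauss, mode="same")
    kernel /= kernel.max()
    profile = np.interp(x - x0, grid, kernel, left=0.0, right=0.0)
    return amp * profile + cont

# Toy spectrum: H-alpha redshifted to z = 0.231 (rest 6562.8 A)
x = np.linspace(8000.0, 8150.0, 300)             # wavelength axis (A)
truth = sincgauss(x, 5.0, 6562.8 * 1.231, 3.0, 0.1)
rng = np.random.default_rng(0)
data = truth + rng.normal(0.0, 0.05, x.size)

popt, pcov = curve_fit(sincgauss, x, data, p0=[4.0, 8080.0, 2.0, 0.0])
print(popt)  # fitted amplitude, line centre, Gaussian sigma, continuum level
```

A real pipeline would additionally tie doublet fluxes (e.g. the 1:2.98 and 1:3 ratios mentioned below) and propagate errors via MCMC; this sketch only shows the profile-plus-continuum fit at a single pixel.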
The filters C1, C2, and C4 are all narrow band-limiting filters of under 110 nm; visual inspection confirms a flat continuum over most pixels after background subtraction. Initial guesses for line broadening due to velocity dispersion and line shift due to velocity are provided from a training set created in a similar method to that described in Rhea et al. (2021), which we briefly summarize here. Reference spectra are created over the wavelength coverage of each of the C1, C2, and C4 filters. We redshift the emission lines to the redshift value of Obj1757C (\(z=0.231\), determined from the integrated spectrum of the galaxy described in Liu et al. (2021) and Section 2.2). We use line ratios for [O ii]3726,3729Å, H\(\beta\), [O iii]4959,5007Å, [N ii]6548,6583Å, and H\(\alpha\) taken from the Mexican Million Models database replicating HII regions (Morisset et al., 2015). A set of 50,000 test spectra are generated with velocity shifts \(v=-1000\) km s\({}^{-1}\) to 1000 km s\({}^{-1}\), ionized gas velocity dispersion \(\sigma_{\rm g}=0\)-300 km s\({}^{-1}\), and signal-to-noise ratio SNR \(=2-30\). An artificial neural network is trained on these data, separating them into a training and test set to determine the ability of the network to recover input values. When a spectrum is input to LUCI, the network predicts the initial guess for the velocity shift and \(\sigma_{\rm g}\) broadening values of the emission lines. The fit is run through an MCMC algorithm with priors clustered around these initial guesses of velocity, \(\sigma_{\rm g}\), continuum levels, and amplitude of the emission lines to determine errors on the fit at each pixel.

Emission lines observed in each filter are fit simultaneously and are tied to a common velocity within each emission line complex. H\(\alpha\) and the [N ii] doublet emission lines are allowed to vary in line broadening (as are the [O iii] doublet and H\(\beta\)), but the fit does not produce significantly different results if every emission line is fixed to the same broadening value. The [O ii] doublet is fit as a single line because it is not resolved at the SITELLE C1 filter resolution. The [O iii] doublet, [O iii]4959Å and [O iii]5007Å, are held to a constrained flux ratio of 1:2.98. Additionally, [N ii]6548Å and [N ii]6583Å are constrained to a fixed flux ratio of 1:3 (Dojcinovic et al., 2022).

The results of the LUCI software on our SITELLE data include maps of the emission line fluxes seen in Figure 3 and discussed in Section 4.1 (later used to derive the flux ratio maps described in Sections 4.2 and 5.1), continuum emission maps seen in Figure 4, and velocity and broadening maps discussed in Section 4.3. Fractional uncertainty levels in the flux maps and continuum maps are low in the nuclear region (0.05-0.1 fractional error) and increase outward in the cone and toward the hook (0.1-0.15 in H\(\alpha\) and [N ii], 0.15-0.35 in [O iii], 0.1-0.2 in H\(\beta\), and 0.5-0.2 in the continuum).

## 3 Multi-Wavelength Data on A2390

### Chandra X-Ray Imaging

Chandra X-ray imaging of A2390 was obtained by Allen et al. (2002) in hard (2-7 keV), medium (1.2-2 keV), and soft (0.5-1.2 keV) X-ray channels. The Chandra observation (OBS ID: 4193) obtained from the archive consisted of a 96 ks ACIS pointing of the cluster center. The exposure-corrected flux images were generated using the archive pipeline processed event file with CIAO v4.14 tools and CALDB v4.9.6 calibration files.
The pipeline processed Level 2 event file was used in conjunction with _dmcopy_ to extract the events and form images in different energy ranges. The images were further binned 2\(\times\)2 pixels to increase the signal-to-noise ratio. The hard X-rays peak at the BCG nucleus (Figure 5), and are associated with the AGN in the galaxy's center. Diffuse soft emission is asymmetric and elongated along the extended emission line region (X-ray PA=46.1\(^{\circ}\) from West) and the H\(\alpha\) cone (H\(\alpha\) PA=42.7\(^{\circ}\) from West), similar to many H\(\alpha\) filaments observed in BCGs which host buoyantly rising bubbles (Fabian et al., 2003; McDonald et al., 2010; McNamara et al., 2016; Calzadilla et al., 2022). This elongated region is associated with a cooler temperature along the NW region of the ICM, in the same location as the H\(\alpha\) emission line cone (see Figure 5). These data from the Chandra X-ray Observatory were also utilized by Sonkamble et al. (2014), whose analysis identified five X-ray deficient cavities which appear around the BCG within 30\({}^{\prime\prime}\). There is an edge to the 0.5-7 keV diffuse X-ray gas along the NW direction at 68\({}^{\prime\prime}\), or 246 kpc, associated with cold fronts (Sonkamble et al., 2014). Intriguingly, the NW X-ray edge is possibly associated with the NW elongated region of H\(\alpha\) flux and the elongated soft X-ray emission. Buoyant gaseous bubbles are associated with the appearance of X-ray cavities like the ones seen in this analysis.

### HST Optical and Spitzer IR Data

Hutchings & Balogh (2000) were the first to report a notably extended emission line region in the A2390 BCG. They combine their CFHT/OSIS narrow-band H\(\alpha\) imaging with HST/WFPC2 F555W, and HST/ACS F850LP images. We obtained the same HST/WFPC2 F555W imaging (Fort, 1994) for our own analysis from MAST, and performed no additional processing. The extended HST continuum region was observed on both sides of the galaxy, but the southeastern side of the extended region is not observed in our SITELLE H\(\alpha\) imaging or the Hutchings & Balogh (2000) Ly\(\alpha\) grism spectroscopy. Knots of structure are seen close to the galaxy nucleus and along the emission line region in all observations. Additionally, a possible dust lane within the central region of the galaxy is observable at a 90\({}^{\circ}\) offset from the cone in both F555W and F850LP (see Figure 6).

Legacy Spitzer/IRAC IR imaging of A2390 in 3.6 (Channel 1) and 8 \(\mu\)m (Channel 4) was obtained using MAST (Egami et al., 2006) (Figure 7). The BCG of A2390 was the third IR-brightest BCG in the sample of Egami et al. (2006), but it is not bright enough to be classified as a LIRG (\(>10^{11}L_{\odot}\)). A bright IR flux is correlated with a shorter radiative cooling timescale, which is common in cool-core clusters. Based on IR spectral energy distribution signatures, the highly luminous IR emission was judged by the authors to be more likely caused by star formation than AGN emission. The 8 \(\mu\)m imaging traces the polycyclic aromatic hydrocarbon (PAH) distribution in the BCG, and the 3.6 \(\mu\)m imaging traces the stellar component.

Figure 2: Example spectra at two single pixels in different locations of the BCG. The "Nucleus" (left) column contains spectra in C4, C2, and C1 of the BCG at the location of the H\(\alpha\) centroid.
The "Cone" (right) column contains spectra from the same pixel in the cone at 3.4\({}^{\prime\prime}\) from the H\(\alpha\) centroid. In each panel, the spectrum is shown in black, and flux units are in erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). Emission line locations are marked in red and pink. We only plot spectral regions within the usable band-limiting filter channels.

In the right panel of Figure 7, we created a difference image of the 3.6 and 8 \(\mu\)m filters (left and middle panels, respectively) to emphasize the excess of the 8 \(\mu\)m emission to the northwest. We note that the 8 \(\mu\)m emission is more extended than the 3.6 \(\mu\)m, and the extended region is in the direction of the BCG cone.

### Radio Interferometry

Augusto et al. (2006) observed the BCG of A2390 (the radio source of the BCG is called B2151+174 in this study) using multi-frequency, multi-epoch radio interferometry from several arrays such as the VLBA, the VLA, and MERLIN. These data indicated that the BCG radio source has a complex structure, consisting of an extended mini-halo and a number of compact mini-jets, with the two most prominent milli-arcsecond jets in the north-south orientation. The authors also state that it has one of the flattest radio spectra and most spatially compact radio sources known at the time of observation. The 1.4 GHz radio power of the source is \(\sim 10^{25.1}\) W Hz\({}^{-1}\). The MERLIN 5.0 GHz map indicates a size of 0.49\({}^{\prime\prime}\) (\(\sim\)1.5 kpc), making it a bright-core, medium-size symmetric object.

Figure 4: Continuum maps of C4, C2, and C1 SITELLE filters derived from LUCI, with all emission line flux removed. The flux is in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\), same as in Figure 3. The C4 filter (rest-frame 6482Å to 6686Å) displays only a mild elliptical shape of \(\epsilon=0.29\) and no cone. The continuum of the C2 filter (rest-frame 4565Å to 5077Å) is more elliptical than C4 and has no cone. The C1 filter, which covers rest-frame 3128Å to 3980Å, is effectively the rest U-band continuum, and is elongated in a direction consistent with the emission line cone.

Figure 3: Flux maps of emission lines derived from LUCI. The flux (given by the colorbar) is in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). Only three colorbars are shown because flux maps from each filter use the same colormap (i.e. H\(\alpha\) and [Nii] from C4, [O iii] and H\(\beta\) from C2, and [O ii] from C1). Top row from left: The flux maps derived from the C4 SITELLE filter, which provides coverage of the redshifted H\(\alpha\) and [Nii] emission lines. These maps show weaker [Nii] emission in the cone and hook and bright emission in the core. Bottom row: The three leftmost panels are flux maps derived from the C2 filter, covering the redshifted [O iii] and H\(\beta\) emission lines. The rightmost panel is the [O ii] emission line doublet from the C1 filter. We see very strong [O ii] emission compared to the [O iii] and H\(\beta\) lines, indicating low ionization.

Figure 5: Chandra X-ray imaging of A2390. The top row are cutouts of the Chandra images in a \(10^{\prime\prime}\times 10^{\prime\prime}\) square. The H\(\alpha\) centroid location is marked with a red 'X'. The H\(\alpha\) images with Chandra X-ray contours are shown in the bottom row. Excess in the soft channels (left panels) appears to be located in the same space as the H\(\alpha\) cone, indicating cooler gas temperatures. The hard channels (right) are brightest at the galaxy center, the location of the AGN.
Figure 6: Left: The HST/WFPC2/F555W image of the A2390 BCG. A two arcsecond bar is shown as a scale. Striking and complex substructure not resolved in the SITELLE flux maps is seen, including an extended region to the southeast and a gap, most likely a dust lane. H\(\alpha\) flux contours are overlaid in red. Center: The same image as left, but shifted upward and with structure directions overlaid. We include the narrow HST blue lane extending from the southeast side of the nucleus to the northwest (blue line), and the H\(\alpha\) cone (red) and hook (green) as detected by SITELLE. The HST extended regions/blue lane emanating from the nucleus and the H\(\alpha\) cone have position angles misaligned by \(\sim 4^{\circ}\). The most prominent clumps in F555W are along the H\(\alpha\) cone. Right: The HST/ACS/F850LP image of the BCG, which also displays a less prominent dust lane and some of the substructure seen in F555W, as one might expect with less reddening.

Savini et al. (2018) obtained LOFAR radio imaging of A2390. They list the diffuse emission from low-resolution images as 1100 kpc in size. The authors also identified two large extended lobes on the order of \(\sim\)600 kiloparsecs from the center region, separate from the giant radio halo.

## 4 Results

### Emission Line and Continuum Structure

Hutchings & Balogh (2000) noted the extended structure of the H\(\alpha\) emission line in their CFHT/OSIS narrow band images, which the authors referred to as the "emission cone"; we continue this terminology. We reconfirm the existence of an extended emission line cone in the BCG via the LUCI emission line maps (Figure 3). The cone is observed in H\(\alpha\), [N ii]6548,6583Å, H\(\beta\), and [O ii]3727Å at a position angle of 42.7\({}^{\circ}\) (from the West of the image) extending northwest of the nucleus of the galaxy. The cone is not observed in [O iii]4959,5007Å. The cone is observed in the C1 continuum but not in the C2 and C4 continua (Figure 4). Additionally, the SITELLE maps detect a north-eastern protrusion from the cone, which we will refer to as the "hook". The hook is observed in the H\(\alpha\) map as well as [N ii]6583Å. However, it is not observed in [O ii]3727Å, [O iii]4959,5007Å, or H\(\beta\), in any continuum image including SITELLE, HST/F555W, and HST/F850LP imaging, or in the H\(\alpha\) narrow-band imaging of Hutchings & Balogh (2000). Intriguingly, the hook is also correlated with soft X-ray structure observed with Chandra (Figure 5, left panels).

At 1.8\({}^{\prime\prime}\) from the galaxy nucleus, clumpy structure is seen in the cone in H\(\alpha\) and [N ii]. These clumps are not visible in the other emission line maps. The clumps are located in the same region as those found in the HST/F555W image (Figure 6). The nuclear region displays a possible dust lane in the HST images. The F555W imaging (Figure 6, left) has a gap visible between the center of the nucleus and the cone structure. The F850LP imaging (Figure 6, right) contains the same overall structure as F555W. The dimming between the two nuclear clumps is less pronounced in F850LP, as one might expect with lower reddening from dust. Therefore the gap is likely a dust lane, rather than an indicator of a major merger close pair. This possible dust lane is not observed in any SITELLE emission line flux map due to the resolution limits of ground-based non-AO imaging.
While no south-eastern extended region is seen in the H\(\alpha\) flux maps, the resolution of HST provides a clear image of the extended region in WFPC2/F555W and ACS/F850LP (Figure 6), referred to in Hutchings & Balogh (2000) as the "blue lane". The HST blue lane on the NW side and the H\(\alpha\) cone are offset by 4\({}^{\circ}\). The significance of this offset is unclear due to the lower spatial resolution of ground-based SITELLE data compared to HST spatial resolution.

### Ionization and BPT Profiles

The Baldwin-Phillips-Terlevich (BPT) diagram (Baldwin et al., 1981) is a diagnostic figure that plots the relative flux ratios of emission lines to determine the state of the plasma or ionized gas. The most widely used version of the BPT diagram is the \(\log([\mathrm{N\,II}]/\mathrm{H}\alpha)\) vs. \(\log([\mathrm{O\,III}]/\mathrm{H}\beta)\) ratio map, which can distinguish between photoionized and shock-ionized gas, as well as between LINERs and Seyfert-type AGN. This diagnostic method is very helpful for a galaxy such as Obj1757C, which has an unknown ionization source in the cone, and a known ionization source (a radio AGN) at the nucleus.

To create ionization and BPT diagram maps, we use the LUCI emission line flux maps and flux error maps to create flux ratio maps (Figure 8). We then extract overlapping rectangular apertures from the flux ratio maps along the direction of the cone from the galaxy center (as determined from the light-weighted H\(\alpha\) ionized gas profile) to the edge of the cone (Figure 9, lower two panels). Each aperture is 1.0\({}^{\prime\prime}\) in width along the cone and 2.3\({}^{\prime\prime}\) in length, and overlaps with the previous and following apertures along the cone to smooth the resulting profiles. We measure the median flux from the BCG nucleus to the cone. Errorbars are derived by bootstrapping the flux maps 1000 times.

Figure 7: Spitzer/IRAC infrared imaging of A2390. Left: The 3.6\(\mu\)m emission with H\(\alpha\) flux map contours (Figure 3). The 3.6\(\mu\)m channel is associated with the stellar population of the BCG and shows no extended structure. Center: The 8\(\mu\)m emission, which has an extension towards the NW. Both images are normalized to the peak emission in each image. Right: Difference image of 3.6-8\(\mu\)m. Regions dominated by 8\(\mu\)m emission are represented by darker shades. The excess 8\(\mu\)m emission region coincides with the H\(\alpha\) cone, which is shown by the red contours.

Given that [O iii]5007 is not observed in the cone and is below the flux detection limit of \(\sim 4.26\times 10^{-17}\) erg cm\({}^{-2}\) s\({}^{-1}\) arcsec\({}^{-2}\), the \(\log([\mathrm{O\,III}]/\mathrm{H}\beta)\) flux ratio is an upper limit. In contrast, the \(\log([\mathrm{N\,II}]/\mathrm{H}\alpha)\) flux ratio is a secure detection until the edge of the cone. Figures 9 and 11 show these flux ratio detections, using the same color scheme along the cone from lightest at the AGN core to darker along the cone. Additionally, the \(2\sigma\) detectable limits on the flux ratios are displayed for the \(\log([\mathrm{O\,III}]/\mathrm{H}\beta)\) ratio, where shaded gray regions are below SITELLE's flux detection limit. The \(2\sigma\) detectable limits for \(\log([\mathrm{N\,II}]/\mathrm{H}\alpha)\) are below the lower limit of the figure. These flux ratio limits are calculated by measuring the surface brightness of the H\(\alpha\) and H\(\beta\) emission line structures in the cone, and determining the minimum flux ratio detectable at the SITELLE \(2\sigma\) flux detection limit.
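A minimal sketch of this aperture-and-bootstrap measurement is given below. The ratio map is synthetic and the aperture geometry values are taken from the text, so the code illustrates the procedure rather than reproducing our pipeline.

```python
# Median line ratio in overlapping rectangular boxes stepped along the cone,
# with bootstrap errorbars, as described above (illustrative inputs only).
import numpy as np

rng = np.random.default_rng(1)
ratio_map = rng.normal(-0.3, 0.1, size=(60, 60))  # stand-in for log([NII]/Ha)
pix = 0.322                                       # arcsec per pixel (SITELLE)
cx, cy, pa = 30, 30, np.deg2rad(42.7)             # nucleus position, cone PA

half_w = 0.5 * 1.0 / pix                          # half of 1.0" width (along)
half_l = 0.5 * 2.3 / pix                          # half of 2.3" length (across)
yy, xx = np.indices(ratio_map.shape)
u = (xx - cx) * np.cos(pa) + (yy - cy) * np.sin(pa)    # along-cone coordinate
v = -(xx - cx) * np.sin(pa) + (yy - cy) * np.cos(pa)   # across-cone coordinate

for r_arcsec in np.arange(0.0, 5.0, 0.5):         # 0.5" steps -> overlapping
    r = r_arcsec / pix
    sel = (np.abs(u - r) < half_w) & (np.abs(v) < half_l)
    vals = ratio_map[sel]
    boot = [np.median(rng.choice(vals, vals.size)) for _ in range(1000)]
    print(f"R = {r_arcsec:.1f} arcsec: median = {np.median(vals):+.3f}"
          f" +/- {np.std(boot):.3f}")
```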
Figure 8: Top: The log ratio of [N ii]6584Å and H\(\alpha\), a component of the BPT ionization diagnostic. This flux ratio map was smoothed using a 3\(\times\)3 square filter to reduce noise in the low flux regions of the flux maps. The ratio appears relatively flat, but with a distinct drop between the nuclear region and the cone. This is consistent with other BCG measurements from Hamer et al. (2016). Bottom: Log ratio of H\(\beta\) and [Oiii]5007Å. This ratio is also a component of the BPT diagnostic. This flux ratio map was smoothed using a 5\(\times\)5 square filter to reduce noise in the cone, due to low flux values in the cone. The [Oiii] line is faint compared to other optical diagnostic emission lines, so it was not measurable over most of the galaxy. In both maps, the red contours are those of the H\(\alpha\) as seen in Figure 3.

Figure 9: Top: Median \(\sigma_{\rm g}\) derived from the H\(\alpha\) emission line fitting as a function of distance from the galaxy nucleus. The points are colored as a function of distance R from galaxy center (yellow) to the edge of the cone and hook (dark violet). These points are similarly colored for the middle and bottom panels. Errorbars in grey are the measurement errors derived from bootstrapping 1000 times per aperture. Each aperture is 1.0\({}^{\prime\prime}\) in width along the cone and 2.3\({}^{\prime\prime}\) in length. Middle: The median ratio of [N ii]6584Å and H\(\alpha\), a component of the BPT ionization diagnostic. Bottom: Median ratio of H\(\beta\) and [Oiii]5007Å as a function of distance from the galaxy center. This ratio is also a component of the BPT diagnostic. The shaded region is the detection limit of the flux ratio for a \(2\sigma\) detection of both emission lines. The [Oiii] emission line is not detected in the cone, making the points in the cone upper limits.

### Kinematics of the H\(\alpha\) Ionized Gas

We derive values for the H\(\alpha\) velocity and \(\sigma_{\rm g}\), the velocity dispersion of the ionized gas, from LUCI by measuring the shift of the emission lines from the integrated redshift of the emission line gas in Obj1757C (\(z=0.231\)), and from the broadening of the H\(\alpha\) emission line (Figure 10). The velocity of the emission line gas has a 700 km s\({}^{-1}\) gradient from the nucleus to the cone and hook (Figure 10, upper). The AGN region appears to be blueshifted and the cone and hook are redshifted relative to the luminosity-weighted integrated redshift of the galaxy's emission line gas. No smooth velocity fitting is used (e.g. fitting to an arctangent rotation curve), but the velocity field is, nevertheless, remarkably ordered and smooth, indicating no recent disturbances due to major mergers, or a major merger with axes of rotation aligned. The velocity dispersion profile peaks at the nucleus of the galaxy at 233 km s\({}^{-1}\), and decreases to \(\sim 130\) km s\({}^{-1}\) in the cone and hook (Figure 10, lower). The higher velocity dispersion region is associated with the AGN, but the lower \(\sigma_{\rm g}\) region could be associated with rising buoyant bubbles of plasma blown by the AGN (Aharonian et al., 2018; Zhang et al., 2022). These bubbles have low \(\sigma_{g}\) values because the motion of the gas is primarily dominated by outward jet stream motion (see Section 5.2 for further discussion of plasma bubbles).
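For reference, the conversion from a fitted line centre and Gaussian width to these velocity and dispersion values is simple Doppler arithmetic. The sketch below is illustrative (LUCI additionally accounts for the instrumental line shape via the SincGauss profile), and the input numbers are made up.

```python
# Velocity offset and velocity dispersion (km/s) from a fitted H-alpha
# line centre and width (in Angstroms), relative to the systemic redshift.
C_KMS = 299_792.458                 # speed of light, km/s
LAM_HA = 6562.8                     # rest-frame H-alpha wavelength (A)
Z_SYS = 0.231                       # systemic redshift of Obj1757C

def line_kinematics(lam_fit, sigma_lam):
    lam_sys = LAM_HA * (1.0 + Z_SYS)             # expected observed centre
    v = C_KMS * (lam_fit - lam_sys) / lam_sys    # shift relative to systemic
    sigma_g = C_KMS * sigma_lam / lam_fit        # broadening in velocity units
    return v, sigma_g

print(line_kinematics(8081.0, 6.3))  # e.g. ~ +80 km/s offset, ~ 234 km/s sigma
```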
The clumpy substructure of the H\(\alpha\) flux map and the HST/F555W imaging (Figure 6) is also associated with a slight flattening of the \(\sigma_{\rm g}\) values (Figure 9, top panel), before a steeper decline in \(\sigma_{\rm g}\) toward the edge of the cone and hook. Alternatively, the velocity dispersion values of the cone could be due to turbulence in the star forming regions, or due to slow shocks, or a combination of these reasons.

## 5 Discussion

### Star Formation or Shock Ionization in the Cone

The BPT diagnostics seen in Figures 11 and 12 indicate that the cone lies in a composite region: a combination of photo-ionization and shock heating. While emission lines can be the result of processes other than stellar photoionization (e.g. Li, 2020), the clumpy structure of the cone and hook in H\(\alpha\) emission line flux and HST imaging is possibly of a star-forming origin. The galaxy nucleus houses an AGN, which is located firmly in the LINER region of the BPT diagram (Figure 12). In Spitzer/IRAC imaging of the BCG, the 3.6\(\mu\)m emission morphology is concentrated in the central AGN region (Figure 7, left). In contrast, the 8\(\mu\)m morphology is more extended to the northwest, into the same region as the H\(\alpha\) cone (Figure 7, center), and could be associated with PAH emission from the star forming region (Sivanandam et al., 2014). The soft X-ray emission (Section 3.1) is also extended in the same orientation as the H\(\alpha\) cone and the 8\(\mu\)m emission (Figure 5, left), and could be similarly associated with star formation. Similar star-formation activity in BCGs with AGN activity has been observed from the local universe up to \(z\sim 0.5\) (e.g. Crawford et al., 1999; McNamara et al., 2006; Von Der Linden et al., 2007; Fogarty et al., 2015; Hamer et al., 2016; Ciocan et al., 2021; Maier et al., 2022). Our observations are consistent with BPT diagnostic measurements of BCGs in Hamer et al. (2016), from VLT/VIMOS IFU data of 73 galaxy clusters. Note that the A2390 BCG was observed in that study, but its H\(\alpha\) morphology was classified as a simple elliptical with a centrally concentrated core, which is inconsistent with our own conclusion.

Figure 11: BPT diagram with points color coded as in Figure 9, indicating distance from the galaxy center (yellow) to the edge of the cone and hook (dark violet). Errorbars are the 2\(\sigma\) range of values in each aperture as in Figure 9. The red solid line marks the edge of the star-formation photoionization region, derived in Kauffmann et al. (2003). The blue solid line is the edge of the composite ionization region from Kewley et al. (2001). The green solid line is the dividing line between LINER and Seyfert galaxies empirically derived by Cid Fernandes et al. (2010). The nuclear regions are located firmly in the LINER region, indicating shock-dominated photoionization with a soft ionizing field. However, the furthest points in the cone and hook region (dark violet points) move toward the composite photoionization region, indicating both shock and photo-ionization.

Figure 10: Top: Velocity shift of the H\(\alpha\) line derived by LUCI. The velocity offset is with respect to the redshift of the integrated emission line gas from the H\(\alpha\) and [N ii] emission lines (\(z=0.231\)) as observed in the integrated spectrum. Bottom: Gas velocity dispersion \(\sigma_{g}\) derived from the H\(\alpha\) line broadening. In both maps, the contours are the H\(\alpha\) flux contours seen in Figure 3.
Instead, our observations of Obj1757C are more consistent with BCGs classified as "Plumes" in Hamer et al. (2016), where the H\(\alpha\) emission shows a clear extent in one preferential direction not shared by the continuum. These objects also showed LINER-like emission in the nuclear regions, and composite emission in the plumes, consistent with our own measurements of Obj1757C (Figure 12). H\(\alpha\), while commonly used as an indicator of star formation (Kennicutt, 1997), can also be produced by other processes such as shock heating. Based on the high H\(\alpha\) emission, the clumpy structure observed in both HST imaging and the SITELLE H\(\alpha\) maps, and the possible presence of PAH emission from IRAC 8\(\mu\)m imaging, star formation could be occurring in the cone, but further data are needed to confirm its presence.

While the source of ionization in the cone of Obj1757C is firmly in the composite region of the BPT diagram, it is not shock dominated, because the H\(\alpha\) \(\sigma_{\rm g}\) values are low (\(\sim 100-200\) km s\({}^{-1}\), see Section 4.3), which is inconsistent with \(\sigma_{\rm g}\) values for fast shocks (Fabian, 2012; Hamer et al., 2016; Aharonian et al., 2018). Slow shocks could be powering the ionization, but other processes such as the interplay between hot plasma and cold gas could fuel the ionization seen in the cone. The soft X-ray plume seen in Figure 5 could be an indicator of this process (i.e. "turbulent diffusive reconnection"; Fabian et al., 2011). To confirm whether star formation is occurring in the cone, high-resolution observations of cold gas (CO features or H\({}_{2}\)) such as from ALMA would be enlightening (Russell et al., 2017). Obj1757C has been observed with ALMA, but we do not currently have access to these data. Additionally, high-resolution H\(\alpha\) imaging or IFU data would be useful to determine whether the H\(\alpha\) flux is coincident with the HST clumpy structure seen in Figure 6.

### Kinetic Mode AGN and ICM Bubbles

Obj1757C displays characteristics quite similar to those of ICM "bubble" systems (e.g. Fabian et al., 2003; Salome et al., 2006; McDonald et al., 2010; Pope et al., 2010; Fabian, 2012; Tremblay et al., 2015; McNamara et al., 2016; Chen et al., 2019). Fabian et al. (2003) note that filamentary structures in the central BCGs of galaxy clusters are associated with H\(\alpha\) emission and soft X-ray emission, both of which are consistent with what is seen in Obj1757C. Additionally, A2390 is a cool-core cluster, and these clusters quite commonly show bubble activity (e.g. Fabian, 2012). Cool-core clusters have a feeding mechanism for powering an AGN jet mode: pristine cold gas from the cooling flow falls onto the BCG and accretes onto the AGN. As the AGN is fueled by this cooling flow, its jets inflate bubbles of plasma on either side of the galaxy nucleus.

The mechanism by which bubbles rise and maintain their structure is not fully understood. Once the bubbles have formed, one possible scenario is that they separate from the jet and rise buoyantly through the hot ICM (Churazov et al., 2000; McNamara et al., 2000). Models where the bubble is allowed to rise as a rigid body (e.g. due to a magnetic draping layer as proposed in Dursi and Pfrommer, 2008) show that detachment of the bubble from the jet is due to eddies of thin gaseous structures, similar to the H\(\alpha\) filamentary structures seen in observations of the Perseus cluster (Fabian et al., 2003).
The filamentary structures form from gas condensing along the path of a buoyantly rising bubble (Pope et al., 2010; Gaspari et al., 2012; Sharma et al., 2012; Li and Bryan, 2014a,b), or are lifted by the bubble itself (Churazov et al., 2001; Revaz et al., 2008). The structures, after detaching, form a stream of gas falling back toward the cluster center as the bubble rises (Gaspari et al., 2012; McNamara et al., 2016; Zhang et al., 2022). In cool-core clusters, the bubble is later disrupted by cold fronts (Fabian et al., 2022), which limits the number of bubbles observed in the outer ICM regions of cool-core clusters.

### Signatures of AGN Precession?

Multiple episodes of AGN activity are suggested by multi-wavelength data in the literature and our SITELLE H\(\alpha\) observations. The evidence for multiple episodes is summarized and illustrated in Figure 13. Augusto et al. (2006) first suggested that there may be multiple episodes of precessed AGN activity in the A2390 BCG. They analyzed the (multi-wavelength, spatially resolved) radio morphology of the core region of the A2390 BCG using radio interferometry. The authors find their MERLIN 5.0 GHz map indicates a compact size of 0.49\({}^{\prime\prime}\) (\(\sim\)1.5 kpc), with a very young milliarcsecond-scale two-jet North-South structure in VLBI measurements. We stress that the orange arrows in Figure 13 do not trace the extent of the north-south jets, but are in fact much larger than the milliarcsecond scale of the jets and are enlarged to emphasize the orientation with respect to other episodes of AGN activity. Augusto et al. (2006) point out that the jets have a \(\sim\)45\({}^{\circ}\) misalignment from the HST blue lane that extends from the South-east to North-west as seen by Hutchings & Balogh (2000) (see Figure 6). The authors suggest an interpretation of the misalignment as a result of a previous episode of AGN activity in a different orientation. Furthermore, the northern radio jet exhibits a twisted structure, similar to the hook structure of the H\(\alpha\) cone (Figure 3), which bends to the North-east, suggesting there may be a precession of the radio jet.

Savini et al. (2018) analyze the large-scale radio structure of A2390 using LOFAR, noting an extended double lobe of \(\sim 600\) kpc. We refer to these lobes as the east-west lobes, which are shown schematically in Figure 13 as dark red arrows, with dark red circles indicating the location and extent of the widest section of the lobes. The radio lobes of A2390 are larger than any other in their sample of 9 clusters, because usually the ICM prevents lobes this size from forming in clusters (Wing and Blanton, 2011; Savini et al., 2018). Structures with a similar radio lobe scale include the BCG of the cluster MS0735.6+7421 (\(\sim\)600 kpc; McNamara et al., 2009; Vantyghem et al., 2014) and the BCG of the Ophiuchus cluster (\(\sim\)570 kpc; Giacintucci et al., 2020).

Figure 12: BPT map with pixels color coded by location on the BPT diagram (Figure 11). The H\(\alpha\) and [Nii] flux ratio map was smoothed using a 3\(\times\)3 square filter, and the [Oiii] and H\(\beta\) flux ratio map was smoothed using a 5\(\times\)5 square filter as shown in Figure 8 to reduce noise. Regions without colored pixels were either outside the H\(\alpha\) contours or contained an unusable flux for flux ratio analysis. The galaxy nucleus is dominated by LINER-like emission, powered by its AGN. The cone region is dominated by composite emission.
While H\(\alpha\) is observed in these galaxies, in contrast to A2390 there is no extended nebular line structure associated with the BCG itself in either cluster (McNamara et al., 2009; McDonald et al., 2010; Durret et al., 2015). Savini et al. (2018) argue that the lobes observed are remnants of an older AGN active phase, because the radio source at the BCG core contains a relatively flat spectrum, whereas the lobes have a steeper spectrum and appear to be fading. They suggest a possible scenario, consistent with that proposed by Augusto et al. (2006), in which there have been two epochs of AGN and outflow activity. The older AGN activity is marked by the large, fading east-west lobes, and the newer AGN episode would be associated with the small-scale north-south jets. However, we note that Savini et al. (2018) did not consider the extended optical emission, both continuum and emission line, found in the HST image with a PA in between those of the radio jets and the radio lobes.

Figure 13: SITELLE deep image of the central pointing of A2390 (\(z=0.228\)) on which sources from radio, optical, and X-ray observations are marked. Larger-scale radio lobes observed in Savini et al. (2018) by LOFAR are seen in dark red, extending out past \(\sim\)300 kpc. The sizes and locations of the red circles represent the sizes and locations of the radio lobe contours in Savini et al. (2018). A region of flat spectral index radio emission not marked as a lobe is indicated by the violet arrow. These arrows mark the locations and fullest extent of the measured radio regions. Locations of five X-ray cavities measured in Sonkamble et al. (2014) are indicated with cyan crosses. The size of the crosses is not indicative of the size of the cavities. The inset is a \(7^{\prime\prime}\times 7^{\prime\prime}\) cutout in which the milliarcsecond-scale north-south radio jets observed in Augusto et al. (2006) are marked in orange. Note that the arrows are indicated for direction only and do not represent the size of the jet, which is smaller than is feasible to depict at this scale. The H\(\alpha\) contours are shown in red. The radio lobes and jets radiate from the radio center of the BCG, and the H\(\alpha\) cone radiates from the H\(\alpha\) centroid measured as in Section 2.2. Note that the position angle of the LOFAR flat spectral index radio emission region is close to those of the H\(\alpha\) cone and one of the X-ray cavities.

Savini et al. (2018) also include a spectral index map of A2390 (their Figure 13). There is a region of flat spectral index sources to the northwest of the BCG, which was not commented upon by the authors. This region's direction and greatest extent are marked by a violet arrow in Figure 13. This region is located at a similar orientation to the H\(\alpha\) cone (see Figure 13), although it extends to \(\sim 1.5\) arcminutes (\(\sim\)275 kpc). The flat radio spectrum region, along with the H\(\alpha\) cone, can be interpreted as being associated with the same AGN activity. The X-ray analysis of Sonkamble et al. (2014) confirms the presence of AGN feedback activity associated with the formation of buoyantly-rising bubbles from the soft X-ray spectra. Additionally, the multiple X-ray cavities at varying orientations indicate that the direction of the feedback is changing with time. There are four major X-ray cavities in the north, south, east, and west of the BCG, but Sonkamble et al. (2014) also observe a smaller cavity to the northwest of the BCG.
This cavity is located at a similar orientation to the H\(\alpha\) cone observed by SITELLE and the flat spectral index radio region measured by LOFAR (see Figure 13). Combining all these data, we propose a possible scenario of a third episode of AGN activity associated with the H\(\alpha\) cone, with an age between those of the episodes that gave rise to the large radio lobes and the milli-arcsecond jets. The emission line cone is possibly associated with an intermediate age episode of AGN emission that is lifting cooler ionized and molecular gas into the ICM, forming these X-ray deficient cavities. An alternative explanation is that the H\(\alpha\) cone, together with the X-ray cavities and radio regions associated with it, is an example of one of these bubbles that has reached its maximum rising height, with the gas it entrained behind it having fallen back toward the center of the cluster (Fabian et al., 2003). As it falls, it forms a nebular filamentary structure with star formation present.

All lines of evidence, including the radio and X-ray data, point to Obj1757C harboring an extraordinary multi-episode active AGN, likely a kinetic-mode AGN, receiving a cold gas stream which is feeding the central SMBH. The kinetic mode in cool-core clusters is associated with the formation of the bubbles seen in the Perseus cluster (Fabian et al., 2003, 2011), but the ongoing question is whether this bubble formation happens in bursts or is a constant process that occasionally results in a detached bubble (Fabian, 2012).

## 6 Conclusions

We present CFHT/SITELLE observations of the BCG (named Obj1757C) of the Abell 2390 galaxy cluster located at \(z=0.228\). Obj1757C displays a complex morphology, most notably an extended region of emission in HST/F555W, HST/F850LP, Spitzer/IRAC 8\(\mu\)m, Chandra 0.5-1.2 keV and 1.2-2 keV, and now H\(\alpha\) emission line imaging, referred to as the "cone". The origin of the cone was unclear due to multiple possible processes that could produce this emission line morphology. To investigate the origin of the extended region, we create emission line flux maps of the blended [Oii] doublet (from the C1 filter), the [Oiii] doublet and H\(\beta\) (C2), and the [Nii] doublet and H\(\alpha\) (C4). The BCG emission line flux maps, continuum maps, and kinematic properties were extracted with the LUCI software, which performs a Bayesian analysis of each pixel of a SITELLE IFU data cube and extracts physical properties by fitting a SincGauss function to each emission line.

Emission line ratio maps are derived from our emission line flux maps, allowing us to perform a spatially resolved BPT analysis of the source of the ionizing radiation. Emission line ratios show that the nucleus is dominated by LINER emission from the AGN, and that the cone is in a composite region of the BPT diagram that is fuelled by both photo-ionization and shock heating. However, low values for \(\sigma_{g}\) show that the cone is not shock dominated, so fast shocks cannot be powering the ionization mechanism. Slower shocks, the interplay between hot plasma and cold gas, and star-forming regions could alternatively fuel the ionization. Our results indicate that Obj1757C is consistent with being a kinetic-mode AGN that is forming H\(\alpha\) filaments due to the inflation, detachment, and rising of buoyant plasma-filled bubbles.
The kinematic maps indicate that Obj1757C has a velocity gradient of 700 km s\({}^{-1}\) between the nucleus and the end of the extended emission region, a high \(\sigma_{g}\) nuclear region (\(\sim 240\) km s\({}^{-1}\)), and a low \(\sigma_{g}\) region in the cone (\(\sim 120\) km s\({}^{-1}\)). Our SITELLE H\(\alpha\) observations, in combination with radio interferometry observations (Augusto et al., 2006; Savini et al., 2018) and Chandra X-ray imaging (Sonkamble et al., 2014), point to at least three epochs of AGN outflow activity. A very young milli-arcsecond scale North-South jet system is identified in Augusto et al. (2006). Two large-scale (\(\sim\)600 kpc) fading radio lobes to the East-West in Figure 13 are remnants of an older episode of activity as discussed in Savini et al. (2018). There is a region of flat spectral index radio sources located at a similar orientation to, but at a much greater extent than, the H\(\alpha\) cone, and this marks another episode of AGN activity. X-ray cavities associated with each of these episodes are observed in Sonkamble et al. (2014). We propose that these data point to three episodes of AGN activity, with the H\(\alpha\) cone associated with an intermediate-age epoch of AGN activity.

Further information is needed to constrain the source of the ionizing radiation in the cone of Obj1757C. The source of ionization will provide information on the physical processes of kinetic mode AGN in galaxy clusters, and indicate how metal-rich gases are spread through the ICM from the cluster BCG. High spatial resolution H\(\alpha\) imaging or IFU data and high-resolution IR imaging from JWST could provide information on the structure of the cone and clumpy regions seen in the HST imaging, and possible star formation or shock activity causing the ionization of the gas. It would also be useful for determining the smaller-scale structure of the H\(\alpha\) cone, providing information on whether the cone is moving outward from the BCG or is a filament falling back onto the BCG. Additionally, deeper, higher resolution radio interferometry of the flat-spectrum radio sources in A2390 would be useful for determining the properties of the intermediate-age episode of AGN activity.

## Acknowledgements

We thank the anonymous referee for their assistance with this article. The research of LYA is supported by an NSERC Discovery grant to HKCY. JHL thanks the NSERC Discovery grant program, the Discovery Accelerator Supplements program and the Canada Research Chair program. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated from the summit of Maunakea by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. The observations at the Canada-France-Hawaii Telescope were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site, and we thank the indigenous and resident Hawaiian population for being our gracious hosts. Based on observations obtained with SITELLE, a joint project between Université Laval, ABB-Bomem, Université de Montréal and the CFHT with funding support from the Canada Foundation for Innovation (CFI), the Natural Sciences and Engineering Research Council of Canada (NSERC), Fonds de Recherche du Québec - Nature et Technologies (FRQNT) and CFHT.
This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources (Bradley et al., 2020). ## Data Availability The data underlying this article are available on the Canadian Astronomical Data Centre website, at [https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/](https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/). The datasets can be downloaded using the proposal IDs listed in Table 1.
2310.03219
$c_1$-cohomological rigidity for smooth toric Fano varieties of Picard number two
The $c_1$-cohomological rigidity conjecture states that two smooth toric Fano varieties are isomorphic as varieties if there is a $c_1$-preserving isomorphism between their integral cohomology rings. In this paper, we confirm the conjecture for smooth toric Fano varieties of Picard number two.
Yunhyung Cho, Eunjeong Lee, Mikiya Masuda, Seonjeong Park
2023-10-05T00:25:56Z
http://arxiv.org/abs/2310.03219v1
# \(c_{1}\)-cohomological rigidity for smooth toric Fano varieties of Picard number two

###### Abstract.

The \(c_{1}\)-cohomological rigidity conjecture states that two smooth toric Fano varieties are isomorphic as varieties if there is a \(c_{1}\)-preserving isomorphism between their integral cohomology rings. In this paper, we confirm the conjecture for smooth toric Fano varieties of Picard number two.

\({}^{*}\) S. Park is the corresponding author.

_Dedicated to Professor Victor M. Buchstaber on his 80th birthday_

###### Contents

* 1 Introduction
* 2 Generalized Bott manifolds
* 3 Fano condition
* 4 Two-stage Fano generalized Bott manifolds
* 5 Related cohomological rigidity

## 1. Introduction

Motivated by McDuff's question mentioned later, we posed the following conjecture in [5].

**Conjecture 1.1** ([5, Conjecture 1.4]).: _Let \(X\) and \(Y\) be smooth toric Fano varieties. If there exists a \(c_{1}\)-preserving graded ring isomorphism between their integral cohomology rings, then \(X\) and \(Y\) are isomorphic as varieties, where \(c_{1}\)-preserving means preserving the first Chern classes of \(X\) and \(Y\)._

Neither the Fano condition nor the \(c_{1}\)-preserving condition can be dropped, as is observed for Hirzebruch surfaces. We say that a smooth toric Fano variety \(X\) is \(c_{1}\)_-cohomologically rigid_ if any smooth toric Fano variety \(Y\) which allows a \(c_{1}\)-preserving graded ring isomorphism \(H^{*}(Y;\mathbb{Z})\to H^{*}(X;\mathbb{Z})\) is isomorphic to \(X\) as a variety. Then Conjecture 1.1 is equivalent to saying that every smooth toric Fano variety is \(c_{1}\)-cohomologically rigid. The \(c_{1}\)-cohomological rigidity is verified for Fano Bott manifolds ([5]), and for smooth toric Fano varieties of dimension up to four or of Picard number greater than or equal to \(2n-2\) ([17]), where \(n\) is the complex dimension of the Fano variety. In each dimension \(n\), the complex projective space \(\mathbb{C}P^{n}\) is the only smooth toric variety of Picard number one, and hence is \(c_{1}\)-cohomologically rigid.
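The Hirzebruch-surface caveat can be made explicit. The following computation is standard and is spelled out here only for illustration; the convention \(F_{a}=P(\mathcal{O}\oplus\mathcal{O}(a))\) over \(\mathbb{C}P^{1}\) and the class names \(x,\xi\) are ours, not quotations from this paper.

\[H^{*}(F_{a};\mathbb{Z})\;\cong\;\mathbb{Z}[x,\xi]\big/\big\langle\,x^{2},\;\xi(\xi-ax)\,\big\rangle,\qquad c_{1}(F_{a})=(2-a)\,x+2\,\xi.\]

For \(a=2\), the substitution \(\xi^{\prime}=\xi-x\) gives \(\xi^{\prime 2}=\xi^{2}-2x\xi+x^{2}=0\), so \(H^{*}(F_{2};\mathbb{Z})\cong H^{*}(F_{0};\mathbb{Z})=H^{*}(\mathbb{C}P^{1}\times\mathbb{C}P^{1};\mathbb{Z})\) as graded rings, even though \(F_{2}\not\cong F_{0}\) as varieties. Since \(F_{2}\) is not Fano, this does not contradict Conjecture 1.1, but it shows that the Fano hypothesis cannot be dropped.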
**Theorem 1.2**.: _Every smooth toric Fano variety of Picard number two is \(c_{1}\)-cohomologically rigid._

Following McDuff's question on the uniqueness of toric structures, two toric structures \(T_{1}\) and \(T_{2}\) on a monotone symplectic manifold \((M,\omega)\) are regarded as equivalent if there is a symplectomorphism \(f\colon(M,\omega,T_{1})\to(M,\omega,T_{2})\) together with an isomorphism \(\sigma\colon T_{1}\to T_{2}\) such that \(f(gp)=\sigma(g)f(p)\) for \(g\in T_{1}\) and \(p\in M.\) This means that \(f\) is in \(\operatorname{Ham}(M,\omega)\) and \(T_{1}\) is conjugate to \(T_{2}\) in \(\operatorname{Ham}(M,\omega)\) by \(f\). Thus, we obtain the following as a corollary of Theorem 1.2.

**Corollary 1.5** ([15, Corollary 1.15]).: _The monotone toric manifold associated with a smooth toric Fano variety of Picard number two has a unique toric structure._

This paper is organized as follows. In Section 2, we briefly review generalized Bott manifolds and the presentation of their cohomology ring in terms of so-called generalized Bott matrices. In Section 3, we investigate when generalized Bott manifolds are Fano by applying Batyrev's criterion. In Section 4, we prove Theorem 1.2. In Section 5, we discuss related cohomological rigidity problems and results.

### Acknowledgements

Y. Cho was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP; Ministry of Science, ICT & Future Planning) (No. 2020R1C1C1A01010972) and (No. 2020R1A5A1016126). E. Lee was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00239947). M. Masuda was supported in part by JSPS Grant-in-Aid for Scientific Research 22K03292. S. Park was supported by the National Research Foundation of Korea [NRF-2020R1A2C1A01011045]. This work was partly supported by Osaka Central Advanced Mathematical Institute (MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics) and the HSE University Basic Research Program.

## 2. Generalized Bott manifolds

In this section, we recall some basic facts on generalized Bott manifolds from [7].

**Definition 2.1** ([7]).: A _generalized Bott tower_ \(\mathcal{B}_{\bullet}\) of height \(m\) is an iterated \(\mathbb{C}P^{n_{i}}\)-bundle

\[\mathcal{B}_{m}\xrightarrow{\ \pi_{m}\ }\mathcal{B}_{m-1}\xrightarrow{\ \pi_{m-1}\ }\cdots\xrightarrow{\ \pi_{2}\ }\mathcal{B}_{1}\xrightarrow{\ \pi_{1}\ }\mathcal{B}_{0}=\{\text{a point}\},\]

where each \(\mathcal{B}_{i}=P(\underline{\mathbb{C}}\oplus\xi_{i-1}^{1}\oplus\cdots\oplus\xi_{i-1}^{n_{i}})\) is the complex projectivization of the Whitney sum of line bundles \(\xi_{i-1}^{k}\) (\(1\leq k\leq n_{i}\)) and the trivial line bundle \(\underline{\mathbb{C}}\) over \(\mathcal{B}_{i-1}\); in particular, \(\mathcal{B}_{1}=\mathbb{C}P^{n_{1}}\). We call \(\mathcal{B}_{m}\) an \(m\)-stage _generalized Bott manifold_. When \(n_{i}=1\) for every \(i\), a generalized Bott tower is called a _Bott tower_, and accordingly a generalized Bott manifold is called a _Bott manifold_.

The fiber of the bundle \(\pi_{i}\colon\mathcal{B}_{i}\to\mathcal{B}_{i-1}\) is the complex projective space \(\mathbb{C}P^{n_{i}}\), and if the line bundles \(\xi_{i-1}^{k}\) constructing the tower \(\mathcal{B}_{\bullet}\) are all trivial, then \(\mathcal{B}_{m}\) is isomorphic to \(\prod_{i=1}^{m}\mathbb{C}P^{n_{i}}\) as a variety.
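Since the theorem above concerns Picard number two, it may help to write out the two-stage case explicitly. The identification below uses \(\gamma_{1}=\mathcal{O}_{\mathbb{C}P^{n_{1}}}(-1)\) together with the tautological-bundle convention in (2.1) below; it is a standard observation rather than a quotation from the text.

\[\mathcal{B}_{2}\;=\;P\bigl(\underline{\mathbb{C}}\oplus\mathcal{O}_{\mathbb{C}P^{n_{1}}}(-a_{2,1}^{1})\oplus\cdots\oplus\mathcal{O}_{\mathbb{C}P^{n_{1}}}(-a_{2,1}^{n_{2}})\bigr)\;\longrightarrow\;\mathbb{C}P^{n_{1}},\]

a projectivized sum of line bundles over a projective space, which has Picard number two; by Kleinschmidt's classification, every smooth projective toric variety of Picard number two arises this way, and for \(n_{1}=n_{2}=1\) one recovers the Hirzebruch surfaces.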
Let \(\gamma_{j}\) be the tautological line bundle over \(\mathcal{B}_{j}.\) By abuse of notation, we denote the pullback of \(\gamma_{j}\) by the projection \(\pi_{i}\circ\cdots\circ\pi_{j+1}\colon\mathcal{B}_{i}\to\mathcal{B}_{j}\) for \(i>j\) by the same notation \(\gamma_{j}.\) Then the Picard group \(\operatorname{Pic}(\mathcal{B}_{i})\) of \(\mathcal{B}_{i}\) is generated by the line bundles \(\gamma_{j}\) for \(1\leq j\leq i\) and isomorphic to \(\mathbb{Z}^{i}\). Thus each line bundle \(\xi_{i-1}^{k}\) (where \(1\leq k\leq n_{i}\)) over \(\mathcal{B}_{i-1}\) can be expressed by

\[\xi_{i-1}^{k}=\bigotimes_{1\leq j<i}\gamma_{j}^{\otimes a_{i,j}^{k}} \tag{2.1}\]

for some integers \(a_{i,j}^{k}\in\mathbb{Z}\) with \(1\leq j<i\). Accordingly, the set \(\{a_{i,j}^{k}\}_{\begin{subarray}{c}1\leq j<i\leq m,\\ 1\leq k\leq n_{i}\end{subarray}}\) of integers determines a generalized Bott manifold.

The projection map \(\pi_{i}\colon\mathcal{B}_{i}\to\mathcal{B}_{i-1}\) admits a section induced from the zero section of the vector bundle \(\underline{\mathbb{C}}\oplus\bigoplus_{k=1}^{n_{i}}\xi_{i-1}^{k}\) so that the induced ring homomorphism

\[\pi_{i}^{*}\colon H^{*}(\mathcal{B}_{i-1};\mathbb{Z})\to H^{*}(\mathcal{B}_{i};\mathbb{Z})\]

is injective and we think of elements in \(H^{*}(\mathcal{B}_{j};\mathbb{Z})\) as elements in \(H^{*}(\mathcal{B}_{m};\mathbb{Z})\) for any \(j\leq m\). We set

\[x_{j}:=-c_{1}(\gamma_{j})\in H^{2}(\mathcal{B}_{m};\mathbb{Z}).\]

Then it follows from (2.1) that

\[c_{1}(\xi_{i-1}^{k})=-\sum_{j=1}^{i-1}a_{i,j}^{k}x_{j}\in H^{2}(\mathcal{B}_{m};\mathbb{Z}).\]

A generalized Bott manifold \(\mathcal{B}_{m}\) is a smooth projective toric variety of \(\mathbb{C}\)-dimension \(n\coloneqq\sum_{i=1}^{m}n_{i}\), where the algebraic torus action can be constructed in an iterative way using a toric structure of a base space and a \((\mathbb{C}^{*})^{n_{i}}\)-action on a fiber at each stage. The associated fan \(\Sigma\) can be described as follows. (See also [12, §7.3].) Let \(\{\mathbf{e}_{1}^{1},\ldots,\mathbf{e}_{1}^{n_{1}},\ldots,\mathbf{e}_{m}^{1},\ldots,\mathbf{e}_{m}^{n_{m}}\}\) be the standard basis vectors of \(\mathbb{R}^{n}=\mathbb{R}^{n_{1}}\oplus\cdots\oplus\mathbb{R}^{n_{m}}\). Then the set of ray generators of \(\Sigma\) is given by the columns of the matrix

\[(E\ |\ A)\coloneqq\begin{pmatrix}E_{n_{1}}&&&&-\mathds{1}&&&&\\ &E_{n_{2}}&&&&\mathbf{a}_{2,1}&-\mathds{1}&&\\ &&E_{n_{3}}&&&\mathbf{a}_{3,1}&\mathbf{a}_{3,2}&-\mathds{1}&&\\ &&&\ddots&&\vdots&\vdots&\ddots&\ddots&\\ &&&&E_{n_{m}}&\mathbf{a}_{m,1}&\mathbf{a}_{m,2}&\cdots&\mathbf{a}_{m,m-1}&-\mathds{1}\end{pmatrix}, \tag{2.2}\]

where \(E_{n_{j}}\) is the identity matrix of size \(n_{j}\), i.e., the column vectors of \(E_{n_{j}}\) are \(\mathbf{e}_{j}^{1},\ldots,\mathbf{e}_{j}^{n_{j}}\), and

\[\mathbf{a}_{i,j}\coloneqq(a_{i,j}^{1},\ldots,a_{i,j}^{n_{i}})^{T}\in\mathbb{Z}^{n_{i}}\qquad\text{ and }\qquad-\mathds{1}=(-1,\ldots,-1)^{T}\]

for \(j=1,\ldots,m\). For simplicity, we denote the \((n+j)\)th column vector of (2.2) by \(\mathbf{v}_{j}\) for \(j=1,\ldots,m\).
Then it is easy to see that \(\mathcal{P}\) is a maximal cone in \(\Sigma\) if and only if \(\mathcal{P}\) is of the form

\[\mathcal{P}=\operatorname{Cone}(\widehat{\mathcal{P}}_{1}\cup\cdots\cup\widehat{\mathcal{P}}_{m}),\]

where

\[R_{j}=\{\mathbf{e}_{j}^{1},\ldots,\mathbf{e}_{j}^{n_{j}},\mathbf{v}_{j}\},\quad\widehat{\mathcal{P}}_{j}=R_{j}\backslash\{\mathbf{w}_{j}\}\quad\text{ for some }\mathbf{w}_{j}\in R_{j},\ j=1,\ldots,m.\]

In particular, \(\Sigma\) is combinatorially equivalent to the product fan \(\Sigma_{1}\times\cdots\times\Sigma_{m}\), where \(\Sigma_{i}\) is the fan of \(\mathbb{C}P^{n_{i}}\) and so there are \(\prod_{j=1}^{m}(n_{j}+1)\) maximal cones. We call the matrix \(A=[\mathbf{v}_{1}\ldots\mathbf{v}_{m}]\) a _generalized Bott matrix_ of type \((n_{1},\ldots,n_{m})\).

**Example 2.2**.: A generalized Bott matrix \(A\) of type \((1,4,2)\) has the following form:

\[A=\begin{bmatrix}-\mathds{1}&\mathbf{0}&\mathbf{0}\\ \mathbf{a}_{2,1}&-\mathds{1}&\mathbf{0}\\ \mathbf{a}_{3,1}&\mathbf{a}_{3,2}&-\mathds{1}\end{bmatrix}=\begin{bmatrix}-1&0&0\\ a_{2,1}^{1}&-1&0\\ a_{2,1}^{2}&-1&0\\ a_{2,1}^{3}&-1&0\\ a_{2,1}^{4}&-1&0\\ a_{3,1}^{1}&a_{3,2}^{1}&-1\\ a_{3,1}^{2}&a_{3,2}^{2}&-1\end{bmatrix}.\]

We set

\[\alpha_{i}^{k}:=a_{i,1}^{k}x_{1}+\cdots+a_{i,i-1}^{k}x_{i-1}\in H^{2}(\mathcal{B}_{m};\mathbb{Z})\qquad(i=2,\ldots,m,\quad k=1,\ldots,n_{i}).\]

By the Borel-Hirzebruch formula, the integral cohomology ring of \(\mathcal{B}_{m}\) can be represented by

\[H^{*}(\mathcal{B}_{m};\mathbb{Z})=\mathbb{Z}[x_{1},\ldots,x_{m}]\Bigg{/}\left\langle x_{1}^{n_{1}+1},\ x_{i}\prod_{k=1}^{n_{i}}(x_{i}-\alpha_{i}^{k})\ \ (i=2,\ldots,m)\right\rangle, \tag{2.3}\]

where \(\langle\ \rangle\) denotes the ideal generated by the elements in it. The total Chern class of \(\mathcal{B}_{m}\) is written by

\[c(\mathcal{B}_{m})=(1+x_{1})^{n_{1}+1}\prod_{i=2}^{m}\left[(1+x_{i})\prod_{k=1}^{n_{i}}(1+x_{i}-\alpha_{i}^{k})\right]\]

and in particular, we have

\[c_{1}(\mathcal{B}_{m})=(n_{1}+1)x_{1}+\sum_{i=2}^{m}\left\{(n_{i}+1)x_{i}-\sum_{k=1}^{n_{i}}\alpha_{i}^{k}\right\}=\sum_{i=1}^{m}(n_{i}+1)x_{i}-\sum_{i=2}^{m}\sum_{k=1}^{n_{i}}\alpha_{i}^{k}. \tag{2.4}\]

**Remark 2.3**.: 1. We obtain \(H^{*}(\mathcal{B}_{m};\mathbb{Z})\cong H^{*}(\prod_{i=1}^{m}\mathbb{C}P^{n_{i}};\mathbb{Z})\) as _groups_, so \(H^{*}(\mathcal{B}_{m})\) determines the multiset \(\{n_{1},\ldots,n_{m}\}\) of fiber dimensions in the generalized Bott tower \(\mathcal{B}_{\bullet}\).
2. A smooth compact toric variety whose cohomology ring is isomorphic to the cohomology ring of a generalized Bott manifold is a generalized Bott manifold. (See [7] or [8].)

## 3. Fano condition

In this section, we describe the Fano condition on generalized Bott manifolds using Batyrev's criterion in [1, 2]. We refer the reader to [26] for another description of the Fano condition on generalized Bott manifolds.

For a complete nonsingular fan \(\Sigma\), a subset \(R\) of the primitive ray vectors is called a _primitive collection_ of \(\Sigma\) if

\[\mathrm{Cone}(R)\notin\Sigma\quad\text{ but }\quad\mathrm{Cone}(R\setminus\{\mathbf{u}\})\in\Sigma\quad\text{ for every }\mathbf{u}\in R.\]

We denote by \(\mathrm{PC}(\Sigma)\) the set of primitive collections of \(\Sigma\). For a primitive collection \(R=\{\mathbf{u}_{1}^{\prime},\ldots,\mathbf{u}_{\ell}^{\prime}\}\), we have \(\mathbf{u}_{1}^{\prime}+\cdots+\mathbf{u}_{\ell}^{\prime}=\mathbf{0}\) or there exists a unique cone \(\sigma\) of positive dimension such that \(\mathbf{u}_{1}^{\prime}+\cdots+\mathbf{u}_{\ell}^{\prime}\) is in the interior of \(\sigma\).
That is,

\[\mathbf{u}_{1}^{\prime}+\cdots+\mathbf{u}_{\ell}^{\prime}=\begin{cases}\mathbf{0},&\text{or}\\ \lambda_{1}\mathbf{u}_{1}+\cdots+\lambda_{s}\mathbf{u}_{s},\end{cases} \tag{3.1}\]

where \(\mathbf{u}_{1},\ldots,\mathbf{u}_{s}\) are the primitive generators of \(\sigma\) and \(\lambda_{1},\ldots,\lambda_{s}\) are positive integers. We call (3.1) a _primitive relation_, and the _degree_ \(\deg R\) of a primitive collection \(R\) is defined to be

\[\deg R:=\begin{cases}\ell&\text{if }\mathbf{u}_{1}^{\prime}+\cdots+\mathbf{u}_{\ell}^{\prime}=\mathbf{0},\\ \ell-(\lambda_{1}+\cdots+\lambda_{s})&\text{otherwise}.\end{cases} \tag{3.2}\]

**Proposition 3.1** ([2, Proposition 2.3.6]).: _A smooth compact toric variety \(X\) is Fano if and only if \(\deg R>0\) for every primitive collection \(R\) of the fan \(\Sigma\) of \(X\)._

From the description of the fan \(\Sigma\) of \(\mathcal{B}_{m}\), we have

\[\operatorname{PC}(\Sigma)=\{R_{j}\mid j=1,\ldots,m\}=\{\{\mathbf{v}_{j},\mathbf{e}_{j}^{1},\ldots,\mathbf{e}_{j}^{n_{j}}\}\mid j=1,\ldots,m\}.\]

For each primitive collection \(R_{j}\)\((1\leq j<m)\), we obtain nonnegative integer vectors \(\boldsymbol{\lambda}_{i,j}=(\lambda_{i,j}^{0},\lambda_{i,j}^{1},\ldots,\lambda_{i,j}^{n_{i}})\in\mathbb{Z}_{\geq 0}^{n_{i}+1}\)\((1\leq j<i\leq m)\) such that

\[\mathbf{v}_{j}+\mathbf{e}_{j}^{1}+\cdots+\mathbf{e}_{j}^{n_{j}}=\sum_{i=j+1}^{m}\left(\lambda_{i,j}^{0}\mathbf{v}_{i}+\sum_{k=1}^{n_{i}}\lambda_{i,j}^{k}\mathbf{e}_{i}^{k}\right)\quad\text{and}\quad\prod_{k=0}^{n_{i}}\lambda_{i,j}^{k}=0\quad\text{for each }i.\]

The following lemma follows immediately from Proposition 3.1.

**Lemma 3.2**.: _The generalized Bott manifold \(\mathcal{B}_{m}\) is Fano if and only if_

\[\sum_{i=j+1}^{m}\sum_{k=0}^{n_{i}}\lambda_{i,j}^{k}\leq n_{j}\quad\text{ for all }j=1,\ldots,m. \tag{3.3}\]

**Example 3.3**.: For a generalized Bott manifold \(\mathcal{B}_{3}\) associated with a generalized Bott matrix of type \((n_{1},n_{2},n_{3})=(3,2,2)\) defined by

\[A=\begin{bmatrix}-\mathds{1}&\mathbf{0}&\mathbf{0}\\ \mathbf{a}_{2,1}&-\mathds{1}&\mathbf{0}\\ \mathbf{a}_{3,1}&\mathbf{a}_{3,2}&-\mathds{1}\end{bmatrix}=\begin{bmatrix}-1&0&0\\ -1&0&0\\ -1&0&0\\ -1&-1&0\\ -1&-1&0\\ 1&1&-1\\ 2&0&-1\end{bmatrix},\]

we have

\[\mathbf{v}_{1}+\mathbf{e}_{1}^{1}+\mathbf{e}_{1}^{2}+\mathbf{e}_{1}^{3}=(0,0,0,-1,-1,1,2)^{T}=\mathbf{v}_{2}+2\mathbf{e}_{3}^{2},\]
\[\mathbf{v}_{2}+\mathbf{e}_{2}^{1}+\mathbf{e}_{2}^{2}=(0,0,0,0,0,1,0)^{T}=\mathbf{e}_{3}^{1},\]
\[\mathbf{v}_{3}+\mathbf{e}_{3}^{1}+\mathbf{e}_{3}^{2}=\mathbf{0}.\]

Therefore, the integer vectors \(\boldsymbol{\lambda}_{i,j}\) are given as follows:

\[\boldsymbol{\lambda}_{2,1}=(\lambda_{2,1}^{0},\lambda_{2,1}^{1},\lambda_{2,1}^{2})=(1,0,0),\quad\boldsymbol{\lambda}_{3,1}=(\lambda_{3,1}^{0},\lambda_{3,1}^{1},\lambda_{3,1}^{2})=(0,0,2),\]
\[\boldsymbol{\lambda}_{3,2}=(\lambda_{3,2}^{0},\lambda_{3,2}^{1},\lambda_{3,2}^{2})=(0,1,0).\]

Since we have

\[\sum_{i=2}^{3}\sum_{k=0}^{n_{i}}\lambda_{i,1}^{k}=3\leq n_{1}=3\quad\text{and}\quad\sum_{k=0}^{n_{3}}\lambda_{3,2}^{k}=1\leq n_{2}=2,\]

the generalized Bott manifold \(\mathcal{B}_{3}\) is Fano by Lemma 3.2.

## 4. Two-stage Fano generalized Bott manifolds

It is known that two-stage generalized Bott manifolds are _diffeomorphic_ to each other if and only if their cohomology rings are isomorphic as graded rings ([7, Theorem 6.1]). There are many two-stage generalized Bott manifolds which are diffeomorphic but not isomorphic as varieties to each other.
Moreover, as is observed for Hirzebruch surfaces, two-stage generalized Bott manifolds are not necessarily isomorphic as varieties even if there is a \(c_{1}\)-preserving isomorphism between their integral cohomology rings. In this section, we prove the \(c_{1}\)-cohomological rigidity for two-stage _Fano_ generalized Bott manifolds, that is, we prove Theorem 1.2.

Let \(\mathcal{B}\) be the two-stage generalized Bott manifold associated with a generalized Bott matrix \(A\) of type \((n_{1},n_{2})\):

\[A=\begin{bmatrix}-1&0\\ \vdots&\vdots\\ -1&0\\ a_{1}&-1\\ \vdots&\vdots\\ a_{n_{2}}&-1\end{bmatrix}_{(n_{1}+n_{2})\times 2} \tag{4.1}\]

Then

\[\mathcal{B}=P(\underline{\mathbb{C}}\oplus\gamma^{a_{1}}\oplus\cdots\oplus\gamma^{a_{n_{2}}}), \tag{4.2}\]

where \(\gamma\) denotes the tautological line bundle over \(\mathbb{C}P^{n_{1}}\). It follows from (2.3) and (2.4) that

\[H^{*}(\mathcal{B};\mathbb{Z})=\mathbb{Z}[x_{1},x_{2}]\Big{/}\left<x_{1}^{n_{1}+1},x_{2}\prod_{k=1}^{n_{2}}(x_{2}-a_{k}x_{1})\right>, \tag{4.3}\]
\[c_{1}(\mathcal{B})=\left(n_{1}+1-\sum_{k=1}^{n_{2}}a_{k}\right)x_{1}+(n_{2}+1)x_{2}. \tag{4.4}\]

We note that the isomorphism class of \(\mathcal{B}\) as a variety is unchanged by permuting \(a_{k}\)'s, so it depends on the multiset \(\{a_{1},\ldots,a_{n_{2}}\}\). Related to this, we recall an elementary fact used later. The _elementary symmetric polynomials_ in \(n\) variables \(x_{1},\ldots,x_{n}\) for \(r=1,\ldots,n\) are defined by

\[e_{r}(x_{1},\ldots,x_{n})=\sum_{1\leq j_{1}<\cdots<j_{r}\leq n}x_{j_{1}}\ldots x_{j_{r}}.\]

For a vector \((b_{1},\ldots,b_{n})\in\mathbb{Z}^{n}\), the values \(e_{r}(b_{1},\ldots,b_{n})\)\((1\leq r\leq n)\) determine the set \(\{b_{1},\ldots,b_{n}\}\) as a multiset; in particular, if \(e_{r}(b_{1},\ldots,b_{n})=0\) for \(1\leq r\leq n\), then \(b_{k}=0\) for every \(k=1,\ldots,n\). Indeed, this fact immediately follows from the identity

\[\prod_{k=1}^{n}(1+b_{k}t)=1+\sum_{r=1}^{n}e_{r}(b_{1},\ldots,b_{n})t^{r}.\]

**Lemma 4.1**.: _Let \(\mathcal{B}\) be the two-stage generalized Bott manifold associated with (4.1). Suppose that \(n_{1}>n_{2}\) and there exists a nonzero element \(y\in H^{2}(\mathcal{B};\mathbb{Z})\) such that \(y^{n_{2}+1}=0\). Then \(a_{k}=0\) for every \(k=1,\ldots,n_{2}\); in particular, \(\mathcal{B}\) is isomorphic to \(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}}\) as a variety._

Proof.: One can express \(y=px_{1}+qx_{2}\) with integers \(p\) and \(q\) by (4.3). Then

\[0=y^{n_{2}+1}=(px_{1}+qx_{2})^{n_{2}+1}=p^{n_{2}+1}x_{1}^{n_{2}+1}+\binom{n_{2}+1}{1}p^{n_{2}}qx_{1}^{n_{2}}x_{2}+\cdots+q^{n_{2}+1}x_{2}^{n_{2}+1}.\]

Since \(n_{2}+1\leq n_{1}\), the coefficient of \(x_{1}^{n_{2}+1}\) above must vanish by (4.3); so \(p=0\) and hence \(y=qx_{2}\). Since \(y\neq 0\) and \(y^{n_{2}+1}=0\) by assumption, we get \(x_{2}^{n_{2}+1}=0\). Then it follows from (4.3) that we have

\[0=x_{2}\prod_{k=1}^{n_{2}}(x_{2}-a_{k}x_{1})=\sum_{r=1}^{n_{2}}(-1)^{r}e_{r}(a_{1},\ldots,a_{n_{2}})x_{1}^{r}x_{2}^{n_{2}+1-r}.\]

Since \(n_{1}>n_{2}\) by assumption, one can see from (4.3) that the elements \(x_{1}^{r}x_{2}^{n_{2}+1-r}\)\((1\leq r\leq n_{2})\) above are linearly independent; so their coefficients above must vanish. Hence \(a_{k}=0\) for every \(k=1,\ldots,n_{2}\).

Let \(\mathcal{B}\) (resp. \(\widetilde{\mathcal{B}}\)) be a two-stage generalized Bott manifold associated with a generalized Bott matrix of type \((n_{1},n_{2})\) (resp. \((\widetilde{n}_{1},\widetilde{n}_{2})\)).
If their cohomology rings are isomorphic to each other, then \(\{n_{1},n_{2}\}=\{\widetilde{n}_{1},\widetilde{n}_{2}\}\) as sets as remarked in Remark 2.3, so \((\widetilde{n}_{1},\widetilde{n}_{2})=(n_{1},n_{2})\) or \((n_{2},n_{1})\). When their cohomology rings are isomorphic to \(H^{*}(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}};\mathbb{Z})\), both cases can occur but otherwise only the former case occurs as is shown below.

**Lemma 4.2**.: _Let \(\mathcal{B}\) and \(\widetilde{\mathcal{B}}\) be as above. If \(H^{*}(\mathcal{B};\mathbb{Z})\cong H^{*}(\widetilde{\mathcal{B}};\mathbb{Z})\) as graded rings and they are not isomorphic to \(H^{*}(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}};\mathbb{Z})\), then \((\widetilde{n}_{1},\widetilde{n}_{2})=(n_{1},n_{2})\)._

Proof.: Let \(\varphi\colon H^{*}(\widetilde{\mathcal{B}};\mathbb{Z})\to H^{*}(\mathcal{B};\mathbb{Z})\) be an isomorphism as graded rings. Suppose that \((\widetilde{n}_{1},\widetilde{n}_{2})\neq(n_{1},n_{2})\). Then \((\widetilde{n}_{1},\widetilde{n}_{2})=(n_{2},n_{1})\) and \(n_{1}\neq n_{2}\). We may assume \(n_{1}>n_{2}\) because otherwise we interchange the role of \(\mathcal{B}\) and \(\widetilde{\mathcal{B}}\). Let \(\widetilde{x}\) be a nonzero element of \(H^{2}(\widetilde{\mathcal{B}};\mathbb{Z})\) coming from the base space \(\mathbb{C}P^{n_{2}}\) of \(\widetilde{\mathcal{B}}\), so \(\widetilde{x}^{n_{2}+1}=0\). Then since \(\varphi(\widetilde{x})\neq 0\) and \(\varphi(\widetilde{x})^{n_{2}+1}=0\), \(\mathcal{B}\) is isomorphic to \(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}}\) as a variety by Lemma 4.1. This contradicts the assumption on \(H^{*}(\mathcal{B})\), proving the lemma.

For the integers \(a_{k}\)\((1\leq k\leq n_{2})\) in (4.1), we may assume that

\[a_{k}\geq 0\quad\text{ for every }k=1,\ldots,n_{2}. \tag{4.5}\]

Indeed, \(P(E)\) is isomorphic to \(P(E\otimes L)\) for any complex vector bundle \(E\) and any line bundle \(L\); so if the minimum among \(a_{1},\ldots,a_{n_{2}}\), say \(a_{d}\), is negative, then we consider \(P(E\otimes\gamma^{-a_{d}})\) instead of \(P(E)\) for \(E=\underline{\mathbb{C}}\oplus\gamma^{a_{1}}\oplus\cdots\oplus\gamma^{a_{n_{2}}}\) in (4.2), where the exponents \(a_{k}-a_{d}\) of \(\gamma\) appearing in \(E\otimes\gamma^{-a_{d}}\) are all nonnegative.

The assumption (4.5) is convenient to see the Fano condition for \(\mathcal{B}\). As before, we denote the \(j\)th column vector in (4.1) by \(\mathbf{v}_{j}\) where \(j=1,2\). Then

\[\mathbf{v}_{1}+\mathbf{e}_{1}^{1}+\cdots+\mathbf{e}_{1}^{n_{1}}=a_{1}\mathbf{e}_{2}^{1}+\cdots+a_{n_{2}}\mathbf{e}_{2}^{n_{2}}\quad\text{and}\quad\mathbf{v}_{2}+\mathbf{e}_{2}^{1}+\cdots+\mathbf{e}_{2}^{n_{2}}=\mathbf{0}.\]

Since \(a_{k}\geq 0\) for every \(k=1,\ldots,n_{2}\), it follows from Lemma 3.2 that \(\mathcal{B}\) is Fano if and only if

\[\sum_{k=1}^{n_{2}}a_{k}\leq n_{1}. \tag{4.6}\]

We denote the generalized Bott manifold associated with (4.1) by \(\mathcal{B}(n_{1},(a_{1},\ldots,a_{n_{2}}))\), where we take \(a_{k}\geq 0\) for \(k=1,\ldots,n_{2}\) by (4.5).

**Proposition 4.3** (cf. [23, Proposition 1.8]).: _Let \(\mathcal{B}=\mathcal{B}(n_{1},(a_{1},\ldots,a_{n_{2}}))\). Suppose that \(H^{*}(\mathcal{B};\mathbb{Z})\) is isomorphic to \(H^{*}(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}};\mathbb{Z})\) as graded rings. If either_

1. \(n_{1}>n_{2}\)_, or_
2.
\(n_{1}\leq n_{2}\) _and_ \(\mathcal{B}\) _is Fano,_

_then \(a_{k}=0\) for every \(k=1,\ldots,n_{2}\) (so \(\mathcal{B}\) is isomorphic to \(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}}\) as a variety)._

Proof.: It follows from [7, Theorem 6.1] that the assumption \(H^{*}(\mathcal{B};\mathbb{Z})\cong H^{*}(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}};\mathbb{Z})\) is equivalent to the existence of an integer \(b\) such that

\[\prod_{k=1}^{n_{2}}(1+a_{k}x)=(1+bx)^{n_{2}+1}\quad\text{in}\ \ \mathbb{Z}[x]/\langle x^{n_{1}+1}\rangle. \tag{4.7}\]

If \(n_{1}>n_{2}\), then we get \(b=0\) by comparing the coefficients of the term \(x^{n_{2}+1}\) above. Hence \(a_{k}=0\) for every \(k\). If \(n_{1}\leq n_{2}\) and \(\mathcal{B}\) is Fano, then we have \(0\leq\sum_{k=1}^{n_{2}}a_{k}\leq n_{1}<n_{2}+1\) from (4.6). On the other hand, we have \(\sum_{k=1}^{n_{2}}a_{k}=(n_{2}+1)b\) from (4.7). Therefore, \(b=0\) and hence \(\sum_{k=1}^{n_{2}}a_{k}=0\). This implies that \(a_{k}=0\) for every \(k\) because \(a_{k}\geq 0\) for every \(k\).

**Remark 4.4**.: 1. The proof above shows that the generalized Bott tower \(\mathcal{B}_{\bullet}\) of height two with \(\mathcal{B}=\mathcal{B}_{2}\) is trivial under the assumption of the proposition.
2. If the Fano condition is dropped in Proposition 4.3, then \(\mathcal{B}\) is _diffeomorphic_ to \(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}}\) ([7, Corollary 6.3]) but not necessarily isomorphic to \(\mathbb{C}P^{n_{1}}\times\mathbb{C}P^{n_{2}}\) as a variety. Indeed, the Hirzebruch surface \(F_{a}=P(\underline{\mathbb{C}}\oplus\gamma^{a})\), which is a two-stage Bott manifold, has the cohomology ring isomorphic to \(H^{*}(\mathbb{C}P^{1}\times\mathbb{C}P^{1};\mathbb{Z})\) when \(a\) is even but \(F_{a}\) is not isomorphic to \(\mathbb{C}P^{1}\times\mathbb{C}P^{1}\) as a variety unless \(a=0\).
3. When \(n_{1}=n_{2}=1\), we need the Fano condition as remarked above. However, when \(n_{1}=n_{2}\geq 2\), the conclusion of the proposition holds without the Fano condition. Indeed, in this case, one can deduce \(b=0\) from (4.7) so that \(a_{k}=0\) for every \(k\). Note that \(\mathcal{B}(1,(a))\) (i.e. \(n_{1}=n_{2}=1\)) is a Hirzebruch surface \(F_{a}\) and it is Fano if and only if \(a=0,1\).

When \(n_{2}=1\), the following is known.

**Proposition 4.5**.: _Let \(a\) and \(\widetilde{a}\) be nonnegative integers._

1. _When_ \(n_{1}=1\)_, \(H^{*}(\mathcal{B}(1,(a));\mathbb{Z})\cong H^{*}(\mathcal{B}(1,(\widetilde{a}));\mathbb{Z})\) _if and only if_ \(a\equiv\widetilde{a}\pmod{2}\)_._
2. _When_ \(n_{1}>1\)_, \(H^{*}(\mathcal{B}(n_{1},(a));\mathbb{Z})\cong H^{*}(\mathcal{B}(n_{1},(\widetilde{a}));\mathbb{Z})\) _if and only if_ \(a=\widetilde{a}\)_._

Proof.: (1) is well-known and easy to prove. (2) is Proposition 5.2 in [10].

In the cases treated in Propositions 4.3 and 4.5, two-stage _Fano_ generalized Bott manifolds are distinguished as varieties by their cohomology rings. However, this is not true in general.

**Example 4.6**.: Let \(\mathcal{B}=\mathcal{B}(2,(1,1))\) and \(\widetilde{\mathcal{B}}=\mathcal{B}(2,(0,1))\). They are four-dimensional and Fano by (4.6) but not isomorphic as varieties. Indeed, they are ID 70 and ID 141 respectively in the classification list of smooth toric Fano varieties by Øbro ([24]), see also [17, Table 6].
However, there is a graded ring isomorphism \[H^{*}(\mathcal{B};\mathbb{Z})=\mathbb{Z}[x_{1},x_{2}]/\langle x_{1}^{3},x_{2}( x_{2}-x_{1})^{2}\rangle\xrightarrow{\varphi}H^{*}(\widetilde{\mathcal{B}}; \mathbb{Z})=\mathbb{Z}[\widetilde{x}_{1},\widetilde{x}_{2}]/\langle\widetilde{ x}_{1}^{3},\widetilde{x}_{2}^{2}(\widetilde{x}_{2}-\widetilde{x}_{1})\rangle\] defined by \(\varphi(x_{1})=\widetilde{x}_{1}\) and \(\varphi(x_{2})=\widetilde{x}_{1}-\widetilde{x}_{2}\), so that they are diffeomorphic to each other by [7, Theorem 6.1]. Note that \(c_{1}(\mathcal{B})=x_{1}+3x_{2}\) and \(c_{1}(\widetilde{\mathcal{B}})=2\widetilde{x}_{1}+3\widetilde{x}_{2}\) and \(\varphi\) is not \(c_{1}\)-preserving. To treat the case \(n_{2}\geq 2\), we recall a lemma. **Lemma 4.7** ([7, Lemma 6.2]).: _Assume that \(n_{2}\geq 2\) and let \((d_{1},\ldots,d_{n_{2}})\) be a nonzero integer vector. If \((ax+by)^{n_{1}+1}=0\) in \(\mathbb{Z}[x,y]/\langle x^{n_{1}+1},y\prod_{i=1}^{n_{2}}(y+d_{i}x)\rangle\) for some integers \(a\) and \(b\), then we get \(b=0\)._ Below is our main result in this section. **Proposition 4.8**.: _Let \(\mathcal{B}=\mathcal{B}(n_{1},(a_{1},\ldots,a_{n_{2}}))\) and \(\widetilde{\mathcal{B}}=\mathcal{B}(n_{1},(\widetilde{a}_{1},\ldots,\widetilde {a}_{n_{2}}))\). If \(\mathcal{B}\) and \(\widetilde{\mathcal{B}}\) are Fano and there is a \(c_{1}\)-preserving isomorphism between \(H^{*}(\mathcal{B};\mathbb{Z})\) and \(H^{*}(\widetilde{\mathcal{B}};\mathbb{Z})\) as graded rings, then \(\mathcal{B}\) and \(\widetilde{\mathcal{B}}\) are isomorphic as varieties._ Proof.: When \(n_{2}=1\), the theorem follows from Proposition 4.5 (in this case, the \(c_{1}\)-preserving condition is unnecessary). So, we assume \(n_{2}\geq 2\). Moreover, we may assume that both vectors \((a_{1},\ldots,a_{n_{2}})\) and \((\widetilde{a}_{1},\ldots,\widetilde{a}_{n_{2}})\) are nonzero by Proposition 4.3. We may further assume (4.5) and (4.6) for \(a_{k}\)'s and \(\widetilde{a}_{k}\)'s. Under this situation, we prove \(\{a_{1},\ldots,a_{n_{2}}\}=\{\widetilde{a}_{1},\ldots,\widetilde{a}_{n_{2}}\}\) as multisets, which means that \(\mathcal{B}\) and \(\widetilde{\mathcal{B}}\) are isomorphic as varieties. We denote by \(\widetilde{x}_{i}\) the element in \(H^{2}(\widetilde{\mathcal{B}};\mathbb{Z})\) corresponding to \(x_{i}\) for \(i=1,2\). Then \(H^{*}(\widetilde{\mathcal{B}};\mathbb{Z})\) and \(c_{1}(\widetilde{\mathcal{B}})\) have the presentation (4.3) and (4.4) with tilde. Let \(\varphi\colon H^{*}(\widetilde{\mathcal{B}};\mathbb{Z})\to H^{*}(\mathcal{B}; \mathbb{Z})\) be a \(c_{1}\)-preserving graded ring isomorphism. Since \(\varphi(\widetilde{x}_{1})^{n_{1}+1}=\varphi(\widetilde{x}_{1}^{n_{1}+1})=0\) in \(H^{*}(\mathcal{B};\mathbb{Z})\), it follows from Lemma 4.7 that we have \[\varphi(\widetilde{x}_{1})=\epsilon_{1}x_{1}\quad\text{and}\quad\varphi( \widetilde{x}_{2})=px_{1}+\epsilon_{2}x_{2} \tag{4.8}\] for some integer \(p\), where \(\epsilon_{1}\) and \(\epsilon_{2}\) are \(\pm 1\) because \(\varphi\) is an isomorphism. 
Therefore, \[\varphi\left(\widetilde{x}_{2}\prod_{k=1}^{n_{2}}(\widetilde{x}_{2}-\widetilde{ a}_{k}\widetilde{x}_{1})\right)=(\epsilon_{2}x_{2}+px_{1})\prod_{k=1}^{n_{2}}( \epsilon_{2}x_{2}+(p-\widetilde{a}_{k}\epsilon_{1})x_{1}).\] The right hand side above vanishes in \(H^{*}(\mathcal{B};\mathbb{Z})\) because so does the left hand side above by (4.3) for \(\widetilde{\mathcal{B}}.\) It follows from (4.3) that there exist a homogeneous polynomial \(f(x_{1},x_{2})\) of degree \(n_{2}-n_{1}\) when \(n_{2}\geq n_{1}\) (\(f(x_{1},x_{2})=0\) otherwise) and an integer \(q\) such that \[(\epsilon_{2}x_{2}+px_{1})\prod_{k=1}^{n_{2}}(\epsilon_{2}x_{2}+(p-\widetilde{ a}_{k}\epsilon_{1})x_{1})=f(x_{1},x_{2})x_{1}^{n_{1}+1}+qx_{2}\prod_{k=1}^{n_{2}}(x_ {2}-a_{k}x_{1}) \tag{4.9}\] as polynomials in \(x_{1}\) and \(x_{2}.\) Comparing the coefficients of \(x_{2}^{n_{2}+1}\) on both sides above, we get \[\epsilon_{2}^{n_{2}+1}=q. \tag{4.10}\] On the other hand, since \(\varphi\) is \(c_{1}\)-preserving, it follows from (4.4) (for \(\mathcal{B}\) and \(\widetilde{\mathcal{B}}\)) and (4.8) that \[\epsilon_{1}\left(n_{1}+1-\sum_{k=1}^{n_{2}}\widetilde{a}_{k}\right)x_{1}+(n_ {2}+1)(px_{1}+\epsilon_{2}x_{2})=\left((n_{1}+1)-\sum_{k=1}^{n_{2}}a_{k} \right)x_{1}+(n_{2}+1)x_{2}.\] Comparing the coefficients of \(x_{2}\) on both sides above, we get \(\epsilon_{2}=1;\) so \(q=1\) by (4.10) and the identity above reduces to \[\epsilon_{1}\left(n_{1}+1-\sum_{k=1}^{n_{2}}\widetilde{a}_{k}\right)+(n_{2}+1 )p=(n_{1}+1)-\sum_{k=1}^{n_{2}}a_{k}. \tag{4.11}\] Moreover, comparing the coefficients of \(x_{1}x_{2}^{n_{2}}\) on both sides of (4.9) with \(\epsilon_{2}=q=1,\) we get \[(n_{2}+1)p-\epsilon_{1}\sum_{k=1}^{n_{2}}\widetilde{a}_{k}=-\sum_{k=1}^{n_{2} }a_{k}. \tag{4.12}\] By substituting (4.12) into (4.11), we get \(\epsilon_{1}(n_{1}+1)=n_{1}+1.\) Therefore, \(\epsilon_{1}=1\) and hence (4.12) reduces to \[(n_{2}+1)p=\sum_{k=1}^{n_{2}}\widetilde{a}_{k}-\sum_{k=1}^{n_{2}}a_{k}. \tag{4.13}\] **Case 1: \(n_{2}<n_{1}.\)** In this case, we have \(f(x_{1},x_{2})=0,\) so (4.9) with \(\epsilon_{1}=\epsilon_{2}=q=1\) becomes \[(x_{2}+px_{1})\prod_{k=1}^{n_{2}}(x_{2}+(p-\widetilde{a}_{k})x_{1})=x_{2}\prod _{k=1}^{n_{2}}(x_{2}-a_{k}x_{1}) \tag{4.14}\] as polynomials in \(x_{1}\) and \(x_{2}.\) Hence \[p=0\quad\text{ or }\quad p=\widetilde{a}_{k_{0}}\quad\text{for some $1\leq k_{0}\leq n_{2}$ with $\widetilde{a}_{k_{0}}>0$.} \tag{4.15}\] Suppose that the latter case in (4.15) occurs. Then it follows from (4.14) that \(p=-a_{k_{1}}\) for some \(k_{1}\) (\(1\leq k_{1}\leq n_{2}\)) but this is a contradiction because \(\widetilde{a}_{k_{0}}>0\) while \(-a_{k_{1}}\leq 0\) by (4.5). Therefore, the former case in (4.15) must occur, i.e. \(p=0.\) By substituting \(p=0\) in (4.14), we get \[x_{2}\prod_{k=1}^{n_{2}}(x_{2}-\widetilde{a}_{k}x_{1})=x_{2}\prod_{k=1}^{n_{2} }(x_{2}-a_{k}x_{1}).\] Therefore, we obtain \(\{\widetilde{a}_{1},\ldots,\widetilde{a}_{n_{2}}\}=\{a_{1},\ldots,a_{n_{2}}\}\) as multisets. **Case 2: \(n_{2}\geq n_{1}\).** It follows from (4.6) (for \(\mathcal{B}\) and \(\widetilde{\mathcal{B}}\)) that \[\Bigg{|}\,\sum_{k=1}^{n_{2}}a_{k}-\sum_{k=1}^{n_{2}}\widetilde{a}_{k}\ \Bigg{|}\leq n_{1}<n_{2}+1.\] This together with (4.13) implies \(p=0\), so (4.9) with \(p=0\) and \(\epsilon_{1}=\epsilon_{2}=q=1\) becomes \[x_{2}\prod_{k=1}^{n_{2}}(x_{2}-\widetilde{a}_{k}x_{1})=f(x_{1},x_{2})x_{1}^{n_ {1}+1}+x_{2}\prod_{k=1}^{n_{2}}(x_{2}-a_{k}x_{1})\] as polynomials in \(x_{1}\) and \(x_{2}\). 
Comparing the coefficients of \(x_{1}x_{2}^{n_{2}},x_{1}^{2}x_{2}^{n_{2}-1},\ldots,x_{1}^{n_{1}}x_{2}^{n_{2}-n _{1}+1}\) on both sides above, we get \[e_{k}(\widetilde{a}_{1},\ldots,\widetilde{a}_{n_{2}})=e_{k}(a_{1},\ldots,a_{ n_{2}})\qquad\text{for $k=1,\ldots,n_{1}$.} \tag{4.16}\] Since \(a_{k}\)'s are nonnegative integers by (4.5) and \(\sum_{k=1}^{n_{2}}a_{k}\leq n_{1}\) by (4.6), at most \(n_{1}\) elements in \(\{a_{1},\ldots,a_{n_{2}}\}\) are nonzero. The same is true for \(\widetilde{a}_{k}\)'s. Therefore, we get \(\{\widetilde{a}_{1},\ldots,\widetilde{a}_{n_{2}}\}=\{a_{1},\ldots,a_{n_{2}}\}\) as multisets by (4.16). Now we are ready to prove Theorem 1.2. Proof of Theorem 1.2.: Any smooth compact toric variety of Picard number two is a two-stage generalized Bott manifold ([19]). So, suppose that \(X\) and \(Y\) are two-stage Fano generalized Bott manifolds and there is a \(c_{1}\)-preserving graded ring isomorphism between their integral cohomology rings. By Lemma 4.2 and Proposition 4.3, we may assume that \(X\) and \(Y\) are associated with generalized Bott matrices of the same type. Then \(X\) and \(Y\) are isomorphic as varieties by Proposition 4.8. ## 5. Related cohomological rigidity In this section, we overview related cohomological rigidity problems and results. All cohomology groups are taken with \(\mathbb{Z}\) coefficients unless otherwise stated. ### Equivariant cohomology and equivariant first Chern class Let \(X\) be a smooth compact toric variety and \(\mathbb{T}\) the algebraic torus acting on \(X\). The equivariant cohomology of \(X\) is defined as \[H_{\mathbb{T}}^{*}(X):=H^{*}(E\mathbb{T}\times_{\mathbb{T}}X)\] where \(E\mathbb{T}\to B\mathbb{T}\) is the universal principal \(\mathbb{T}\)-bundle and \(E\mathbb{T}\times_{\mathbb{T}}X\) denotes the orbit space of \(E\mathbb{T}\times X\) by the diagonal \(\mathbb{T}\)-action. The equivariant cohomology \(H_{\mathbb{T}}^{*}(X)\) is not only a ring but also an algebra over \(H^{*}(B\mathbb{T})\) through the projection \(E\mathbb{T}\times_{\mathbb{T}}X\to B\mathbb{T}\). The group \(\operatorname{Aut}(X)\) of all automorphisms of \(X\) is known to be an algebraic group of finite dimension and the algebraic torus \(\mathbb{T}\) acting on \(X\) determines a maximal torus of \(\operatorname{Aut}(X)\) (see [25, Section 3.4]). This implies that if smooth compact toric varieties \(X\) and \(Y\) are isomorphic as varieties, then they are isomorphic as toric varieties up to an automorphism of \(\mathbb{T}\), that is, there is an isomorphism \(f\colon X\to Y\) together with a group automorphism \(\sigma\) of \(\mathbb{T}\) such that \(f(gx)=\sigma(g)f(x)\) for \(g\in\mathbb{T}\) and \(x\in X\). Therefore, if \(X\) and \(Y\) are isomorphic as varieties, then \(H_{\mathbb{T}}^{*}(X)\) and \(H_{\mathbb{T}}^{*}(Y)\) are weakly isomorphic as algebras over \(H^{*}(B\mathbb{T})\), which means that there is a ring isomorphism \(\Phi\colon H_{\mathbb{T}}^{*}(Y)\to H_{\mathbb{T}}^{*}(X)\) together with an automorphism \(\sigma\) of \(\mathbb{T}\) such that \(\Phi(u\alpha)=\sigma^{*}(u)\Phi(\alpha)\) for any \(u\in H^{*}(B\mathbb{T})\) and \(\alpha\in H^{*}_{\mathbb{T}}(Y)\), where \(\sigma^{*}\) denotes the automorphism of \(H^{*}(B\mathbb{T})\) induced from \(\sigma\). Moreover, the ring isomorphism \(\Phi\) induced from the variety isomorphism \(f\) preserves the equivariant first Chern classes of \(X\) and \(Y\). 
It turns out that the converse holds, namely if there is a weak \(H^{*}(B\mathbb{T})\)-algebra isomorphism \(\Phi\colon H^{*}_{\mathbb{T}}(Y)\to H^{*}_{\mathbb{T}}(X)\) preserving the equivariant first Chern classes of \(X\) and \(Y\), then \(X\) and \(Y\) are isomorphic as varieties. (Note. It is pointed out in [17, Remark 2.5] that the condition preserving the equivariant first Chern classes is necessary for [21, Theorem 1.1].) Such \(\Phi\) induces an isomorphism \(\varphi\) between \(H^{*}(X)\) and \(H^{*}(Y)\) preserving the first Chern classes of \(X\) and \(Y\). Therefore, Conjecture 1.1 suggests that it might be possible to recover \(\Phi\) from \(\varphi\) for smooth toric Fano varieties.

### Cohomological rigidity over a commutative ring \(\Lambda\)

The cohomological rigidity problem posed in [22] asks whether smooth compact toric varieties are diffeomorphic (or homeomorphic) if they have isomorphic integral cohomology rings. Many partial positive results are known, but no counterexample is known. To state known results for the cohomological rigidity problem, it is convenient to fix a family \(\mathcal{M}\) of smooth manifolds and say that \(\mathcal{M}\) is _cohomologically rigid_ if any two objects in \(\mathcal{M}\) are distinguished up to diffeomorphism (or homeomorphism) by their integral cohomology rings. The family of \(2\)-stage generalized Bott manifolds is cohomologically rigid ([7]). However, it is not known whether the family of generalized Bott manifolds is cohomologically rigid. A recent notable achievement is that the family of Bott manifolds is cohomologically rigid ([11]). See [9, 10, 16] for further results.

For a real analogue of smooth compact toric manifolds, such as real loci of compact smooth toric varieties or small covers introduced by Davis-Januszkiewicz (see [3, 13]), it is natural to take cohomology rings with \(\mathbb{Z}/2\mathbb{Z}\)-coefficients. We say that a family \(\mathcal{M}\) of smooth manifolds is _cohomologically rigid over a commutative ring \(\Lambda\)_ if any two objects in \(\mathcal{M}\) are distinguished up to diffeomorphism (or homeomorphism) by their cohomology rings with \(\Lambda\)-coefficients. It is known that the family of real Bott manifolds is cohomologically rigid over \(\mathbb{Z}/2\mathbb{Z}\) ([6, 18]). Real Bott manifolds are real loci of Bott manifolds and provide examples of Riemannian flat manifolds. Similarly, the family of hyperbolic \(3\)-manifolds of Löbell type, which are small covers over \(3\)-dimensional right-angled hyperbolic polytopes, is also cohomologically rigid over \(\mathbb{Z}/2\mathbb{Z}\) ([4]). However, the family of \(2\)-stage real generalized Bott manifolds is not cohomologically rigid over \(\mathbb{Z}/2\mathbb{Z}\) ([20]) although the family of \(2\)-stage generalized Bott manifolds is cohomologically rigid over \(\mathbb{Z}\) as is mentioned above.

### Cohomological super-rigidity

One can also think of an algebraic version of the cohomological rigidity. Following [16], we may say that a family \(\mathcal{V}\) of smooth algebraic varieties is _cohomologically super-rigid_ if any two objects in \(\mathcal{V}\) are distinguished up to isomorphism by their integral cohomology rings. Propositions 4.3 and 4.5 are results of this type; see [16] for further results of this kind.
2303.03321
Implementation of a noisy hyperlink removal system: A semantic and relatedness approach
As the volume of data on the web grows, the web structure graph, which is a graph representation of the web, continues to evolve. The structure of this graph has gradually shifted from content-based to non-content-based. Furthermore, spam data, such as noisy hyperlinks, in the web structure graph adversely affect the speed and efficiency of information retrieval and link mining algorithms. Previous works in this area have focused on removing noisy hyperlinks using structural and string approaches. However, these approaches may incorrectly remove useful links or be unable to detect noisy hyperlinks in certain circumstances. In this paper, a data collection of hyperlinks is initially constructed using an interactive crawler. The semantic and relatedness structure of the hyperlinks is then studied through semantic web approaches and tools such as the DBpedia ontology. Finally, the removal process of noisy hyperlinks is carried out using a reasoner on the DBpedia ontology. Our experiments demonstrate the accuracy and ability of semantic web technologies to remove noisy hyperlinks.
Kazem Taghandiki, Elnaz Rezaei Ehsan
2023-03-06T17:48:27Z
http://arxiv.org/abs/2303.03321v1
# Implementation of a noisy hyperlink removal system: A semantic and relatedness approach

###### Abstract

As the volume of data on the web grows, the web structure graph, which is a graph representation of the web, continues to evolve. The structure of this graph has gradually shifted from content-based to non-content-based. Furthermore, spam data, such as noisy hyperlinks, in the web structure graph adversely affect the speed and efficiency of information retrieval and link mining algorithms. Previous works in this area have focused on removing noisy hyperlinks using structural and string approaches. However, these approaches may incorrectly remove useful links or be unable to detect noisy hyperlinks in certain circumstances. In this paper, a data collection of hyperlinks is initially constructed using an interactive crawler. The semantic and relatedness structure of the hyperlinks is then studied through semantic web approaches and tools such as the DBpedia ontology. Finally, the removal process of noisy hyperlinks is carried out using a reasoner on the DBpedia ontology. Our experiments demonstrate the accuracy and ability of semantic web technologies to remove noisy hyperlinks.

Semantic web, Noisy hyperlinks, Ontology, Reasoner, Semantic similarity, Relatedness similarity.

## 1 Introduction

In recent years, the ability to create any number of web pages, in addition to the extremely large volumes of data created in various fields of technology [1], has resulted in a challenging concept known as "Big Data" [2, 3]. In 2021, nearly 49 billion web pages were indexed by Google and Bing crawlers [4]. Clearly, there is a significant surge in the number of web pages on the Internet, which has led to the growth of the web structure graph. As a result, it is exceedingly difficult to navigate and explore the structure of the web, due to spam data such as noisy hyperlinks [1]. Therefore, the need for a mechanism to eliminate spam hyperlinks is evident.

Historically, many information retrieval algorithms used the contents of web documents to classify, cluster, and remove spam pages. However, this process-intensive and time-consuming approach was later replaced with algorithms which rely on hyperlink characteristics in the web structure graph instead of document content. A number of algorithms such as PageRank [5] work with these characteristics to reduce the required processing for search engines. However, these link-based algorithms are predicated on the assumption that the links point exactly to pages wanted by the users [6, 7]. Thus, many link mining algorithms mistakenly assume that the web structure graph is completely semantic and content-based. However, the graph contains useless spam links, which may both mislead the user and affect the output of the algorithm. Spam hyperlinks allow irrelevant documents to obtain higher ranks than relevant ones. This attempt to boost the rank of a page is mainly made for "business" purposes [6].

A number of studies have been conducted to detect and eliminate spam links from the structure of the web. The proposed methods, however, are highly dependent on the string and structural characteristics of hyperlinks [6, 8] while ignoring their semantic and relatedness structures. In this paper, we consider the semantic and relatedness structures of the hyperlinks at both the page and site levels. Using semantic web technologies, e.g. ontologies and reasoners, we remove noisy hyperlinks. In doing so, initially, a dataset of hyperlinks is created in a separate process.
Then, using semantic web technologies such as ontologies and reasoners, the concepts of the hyperlink context on the source page and of the target page are semantically and relationally analyzed. The analysis may be used to determine whether a hyperlink is noisy or useful. The proposed system takes the constructed dataset as input; each row of the dataset is composed of the class mapped from the hyperlink context topic of the source page, the class mapped from the topic of the target page, a field indicating the noisy or useful nature of the hyperlink from the user's perspective, and the domain name of the source page. Thereafter, noisy hyperlinks are detected and the results are compared to those of the user in order to identify the extent to which each semantic or relatedness property in the ontology contributes to a correct detection. Furthermore, among other advantages, it is possible to know which queries lead the user to noisy hyperlinks and which domains contain the greatest number of noisy hyperlinks [9]. The experiments demonstrate the accuracy, capability, and scalability of semantic web technologies in eliminating noisy hyperlinks.

The remainder of this paper is organized as follows. In Section 2, a survey of previous works on removal of noisy hyperlinks is given. The implementation of the proposed approach is detailed in Section 3. Section 4 explains the experiments as well as the obtained results. Finally, in Section 5, concluding remarks are presented.

## 2 Related work

Toward the implementation of hyperlink removal systems, there are several viable lines of research, such as [10, 11]. In this regard, Qi et al. [12] categorize navigational, advertising, and irrelevant hyperlinks as spam, while other hyperlinks are deemed useful. In their algorithm, a Support Vector Machine (SVM) with two classes, namely "qualified" and "unqualified", is used to detect and filter noisy hyperlinks, employing a total of six string similarity features. By applying the algorithm to a collection of 2.1 million web pages, 23% of the hyperlinks are classified as "unqualified". However, this mechanism does not use semantic or relatedness approaches for removal of hyperlinks.

Wookey et al. [13] design a system called Anchor Woman, wherein noisy hyperlinks are detected using the hyperlink structure of a website and divided into three categories:

\(\bullet\) Multi-arc loops: chains of hyperlinks which form many cycles in the web graph.

\(\bullet\) Multiple arcs: many hyperlinks that point to the same page.

\(\bullet\) Recursive cycles: web pages that contain hyperlinks pointing to themselves.

The system works by receiving a web address as input and conducting a breadth-first search of its hyperlinks to identify and eliminate noisy ones. Next, a graphical hierarchical representation of the web structure is generated for the user.

A mechanism to detect noisy hyperlinks at the site level is proposed by Carvalho et al. [14], who identify two types of spam relationships: (1) mutual reinforcement, wherein two websites are strongly connected by exchanging site-level hyperlinks, and (2) alliance among a chain of strongly connected websites. If the number of hyperlinks between the sites exceeds a threshold, they are considered noisy.
The proposed algorithm, which takes a structural approach, is able to remove 16.7 percent of the hyperlinks with a Mean Average Precision of 59.16%.

Chakrabarti [15] proposes a finer-grained model of the web, in which pages are represented by their Document Object Models, with the resulting DOM trees being interconnected by regular hyperlinks. The method is able to counter "nepotistic clique attacks", but needs more input data than our algorithms (which are based exclusively on link analysis). Also, since we specifically target noise removal, we are able to identify different types of hyperlink anomalies.

Samanta et al. [7] use graph-based methods to improve the web structure graph and facilitate user navigation. The paper considers the case of UK university websites. Over six million links are extracted from 110 academic websites to form a dataset. A large portion of undesirable links to images, audio, and video files are removed using TextPipe. The number of web documents, path length, and Strongly Connected Components (SCC) are optimized using this approach. However, the proposed methodology relies merely on the type of hyperlink, without considering semantic or relational approaches. The two steps are as follows:

1. Eliminating advertising and navigational hyperlinks which are commonly located near the top of the page.

2. Eliminating the hyperlinks not covered by association or aggregation relationships. An aggregation relationship refers to a hierarchy relationship between two concepts in which the source concept is broader than the target concept. An association relationship is also known as a horizontal relationship, implying that the source and target concepts share the same parent. In other words, two concepts are horizontally related if and only if they have a common parent.

The authors reported a recognition rate of 92.89 percent in the removal of navigational hyperlinks. The aggregation relationship conveys a semantic approach; the association relationship, however, cannot be considered a complete relatedness approach. Even though Pedersen et al. [16] use an association or horizontal relationship to demonstrate relatedness similarity between two concepts, horizontal relationships only indicate a "Part Of" relationship between the two. Nevertheless, other relational properties such as Object Properties in ontologies can also be used to represent relatedness similarity.

Another algorithm for removing noisy hyperlinks, called the Website Structuring Extracting Algorithm (WSE), was proposed by Oguz [17] in 2022. The primary objective of the WSE is to eliminate noisy hyperlinks while retaining semantic ones. However, with regard to semantics, the paper focuses on the path structure of the hyperlinks so that the hierarchy of hyperlinks is maintained. For example, assuming four pages, namely A, B, C, and D, hyperlinks from A to B, from B to C, and from C to D are semantic hyperlinks whereas hyperlinks in the opposite direction are considered noisy.

In [18] the authors propose to detect nepotistic links using language models. In this method, a link is downweighted if its source and target page are not related based on their language models. This approach is based on the assumption that pages connected by non-nepotistic links must be sufficiently similar.

Wu and Davison [19] propose a two-step algorithm to identify link farms. The first step generates a seed set based on the intersection of in-links and out-links of web pages.
The second step expands the seed set to include pages pointing to many pages within the seed set. The links between these identified spam pages are then re-weighted and a ranking algorithm is applied to the modified link graph.

Previous works tend to focus on page-level hyperlinks; however, modern spam sites usually make use of site-level hyperlinks by generating illegal links to other websites, thereby improving their rank in Google's index. Therefore, it is critical to consider site-level hyperlinks. In this paper, we seek to remove noisy hyperlinks at both page and site levels. Another shortcoming in prior studies pertains to the exclusive application of string or structural approaches. Despite their considerable speed in detecting hyperlink types, these approaches sometimes eliminate useful hyperlinks while being unable to detect noisy hyperlinks at other times. For instance, suppose a page containing the text "Bank" (meaning financial institution) points to a page about "Banks" (meaning shore). Using the string approach, the hyperlink is regarded as useful whereas the semantic approach employs hyperlink text information to identify and eliminate the noisy hyperlinks. This paper aims to apply the semantic web approach and current tools such as the DBpedia ontology to consider the semantic and relational structure of hyperlinks and to remove noisy hyperlinks by activating the DBpedia ontology reasoner. The experiments demonstrate the accuracy and ability of semantic web technologies to eliminate noisy hyperlinks.

In the preceding mechanisms as well as others such as [1, 20, 21, 22, 23, 24], existing data collections are not used for information retrieval; rather, noisy hyperlinks are removed through analysis of the web structure graph. Therefore, we need to construct a data collection of hyperlinks. In the following section, the details of this procedure according to information retrieval principles [25] are presented.

## 3 Proposed Approach

The semantic and relatedness system for eliminating noisy hyperlinks involves three general steps, as shown in Figure 1.

Fig. 1: Constructing the Data Collection

### _Constructing the Data Collection_

The dataset construction step is a distinct process consisting of several steps as shown in Figure 2. In constructing the dataset, the user is only involved in domain selection while the other steps are performed independently.

Fig. 2: Stages of constructing the data collection

#### 3.1.1 Domain Selection

Internet users use web search engines for a variety of purposes every day, submitting millions of queries to search engines such as Google. Here, deciding which websites to crawl is an important issue, since both noisy and useful hyperlinks are necessary. While the former will be eliminated using semantic and relatedness properties of the ontology, the latter help demonstrate the ability of the proposed approach in detecting useful hyperlinks, using the same properties. The crawled websites must contain content that is popular among Internet users. Using Google Trends, a number of popular topics including news, money, online shopping, new technologies, and celebrities such as actors or athletes were identified. Table I shows popular search queries in 2021 according to Google Trends.

TABLE I: Popular search queries in 2021, organized into News (e.g., "COVID Vaccine", "Depression"), Celebrities, Sports, and Electronics categories

By learning about popular topics among Internet users, organizations and individuals can take two distinct approaches in designing web pages.

1. Creating websites that are weakly focused on the topic but use background hyperlinks to conduct highly profitable business activities such as directing users to online stores or pornography websites.
Such websites contain noisy hyperlinks which have no regard for web user needs.

2. Creating websites with useful content on a particular topic to provide users with appropriate information. Hyperlinks in these websites are rarely considered spam. These websites contain useful hyperlinks that are in line with user needs.

The main idea behind this domain selection approach is to crawl the domains that are retrieved by search engines in response to frequent queries. The retrieved domains fall into one of two general categories: (1) those having noisy hyperlinks for illegal business purposes and (2) those with legal objectives that provide useful hyperlinks to help users achieve their goals. The ability to distinguish between these categories shows the strength of the proposed method in maintaining useful hyperlinks while removing noisy ones.

#### 3.1.2 Crawling Domains

In this stage, the user enters each topic from the previous stage into the Google search engine and randomly selects a subset of the returned websites. Then website addresses are given as input to an interactive crawler which is developed using libraries of the Java programming language. Next, the crawler begins exploring the domain; and finally, a list of the links in the domain is obtained, as shown in Figure 3.

Fig. 3: Links extracted by the crawler

Here, a total of 114 useful and non-useful domains related to the topics were crawled, all of which are in English.

#### 3.1.3 Hyperlink Preprocessing

A large portion of the extracted hyperlinks, e.g. repetitive links or those pointing to audio or video files, are inappropriate for the purpose of this paper. Hence, they are removed according to the operations proposed by [1] (Section 2). The status of the hyperlinks subsequent to preprocessing can be seen in Figure 4.

Fig. 4: Status of the hyperlinks subsequent to preprocessing

As shown, following the preprocessing operations, 27 percent of the hyperlinks are removed, reducing the number of crawled hyperlinks from 2665 to 1946. Note that a large number of the extracted domains included many video, image, and audio hyperlinks, which are not compatible with the proposed approach. In some cases, the links occurred several times in a page. Therefore, 27 percent of the links (719 links) were eliminated. As mentioned, the proposed method is only compatible with text hyperlinks, making the subsequent steps (i.e. those concerning dataset construction) only applicable to text hyperlinks. Thus, incompatible hyperlinks need to be removed in the preprocessing step.

#### 3.1.4 Feature Extraction

In order to ensure that the topic of the hyperlink's surrounding text as well as that of its target page are detected with sufficient accuracy, a number of features must be extracted from web pages. However, one needs to determine which features are used more frequently for designing web pages; this in turn helps identify the key features to be extracted. The frequencies of five key features in 5000 pages are presented in Figure 5.

Fig. 5: Top five most commonly used features in web pages
As shown, "Keyword Metadata", "Title Tag", and "First-Level Heading Tag"are most common in web pages. Therefore, three features of the page i.e. title tag, keyword metadata and first-level heading tag are used to determine the topic of the target page. In addition to the first two features, the text of the hyperlink as well as theparagraphcontaining it are utilized to extract the topic context of the hyperlink from the source page. Hyperlink text and its surrounding paragraph act as features that form the context of the hyperlink. Tables II and III present examples of extracted features for determining the topic of the target page and the context of hyperlink text, respectively. These features accelerate the analysis and topic detection procedures in the following stages. In contrast to our work, several studies [26, 27, 28, 29, 30, 31] extract the topic based on the entire content of the documents. Most web designers make use of Web 2.0 techniques such as HTML to create web documents. The markup language is less semantically capable compared to its Web 3.0 counterparts such as RDF, XML, and OWL [29]. Thus, given the popularity of HTML attributes in designing web documents, the language appears to be the best choice for feature selection in Web 2.0. However, by taking advantage of Web 3.0 techniques, it becomes quite easy to semantically extract features; this in turn plays an important role in the proposed approach. Nevertheless, this is beyond the scope of this paper and remains a recommended direction for future works. #### 3.1.5 Feature Preprocessing The quality of the features extracted in the previous step must be improved so that the semantic and relatedness system is able to perform the topic identification process with higher accuracy and lower error rate. In this paper, this is achieved by using typical text mining preprocessing techniques such as stop words, token normalization, case folding, and stemming [25]: 1. Stop words refer to words which occur frequently and are of little use in finding specific information. Examples include articles and prepositions such as "the", "and", "or", \begin{table} \begin{tabular}{|c|c|} \hline **Page Title** & **Description free Software** \\ \hline Keyword Metadata & download software browser \\ \hline Friend Friend Reading & **Description** \\ \hline \end{tabular} \end{table} TABLE II: Extracted features to detect target page topic (from www.Flehippo.com) Fig. 4: Status of the hyperlinks subsequent to preprocessing Fig. 5: Top five most commonly used features in web pages etc. Removing these words accelerates the topic detection step. 2. Token normalization is a standardization process that aims to use a single form for each word. For instance, consider "anti discriminatory" and "nondiscriminatory"; subsequent to the normalization process, both forms are mapped onto "anti-discriminatory". 3. Case folding is a well-known word normalization process wherein capitalized letters are converted to their lower-case equivalents. In fact, case folding may be regarded as a type of token normalization. 4. Words are often used in different forms depending on grammatical rules; for example, organize, organize, and organize. Furthermore, different forms of a word may have nearly similar meanings e.g. democracy and democratic. By removing the endings of the words, the Stemming process aims to obtain a common root for different forms of a word. In this study, preprocessing is performed via MALLET as well as the text mining library in Python. 
#### 3.1.6 Topic Identification The purpose of this stage is to determine the topic of hyperlink text and the target page using the previously extracted features. In this stage, a supervised process with three operations occurs, as depicted in Figure 6. All operations are conducted using MALLET. In the following subsections, pertinent details are provided. 1. Building a Topic Classifier In order to build a topic classifier, an initial set of high-quality features is required. Thus, we decided to construct a dataset of features by extracting the three most common key features (i.e. keyword metadata, title tag, and first-level heading tag) of 5000 web pages. The topic classifier is built upon this dataset. Furthermore, 80 percent of the data were used for training purposes while the remaining 20 percent were used for testing the accuracy of the classifier. In this paper, topic classification was conducted using four different methods, namely Naive Bayes, C4.5, Decision Tree, and Max Entropy with 10 cross-validations [32, 33], to obtain high efficiency. The algorithms were compared to find the best method for classification. According to Figure 7, Max Entropy yields the highest accuracy on the training data. Therefore, the algorithm is used to create the topic classifier. 2. Determining the Input and Generating MALLET File This operation involves providing the extracted features from hyperlink text context and target page as input to MALLET in order to identify the topic of each one. This is carried out using the command-line interface. As a result, two output files are obtained, which are also known as feature vectors. The vectors are numerical representations of the input values that enable faster analysis operations [34]. The files serve as input to the next operation. 3. Final Topic Identification Here, the MALLET output files from the previous operation are obtained and the topic classifier identifies the topic. Moreover, the ensuing model (i.e. output) can be used to infer the topic of new input data in the subsequent steps. Upon completion, the topic classifier is able to successfully identify the topic of approximately 81.8 percent of extracted features. The remaining 18.2 percent of the features may remain unclassified for one of the following reasons: 1. As a result of poor design, the feature extraction step may be unable to extract appropriate features for identifying the topic of the hyperlink text and the target page. This situation precludes topic assignment. 2. The features used to construct the topic classifier may be completely distinct from those extracted from a new page. Consequently, the page is not assigned a topic. The output of the step includes four separate text files. Table IV presents a portion of each file. #### 3.1.7 Topic Integration The topics from the previous stage, located in several text files, are combined and integrated to create a single well-formed input for the matching stage. An example of the output file is shown in Figure 8. From left to right, the fields in the file represent the context topic of the hyperlink in the source page, topic of the target page, type of the link (i.e. noisy or useful) as perceived by the user, and the domain of the source page. In effect, a topic dataset is constructed, in this stage. 
TABLE IV: A portion of each of the four output text files (columns labelled Rule 1 through Rule 4)

#### 3.1.8 Matching Topics to Classes

The purpose of this stage is to match the topics from the previous stage to classes of the DBpedia ontology. The matching operations are conducted using the WS4J library and the Terminological Search Algorithm (Semantic Search) [35]. A comparison of the two algorithms using 400 randomly selected topics and five criteria, including accuracy, is shown in Figure 9. According to the results, WS4J and Semantic Search may complement each other, which justifies examining their simultaneous application. As shown in Figure 9, the combination of WS4J and Semantic Search outperforms each single algorithm. Furthermore, both algorithms support the required semantic and string similarity. We thus use WS4J+Semantic Search for matching purposes. Examples of the final matches are presented in Figure 10. Once again, the fields represent the class matched to the topic context of the hyperlink in the source page, the class matched to the topic of the target page, the type of hyperlink, and the domain name, respectively. In this step, using the mapping subsystem, nearly 87.3 percent of topics are mapped to those of the DBpedia ontology. In fact, in this step, a final dataset is created which can be referred to as the conceptual hyperlink dataset.

### _Semantic and Relatedness Analysis_

In this stage, the final data collection is given as input to the system; each record contains two concepts, or ontology classes: the context of the hyperlink text and the target page. The system then examines the relatedness and semantic properties of the input using the added knowledge from the DBpedia ontology. In this way, it is possible to determine whether the hyperlink is noisy or useful. The reasoner is the most critical component in the semantic and relatedness analysis step. It is a piece of software which works on one or more conceptual datasets created using ontologies. The reasoner aims to extract logical results from extant facts in the ontology. The proposed system achieves this task using Pellet. By acting on the primary knowledge from the DBpedia ontology, the Pellet reasoner obtains added knowledge from the properties, relations, and classes of the ontology. Next, a row of the data collection, representing a hyperlink in a web page, is given in the form of two concepts, i.e. the hyperlink text context and the target page. Semantic and relatedness similarities of the two are determined using the properties and relations that are inferred from the DBpedia ontology. In the remainder of this paper, we use the terms source concept and target concept to refer to the concepts of the hyperlink text and the target page, respectively. A hyperlink is considered useful, by the reasoner, if at least one of the following properties is satisfied:

1. "Equivalent Class": The source and target concepts are equivalent.
2. "Subclass Of" and "Has Superclass": The source concept is a sub/superclass of the target concept. Properties (1) and (2) are known as semantic properties, which represent semantic similarity between the two concepts. For instance, the concepts of "Woman" and "Person" are semantically similar since the former is a subclass of the latter.
3. "Object Property": The source and target concepts are related through an object property. This is known as a relatedness property and represents relatedness similarity.
As an example, the concepts of "Monkey" and "Banana" have relatedness similarity through several relations such as "Liking" or "Eating" (i.e. "The monkey likes bananas" or "The monkey eats bananas"). A number of current noisy hyperlink removal methods merely focus on the first two properties while ignoring object properties. As a result, they cannot detect relatedness properties between the source and target concepts, which may result in falsely removing a useful hyperlink. If none of these conditions hold, then the source and target concepts have no semantic or relatedness similarity; thus, the hyperlink is considered noisy. Put differently, instead of taking the user to his/her intended page, the hyperlink on the page leads to an unexpected and completely irrelevant page. Thus, at the end of this step, a conceptual dataset of hyperlinks is created, whose noisy or useful nature is determined by the reasoner. In Figure 11, several records constructed during the analysis stage are given. From left to right, the fields denote the Subject, the Object, the inferred relational or semantic property for the relation between the first two fields, the type of hyperlink perceived by the reasoner, and the domain of the source page, respectively.

Fig. 8: Topic integration output file

Fig. 9: Comparison of WS4J, Semantic Search, and WS4J+Semantic Search

Fig. 10: Examples of matches

### _Experiment and Results_

An important and useful tool for evaluating the proposed approach is a confusion matrix, which involves two types of labelling: (1) system labelling during the semantic and relatedness analysis and (2) expert user labelling while creating the dataset. Each element of the matrix may be one of the following: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). Table V shows the confusion matrix of the system, which is generated based on user opinions provided during the data collection stage and inferences by the reasoner during the semantic and relatedness analysis.

* TP represents the number of hyperlinks which are considered useful by both the user and the proposed system.
* FP represents the number of hyperlinks which are considered noisy by the user and useful by the proposed system.
* TN represents the number of hyperlinks which are considered noisy by both the user and the proposed system.
* FN represents the number of hyperlinks which are considered useful by the user and noisy by the proposed system.

Values of six commonly used performance measures in information retrieval systems are visualized in Figure 12. As illustrated, the proposed approach achieves high levels of accuracy and precision, while keeping the error rate sufficiently low. The extent to which the reasoner uses various semantic and relatedness properties in the DBpedia ontology to represent semantic and relatedness similarities between the Subject and the Object is shown in Figure 13. From Figure 13, one can see that the reasoner uses the "Subclass Of" property in 12.6 percent of the cases to relate the source hyperlink concept to the target page concept. The corresponding value for "Equivalent Class" is 36.7 percent. These properties indicate semantic similarity between the source and target concepts. Furthermore, in 11.9 percent of the cases, the reasoner uses "Object Property", which represents relatedness similarity. For the remaining 38.5 percent, no semantic or relatedness property is found using the DBpedia ontology.
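To make the three decision rules concrete, the sketch below checks them directly over ontology triples with Python's rdflib. The paper instead runs the Pellet reasoner over the DBpedia ontology, which also exposes inferred (not just asserted) triples; the file name, URIs, and the simplified object-property check are therefore illustrative assumptions.

```python
# Illustrative rule check for noisy-vs-useful hyperlinks over an ontology.
# A real deployment would use a reasoner (the paper uses Pellet) so that
# entailed equivalences and subclass relations are also found.
from rdflib import Graph
from rdflib.namespace import OWL, RDFS

g = Graph()
g.parse("dbpedia_ontology.owl")  # hypothetical local copy of the DBpedia ontology

def is_useful(source, target):
    # Rule 1: "Equivalent Class" -- the concepts are equivalent.
    if (source, OWL.equivalentClass, target) in g or (target, OWL.equivalentClass, source) in g:
        return True
    # Rule 2: "Subclass Of" / "Has Superclass" -- sub/superclass in either direction.
    if (source, RDFS.subClassOf, target) in g or (target, RDFS.subClassOf, source) in g:
        return True
    # Rule 3: "Object Property" -- some property links the concepts
    # (simplified here via rdfs:domain and rdfs:range).
    for prop in g.subjects(RDFS.domain, source):
        if (prop, RDFS.range, target) in g:
            return True
    return False  # no semantic or relatedness similarity: the hyperlink is noisy
```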
An overview of the results obtained through the semantic and relatedness analysis is presented in Table VI. The feature extraction step obtains and stores a small set of information pertaining to the hyperlink domain (href values of the "a" tag). Thus, after encountering the hyperlink on the source page, the user determines whether the hyperlink is noisy. The objective is to compare our results with the users' opinions. To better understand the semantic and relatedness analysis results in Table VI, a comparison with the user's perspective is also conducted. Table VII provides an overview of the user's perspective in identifying the useful or noisy nature of the hyperlinks. A comparison of Tables VI and VII testifies to the ability of the semantic and relatedness approach. The percentages of the hyperlink types from both perspectives are visualized in Figure 14. Looking at the figure, it is evident that the proposed approach is able to distinguish between noisy and useful hyperlinks nearly as accurately as an expert user. Figure 14 shows that the reasoner and the user are able to detect useful hyperlinks in 61.4 and 70.7 percent of the cases, respectively. With respect to noisy hyperlinks, the values are 38.5 and 29.2 percent, respectively. Based on these results, it is clear that the reasoner is able to approach the results of an expert user.

TABLE VI: Overview of the results of the semantic and relatedness analysis

#### 3.3.1 Scalability

It is important that the final data collection contain records which can be analyzed through semantic and relatedness approaches. In other words, the data collection must have records which belong to the classes of the DBpedia ontology. The reasoner requires a certain amount of time to discover new relations, classes, and properties based on the DBpedia ontology and determine the type (i.e. noisy or useful) of the hyperlinks in the data collection. In the remainder of this section, we discuss the scalability of the reasoner as well as of the entire process. Scalability tests were performed on a computer system with an Intel Core i5 2450M (2.50 GHz) processor, 4.00 GB of RAM, and a 64-bit operating system. In Figure 15, the amount of time required to run the semantic and relatedness reasoner for 500, 1000, 1500, and 1946 hyperlinks can be seen. The variables y and R denote the number of hyperlinks and the accuracy of the fitted equation, respectively. During the initial two to three seconds, the reasoner is being activated. In Figure 15, it is assumed that the final dataset is available. However, in Figure 16, the scalability of the entire process, from topic identification to semantic and relatedness analysis, is considered for the same cases. Our investigations revealed that the matching stage is the most time-consuming step of the process.

## 4 Discussion and Conclusion

Noisy hyperlinks are a type of noisy data in the structure of the web, which negatively impact the efficiency of many information retrieval algorithms. Nearly all existing removal algorithms focus on the string or graph structure of hyperlinks. Therefore, these approaches incorrectly remove certain useful hyperlinks and are not able to detect noisy hyperlinks in certain cases. In this paper, we consider semantic and relational structures at both the page and site levels, while semantic web technologies such as ontologies and reasoners are used to eliminate noisy hyperlinks.
In doing so, a dataset of hyperlinks is created in a separate process. The dataset is then analyzed with respect to both semantics and relatedness. As a result of the analysis, noisy and useful hyperlinks are distinguished. The proposed system takes the constructed dataset as input. Each row of the dataset consists of the class mapped from the hyperlink context topic of the source page, the class mapped from the topic of the target page, a field indicating the noisy or useful nature of the hyperlink from the user's perspective, and the domain name of the source page. The results are then compared to those of the user to demonstrate the extent to which each semantic or relational property in the ontology contributes to a hyperlink being identified as either noisy or useful. Furthermore, the categories of queries which lead users to noisy hyperlinks and the domains with the highest number of noisy hyperlinks can be found. Our experiments demonstrated the accuracy, capability, and scalability of semantic web technologies in eliminating noisy hyperlinks. Directions for future works include the following:

1. Combining the DBpedia ontology with another ontology to cover a larger domain of T-Box level concepts.
2. Using available datasets on linked data to cover A-Box level concepts.
3. Extending the noise detection operation to include hyperlinks pointing to images, videos, and audio files.
4. Applying various algorithms to match topics to ontology classes.
5. Using semantic tools to identify the topic of hyperlinks and target pages via semantic properties in pages that are constructed using Web 3.0 techniques.

Fig. 14: Percentage of hyperlink types as perceived by the user and the reasoner

Fig. 15: Scalability of the reasoner for different numbers of hyperlinks

Fig. 16: Scalability of the entire system for different numbers of hyperlinks

TABLE VII: Overview of the user's perspective (1946 hyperlinks across 114 domains; frequently used domains include Wikipedia, Facebook, and Twitter)
2308.10991
Improved mirror ball projection for more accurate merging of multiple camera outputs and process monitoring
Using spherical mirrors in place of wide-angle cameras allows for cost-effective monitoring of manufacturing processes in hazardous environments, where a camera would normally not operate. This includes environments of high heat, vacuum, and strong electromagnetic fields. Moreover, it allows the layering of multiple camera types (e.g., color image, near-infrared, long-wavelength infrared, ultraviolet) into a single wide-angle output, whilst accounting for the different camera placements and lenses used. Normally, the different camera positions introduce a parallax shift between the images, but with a spherical projection as produced by a spherical mirror, this parallax shift is reduced, depending on mirror size and distance to the monitoring target. This paper introduces a variation of the 'mirror ball projection' that accounts for distortion produced by a perspective camera at the pole of the projection. Finally, the efficacy of process monitoring via a mirror ball is evaluated.
Wladislav Artsimovich, Yoko Hirono
2023-08-15T04:18:55Z
http://arxiv.org/abs/2308.10991v1
Improved mirror ball projection for more accurate merging of multiple camera outputs and process monitoring

###### Abstract

Using spherical mirrors in place of wide-angle cameras allows for cost-effective monitoring of manufacturing processes in hazardous environments, where a camera would normally not operate. This includes environments of high heat, vacuum, and strong electromagnetic fields. Moreover, it allows the layering of multiple camera types (e.g., color image, near-infrared, long-wavelength infrared, ultraviolet) into a single wide-angle output, whilst accounting for the different camera placements and lenses used. Normally, the different camera positions introduce a parallax shift between the images, but with a spherical projection as produced by a spherical mirror, this parallax shift is reduced, depending on mirror size and distance to the monitoring target. This paper introduces a variation of the 'mirror ball projection' that accounts for distortion produced by a perspective camera at the pole of the projection. Finally, the efficacy of process monitoring via a mirror ball is evaluated.

_Keywords--_ curved mirror, image registration, mirror ball, image processing, process monitoring, spherical projection

## 1 Introduction

Spherical mirror projections are used in a wide variety of applications as a substitute for wide-angle or panoramic camera systems. This extends to the field of robotics with 'omni-directional sensors' as used in [3], and to the field of computer graphics with various 'image-based lighting' techniques ([2]) or novel ways of displaying and interacting with panoramic content ([1]). A less explored use case is process monitoring. The mirror ball can be made to withstand extreme conditions at relatively low cost. For instance, stainless steel ball bearings are a mass-produced commodity that can survive high-heat environments, whilst producing a mirror ball projection. Moreover, multiple cameras can be pointed at the same mirror sphere, all identically encoding the full 360\({}^{\circ}\) environment, regardless of camera position, at least mathematically. This allows for the merging of multiple camera outputs into a single video feed. A mirror ball is mathematically the simplest spherical mirror and projects the full 360\({}^{\circ}\) environment onto a 2D plane when captured with an orthographic camera. Whilst an orthographic camera can be achieved in the real world with the use of a telecentric lens, almost all cameras are perspective ones. Distortion at the pole-point of the projection is introduced by merely approximating an orthographic camera when using a perspective one.

Figure 1: Multi-source mirror ball live stream of a laser cladding process

## 2 The projection term

The mirror ball projection term, as used in this paper, maps the 3D reflection vector \(\vec{r}\), as defined by the column vector \(\left[r_{x}\quad r_{y}\quad r_{z}\right]^{\intercal}\), to the corresponding pixel on the 2D plane of an image, as defined by the column vector \(\left[image_{x}\quad image_{y}\right]^{\intercal}\). The origin of the coordinate system is at the center of the mirror ball, which itself is defined as a unit sphere, projecting onto the image plane as a unit disk. The term is a projection which expresses a virtual camera-view at the center of an infinitely large sphere, allowing one to look around the full \(360^{\circ}\) environment, as reflected by the mirror surface of the imaged sphere.
### Deriving the classical mirror ball projection term

The orthographic camera has its parallel view-rays defined by the incident ray column vector as written in Equation 1. As an orthographic camera captures the image, both the image's x,y-plane and the camera's x,y-plane are identical. \[\vec{i}=\left[0\quad 0\quad 1\right]^{\intercal} \tag{1}\] The law of reflection for a light ray hitting a smooth surface is defined by Eq. 2, with \(\vec{r}\) being the reflected ray, \(\vec{n}\) the surface normal, and the dot '\(\cdot\)' being the scalar product. \[\vec{r}=2\left(\vec{i}\cdot\vec{n}\right)\vec{n}-\vec{i} \tag{2}\] Per definition, the surface normal at each point of a unit sphere with its center at the origin equals the vector drawn from the origin to the surface at that point. With the orthographic camera's definition Eq. 1, the normal vector's x and y components coincide with the image's x,y-coordinates, leaving just the depth component of the normal, \(n_{z}\), as an unknown. Using that information, we can populate the law of reflection Eq. 2 and express the reflection vector \(\vec{r}\) with the image's xy-plane: \[\begin{bmatrix}r_{x}\\ r_{y}\\ r_{z}\end{bmatrix}=2\left(\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}\cdot\begin{bmatrix}image_{x}\\ image_{y}\\ n_{z}\end{bmatrix}\right)\begin{bmatrix}image_{x}\\ image_{y}\\ n_{z}\end{bmatrix}-\begin{bmatrix}0\\ 0\\ 1\end{bmatrix} \tag{3}\] Simplifying Eq. 3, it expresses this system of equations: \[\begin{cases}r_{x}=2n_{z}\;image_{x}\\ r_{y}=2n_{z}\;image_{y}\\ r_{z}=2n_{z}^{2}-1\end{cases} \tag{4}\] However, we need this system the other way around: we want to get the correct image pixel based on the reflection direction \(\vec{r}\). Solving the third equation of Eq. 4 for \(n_{z}\) gives us \[n_{z}=\sqrt{\frac{r_{z}+1}{2}} \tag{5}\] Finally, we may substitute \(n_{z}\) as defined by Eq. 5 into equations 1 and 2 of system Eq. 4. Simplifying the substitution gives us our mirror ball projection term: \[\begin{bmatrix}image_{x}\\ image_{y}\end{bmatrix}=\frac{1}{\sqrt{2(r_{z}+1)}}\begin{bmatrix}r_{x}\\ r_{y}\end{bmatrix} \tag{6}\]

### Extending the projection term

The classical projection term Eq. 6 suffers from distortion around the pole-point, as seen in Fig. 2(c), because it assumes an orthographic camera. A perspective camera obscures part of the mirror sphere as a function of sphere radius and distance from the center. The true pole-point of the projection is thus not actually captured. This issue may be side-stepped by use of other geometries, such as basing the mirror on a parabola, as shown in [4]. Staying with the mirror ball, the projection can also be adjusted to account for the mismatch between the mathematical model and the captured real-life mirror ball, by defining the blind spot inherent in capturing the mirror ball with a perspective camera. The missing information can be conceived as the cone-shaped 'solid angle' defined by \(360^{\circ}-\alpha\), \(180^{\circ}<\alpha<360^{\circ}\), where \(\alpha\) expresses the field of view that the mirror sphere reflects from the perspective of the camera's image. Any mapping from \(\vec{r}\) going outside of the sphere's view cone will remain undefined. We need to remap the existing information to fall only within the visible cone. This can be achieved by stretching the reflection ray's image mapping by the scalar \(\sin\left(\frac{\alpha}{4}\right)\).
This ratio stems from the amount of surface representing the reflected information decreasing by the sine of the reflected angle divided by 4, as we go from the middle point of the sphere to its edges. By stretching the reflection rays and allowing them to become undefined when going past the image plane's unit circle, we rigidly define the missing information. This shows up as a black circle in place of the classical mirror ball projection's pole-point, as depicted in Fig. 2(d). Note how the lines of the wall are parallel, as they are in real life, with the correction of the improved term. Including said stretching scalar results in the new and improved mirror ball projection term, shown in Equation 7. This term has been implemented in a demo WebApp, which can be accessed alongside sample photos and video footage on GitHub: github.com/FrostKiwi/MirrorBall \[\begin{bmatrix}image_{x}\\ image_{y}\end{bmatrix}=\frac{1}{\sqrt{2(r_{z}+1)}\sin\left(\frac{\alpha}{4}\right)}\begin{bmatrix}r_{x}\\ r_{y}\end{bmatrix} \tag{7}\]

Figure 2: Setup to show the distortion correction

## 3 Process monitoring use-case

A laser cladding process inside a DMG MORI 'LASERTEC 3000 DED hybrid' industry machine was monitored via the mirror ball projection, to determine its efficacy for process monitoring and its ability to merge multiple camera sources into one coherent environment projection. A color camera was placed outside the machine, capturing the mirror ball through the machine's protective glass. A thermal camera was placed inside the machine, opposite the color one, as visualized by the schematic in Fig. 3(a). A \(3k^{2}\)-pixel-resolution color live stream was captured and combined with a low-resolution infrared stream to augment the monitoring experience. A further \(4k^{2}\) px resolution color-only live stream has been made to compare the impact of resolution. Snippets from both the \(4k^{2}\) px and the combined-source live stream have been made available in the demo WebApp to be viewed by the reader and are also attached as supplementary material.

### Quality evaluation

As expected from the mirror ball projection, the resolution towards the pole-point is weak, but everywhere else the projection shows a well-defined image, with enough resolution to properly judge the state of the machine. Even so, fine adjustments of the machine's tools cannot be reliably performed based on this view, as the resolution stretched across the projection's \(360^{\circ}-\alpha\) field of view does not leave enough resolution to judge distances on the millimeter scale. Increasing the resolution does not necessarily constitute a solution, since the ball's surface blemishes also gain increased prominence. On the flip side, bringing a bigger sphere too close to the monitoring target influences the result via parallax, as discussed in 3.3. Additionally, two mirror balls of different size have been captured without the protective glass in the way, shown in Fig. 3(c) and 3(d). Based on the model, the smaller the ball, the wider its field of view when captured with a perspective camera. In reality, a ball which is too small needs a rather advanced lens to be properly focused. Furthermore, surface blemishes and scratches throw off the camera's auto-focus. The projection of the smaller mirror ball, Fig. 3(d), shows a softness, because the auto-focus accidentally focused on the scratches instead of the environment in the mirror image.
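Before continuing the comparison of ball sizes, note that the extended term in Eq. 7 is straightforward to implement. The following minimal Python sketch (an illustration, not the code of the linked WebApp) maps a unit reflection vector to image coordinates and returns None inside the blind spot:

```python
# Minimal sketch of the extended mirror ball projection term (Eq. 7).
import math

def mirror_ball_project(rx, ry, rz, alpha_deg=360.0):
    """alpha_deg is the field of view reflected by the sphere; alpha_deg = 360
    recovers the classical orthographic-camera term (Eq. 6). The direction
    rz = -1 (directly behind the ball) is the singular pole of the term."""
    scale = 1.0 / (math.sqrt(2.0 * (rz + 1.0)) * math.sin(math.radians(alpha_deg) / 4.0))
    ix, iy = scale * rx, scale * ry
    if ix * ix + iy * iy > 1.0:  # stretched past the unit disk: undefined
        return None
    return ix, iy

print(mirror_ball_project(0.0, 0.0, 1.0))         # straight ahead: image center
print(mirror_ball_project(1.0, 0.0, 0.0, 300.0))  # sideways, 300-degree sphere FoV
```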
On the flip side, the projection of the larger ball, Fig. 3(c), shows stronger parallax distortion, making image registration less reliable. Flipping between the two shows how the ball size changes the projection in slight ways, leading to an alignment which is not perfect.

### Multi-source registration

Multiple camera views capturing the same mirror ball projection can be rotated into each other by a 3x3 rotation matrix, achieving image registration. Fig. 1(a) and 1(b) show the resulting mirror ball video feeds during laser cladding. Note how both feeds show the hot metal sparks in different parts of the reflection. After correcting for the different rotations, both feeds are merged and can be switched between, as shown in Figures 1(c) and 1(d). Alignment settings become invalid as soon as a camera is moved, though, leading to a fragile setup. Even though the camera positions are different, the projections are, at least mathematically, in the exact same spot, thus having no parallax according to the model.

### Parallax

In reality the mirror balls have a physical size, so their reflections originate from the surface, creating a new kind of parallax, easily observed by viewing the mounting point of the mirror ball, which shows strong distortion. Without Z-depth data, this cannot be corrected for. Whether the parallax is important enough to influence this use-case is entirely dependent on the balance between ball size and distance to the monitoring target. As a general rule, targets further away than the ball's diameter preserve their shape in the projection well, though this is a subjective statement. Finally, aligning based on one target leads to misalignment in the rest of the projection, an effect more pronounced with increasing ball size.

## 4 Conclusion

This paper extended the classical mirror sphere projection term by a new distortion scalar \(\sin\left(\frac{\alpha}{4}\right)\) and its field of view parameter \(\alpha\). This parameter allows one to numerically express the distortion that is introduced by a perspective camera when capturing a mirror ball projection. By counteracting this distortion, a more accurate registration of multiple camera outputs was achieved when capturing the same mirror ball projection from different viewpoints. Monitoring a process via a mirror ball has highlighted the fragility of the setup and will not produce better quality results when compared to a wide-angle camera installed at the spot of the mirror ball. Whether the drawbacks of the mirror ball setup can justify its use will depend on the difficulty of setting up a more traditional vision system. It is a viable option, but only becomes a favorable one in the niche use-case of a camera not being installable at the monitoring spot.

Figure 3: Setup to determine viability of the mirror ball projection for process monitoring
2308.01644
Spectral Torsion
We introduce a trilinear functional of differential one-forms for a finitely summable regular spectral triple with a noncommutative residue. We demonstrate that for a canonical spectral triple over a closed spin manifold it recovers the torsion of the linear connection. We examine several spectral triples, including Hodge-de Rham, Einstein-Yang-Mills, almost-commutative two-sheeted space, conformally rescaled noncommutative tori, and quantum $SU(2)$ group, showing that the third one has a nonvanishing torsion if nontrivially coupled.
Ludwik Dąbrowski, Andrzej Sitarz, Paweł Zalecki
2023-08-03T09:21:03Z
http://arxiv.org/abs/2308.01644v3
# Spectral torsion

###### Abstract.

We introduce a trilinear functional of differential one-forms for a finitely summable regular spectral triple with a noncommutative residue. We demonstrate that for a canonical spectral triple over a closed spin manifold it recovers the torsion of the linear connection. We examine several spectral triples, including Hodge-de Rham, Einstein-Yang-Mills, almost-commutative two-sheeted space, conformally rescaled noncommutative tori, and quantum \(SU(2)\) group, showing that the third one has a nonvanishing torsion if nontrivially coupled.

Key words and phrases: spectral geometry, noncommutative geometry, torsion, noncommutative residue. 2010 Mathematics Subject Classification: 58B34, 46L87, 58J42, 83C65, 58J50. This work is supported by the Polish National Science Centre grant 2020/37/B/ST1/01540.

## 1. Introduction

The existence and uniqueness of a metric-compatible linear connection with vanishing torsion is one of the fundamental theorems of Riemannian geometry. Torsion appears quite naturally as a vector-valued two-form in this approach, and the assumption that it identically vanishes leads to the Levi-Civita connection, which solely depends on the metric. In the background of general relativity lies the torsion-free condition of Riemannian geometry, which very accurately describes the gravitational interaction of bodies. Torsion, on the other hand, has a physical interpretation as the quantity that measures the twisting of reference frames along geodesics, and it has been considered an independent field in physics in Einstein-Cartan theory [4]. Torsion, unlike gravitational fields, must vanish in a vacuum and does not propagate; however, its existence causes nonlinear interactions of matter with spin and has the potential to change the standard singularity theorems of General Relativity (see [16, 21] for a review of physical theories). The emergence of noncommutative geometry [5, 6], which generalises standard notions of differential geometry to an algebraic (or operator-algebraic) setup, has raised new questions about the concepts of linear connections, metric, and torsion. A transparent link between the different approaches, ranging from purely algebraic noncommutative differential geometry to the operator-algebraic spectral triple formalism, and a unifying view of metric, linear connection, and torsion have yet to be established. Although algebraic concepts of linear connections, torsion, and metric compatibility have been formulated, the existence of the Levi-Civita connection can only be proven in special cases (see [1, 2, 3] and the references therein). On the other hand, the spectral triple approach, with the Dirac operator being the fundamental object of geometry, has not yet been able to determine whether the constructed Dirac operators correspond to the Levi-Civita connection or whether they contain a nonvanishing torsion. So far, for that purpose, one could only try to minimise the spectral functional corresponding to the integrated scalar curvature while keeping the metric defined by the Dirac operator unchanged.
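For reference, the classical notions invoked above can be stated concisely: for a linear connection \(\nabla\) on a Riemannian manifold \((M,g)\), the torsion is the vector-valued two-form \[T\left(X,Y\right)=\nabla_{X}Y-\nabla_{Y}X-\left[X,Y\right],\] and the fundamental theorem of Riemannian geometry asserts that there is exactly one connection satisfying both metric compatibility, \(\nabla g=0\), and \(T\equiv 0\): the Levi-Civita connection.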
2301.02646
Trajectories for the Optimal Collection of Information
We study a scenario where an aircraft has multiple heterogeneous sensors collecting measurements to track a target vehicle of unknown location. The measurements are sampled along the flight path, and our goal is to optimize sensor placement to minimize estimation error. We select as a metric the Fisher Information Matrix (FIM), as "minimizing" the inverse of the FIM is required to achieve small estimation error. We propose to generate the optimal path from the Hamilton-Jacobi (HJ) partial differential equation (PDE) as it is the necessary and sufficient condition for optimality. A traditional method of lines (MOL) approach, based on a spatial grid, lends itself well to the highly non-linear and non-convex structure of the problem induced by the FIM matrix. However, the sensor placement problem results in a state space dimension that renders a naive MOL approach intractable. We present a new hybrid approach, whereby we decompose the state space into two parts: a smaller subspace that still uses a grid and takes advantage of the robustness to non-linearities and non-convexities, and the remaining state space that can be found efficiently from a system of ODEs, avoiding formation of a spatial grid.
Matthew R. Kirchner, David Grimsman, Joao P. Hespanha, Jason R. Marden
2023-01-06T18:47:44Z
http://arxiv.org/abs/2301.02646v2
# Trajectories for the Optimal Collection of Information

###### Abstract

We study a scenario where an aircraft has multiple heterogeneous sensors collecting measurements to track a target vehicle of unknown location. The measurements are sampled along the flight path, and our goal is to optimize sensor placement to minimize estimation error. We select as a metric the Fisher Information Matrix (FIM), as "minimizing" the inverse of the FIM is required to achieve small estimation error. We propose to generate the optimal path from the Hamilton-Jacobi (HJ) partial differential equation (PDE) as it is the necessary and sufficient condition for optimality. A traditional method of lines (MOL) approach, based on a spatial grid, lends itself well to the highly non-linear and non-convex structure of the problem induced by the FIM matrix. However, the sensor placement problem results in a state space dimension that renders a naive MOL approach intractable. We present a new hybrid approach, whereby we decompose the state space into two parts: a smaller subspace that still uses a grid and takes advantage of the robustness to non-linearities and non-convexities, and the remaining state space that can be found efficiently from a system of ODEs, avoiding formation of a spatial grid.

## 1 Introduction

We present a method to optimize vehicle trajectories to gain maximal information for target tracking problems. The scenario currently being studied is an aircraft receiving passive information from sensors rigidly mounted to the airframe. These sensors include, but are not limited to, infrared or visible-spectrum imagers, as well as RF receivers that measure the frequency shifts from an external transmitter. The measurements are sampled in order to determine the location of a target vehicle. The placement of the sensors is determined by the path of the aircraft, influencing how much information is gained as well as the overall effectiveness of estimating where the target is located. By optimizing the trajectory, we can achieve maximum information gain, and hence the greatest accuracy in localizing the target. This problem is a generalization of what appeared in [1], where the path of the vehicle was fixed and a subset of measurements was selected only from along this path. In this context we optimize a metric of the cumulative Fisher Information Matrix (FIM) of the aircraft path, which is motivated by its connection to the (Bayesian) Cramer-Rao lower bound [2]. The logdet metric is chosen as this gives a D-optimal estimate, essentially corresponding to minimizing the volume of the error ellipsoid, and additionally provides favorable numeric properties. It is worth noting that while the focus of this paper is the logdet metric, other metrics may be considered, provided the metric meets certain conditions that are outlined in what follows in the paper. Of particular interest would be the trace of the inverse metric, as that gives the A-optimal estimate, effectively minimizing the mean-square estimation error. Analysis of the trace-of-the-inverse metric is outside the scope of this paper and will be investigated in future work. We formulate the problem in such a way that the optimal value function satisfies a Hamilton-Jacobi (HJ) partial differential equation (PDE), from which the optimal trajectories immediately follow.

Figure 1: An illustration of the target tracking problem.
An aircraft collects measurements from sensors as it flies along a path, attempting to estimate the location of the ship, denoted here as \(\theta\). Modifying the path of the vehicle can greatly improve the estimation performance.

Naively, a solution of the corresponding HJ PDE using a grid-based method would have many advantages, since such methods handle the non-linear and non-convex problems that arise in FIM-based optimization. However, the sensor estimation problem induces a state space dimension that renders typical grid-based methods [3] for PDE solutions intractable due to the exponential dimensional scaling of such methods. Recognition of this problem is not new, and the phrase "curse of dimensionality" was coined decades ago by Richard Bellman [4]. This creates a large gap between the rigorous theory of HJ equations and practical implementation on many problems of interest, especially vehicle planning and coordination problems. New research has emerged in an attempt to bridge this technological gap, including trajectory optimization approaches [5, 6, 7], machine learning techniques [8, 9, 10], and sub-problem decomposition [11, 12]. The structure of the sensor placement problem lends itself well to the latter strategy. Unique in this context, though, is that we do not need to abandon spatial grids entirely, instead forming a hybrid approach. This leverages the strength of grid-based methods in dealing with the non-convexities that commonly arise when using the FIM matrix, but restricts their application to a small subspace of the problem. In what follows we formally introduce the sensor estimation problem and form its corresponding HJ PDE. We then proceed to show a new hybrid method of lines (MOL) approach that involves decomposing the state space, and conclude with simulated results of the optimal trajectories that result from heterogeneous sensors tracking the location of a mobile target. Section 2 shows how the information collecting problem gives rise to nonlinear dynamics with a cascade structure, in which the input only directly affects the first subcomponent of the state, whereas the optimization criterion only depends on the second subcomponent. Section 3 addresses the optimal control of this type of system using the HJ PDE and the classical MOL. Section 4 develops the theory needed for the new hybrid method of lines, which is applicable to systems in cascade form. This type of system arises naturally in information collection, but the hybrid method of lines can be applied to the optimal control of more general cascade systems. Section 5 specializes the hybrid MOL to the information collection problem. Section 6 includes simulation results for a particular vehicle model and sensor type.

## 2 The Vehicle Sensing Problem

We choose as our vehicle a Dubins car [13] and denote by \(\left(X,Y,\psi\right):=x\in\mathcal{X}:=\mathbb{R}^{2}\times\text{SO}\left(2\right)\) the vehicle state, where \(X\) and \(Y\) are the rectangular positional coordinates of the vehicle and \(\psi\) is the heading angle. The dynamics are defined by \[\frac{d}{ds}x\left(s\right)=f\left(x\left(s\right)\right)+Bu\left(s\right),\ \text{a.e.}\,s\in\left[0,t\right] \tag{1}\] where \[f\left(x\right)=\left[\begin{array}{c}v\cos\psi\\ v\sin\psi\\ 0\end{array}\right],\,B=\left[\begin{array}{c}0\\ 0\\ 1\end{array}\right], \tag{2}\] where \(u\left(s\right)\in U:=\left[-\omega_{\max},\omega_{\max}\right]\) is the allowable control set of turn rates and \(v\) is the fixed forward speed of the vehicle.
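As a concrete illustration of the dynamics (1)-(2), the following Python sketch integrates a Dubins path with forward Euler under a turn-rate input clipped to \([-\omega_{\max},\omega_{\max}]\); the numeric parameter values are illustrative assumptions, not taken from the paper's experiments.

```python
# Forward-Euler rollout of the Dubins dynamics (1)-(2): state x = (X, Y, psi).
import numpy as np

V = 1.0          # fixed forward speed v (illustrative)
OMEGA_MAX = 0.5  # turn-rate bound omega_max (illustrative)
B = np.array([0.0, 0.0, 1.0])

def f(x):
    """Drift term f(x) from Eq. (2)."""
    X, Y, psi = x
    return np.array([V * np.cos(psi), V * np.sin(psi), 0.0])

def rollout(x0, u_of_s, t, ds=1e-2):
    """Integrate dx/ds = f(x) + B u(s) on [0, t] with forward Euler."""
    x, path = np.asarray(x0, dtype=float), []
    for s in np.arange(0.0, t, ds):
        u = np.clip(u_of_s(s), -OMEGA_MAX, OMEGA_MAX)  # enforce u in U
        x = x + ds * (f(x) + B * u)
        path.append(x.copy())
    return np.array(path)

path = rollout([0.0, 0.0, 0.0], lambda s: 0.3, t=10.0)  # constant left turn
```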
The admissible control set is defined as \[U\left[0,t\right]:=\left\{u\left(\cdot\right):\left[0,t\right]\to U\,|\,u\left(\cdot\right)\text{ is measurable}\right\}. \tag{3}\] Our method applies to vehicles that can be expressed in the general form (1), which includes the Dubins vehicle in (2). The Dubins vehicle with bounded turning rate is particularly interesting because it is a low-dimensional model that generates trajectories that are easy to track by an aircraft flying at constant speed and altitude. The vehicle defined above has a group of rigidly attached sensors collecting measurements. The measurements, denoted as \(y\), are sampled in order to determine an unknown random variable, \(\theta\). The measurements are assumed to be random variables dependent on \(\theta\) with density function \[y\sim\rho\left(y|\theta\right).\] Assuming that all measurements \(y\) are conditionally independent given \(\theta\), the cumulative Bayesian Fisher Information Matrix (FIM) associated with the estimation of \(\theta\) is of the form \[\text{FIM}\left(t,x,u\left(\cdot\right)\right):=Q_{0}+\int_{0}^{t}Q\left(\gamma\left(s;x,u\left(\cdot\right)\right)\right)ds,\] where \[Q\left(x\right):=\mathbb{E}_{\theta}\left[Q\left(x;\theta\right)\right], \tag{4}\] with \[Q\left(x;\theta\right):=\mathbb{E}_{y}\left[\left(\frac{\partial\log\rho\left(y|\theta,x\right)}{\partial\theta}\right)^{\top}\left(\frac{\partial\log\rho\left(y|\theta,x\right)}{\partial\theta}\right)\right], \tag{5}\] and \[Q_{0}:=\mathbb{E}_{\theta}\left[\left(\frac{\partial\log\rho\left(\theta\right)}{\partial\theta}\right)^{\top}\left(\frac{\partial\log\rho\left(\theta\right)}{\partial\theta}\right)\right],\] where \(\rho\left(\theta\right)\) is the a-priori probability density function for \(\theta\). The formula above assumes a scenario where the measurement, \(y\left(t\right)\), is collected by one sensor or by multiple independent sensors that generate measurements at the same (constant) sampling rate. When multiple independent sensors collect measurements at constant but different sampling rates, the FIM can be factored for each sensor \(i\): \[Q\left(\gamma\left(s;x,u\left(\cdot\right)\right)\right)=\sum_{i}F^{i}Q^{i}\left(\gamma\left(s;x,u\left(\cdot\right)\right)\right),\] where \(F^{i}\) is the sampling rate of the \(i\)-th sensor. The above matrices are taken from [14], where the expectation over \(y\) in (5) is given in closed form for some distributions; see for example [1, Sec. 5]. While the outer expectation over \(\theta\) in (4) is rarely known in closed form, many approximation schemes can be employed, for example Monte Carlo sampling or Taylor series expansion. The placement of the sensors is determined by the path of the aircraft, influencing how much information is gained as well as the overall effectiveness of estimating \(\theta\). Therefore we optimize the trajectory to achieve maximum information gain, and hence the greatest performance in estimating \(\theta\) from the measurements \(y\).
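To illustrate (4)-(5) and the accumulation of information along a path, here is a hedged Python sketch for a hypothetical bearing-only sensor with Gaussian noise; the excerpt does not fix a measurement model, so the per-sample FIM below is an assumption, and the final line anticipates the logdet cost (6) defined next.

```python
# Cumulative FIM along a path for a hypothetical bearing-only sensor: for a
# known target theta, the per-sample FIM is grad_h^T grad_h / sigma^2, where
# h is the bearing from the vehicle position to theta.
import numpy as np

SIGMA2 = 0.01  # bearing noise variance (illustrative)

def Q_sample(x, theta):
    a, b = theta[1] - x[1], theta[0] - x[0]  # (theta_y - Y, theta_x - X)
    r2 = a * a + b * b
    grad_h = np.array([[-a / r2, b / r2]])   # d(bearing)/d(theta)
    return grad_h.T @ grad_h / SIGMA2

def logdet_cost(path_xy, theta, Q0, ds):
    """-logdet of the cumulative FIM plus logdet(Q0), cf. the cost (6) below."""
    fim = Q0.copy()
    for x in path_xy:  # Riemann-sum approximation of the integral
        fim += ds * Q_sample(x, theta)
    return -np.linalg.slogdet(fim)[1] + np.linalg.slogdet(Q0)[1]

path_xy = np.stack([np.linspace(0.0, 3.0, 300), np.zeros(300)], axis=1)  # toy path
print(logdet_cost(path_xy, np.array([5.0, 5.0]), Q0=0.1 * np.eye(2), ds=1e-2))
```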
For a given initial state \(x\in\mathcal{X}\) and terminal time \(t\in\left[0,\infty\right)\), we define the following cost functional: \[J\left(t,x,u\left(\cdot\right)\right):=G\left(\text{FIM}\left(t,x,u\left(\cdot\right)\right)\right)+\log\det\left(Q_{0}\right), \tag{6}\] where \[G\left(Z\right):=-\log\det\left(Z\right).\] We denote by \(V\left(t,x\right)\) the value function defined as \[V\left(t,x\right)=\inf_{u\left(\cdot\right)\in U\left[0,t\right]}J\left(t,x,u\left(\cdot\right)\right), \tag{7}\] which can be interpreted as the maximal information gain for a family of trajectory optimization problems parameterized by initial state \(x\in\mathcal{X}\) and terminal time \(t\in\left[0,\infty\right)\). The cost functional in (6) is not in a standard form, so we convert the problem into a common standard, the so-called Mayer form. To do this we augment the state vector with \(z\in\mathcal{Z}:=\text{dom}\left(G\right)\). Our new state becomes \[\chi:=\left(x,z\right)^{\top},\] with augmented dynamics \[\frac{d}{ds}\chi\left(s\right)=\hat{f}\left(\chi\left(s\right),u\left(s\right)\right)=\left[\begin{array}{c}f\left(x\left(s\right)\right)\\ \ell\left(x\left(s\right)\right)\end{array}\right]+\left[\begin{array}{c}B\\ \mathbf{0}\end{array}\right]u\left(s\right), \tag{8}\] with \[\ell\left(x\left(s\right)\right):=\text{vec}\left(Q\left(x\left(s\right)\right)\right),\] where vec is the vectorize operator that reshapes a matrix into a column vector and \(\mathbf{0}\) is a vector of zeros with the same number of elements as the augmented variable \(z\). If we fix the \(z\) initial condition such that \[z=\text{vec}\left(Q_{0}\right), \tag{9}\] then the cost functional (6) can be equivalently written as \[J\left(t,x,u\left(\cdot\right)\right)=J\left(t,\chi,u\left(\cdot\right)\right)=G\left(\text{vec}^{-1}\left(z\right)\right), \tag{10}\] where we denote by \(Z=\text{vec}^{-1}\left(z\right)\) the inverse operator such that \[\text{vec}\left(\text{vec}^{-1}\left(z\right)\right)=z.\] Hereafter we denote by \(\tilde{G}\) the function \(G\) with the input reshaped as a function of \(z\): \[\tilde{G}\left(z\right):=G\left(\text{vec}^{-1}\left(z\right)\right). \tag{11}\] Likewise the value function is equivalently written as \[V\left(t,\chi\right)=\inf_{u\left(\cdot\right)\in U\left[0,t\right]}J\left(t,\chi,u\left(\cdot\right)\right). \tag{12}\]

## 3 Decomposition of Coupled Systems

The approach we will develop to solve (12) is applicable to a more general class of cascade systems that we introduce in this section, and for which we discuss the use of HJ methods for optimal control. Denote by \(\chi:=\left(x,z\right)^{\top}\) where \(x\in\mathcal{X}=\mathbb{R}^{n}\) and \(z\in\mathcal{Z}=\mathbb{R}^{m}\). The state has coupled dynamics as follows: \[\begin{cases}\dot{x}\left(s\right)=f\left(x\left(s\right)\right)+g\left(x\left(s\right)\right)u\left(s\right)\quad\text{a.e. }s\in\left[0,t\right]\\ \dot{z}\left(s\right)=\ell\left(x\left(s\right)\right),\end{cases} \tag{13}\] with \(u\in U\), where \(U\) is a closed convex set. We denote by \(\left[0,t\right]\ni s\mapsto\gamma\left(s;x_{0},u\left(\cdot\right)\right)\in\mathbb{R}^{n}\) the \(x\) state trajectory that evolves in time according to (13), starting from initial state \(x_{0}\) at \(t=0\).
The trajectory \(\gamma\) is a solution of (13) in that it satisfies (13) almost everywhere: \[\begin{cases}\dot{\gamma}\left(s;x_{0},u\left(\cdot\right)\right)=f\left(\gamma\left(s;x_{0},u\left(\cdot\right)\right)\right)+g\left(\gamma\left(s;x_{0},u\left(\cdot\right)\right)\right)u,\\ \gamma\left(0;x_{0},u\left(\cdot\right)\right)=x_{0}.\end{cases} \tag{14}\] Likewise, we denote by \(\left[0,t\right]\ni s\mapsto\xi\left(s;\chi_{0},u\left(\cdot\right)\right)\) the trajectory of the \(z\) variable, and it satisfies the following almost everywhere: \[\begin{cases}\frac{d}{ds}\xi\left(s;\chi_{0},u\left(\cdot\right)\right)=\ell\left(\gamma\left(s;x_{0},u\left(\cdot\right)\right)\right),\\ \xi\left(0;\chi_{0},u\left(\cdot\right)\right)=z_{0}.\end{cases} \tag{15}\] Note that the trajectory can be found directly from the expression: \[\xi\left(s;\chi_{0},u\left(\cdot\right)\right):=z_{0}+\int_{0}^{s}\ell\left(\gamma\left(\tau;x_{0},u\left(\cdot\right)\right)\right)d\tau. \tag{16}\] Denote by \(G:\mathbb{R}^{m}\rightarrow\mathbb{R}\) the terminal cost function, i.e. the mapping \[\mathcal{Z}\ni z\mapsto G\left(z\right)\in\mathbb{R}.\] We define the cost functional \[J\left(t,\chi,u\left(\cdot\right)\right):=G\left(\xi\left(t;\chi,u\left(\cdot\right)\right)\right),\] and the associated value function as \[V\left(t,\chi\right):=\inf_{u\left(\cdot\right)\in U\left[0,t\right]}J\left(t,\chi,u\left(\cdot\right)\right),\] where \(U\left[0,t\right]\) is defined as in (3). We denote by \[\hat{f}\left(\chi,u\right):=\left[\begin{array}{c}f\left(x\right)+g\left(x\right)u\\ \ell\left(x\right)\end{array}\right]\] the joint vector field in (13). We assume that \(\hat{f}\), \(U\), and \(G\) satisfy the following regularity assumptions:

(F1) \(\left(U,d\right)\) is a separable metric space.

(F2)
The maps \(\hat{f}:\mathcal{X}\times U\rightarrow\mathbb{R}^{n+m}\) and \(G:\mathcal{Z}\rightarrow\mathbb{R}\) are measurable, and there exists a constant \(L>0\) and a modulus of continuity \(\omega:\left[0,\infty\right)\rightarrow\left[0,\infty\right)\) such that for \(\varphi\left(\chi,u\right)=\hat{f}\left(\chi,u\right),G\left(z\right)\), we have for all \(\chi,\chi^{\prime}\in\mathcal{X}\times\mathcal{Z}\) and \(u,u^{\prime}\in U\) \[\left|\varphi\left(\chi,u\right)-\varphi\left(\chi^{\prime},u^{\prime}\right)\right|\leq L\left\|\chi-\chi^{\prime}\right\|+\omega\left(d\left(u,u^{\prime}\right)\right),\] and \[\left|\varphi\left(\mathbf{0},u\right)\right|\leq L.\]

(F3) The maps \(\hat{f}\) and \(G\) are \(C^{1}\) in \(\chi\), and there exists a modulus of continuity \(\omega:\left[0,\infty\right)\rightarrow\left[0,\infty\right)\) such that for \(\varphi\left(\chi,u\right)=\hat{f}\left(\chi,u\right),G\left(z\right)\), we have for all \(\chi,\chi^{\prime}\in\mathcal{X}\times\mathcal{Z}\) and \(u,u^{\prime}\in U\) \[\left|\varphi_{\chi}\left(\chi,u\right)-\varphi_{\chi}\left(\chi^{\prime},u^{\prime}\right)\right|\leq\omega\left(\left\|\chi-\chi^{\prime}\right\|+d\left(u,u^{\prime}\right)\right).\]

#### Hamilton-Jacobi Formulation

Under a set of mild Lipschitz continuity assumptions, there exists a unique value function that satisfies the following Hamilton-Jacobi (HJ) equation [15], with \(V\left(t,\chi\right)\) being the viscosity solution of the partial differential equation (PDE) for \(s\in\left[0,t\right]\): \[V_{s}\left(s,\chi\right)+\mathcal{H}\left(\chi,V_{\chi}\left(s,\chi\right)\right)=0, \tag{17}\] \[V\left(0,\chi\right)=G\left(z\right),\] where \(\sigma:=\left(p,\lambda\right)^{\top}\) and \[\mathcal{H}\left(\chi,\sigma\right):=\min_{u\in U}H\left(\chi,u,\sigma\right), \tag{18}\] with the Hamiltonian, \(H\), defined by \[\begin{split}H\left(\chi,u,\sigma\right)&=\left\langle\left[\begin{array}{c}f\left(x\right)+g\left(x\right)u\\ \ell\left(x\right)\end{array}\right],\left[\begin{array}{c}p\\ \lambda\end{array}\right]\right\rangle\\ &=\left\langle f\left(x\right),p\right\rangle+\left\langle g\left(x\right)u,p\right\rangle+\left\langle\ell\left(x\right),\lambda\right\rangle.\end{split}\] In the case where the set \(U\) is bounded by a norm, i.e. \[U=\left\{u\in\mathbb{R}^{n_{u}}|\left\|u\right\|\leq c\right\} \tag{19}\] for some \(c\), then (18) is given in closed form by \[\mathcal{H}\left(\chi,\sigma\right)=\left\langle f\left(x\right),p\right\rangle-c\left\|g\left(x\right)^{\top}p\right\|_{*}+\left\langle\ell\left(x\right),\lambda\right\rangle, \tag{20}\] where \(\left\|\cdot\right\|_{*}\) is the dual norm to \(\left\|\cdot\right\|\) in (19). We denote by \(\pi\) the control that optimizes the Hamiltonian; it is given by \[\pi\left(s,\chi\right):=\underset{u\in U}{\arg\min}\,H\left(\chi,u,V_{\chi}\left(s,\chi\right)\right).\] We note here that under mild assumptions, the viscosity solution of (17) is Lipschitz continuous in both \(s\) and \(\chi\) [16, Theorem 2.5, p. 165]. This implies, by Rademacher's theorem [17, Theorem 3.1.6, p. 216], that the value function is differentiable almost everywhere. For what follows, we assume that the value function has continuous first and second derivatives.
The points where this fails to be true exist only on a set of measure zero, and any practical implementation of the method presented will only evaluate points where the first and second derivatives exist. A characterization of the differentiability of the value function is outside the scope of this paper, and a fully rigorous treatment will appear in forthcoming work.

_Necessary Conditions of the Optimal Trajectories_

Fix \(x\in\mathcal{X}\) and \(z\in\mathcal{Z}\) as initial conditions and fix the terminal time \(t\). Denote by \(\bar{\gamma}\left(s\right)\) and \(\bar{\xi}\left(s\right)\) the optimal state trajectories such that \[\bar{\gamma}\left(s\right):=\bar{\gamma}\left(s;\chi\right)=\gamma\left(s;x,\bar{u}\left(\cdot;\chi\right)\right),\] and \[\bar{\xi}\left(s\right):=\bar{\xi}\left(s;\chi\right)=\xi\left(s;x,z,\bar{u}\left(\cdot;\chi\right)\right),\] such that \(\bar{u}\) optimizes (12). By Pontryagin's theorem [18] there exist adjoint trajectories \(p\left(s\right):=p\left(s;\chi\right)\) and \(\lambda\left(s\right):=\lambda\left(s;\chi\right)\) such that the function \[\left[0,t\right]\ni s\mapsto\left(\bar{\gamma}\left(s\right),\bar{\xi}\left(s\right),p\left(s\right),\lambda\left(s\right)\right) \tag{21}\] is a solution of the characteristic system \[\begin{cases}\dot{\bar{\gamma}}\left(s\right)=f\left(\bar{\gamma}\left(s\right)\right)+g\left(\bar{\gamma}\left(s\right)\right)\bar{u}\left(s\right),\\ \dot{\bar{\xi}}\left(s\right)=\ell\left(\bar{\gamma}\left(s\right)\right),\\ \dot{p}\left(s\right)=-H_{x}\left(\bar{\gamma}\left(s\right),\bar{u}\left(s\right),p\left(s\right),\lambda\left(s\right)\right),\\ \dot{\lambda}\left(s\right)=-H_{z}=0,\end{cases} \tag{22}\] almost everywhere \(s\in\left[0,t\right]\), with boundary conditions \(\bar{\gamma}\left(0\right)=x\), \(\bar{\xi}\left(0\right)=z\), \(p\left(t\right)=0\), and \(\lambda\left(t\right)=G_{z}\left(\bar{\xi}\left(t\right)\right)\).

_Numerical Approximations of Viscosity Solutions to First-Order Hyperbolic PDEs_

Traditional methods for computing the viscosity solution to (17) rely on constructing a discrete grid of points. This is typically chosen as a Cartesian grid, but many other grid types exist. The value function is found using a _method of lines_ (MOL) approach by solving the following family of ODEs, pointwise at each grid point \(\chi^{k}=\left(x^{k},z^{k}\right)\in\mathcal{S}:=\mathcal{X}\times\mathcal{Z}\): \[\begin{cases}\dot{\phi}\left(s,\chi^{k}\right)=-\mathcal{H}\left(\chi^{k},D_{\chi}\phi\left(s,\chi^{k}\right)\right),&s\in\left[0,t\right]\\ \phi\left(0,\chi^{k}\right)=G\left(z^{k}\right),\end{cases} \tag{23}\] where \(\phi\left(s,\chi^{k}\right)\) should be viewed as an approximation to the value function \(V\left(s,\chi^{k}\right)\) in (17) and \[D_{\chi}\phi\left(s,\chi^{k}\right)\approx\phi_{\chi}\left(s,\chi^{k}\right)\] is obtained by a finite difference scheme used to approximate the gradient of \(\phi\) at grid point \(k\). Care must be taken when evaluating finite differences of possibly non-smooth functions, and the family of _Essentially Non-Oscillatory_ (ENO) methods was developed to address this issue [19]. The advantage of the method of lines is that we can compute (23) independently at each grid point, with \(\phi\left(t,\chi^{k}\right)\approx V\left(t,\chi^{k}\right)\). Under certain conditions, for example the Lax-Richtmyer equivalence theorem [20], \[\Delta s\to 0,\,\Delta\chi\to 0\implies\phi\left(t,\chi^{k}\right)\to V\left(t,\chi^{k}\right)\] when the scheme is both consistent, i.e. the error between \(\phi\left(t,\chi^{k}\right)\) and \(V\left(t,\chi^{k}\right)\) tends to zero, and stable. In this case, stability is enforced when the time step, \(\Delta s\), satisfies the Courant-Friedrichs-Lewy (CFL) condition [21]. When the HJ equation is a non-linear PDE, a Lax-Friedrichs approximation [22, 23] is additionally needed to ensure stability.
In the Lax-Friedrichs method the Hamiltonian in (23) is replaced by \[\hat{\mathcal{H}}\left(\chi,\sigma^{+},\sigma^{-}\right):=\mathcal{H}\left(\chi,\frac{\sigma^{+}+\sigma^{-}}{2}\right)-\nu\left(\chi\right)^{\top}\left(\frac{\sigma^{+}-\sigma^{-}}{2}\right),\] where the inputs \(D_{\chi}^{+}\phi\left(s,\chi^{k}\right)\rightarrow\sigma^{+}\) and \(D_{\chi}^{-}\phi\left(s,\chi^{k}\right)\rightarrow\sigma^{-}\) are the right- and left-side biased finite differencing approximations to the gradient, respectively. The term \(\nu\left(\chi\right)\) is the artificial dissipation and depends on \(H_{\sigma}\left(\chi,\sigma\right)\), the gradient of the Hamiltonian with respect to the adjoint variable. The MOL approach in (23) becomes \[\begin{cases}\dot{\phi}\left(s,\chi^{k}\right)=-\hat{\mathcal{H}}\left(\chi^{k},D_{\chi}^{+}\phi\left(s,\chi^{k}\right),D_{\chi}^{-}\phi\left(s,\chi^{k}\right)\right),\\ \phi\left(0,\chi^{k}\right)=G\left(z^{k}\right).\end{cases} \tag{24}\] In general, no closed-form solution exists to (24), and therefore an explicit Runge-Kutta scheme is employed. If the first-order Euler method is used to solve (24), then we have the following time-marching scheme for \(s\in\left[0,t\right]\): \[\begin{cases}\phi\left(s+\Delta s,\chi^{k}\right)=\phi\left(s,\chi^{k}\right)-\Delta s\,\hat{\mathcal{H}}\left(\chi^{k},D_{\chi}^{+}\phi\left(s,\chi^{k}\right),D_{\chi}^{-}\phi\left(s,\chi^{k}\right)\right),\\ \phi\left(0,\chi^{k}\right)=G\left(z^{k}\right).\end{cases} \tag{25}\] The reader is encouraged to read [3] for a comprehensive review of numerical methods for solving first-order hyperbolic HJ PDEs.

## 4 HJB Decomposition

We are especially interested in problems for which the \(x\)-component of the state in (13) has a relatively small dimension, but the \(z\)-component does not. This is common in the vehicle sensing problem discussed in Section 2, because the dimension of \(z\) scales with the square of the number of parameters to be estimated; therefore, even for simple vehicle dynamics and a relatively small number of parameters, the dimension of the state \(\chi\) is too large to apply (25). To overcome this challenge, we present a hybrid method of lines that uses a grid over \(x\), but no grid over \(z\). A key challenge in creating such a method is to find a closed-form expression for the gradient of the value function with respect to \(z\), so as to avoid finite differencing schemes in \(z\). Taking advantage of the specific structure of the problem, we show that we can use a grid over the state variable \(x\) to compute \(D_{x}\phi\left(s,\chi^{k}\right)\approx\phi_{x}\left(s,\chi^{k}\right)\) with finite differences, but avoid a grid over the state variable \(z\) by solving a family of ODEs to compute \(D_{z}\phi\left(s,\chi^{k}\right)\). This is supported by the following theorem.

**Theorem 1**.: _Suppose the value function \(V\left(s,\chi\right)\) is twice differentiable at \(\left(s,\chi\right)\in\left[0,\infty\right)\times\mathcal{S}\)._
Then at any point \(\chi\), the gradient of the value function with respect to \(z\) can be found using the following ODE:_ \[\begin{cases}\dot{V}_{z}\left(s,\chi\right)=-\frac{\partial}{\partial z}\left< G_{z}\left(\bar{\xi}\left(s\right)\right),\ell\left(x\right)\right>\\ \qquad\qquad\qquad-R_{x}\left(s,\chi,\pi\left(s,\chi\right),f\left(x\right),g \left(x\right)\right),\\ V_{z}\left(0,\chi\right)=G_{z}\left(z\right),\end{cases} \tag{26}\] _where_ \[R_{x}\left(s,\chi,u,\alpha,\beta\right):= \frac{\partial}{\partial x}\Big{\{}\left<G_{z}\left(\bar{\xi} \left(s\right)\right),\alpha\right> \tag{27}\] \[+\left<G_{z}\left(\bar{\xi}\left(s\right)\right),\beta u\right> \Big{\}}. \tag{28}\] The proof of Theorem 1 requires the following technical lemma. **Lemma 2**.: _Suppose that the gradient \(V_{z}\left(t,\chi\right)\) exists at \(\left(t,\chi\right)\in\left[0,\infty\right)\times\mathcal{S}\). Then the gradient of the value function with respect to the augmented variable is given by_ \[V_{z}\left(t,\chi\right)=G_{z}\left(\bar{\xi}\left(t;\chi\right)\right).\] Proof.: Recalling \(\left(\ref{eq:16}\right)\) and applying the optimal control sequence, we have \[\bar{\xi}\left(s\right)=z+\int_{0}^{s}\ell\left(\bar{\gamma}\left(\tau\right) \right)d\tau.\] Therefore \[G_{z}\left(z+\int_{0}^{t}\ell\left(\bar{\gamma}\left(\tau\right)\right)d\tau \right)=G_{z}\left(\bar{\xi}\left(t\right)\right):=\lambda\left(t\right). \tag{29}\] Recognize that \(\left(\ref{eq:29}\right)\) is the boundary condition of the characteristic system \(\left(\ref{eq:22}\right)\), and that \[V_{z}\left(t,\chi\right)=\lambda\left(0\right),\] where this equality uses the connection between the adjoint variable, \(\lambda\), and the value function [16, Theorem 3.4, p. 235]. Observing that the Hamiltonian \(\left(\ref{eq:20}\right)\) does not depend on the argument \(z\), it follows that \[\mathcal{H}_{z}\left(\bar{\gamma}\left(s\right),\bar{\xi}\left(s\right),p \left(s\right),\lambda\left(s\right)\right)=0,\,s\in\left[0,t\right],\] so that \(\lambda\left(s\right)\) is constant on \(\left[0,t\right]\); combining this with \(\left(\ref{eq:29}\right)\) leads to \[V_{z}\left(t,\chi\right)=G_{z}\left(\bar{\xi}\left(t\right)\right).\] We now proceed to the proof of Theorem 1. 
Proof.: Fix \(x\) and \(z\), and note the original HJB equation \(\left(\ref{eq:17}\right)\): \[\dot{V}_{z}\left(s,\chi\right) =\frac{\partial}{\partial s}\left\{V_{z}\left(s,\chi\right)\right\}\] \[=\frac{\partial}{\partial z}\left\{V_{s}\left(s,\chi\right)\right\}\] \[=\frac{\partial}{\partial z}\left\{-\mathcal{H}\left(\chi,V_{x} \left(s,\chi\right),V_{z}\left(s,\chi\right)\right)\right\}.\] From the definition of the Hamiltonian, \[\dot{V}_{z}\left(s,\chi\right)=\frac{\partial}{\partial z}\Bigg{\{} -\left<V_{z}\left(s,\chi\right),\ell\left(x\right)\right>-\left<V_{x}\left(s, \chi\right),f\left(x\right)\right>\] \[-\min_{u\in U}\left<V_{x}\left(s,\chi\right),g\left(x\right)u \right>\Bigg{\}}.\] Fix time \(s\in\left[0,t\right]\), and define the function \[\varphi^{s}\left(\chi\right):=\min_{u\in U}F^{s}\left(\chi,u\right),\] where \[F^{s}\left(\chi,u\right):=\left<V_{x}\left(s,\chi\right),g\left(x\right)u \right>,\] and recall that \[\pi\left(s,\chi\right):=\operatorname*{arg\,min}_{u\in U}\left<V_{x}\left(s, \chi\right),g\left(x\right)u\right>.\] Since by assumption both \(V_{x}\left(s,\chi\right)\) and \(V_{zx}\left(s,\chi\right)\) exist, and \(F^{s}\left(\chi,u\right)\) is differentiable at \(\chi\), the gradient of \(\varphi^{s}\) can be found [24, Theorem 4.13] with the following relation: \[\varphi_{z}^{s}\left(\chi\right)=F_{z}^{s}\left(\chi,\pi\left(s,\chi\right) \right).\] This gives \[\dot{V}_{z}\left(s,\chi\right)= -\frac{\partial}{\partial z}\left\{\left<V_{z}\left(s,\chi\right), \ell\left(x\right)\right>\right\}\] \[-\frac{\partial}{\partial z}\left\{\left<V_{x}\left(s,\chi\right), \alpha\right>\right\}\bigg{|}_{\alpha=f\left(x\right)}\] \[-\frac{\partial}{\partial z}\left\{\left<V_{x}\left(s,\chi\right), \beta u\right>\right\}\bigg{|}_{u=\pi\left(s,\chi\right),\beta=g\left(x \right)}.\] Noting the symmetry of the mixed partial derivatives with respect to \(x\) and \(z\), we have \[\dot{V}_{z}\left(s,\chi\right)= -\frac{\partial}{\partial z}\left\{\left\langle V_{z}\left(s,\chi \right),\ell\left(x\right)\right\rangle\right\}\] \[-\frac{\partial}{\partial x}\left\{\left\langle V_{z}\left(s,\chi \right),\alpha\right\rangle\right\}\bigg{|}_{\alpha=f\left(x\right)}\] \[-\frac{\partial}{\partial x}\left\{\left\langle V_{z}\left(s, \chi\right),\beta u\right\rangle\right\}\bigg{|}_{u=\pi\left(s,\chi\right), \beta=g\left(x\right)},\] and then applying Lemma 2, the result follows. _Method of Lines with State Space Decomposition_ Recall that we denote by \(\phi\left(s,\chi\right)\) the numeric approximation to the value function, \(V\left(s,\chi\right)\). The proposed hybrid MOL relies on an approximation \(D_{x}\phi\left(s,\chi\right)\) of the gradient of the value function with respect to \(x\), \(V_{x}\left(s,\chi\right)\), that is based on the Lax-Friedrichs approximation. However, the approximation \(\Phi\left(s,\chi\right)\) of the gradient of the value function with respect to \(z\), \(V_{z}\left(s,\chi\right)\), is obtained by solving an ODE in time and does not require a spatial grid. In view of this, the method computes the two approximations \(\phi\left(s,x^{k},z\right)\) and \(\Phi\left(s,x^{k},z\right)\) on points \(\left(x^{k},z\right)\in\mathcal{S}\), where the \(x^{k}\) are restricted to a finite grid of the \(x\)-component of the state, whereas \(z\) is not restricted to a grid. To accomplish this, we need the following assumption that, together with Theorem 1, leads to the following MOL. 
Suppose that the first term in (…)
…where \(e_{i}\) denotes the unit vector with a one in the \(i\)-th element and zeros elsewhere. We now have \[\frac{\partial}{\partial Z_{ij}}\text{tr}\left(\bar{\Xi}\left(z \right)^{-1}Q\left(x\right)\right)\] \[=-\text{tr}\left(\bar{\Xi}\left(z\right)^{-1}e_{i}e_{j}^{\top} \bar{\Xi}\left(z\right)^{-1}Q\left(x\right)\right)\] \[=-\text{tr}\left(e_{j}^{\top}\bar{\Xi}\left(z\right)^{-1}Q\left( x\right)\bar{\Xi}\left(z\right)^{-1}e_{i}\right)\] \[=-e_{j}^{\top}\bar{\Xi}\left(z\right)^{-1}Q\left(x\right)\bar{\Xi }\left(z\right)^{-1}e_{i}\] \[=-\left[\bar{\Xi}\left(z\right)^{-1}Q\left(x\right)\bar{\Xi}\left( z\right)^{-1}\right]_{ji}\] \[=-\left[\text{vec}^{-1}\left(G_{z}\left(\bar{\xi}\left(s\right) \right)\right)\cdot Q\left(x\right)\cdot\text{vec}^{-1}\left(G_{z}\left(\bar{ \xi}\left(s\right)\right)\right)\right]_{ji},\] and the result follows. Note that Lemma 3 gives us the relation in (…).
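As a quick sanity check on the matrix-calculus identity above, the following sketch (our illustration, not part of the paper's solver) compares the closed-form derivative \(-\left[\bar{\Xi}^{-1}Q\bar{\Xi}^{-1}\right]_{ji}\) against a central finite-difference approximation, using random stand-ins for \(\bar{\Xi}\left(z\right)\) and \(Q\left(x\right)\).

```python
import numpy as np

# Finite-difference check of  d/dXi_ij  tr(Xi^{-1} Q) = -[Xi^{-1} Q Xi^{-1}]_{ji}.
rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n))
Xi = M @ M.T + n * np.eye(n)          # symmetric positive definite stand-in
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                     # symmetric stand-in

closed_form = -np.linalg.inv(Xi) @ Q @ np.linalg.inv(Xi)

h = 1e-6
fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = h          # perturb entry (i, j) only
        f_plus = np.trace(np.linalg.inv(Xi + E) @ Q)
        f_minus = np.trace(np.linalg.inv(Xi - E) @ Q)
        fd[i, j] = (f_plus - f_minus) / (2 * h)

# the identity returns the (j, i) entry, hence the transpose below
print(np.max(np.abs(fd - closed_form.T)))          # ~1e-9
```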
… placement of sensors. This method provides for robustness, where needed, in the \(x\) subspace by using a classic grid approach with finite differencing. It avoids a grid in the \(z\) subspace and hence scales well with the number of \(z\) dimensions. We applied this to a trajectory optimization problem where the goal is to find the trajectory that minimizes the estimation error from the measurements collected along the calculated path. Future work includes investigating metrics other than logdet, such as the trace of the inverse, and studying whether the hybrid method of lines approach can be generalized to a broader class of systems. ## Appendix A Hamiltonian Regularity Assumptions Let \(n\) be the dimension of the augmented state variable \(\chi\), define \(\sigma:=\left(p,\lambda\right)^{\top}\), and with a slight abuse of notation note that \(\mathcal{H}\left(s,\chi,\sigma\right)=\mathcal{H}\left(s,x,z,\sigma\right)= \mathcal{H}\left(s,x,z,p,\lambda\right)\) and vice versa. We introduce a set of mild regularity assumptions: 1. The Hamiltonian \[\left[0,t\right]\times\mathcal{X}\times\mathcal{Z}\times\mathbb{R}^{n}\ni \left(s,x,z,p,\lambda\right)\mapsto\mathcal{H}\left(s,x,z,p,\lambda\right)\in \mathbb{R}\] is continuous. 2. There exists a constant \(c>0\) such that for all \(\left(s,x,z\right)\in\left[0,t\right]\times\mathcal{X}\times\mathcal{Z}\) and for all \(\sigma^{\prime},\sigma^{\prime\prime}\in\mathbb{R}^{n}\), the following inequalities hold \[\left|\mathcal{H}\left(s,x,z,\sigma^{\prime}\right)-\mathcal{H}\left(s,x,z, \sigma^{\prime\prime}\right)\right|\leq\kappa_{1}\left(\chi\right)\left\| \sigma^{\prime}-\sigma^{\prime\prime}\right\|,\] and \[\left|\mathcal{H}\left(s,x,z,\mathbf{0}\right)\right|\leq\kappa_{1}\left(\chi \right),\] with \(\kappa_{1}\left(\chi\right)=c\left(1+\left\|\chi\right\|\right)\). 3. 
For any compact set \(M\subset\mathbb{R}^{n}\) there exists a constant \(C\left(M\right)>0\) such that for all \(\chi^{\prime},\chi^{\prime\prime}\in M\) and for all \(\left(s,\sigma\right)\in\left[0,t\right]\times\mathbb{R}^{n}\) the following inequality holds \[\left|\mathcal{H}\left(s,\chi^{\prime},\sigma\right)-\mathcal{H}\left(s,\chi ^{\prime\prime},\sigma\right)\right|\leq\kappa_{2}\left(\sigma\right)\left\| \chi^{\prime}-\chi^{\prime\prime}\right\|,\] with \(\kappa_{2}\left(\sigma\right)=C\left(M\right)\left(1+\left\|\sigma\right\|\right)\). 4. The terminal cost function \[\mathbb{R}^{n}\ni\chi\mapsto G\left(\chi\right)\in\mathbb{R},\] is continuous. Next we present an important theorem on the existence and uniqueness of viscosity solutions of the Hamilton-Jacobi equation. **Theorem 4** ([28, Theorem II.8.1, p. 70]).: _Let assumptions \(\left(H1\right)-\left(H4\right)\) hold. Then there exists a unique viscosity solution to \(\left(\ref{eq:1}\right)\)._ ## Appendix B Supporting Propositions **Proposition 5**.: _Let \(\chi\in\mathcal{S}\), then_ \[\frac{\partial}{\partial z}\xi\left(t;\chi,\bar{u}\left(\cdot\right)\right)=I.\] Proof.: By assumption, the terminal point of the state trajectory \(\zeta\left(t;\chi,\bar{u}\left(\cdot\right)\right)\) is differentiable with respect to the initial condition \(\chi\in\mathcal{S}\). Defining the Jacobian, for \(s\in\left[0,t\right]\), \[m\left(s\right):=\left[\begin{array}{cc}m_{xx}\left(s\right)&m_{xz}\left(s\right)\\ m_{zx}\left(s\right)&m_{zz}\left(s\right)\end{array}\right],\] we have from [16, Chapter 5, Equation 3.23] that \(m\left(s\right)\) satisfies the following matrix equation almost everywhere: \[\begin{cases}\dot{m}\left(s\right)=\hat{f}_{\chi}\left(\bar{\zeta}\left(s; \chi,\bar{u}\left(\cdot\right)\right),\bar{u}\left(s\right)\right)m\left(s \right),&s\in\left[0,t\right],\\ m\left(0\right)=I,\end{cases}\] from which the \(m_{zz}\) partition is written as \[\begin{cases}\dot{m}_{zz}\left(s\right)=\ell_{z}\left(\bar{\gamma}\left(s;\chi, \bar{u}\left(\cdot\right)\right)\right)m_{zz}\left(s\right),&s\in\left[0,t \right],\\ m_{zz}\left(0\right)=I.\end{cases}\] Since \(\ell\) does not depend on \(z\), we have \[\dot{m}_{zz}\left(s\right)=0,\;\forall s\in\left[0,t\right],\] and the result follows. Figure 3: Here a series of optimal trajectories are shown in red from different starting locations, with each vehicle starting out moving from right to left. As in Fig. 2, the aircraft is only using Doppler shift measurements. The blue circle is the 95% error ellipse of the prior distribution on \(\theta\). ## Acknowledgments The authors would like to thank Levon Nurbekyan, with the Department of Mathematics at UCLA, for providing a reference that assisted in the proof of Theorem 1. This research was funded by the Office of Naval Research under Grant N00014-20-1-2093.
2310.04576
Challenges in Statistically Rejecting the Perfect Competition Hypothesis Using Imperfect Competition Data
We theoretically prove why statistically rejecting the null hypothesis of perfect competition is challenging, known as a common problem in the literature. We also assess the finite sample performance of the conduct parameter test in homogeneous goods markets, showing that statistical power increases with the number of markets, a larger conduct parameter, and a stronger demand rotation instrument. However, even with a moderate number of markets and five firms, rejecting the null hypothesis of perfect competition remains difficult, irrespective of instrument strength or the use of optimal instruments. Our findings suggest that empirical results failing to reject perfect competition are due to the limited number of markets rather than methodological shortcomings.
Yuri Matsumura, Suguru Otani
2023-10-06T20:38:32Z
http://arxiv.org/abs/2310.04576v4
# Finite Sample Performance of a Conduct Parameter Test in Homogenous Goods Markets ###### Abstract We assess the finite sample performance of the conduct parameter test in homogeneous goods markets. Statistical power rises with an increase in the number of markets, a larger conduct parameter, and a stronger demand rotation instrument. However, even with a moderate number of markets and five firms, regardless of instrument strength and the utilization of optimal instruments, rejecting the null hypothesis of perfect competition remains challenging. Our findings indicate that empirical results that fail to reject perfect competition are a consequence of the limited number of markets rather than methodological deficiencies. **Keywords:** Conduct parameters, Homogenous goods market, Monte Carlo simulation, Statistical power analysis **JEL Codes:** C5, C12, L1 ## 1 Introduction Measuring competitiveness is important in the empirical industrial organization literature. A conduct parameter is considered to be a useful measure of competitiveness. However, the parameter cannot be directly measured from data because data generally lack information about marginal costs. Therefore, researchers endeavor to learn conduct parameters. Researchers estimate and test structural models to understand firm conduct in both homogeneous and differentiated goods markets (Nevo, 1998; Magnolfi and Sullivan, 2022; Duarte et al., 2023). We focus on homogenous goods markets. The conduct parameters for the linear model are identified by Bresnahan (1982), and Matsumura and Otani (2023b) resolve conflicts between Bresnahan (1982) and Perloff and Shen (2012) vis-a-vis identification problems. Estimation accuracy may be improved by adding equilibrium existence conditions in the log-linear model (Matsumura and Otani 2023a). Conduct parameter testing is undertaken by Genesove and Mullin (1998), who compare estimates from the sugar industry with direct measures of market power.1 When market power is around 0.1 and the number of markets is less than 100, perfect competition cannot be rejected. Clay and Troesken (2003) investigate the robustness of specifications using 38 whiskey markets and find that estimation is sensitive to model specification. Steen and Salvanes (1999) study 48 markets in the French salmon industry and Shaffer (1993) studies 25 markets in the Canadian banking industry. They also cannot reject the null hypothesis that markets are perfectly competitive. Their results raise doubts about the methodology itself (Shaffer and Spierdijk 2017). Footnote 1: Genesove and Mullin (1998) made mistakes in obtaining the predicted interaction terms of the demand rotation instruments and the endogenous quantity in the first-stage regression. See Section A.3, Matsumura and Otani (2023b). Recent literature on conduct parameter estimation and testing in homogenous goods markets, such as the electricity generation industry, follows Genesove and Mullin (1998) and proceeds by comparing estimated conduct parameters with directly measured ones obtained through price-cost markups recovered from observed prices and marginal cost data. Wolfram (1999) studies the British electricity industry using 25,639 samples and finds that the directly measured conduct parameter is about 0.05 and the estimated conduct parameter is 0.01. Thus, the author cannot reject the null hypothesis of perfect competition. 
Kim and Knittel (2006) study California electricity markets using 21,104 samples and find that Bresnahan's technique overstates marginal costs on average and is likely to reject the null hypothesis of perfect competition. Puller (2007) uses 163 to 573 samples and cannot reject the null hypothesis of Cournot competition for the same industry. The robustness of the data and methodology has been extensively tested in previous studies focusing on market power in the California electricity market (for example, Borenstein et al. (2002), Wolak (2003), and Orea and Steinbuks (2018)). While this approach is popular, formal Monte Carlo simulations of the conduct parameter test are lacking. This gap prevents us from properly interpreting the comparisons above. Accordingly, we investigate the finite sample performance of the conduct parameter test in homogeneous goods markets. We analyze statistical power by varying the number of markets, firms, and strength of demand rotation instruments under the null hypothesis of perfect competition. Our findings indicate that statistical power increases with more markets, a larger conduct parameter, and stronger demand rotation instruments. However, even with a moderate number of markets (e.g., 1000) and five firms, we cannot achieve an 80% rejection frequency (\(1-\beta=0.8\), where \(\beta\) represents the probability of a Type II error), irrespective of instrument strength and the use of optimal instruments. While optimal instruments enhance rejection probability in large samples, they do not alter the core findings. This highlights the challenge of testing perfect competition, as recognized by Genesove and Mullin (1998), Steen and Salvanes (1999), and Shaffer (1993), primarily stemming from the limited number of markets rather than methodological flaws. Our results and code provide a valuable reference for applied researchers examining assumptions about firm conduct in homogeneous goods markets, i.e., whether it is perfect competition, Cournot competition, or perfect collusion. ## 2 Model Consider data from \(T\) markets with homogeneous products. Assume that there are \(N\) firms in each market. Let \(t=1,\ldots,T\) be the index for markets. Then, we obtain a supply equation: \[P_{t}=-\theta\frac{\partial P_{t}(Q_{t})}{\partial Q_{t}}Q_{t}+MC_{t}(Q_{t}), \tag{1}\] where \(Q_{t}\) is the aggregate quantity, \(P_{t}(Q_{t})\) is the demand function, \(MC_{t}(Q_{t})\) is the marginal cost function, and \(\theta\in[0,1]\) is the conduct parameter. The equation nests perfect competition (\(\theta=0\)), Cournot competition (\(\theta=1/N\)), and perfect collusion (\(\theta=1\)) (see Bresnahan (1982)). Consider an econometric model integrating the above. Assume that the demand and marginal cost functions are written as: \[P_{t} =f(Q_{t},Y_{t},\varepsilon_{t}^{d},\alpha), \tag{2}\] \[MC_{t} =g(Q_{t},W_{t},\varepsilon_{t}^{c},\gamma), \tag{3}\] where \(Y_{t}\) and \(W_{t}\) are vectors of exogenous variables, \(\varepsilon_{t}^{d}\) and \(\varepsilon_{t}^{c}\) are error terms, and \(\alpha\) and \(\gamma\) are vectors of parameters. Additionally, we have demand- and supply-side instruments, \(Z_{t}^{d}\) and \(Z_{t}^{c}\), and assume that the error terms satisfy the mean independence conditions, \(E[\varepsilon_{t}^{d}\mid Y_{t},Z_{t}^{d}]=E[\varepsilon_{t}^{c}\mid W_{t},Z_{ t}^{c}]=0\). 
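As a quick illustration of how \(\theta\) indexes conduct, consider the textbook case of linear inverse demand \(P=a-bQ\) and constant marginal cost \(c\): substituting into (1) gives the equilibrium quantity \(Q=(a-c)/\left((1+\theta)b\right)\). The snippet below (our illustration; the demand and cost values are arbitrary) evaluates this for perfect competition, \(N\)-firm Cournot, and perfect collusion.

```python
# Conduct parameter in equation (1) with linear inverse demand P = a - b*Q
# and constant marginal cost c; parameter values are arbitrary illustrations.
a, b, c, N = 100.0, 2.0, 20.0, 5

for label, theta in [("perfect competition", 0.0),
                     ("Cournot", 1.0 / N),
                     ("perfect collusion", 1.0)]:
    Q = (a - c) / ((1.0 + theta) * b)   # solve a - b*Q = theta*b*Q + c
    P = a - b * Q
    print(f"{label:20s} theta={theta:.2f}  Q={Q:6.2f}  P={P:6.2f}  markup={P - c:5.2f}")
```

As expected, \(\theta=0\) reproduces marginal-cost pricing, while larger \(\theta\) shrinks quantity and raises the markup toward the monopoly level.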
### Linear demand and cost Assume that linear demand and marginal cost functions are specified as: \[P_{t} =\alpha_{0}-(\alpha_{1}+\alpha_{2}Z_{t}^{R})Q_{t}+\alpha_{3}Y_{t} +\varepsilon_{t}^{d}, \tag{4}\] \[MC_{t} =\gamma_{0}+\gamma_{1}Q_{t}+\gamma_{2}W_{t}+\gamma_{3}R_{t}+ \varepsilon_{t}^{c}, \tag{5}\] where \(W_{t}\) and \(R_{t}\) are exogenous cost shifters and \(Z_{t}^{R}\) is Bresnahan's demand rotation instrument. The supply equation is written as: \[P_{t}=\gamma_{0}+\theta(\alpha_{1}+\alpha_{2}Z_{t}^{R})Q_{t}+\gamma_{1}Q_{t}+ \gamma_{2}W_{t}+\gamma_{3}R_{t}+\varepsilon_{t}^{c}. \tag{6}\] By substituting (4) into (6) and solving for \(Q_{t}\), we obtain the aggregate quantity \(Q_{t}\) in terms of the parameters and exogenous variables as follows: \[Q_{t}=\frac{\alpha_{0}+\alpha_{3}Y_{t}-\gamma_{0}-\gamma_{2}W_{t}-\gamma_{3}R_{t} +\varepsilon_{t}^{d}-\varepsilon_{t}^{c}}{(1+\theta)(\alpha_{1}+\alpha_{2}Z_{t }^{R})+\gamma_{1}}. \tag{7}\] ## 3 Simulation results ### Simulation and estimation procedure We set the true parameters and distributions as shown in Table 1. We vary the true value of \(\theta\) from 0.05 (20-firm symmetric Cournot) to 1 (perfect collusion) and the strength of the demand rotation instrument, \(\alpha_{2}\), from 0.1 (weak) to 20.0 (extremely strong), which is unrealistically large relative to the price coefficient, \(\alpha_{1}=1.0\). For the simulation, we generate 100 datasets. We separately estimate the demand and supply equations via two-stage least squares (2SLS) estimation. The instrumental variables for demand estimation are \(Z_{t}^{d}=(1,Z_{t}^{R},Y_{t},H_{t},K_{t})\) and for supply estimation are \(Z_{t}^{c}=(1,Z_{t}^{R},W_{t},R_{t},Y_{t})\) for a benchmark model. To achieve theoretical efficiency bounds, we add the optimal instruments of Chamberlain (1987), used in demand estimation (Reynaert and Verboven, 2014). Optimal instruments lead to asymptotically efficient estimators, as their asymptotic variance cannot be reduced via additional orthogonality conditions. See Appendix A.2 for construction details. The null hypothesis is that markets are under perfect competition, that is, \(\theta=0\). We compute the rejection frequency (the statistical power) using t-statistics at a significance level of 0.05 over the 100 datasets. Table 1: True parameters and distributions (note: \(\sigma=1.0\); \(N\): normal distribution; \(U\): uniform distribution). Figure 1 displays the finite sample performance results for the conduct parameter \(\theta\).2 Rejection frequency increases under the following conditions: a large sample size (number of markets), a larger \(\theta\) (fewer firms), and a larger \(\alpha_{2}\) (stronger demand rotation instrument). Panel (f) indicates that with 20 symmetric firms (\(\theta=0.05\)) and a sufficiently large number of markets, we achieve approximately 70% power to reject the null hypothesis of markets operating under perfect competition. However, we cannot reject the null hypothesis with an acceptable sample size and power when markets follow 20-firm symmetric Cournot competition. A remarkable finding is that even with a moderate number of markets (e.g., 1000 in Panel (c)) and five firms, the rejection frequency cannot achieve 80% (i.e., \(1-\beta=0.8\), where \(\beta\) is the probability of making a Type II error), regardless of instrument strength. 
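The following is a minimal sketch of this Monte Carlo exercise in Python. It generates equilibrium data from (7), estimates the supply equation (6) by 2SLS, and records how often the t-test rejects \(H_{0}:\theta=0\) at the 5% level. The parameter values and shifter distributions below are our assumptions (the body of Table 1 is not legible in this copy), the demand-side parameters are treated as known to keep the sketch short (the paper estimates them by a separate 2SLS step), and `simulate` and `tsls` are hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, theta, alpha, gamma, sigma=1.0):
    """Draw one dataset from the DGP in equations (6)-(7); shifter
    distributions here are assumptions, not the paper's Table 1."""
    a0, a1, a2, a3 = alpha
    g0, g1, g2, g3 = gamma
    Y, W, R = (rng.normal(0.0, sigma, T) for _ in range(3))
    ZR = rng.uniform(0.0, 1.0, T)                 # demand rotation instrument
    ed, ec = rng.normal(0.0, sigma, T), rng.normal(0.0, sigma, T)
    Q = (a0 + a3 * Y - g0 - g2 * W - g3 * R + ed - ec) / (
        (1.0 + theta) * (a1 + a2 * ZR) + g1)      # equation (7)
    P = g0 + theta * (a1 + a2 * ZR) * Q + g1 * Q + g2 * W + g3 * R + ec  # (6)
    return P, Q, ZR, W, R, Y

def tsls(y, X, Z):
    """Two-stage least squares with homoskedastic standard errors."""
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection onto instruments
    XPX = X.T @ PZ @ X
    beta = np.linalg.solve(XPX, X.T @ PZ @ y)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(XPX)))
    return beta, se

alpha = (10.0, 1.0, 0.5, 1.0)     # only alpha1 = 1.0 is stated in the text
gamma = (1.0, 1.0, 1.0, 1.0)      # assumed marginal-cost parameters
theta, T, n_sim = 0.2, 1000, 100  # five symmetric firms, moderate market count

rejections = 0
for _ in range(n_sim):
    P, Q, ZR, W, R, Y = simulate(T, theta, alpha, gamma)
    a1, a2 = alpha[1], alpha[2]   # demand parameters treated as known here
    X = np.column_stack([np.ones(T), (a1 + a2 * ZR) * Q, Q, W, R])
    Z = np.column_stack([np.ones(T), ZR, W, R, Y])
    beta, se = tsls(P, X, Z)      # beta[1] is the estimate of theta
    rejections += abs(beta[1] / se[1]) > 1.96
print(f"rejection frequency of H0 (theta = 0): {rejections / n_sim:.2f}")
```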
These results imply that Genesove and Mullin (1998), using 97 markets, Shaffer (1993), using 25 markets, and Steen and Salvanes (1999), using 48 markets, fail to reject perfect competition because of the small-sample problem. Figure 2 shows the efficiency gain of optimal instruments relative to the aforementioned benchmark model. We find that if the number of markets exceeds 1,000, optimal instruments increase the rejection probability. However, the gain does not change our benchmark results. Why is it statistically challenging to differentiate between perfect and Cournot competition? In differentiated product markets, as demonstrated by Berry and Haile (2014), the variation in instrumental variables can aid in discerning firm behavior. Various factors, such as changes in the number of products, prices in other markets, and alterations in product characteristics, can be utilized without requiring a specific functional form. In contrast, homogeneous product markets offer limited variation, essentially only that of the demand rotation instruments. Therefore, even when the number of firms is substantial, firm conduct tests may lack the power necessary to differentiate between perfect and Cournot competition. ## 4 Conclusion We perform a statistical power analysis for conduct parameter estimation. Power rises with an increase in the number of markets, a larger conduct parameter, and a stronger demand rotation instrument. Nevertheless, rejecting the null hypothesis of markets operating under perfect competition remains challenging, even with a moderate number of markets (e.g., 1000) and five firms, regardless of instrument strength and the use of optimal instruments. This reaffirms that the difficulty in testing perfect competition, as observed by Genesove and Mullin (1998), Steen and Salvanes (1999), and Shaffer (1993), is primarily attributable to the limited number of markets, rather than methodological shortcomings. Acknowledgments We thank Jeremy Fox, Isabelle Perrigne, and Yuya Shimizu for their valuable advice. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Figure 1: Statistical power of conduct parameter \(\theta\). Note: Dotted lines are 80% and 100% rejection frequencies out of 100 simulation datasets. Figure 2: Relative efficiency gain of optimal instruments
2301.04237
Solving the semidefinite relaxation of QUBOs in matrix multiplication time, and faster with a quantum computer
Recent works on quantum algorithms for solving semidefinite optimization (SDO) problems have leveraged a quantum-mechanical interpretation of positive semidefinite matrices to develop methods that obtain quantum speedups with respect to the dimension $n$ and number of constraints $m$. While their dependence on other parameters suggests no overall speedup over classical methodologies, some quantum SDO solvers provide speedups in the low-precision regime. We exploit this fact to our advantage, and present an iterative refinement scheme for the Hamiltonian Updates algorithm of Brand\~ao et al. (Quantum 6, 625 (2022)) to exponentially improve the dependence of their algorithm on precision. As a result, we obtain a classical algorithm to solve the semidefinite relaxation of Quadratic Unconstrained Binary Optimization problems (QUBOs) in matrix multiplication time. Provided access to a quantum read/classical write random access memory (QRAM), a quantum implementation of our algorithm exhibits a worst case running time of $\mathcal{O} \left(ns + n^{1.5} \cdot \text{polylog} \left(n, \| C \|_F, \frac{1}{\epsilon} \right) \right)$.
Brandon Augustino, Giacomo Nannicini, Tamás Terlaky, Luis Zuluaga
2023-01-10T23:12:05Z
http://arxiv.org/abs/2301.04237v3
Solving the semidefinite relaxation of QUBOs in matrix multiplication time, and faster with a quantum computer ###### Abstract Recent works on quantum algorithms for solving semidefinite optimization (SDO) problems have leveraged a quantum-mechanical interpretation of positive semidefinite matrices to develop methods that obtain quantum speedups with respect to the dimension \(n\) and number of constraints \(m\). While their dependence on other parameters suggests no overall speedup over classical methodologies, some quantum SDO solvers provide speedups in the low-precision regime. We exploit this fact to our advantage, and present an iterative refinement scheme for the Hamiltonian Updates algorithm of Brandao et al. (_Quantum 6_, 625 (2022)) to exponentially improve the dependence of their algorithm on the precision \(\epsilon\), defined as the absolute gap between primal and dual solution. As a result, we obtain a classical algorithm to solve the semidefinite relaxation of Quadratic Unconstrained Binary Optimization problems (QUBOs) in matrix multiplication time. Provided access to a quantum read/classical write random access memory (QRAM), a quantum implementation of our algorithm exhibits \(\mathcal{O}\left(ns+n^{1.5}\cdot\mathrm{polylog}\left(n,\|C\|_{F},\frac{1}{ \epsilon}\right)\right)\) running time, where \(C\) is the cost matrix and \(s\) is its sparsity parameter (maximum number of nonzero elements per row). ## 1 Introduction We consider optimization problems of the form: \[\max x^{\top}Cx \tag{1}\] \[\mathrm{s.t.} x\in\{-1,1\}^{n},\] where \(C\in\mathcal{S}^{n}\) is the problem data and \(\mathcal{S}^{n}\) is the space of symmetric matrices in \(\mathbb{R}^{n\times n}\). Solving (1) can be viewed as computing the \(\infty\to 1\) norm of the coefficient matrix \(C\). This particular norm is intrinsically related to the _cut norm_ of a matrix, which plays a crucial role in developing efficient approximation algorithms for dense graph and matrix problems [1, 21], with perhaps the most well-known application being the task of finding the largest cut in a graph (MaxCut). These problems also play an important role in quantum information sciences; the Ising model belongs to this class of problems [52], and quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) [18] and quantum annealing [19] can address its solution. Computing the cut norm corresponds to replacing \(x\in\{-1,1\}^{n}\) with \(z\in\{0,1\}^{n}\) in (1), giving rise to _quadratic unconstrained binary optimization_ (QUBO) problems. A standard QUBO is of the form \[\max z^{\top}Cz \tag{2}\] \[\mathrm{s.t.} z\in\{0,1\}^{n}.\] Provided that we allow for linear terms (in both formulations), it is well known that solutions to (1) can be used to compute a solution to (2) which differs only by a constant factor, and vice-versa, due to the equivalence \(z=\frac{x+e}{2}\) if \(z\in\{0,1\}^{n}\) and \(x\in\{-1,1\}^{n}\), where \(e\in\mathbb{R}^{n}\) is the all-ones vector of dimension \(n\). Although (1) and (2) cover many applications of interest, they are intrinsically difficult to solve; computing optimal solutions to either (1) or (2) is NP-Hard in general. 
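As a small numerical illustration of this change of variables (our example, not from the paper), substituting \(x=2z-e\) into the quadratic form and using the symmetry of \(C\) gives \(x^{\top}Cx=4z^{\top}Cz-4e^{\top}Cz+e^{\top}Ce\), so the two formulations differ only by linear terms and a constant:

```python
import numpy as np

# Check x' C x = 4 z' C z - 4 e' C z + e' C e  with  z = (x + e) / 2,
# for a random symmetric C and a random x in {-1, +1}^n.
rng = np.random.default_rng(0)
n = 10
A = rng.normal(size=(n, n))
C = (A + A.T) / 2                     # symmetric cost matrix
e = np.ones(n)

x = rng.choice([-1.0, 1.0], size=n)
z = (x + e) / 2                       # entries in {0, 1}

lhs = x @ C @ x
rhs = 4 * z @ C @ z - 4 * e @ C @ z + e @ C @ e
print(np.isclose(lhs, rhs))           # True
```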
Following the seminal work of Lovasz [43] and the theoretical and practical development of Interior Point Methods (IPMs) for solving semidefinite optimization (SDO) problems [46, 47, 48, 49, 50, 55, 56], a prevailing approach has been to obtain approximate solutions to (1) and (2) by relaxing integrality and lifting the problem from a vector space of dimension \(n\), to the space of \(n\times n\) symmetric matrices. The quadratic form \(x^{\top}Cx\) can be equivalently expressed by \(\operatorname{tr}\left(Cxx^{\top}\right)\), where \(\operatorname{tr}\left(U\right)\) denotes the sum of the diagonal elements (or, trace) of a matrix \(U\in\mathbb{R}^{n\times n}\). To deal with the bilinear term \(xx^{\top}\), we introduce a matrix variable \(X\in\mathbb{R}^{n\times n}\), and require that \(X\) satisfies the following: \[\operatorname{diag}(X)=e,\quad X\succeq 0,\quad\operatorname{rank}(X)=1,\] where the notation \(U\succeq V\) means that the matrix \(U-V\) is a symmetric positive semidefinite matrix. Under these requirements, \(X\) is guaranteed to be of the form \(X=xx^{\top}\) for \(x\in\{-1,1\}^{n}\). The rank constraint, however, is not convex, and thus dropping it yields the following (convex) SDO relaxation of (1): \[\max \operatorname{tr}\left(CX\right)\] (3) s.t. \[\operatorname{diag}\left(X\right)=e,\quad X\succeq 0.\] Although the optimal solution \(X^{*}\) to (3) is no longer guaranteed to satisfy \(X^{*}=x^{*}{x^{*}}^{\top}\) and may not be integral in general, the approximation of \(x^{*}\) provided by \(X^{*}\) is of sufficient quality to justify its use. In fact, SDO approximations cover some of the most celebrated results in optimization, such as the 0.878-approximation guarantee of Goemans and Williamson for MaxCut [28] and the Lovasz-\(\vartheta\) number [43]. ### Literature Review More generally, a (primal) SDO problem involving \(n\times n\) matrices and \(m\) constraints is of the form \[\max_{X} \operatorname{tr}\left(CX\right)\] s.t. \[\operatorname{tr}\left(A_{i}X\right)=b_{i}\quad\text{for }i\in[m],\] \[X\succeq 0,\] where \([m]=\{1,\ldots,m\}\) and \(A_{1},\ldots,A_{m},C\in\mathcal{S}^{n}\), and \(b\in\mathbb{R}^{m}\) are the (given) problem data. The _dual_ SDO problem associated with the primal is given by \[\min_{(u,S)} b^{\top}u\] s.t. \[S=\sum_{i=1}^{m}u_{i}A_{i}-C\succeq 0.\] where \(S\) is the dual slack matrix.1 The classical literature on algorithms for solving SDO problems is rich and can be categorized into two classes; algorithms that depend poly-logarithimically on the inverse precision to which we solve the problem and the size of the minimally inscribed ellipsoid, and algorithms that depend polynomially on these quantities but exhibit an advantage with respect to \(n\) and \(m\). For instances with \(m\leq\sqrt{n}\), the cutting plane methods (CPMs) of [35, 42] are the best performing classical algorithms,2 and can solve SDO problems in time Footnote 1: While the dual variable is typically denoted by \(y\) rather than \(u\), it is also customary in the literature to use \(y\) to denote a certain state preparation pair, and we do so later in this paper. Footnote 2: We remark that the running time in [35] does however exhibit improved dependence with respect to poly-logarithmic factors compared to the running time of [42]. 
\[\mathcal{O}\left(m(mns+m^{2}+n^{\omega})\cdot\operatorname{polylog}\left(m,n, R,\frac{1}{\epsilon}\right)\right),\] where \(\omega\in[2,2.38]\) is the matrix multiplication exponent, \(R\) is an upper bound on the trace of a primal optimal solution \(X\) (which can be exponentially large), \(\epsilon\) is the precision parameter, \(s\) denotes the maximum number of nonzeros per row of the input matrices, and hence \(\mathcal{O}(mns)\) is the total number of nonzeros in the constraints of the SDO problem. However, we typically have \(m\in[\Omega(n),\mathcal{O}(n^{2})]\), in which case the CPMs given in [35, 42] are outperformed by the IPM for SDO from Jiang et al. [34]. Their IPM exhibits a worst case running time of \[\mathcal{O}\left(\sqrt{n}(mns+m^{\omega}+n^{\omega})\cdot\mathrm{polylog} \left(m,n,\frac{1}{\epsilon}\right)\right),\] where the term \(m^{\omega}+n^{\omega}\) represents the per-iteration cost of inverting the Hessian and matrices of the variables. While quantum SDO solvers could also be categorized in a somewhat similar fashion, it is perhaps more natural to do so according to how they attempt to obtain quantum speedups. In this case we also have two classes; at a high level, all proposed quantum SDO solution methodologies quantize a classical algorithm by either using quantum linear system algorithms (QLSAs) [12, 14, 31], or a quantum mechanical interpretation of normalized positive semidefinite matrices. We now review these works in detail. The former class comprises algorithms that quantize IPMs, giving rise to quantum IPMs (QIPMs). QIPMs attempt to speed up the bottleneck of the classical IPM by substituting the classical solution of the Newton linear system with the combined use of QLSA and quantum state tomography (with some classical computation between iterates). Augustino et al. [6] present a convergent QIPM for SDO, avoiding the shortcomings prevalent in early works on QIPMs (see, e.g., [39]), by properly symmetrizing the Newton linear system, and utilizing an orthogonal subspace representation of the search directions. This representation guarantees that primal and dual feasibility are satisfied exactly by all the iterates generated by inexact solutions of the Newton linear system obtained via quantum subroutines. The worst case complexity of their algorithm is \[\widetilde{\mathcal{O}}_{n,\kappa,\frac{1}{\epsilon}}\left(\sqrt{n}\left(\frac {n^{3}\kappa^{2}}{\epsilon}+n^{4}\right)\right),\] where \(\kappa\) is an upper bound on the condition numbers of the intermediate Newton linear system coefficient matrices that arise over the course of the algorithm. Here, the notation \(\widetilde{\mathcal{O}}_{a,b}(f(x))\) suppresses poly-logarithmic factors in \(f(x)\), \(a\) and \(b\) that appear in the overall running time, i.e., \(\widetilde{\mathcal{O}}_{a,b}(f(x))\equiv\mathcal{O}(f(x)\cdot\mathrm{polylog }(a,b,f(x)))\). While this QIPM achieves a speedup in \(n\) over the IPM from [34] when \(m=\mathcal{O}(n^{2})\), its dependence on \(\kappa\) and \(\epsilon\) suggests no quantum advantage overall: the complexity of the classical IPM does not depend on \(\kappa\) and its dependence on \(\epsilon^{-1}\) is logarithmic. As the authors in [6] note, dependence on the condition number bound \(\kappa\) is particularly problematic in the context of IPMs. The second class of quantum SDO solvers are those that quantize algorithms based on matrix exponentials and Gibbs states. 
The most prominent example is the Matrix Multiplicative Weights Update (MMWU) Method of Arora and Kale [3], which can solve SDO problems in time \[\widetilde{\mathcal{O}}_{n,R,\frac{1}{\epsilon}}\left(nms\left(\frac{Rr}{ \epsilon}\right)^{4}+ns\left(\frac{Rr}{\epsilon}\right)^{7}\right),\] where \(r\) is a _known_ \(\ell_{1}\)-norm upper bound3 on a dual optimal solution \(u\). Unlike IPMs, the MMWU framework does not involve the solution of linear systems; rather, these algorithms alternate between candidate solutions to the primal and dual SDO problems. IPMs and MMWUs also employ different definitions of optimality; for IPMs, \(\epsilon\)-optimality implies that the primal and dual feasible solutions exhibit a _normalized_ duality gap bounded by \(\epsilon\), i.e.: Footnote 3: It is also assumed that \(R,r\geq 1\). \[\frac{\mathrm{tr}\left(XS\right)}{n}\leq\epsilon,\] whereas an \(\epsilon\)-optimal solution obtained using an MMWU approximates the optimal objective value to additive error \(\epsilon\) (via binary search). Finally, we point out a distinction between these algorithms with respect to output. While primal-dual IPMs return the primal-dual optimal solution \((X,u,S)\), MMWUs report \(u\), but may avoid explicitly reporting \(X\) and \(S\) to maintain the speedups they offer with respect to \(n\). Reporting \(X\) or \(S\) under the MMWU framework necessitates the computation of matrix exponentials, which may impose a considerable overhead because it generally resorts to matrix multiplication. The MMWU framework has been specialized to solve SDO problems of the form in (3) (see, e.g., [4]), and the current state of the art is attributed to Lee and Padmanabhan [41], who give an algorithm that can solve (3) to additive error \(\|C\|_{\ell_{1}}\epsilon\) with overall complexity \[\widetilde{\mathcal{O}}_{n,\frac{1}{\epsilon}}\left(ns\epsilon^{-3.5}\right),\] where \(\|C\|_{\ell_{1}}=\sum_{i,j}|C_{ij}|\). It is important to note, however, that to achieve the stated complexity their methodology does not explicitly report4 the solution \(X\) and the authors assume \(\sum_{i,j}|C_{ij}|=n\). To achieve the same error scaling as the algorithms we present in this work, the algorithm in [41] would have overall cost \(\widetilde{\mathcal{O}}_{n,\frac{1}{\epsilon}}\left(\|C\|_{\ell_{1}}^{3.5} ns\epsilon^{-3.5}\right)\), see Section 5.3. Footnote 4: Alternatively, they report a “gradient” \(G\in\mathcal{S}^{n}\) such that \(X=W\exp(G)W\) for a diagonal matrix \(W\). Brandao and Svore [11] and van Apeldoorn et al. [61] were the first to quantize the MMWU framework, utilizing a clever interpretation of the primal variables: _Gibbs states_, which can be efficiently prepared on a quantum computer, naturally correspond to trace-normalized positive definite matrices. The running time of these MMWU-based algorithms was subsequently improved [30, 60], and the current state of the art running time of the quantum MMWU (QMMWU) algorithm for SDO problems is: \[\widetilde{\mathcal{O}}_{n,s,R,\frac{1}{\epsilon}}\left(\left(\sqrt{m}+\sqrt{n }\frac{Rr}{\epsilon}\right)s\left(\frac{Rr}{\epsilon}\right)^{4}\right).\] Similar to the complexity of QIPMs, QMMWU algorithms are faster with respect to \(m\) and \(n\) when compared to their classical counterparts, but these algorithms still exhibit a non-polynomial running time, due to their polynomial dependence on the scale invariant parameter \(\frac{Rr}{\epsilon}\), whereas the natural input size depends on the logarithm of this quantity. 
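As a quick numerical illustration of this correspondence (our example, not from [3] or [11]), any symmetric \(H\) yields a Gibbs state \(\exp(-H)/\operatorname{tr}(\exp(-H))\) that is positive definite with unit trace, i.e., a valid density matrix:

```python
import numpy as np
from scipy.linalg import expm

# A Gibbs state exp(-H)/tr(exp(-H)) is positive definite with unit trace,
# for any symmetric "Hamiltonian" H.
rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                         # arbitrary symmetric Hamiltonian

rho = expm(-H) / np.trace(expm(-H))       # Gibbs state
print(np.trace(rho))                      # 1.0 (up to rounding)
print(np.linalg.eigvalsh(rho).min() > 0)  # True: positive definite
```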
Seeking to improve the performance of quantum SDO solvers, Brandao et al. [10] present an algorithm, which they call _Hamiltonian Updates_ (HU), for solving the SDO approximation (3) of (1). The HU method is a primal-only algorithm closely related to the QMMWU framework, in that it leverages a Gibbs state representation of the primal variable, and progression towards the optimal solution is made via matrix-exponentiated gradient updates. Specifically, the authors in [10] are interested in solving an SDO feasibility problem that arises upon renormalizing and relaxing (3): find \[X\] (4) s.t. \[\operatorname{tr}\left(\frac{C}{\|C\|}X\right)\geq\gamma-\epsilon\] \[\sum_{i\in[n]}\left|\langle i|X|i\rangle-\frac{1}{n}\right|\leq\epsilon\] \[\operatorname{tr}\left(X\right)=1,\quad X\succeq 0.\] Here, \(\gamma\) is an upper bound on the absolute value of the optimal objective value of (3) when the cost matrix \(C\) is normalized, obtained via binary search over \([-1,1]\), and \(|i\rangle\) for \(i\in\{1,\ldots,n\}\) are the computational basis states. Since any \(\log(n)\)-qubit Gibbs state is an element of the set \(\{X\in\mathbb{R}^{n\times n}:\operatorname{tr}(X)=1,X\succeq 0\}\) by definition, solutions to (4) can naturally be expressed as a Gibbs state \[\rho=\frac{\exp(-H)}{\operatorname{tr}(\exp(-H))},\] where \(H\) is the _Hamiltonian_ associated with \(\rho\). The key observation in [10] is that upon using the Gibbs state change of variables in (4), one can model the \(n\) constraints on the diagonal elements as a single constraint, which requires that the distribution on the diagonal elements of a feasible solution \(\rho\) to (4) be within \(\epsilon\) of the uniform distribution in total variation distance. In other words, the task of solving (4) reduces to finding a \(\log(n)\)-qubit mixed quantum state that upon measurement in the computational basis is approximately indistinguishable from the maximally-mixed state, and whose trace inner product with the normalized cost matrix \(C\|C\|^{-1}\) is at least \(\gamma-\epsilon\). Using a quantum computer, the HU method of [10] solves (3) to additive error \(\mathcal{O}\left(n\|C\|\epsilon\right)\) in time \[\widetilde{\mathcal{O}}_{n,\frac{1}{\epsilon}}\left(n^{1.5}\sqrt{s}^{1+o(1)} \epsilon^{-28+o(1)}\exp\left(1.6\sqrt{\log(\epsilon^{-1})}\right)\right).\] The authors in [10] also provide an analysis of essentially the same algorithm when using a classical computer, and show that the classical algorithm has a complexity of \[\widetilde{\mathcal{O}}_{n}\left(\min\{n^{2}s,n^{\omega}\}\epsilon^{-12} \right).\] The quantum algorithm yields a speedup in \(n\) over classical algorithms, for a specific class of SDO problems. However, as we have already seen with QIPMs and QMMWU algorithms, its dependence on other parameters (in this case the inverse precision) is prohibitive unless a very low precision solution is acceptable. This raises the question as to whether the poor scaling in the inverse precision can be mitigated without incurring additional cost in \(n\) and \(s\). We answer this question in the affirmative using iterative refinement techniques. Iterative Refinement (IR) is a methodology for computing high-precision solutions to linear systems of equations [29], as well as linear [25, 26, 27] and mixed integer optimization problems [2, 17]. We summarize the methodology at a high level as follows, and present a detailed discussion for the case of convex feasibility problems later in the paper. 
Given an initial solution \(x^{(0)}\in\mathbb{R}^{d}\), at each iteration \(k\) IR produces a refined solution \(x^{(k+1)}\gets x^{(k)}+u^{(k)}\), where \(u^{(k)}\) acts as a correction of the error \(r^{(k)}\) associated with \(x^{(k)}\), and is determined by solving a _refining problem_ induced by the current solution. These operations can all be carried out using the same level of accuracy, called the _fixed-precision_ approach. Alternatively, one may increase the accuracy with which the residuals \(r^{(k)}\) are computed as compared to \(u^{(k)}\), and this approach is called a _mixed precision_ approach [29, 62]. In this paper, we utilize the fixed precision approach. ### Contributions In this paper we develop an IR scheme for SDO approximations of QUBO problems that uses the HU algorithm of [10] as a subroutine. We show that proceeding in this way allows one to exponentially improve the dependence on the inverse precision for both the quantum and classical algorithms. With the proposed IR scheme, the classical algorithm solves the SDO problem (3) up to absolute error \(\mathcal{O}(\epsilon)\) with worst-case complexity \[\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\cdot\mathrm{polylog}\left(n,\|C\|_ {F},\frac{1}{\epsilon}\right)\right).\] This is a significant speedup compared to general-purpose SDO solvers, such as IPMs. This algorithm can be quantized following a similar strategy to [10]. When provided access to quantum random access memory (QRAM), the quantum algorithm takes \[\mathcal{O}\left(n^{1.5}\cdot\mathrm{polylog}\left(n,\|C\|_{F},\frac{1}{ \epsilon}\right)\right)\] accesses to the QRAM and additional quantum gates (this is the standard way of describing complexity in the QRAM model of computation), plus \(\mathcal{O}(ns)\) classical arithmetic operations -- note that simply reading the cost matrix \(C\) takes \(\mathcal{O}(ns)\) time. Summarizing, the combination of HU with IR described in this paper provides exponential speedups over the methodology proposed in [10] with respect to the precision parameter \(\epsilon\). To the best of our knowledge, our classical and quantum algorithms are the fastest known algorithms in their respective models of computation for this class of problems, and our quantum algorithm provides a genuine asymptotic speedup over known classical solution methodologies, provided that we have access to QRAM. In the sparse-access input model (without QRAM), the algorithm takes \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(n^{1.5}s^{0.5+o(1)}\right)\) accesses to an oracle describing the coefficient matrix \(C\) and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(n^{2.5}s^{0.5+o(1)}\right)\) additional gates, therefore yielding no quantum speedup (the quantum gate complexity is asymptotically larger than the classical complexity). The remainder of this paper is organized in the following manner. Section 2 introduces notation, as well as the relevant input models and quantum subroutines. In Section 3 we introduce the Hamiltonian Updates (HU) algorithm from [10], and our Iterative Refinement scheme for SDO approximations of QUBOs is presented in Section 4. The running time analysis is performed in Section 5, and Section 6 concludes the manuscript. ## 2 Preliminaries We write \([n]\) to represent the set of elements \(\{1,\ldots,n\}\). 
We denote the \(i\)-th element of a vector \(x\in\mathbb{R}^{n}\) by \(x_{i}\) for \(i\in[n]\), and the \(ij\)-th element of a matrix \(A\in\mathbb{R}^{m\times n}\) by \(A_{ij}\) for \(i\in[m]\) and \(j\in[n]\). To refer to the \(i\)-th row of a matrix \(A\), we write \(A_{i,\cdot}\) and write \(A_{\cdot,j}\) when referring to its \(j\)-th column. We distinguish between the quantity \(a\) raised to the \(k\)-th power and the value of \(a\) at iterate \(k\) using round brackets, writing \(a^{k}\) and \(a^{(k)}\) to denote these quantities, respectively. The smallest and largest singular values of a matrix \(A\) are denoted \(\sigma_{\min}(A),\sigma_{\max}(A)\), and if \(A\in\mathcal{S}^{n}\), then the smallest and largest eigenvalues are denoted \(\lambda_{\min}(A),\lambda_{\max}(A)\). We let \(\mathcal{S}^{n}_{+}\) and \(\mathcal{S}^{n}_{++}\) represent the cones of symmetric positive semidefinite, and symmetric positive definite matrices, respectively. For \(A,B\in\mathcal{S}^{n}\), we write \(A\succeq B\) (\(A\succ B\)) to indicate that the matrix \(A-B\) is symmetric positive semidefinite (symmetric positive definite), i.e., \(A-B\in\mathcal{S}^{n}_{+}\) (\(A-B\in\mathcal{S}^{n}_{++}\)). The matrix exponential \(\exp(A)\), which is defined by the power series \[\exp(A)=I+A+\frac{1}{2!}A^{2}+\frac{1}{3!}A^{3}+\cdots,\] maps symmetric matrices to the space of symmetric positive definite matrices. Given the spectral decomposition \(A=V\Lambda V^{\top}\), then \(\exp(A)=V\exp(\Lambda)V^{\top}\), where \(\exp(\Lambda)=\operatorname{diag}(\exp(\Lambda_{11}),\exp(\Lambda_{22}),\ldots,\exp(\Lambda_{nn}))\). We let \(A\circ B\) denote the Hadamard (or element-wise) product of two matrices, and \(A\otimes B\) denotes their tensor product. Later in this work, we make use of the following facts regarding Hadamard products. **Lemma 1** (Lemma 5.1.4 in [33]).: _Let \(E\), \(F\) and \(G\) be \(m\times n\) matrices. Then, the \(i\)-th diagonal entry of the matrix \((E\circ F)G^{\top}\) coincides with the \(i\)-th diagonal entry of the matrix \((E\circ G)F^{\top}\). That is,_ \[[(E\circ F)G^{\top}]_{ii}=[(E\circ G)F^{\top}]_{ii}\quad\forall i\in[m].\] **Lemma 2** (Theorem 5.3.4 in [33]).: _Let \(A\) and \(B\) be \(n\times n\) Hermitian matrices. If \(A\in\mathcal{S}^{n}_{+}\), then any eigenvalue \(\lambda(A\circ B)\) of \(A\circ B\) satisfies_ \[\lambda_{\min}(A)\cdot\lambda_{\min}(B)\leq\lambda(A\circ B)\leq\lambda_{\max }(A)\cdot\lambda_{\max}(B).\] **Corollary 1**.: _Suppose \(A\) and \(B\) are \(n\times n\) Hermitian matrices. If \(A\in\mathcal{S}^{n}_{+}\) and \(B\in\mathcal{S}^{n}\) with \(B\) having at least one negative eigenvalue, then_ \[\lambda_{\max}(A)\cdot\lambda_{\min}(B)\leq\lambda_{\min}(A\circ B).\] Proof.: First, note that by Lemma 2, we have \(\lambda_{\min}(A\circ B)\geq\lambda_{\min}(A)\cdot\lambda_{\min}(B)\). Moreover, since \(B\) has at least one negative eigenvalue, we have \(\lambda_{\min}(B)<0\), which combined with the fact that \(\lambda_{\min}(A)\geq 0\) yields \(\lambda_{\max}(A)\cdot\lambda_{\min}(B)\leq\lambda_{\min}(A)\cdot\lambda_{ \min}(B)\). Hence, \[\lambda_{\max}(A)\cdot\lambda_{\min}(B)\leq\lambda_{\min}(A\circ B),\] and the proof is complete. We write \(e\) to refer to the vector of all ones in \(\mathbb{R}^{n}\), and use the notation \(e_{i}\) to refer to the \(i\)-th unit vector in the standard orthonormal basis \(\{e_{1},\ldots,e_{n}\}\) for \(\mathbb{R}^{n}\). Analogously, the computational basis states are denoted by \(\left|i\right\rangle\) for \(i\in[n]\). 
Hence, for \(x\in\mathbb{R}^{n}\), we denote its amplitude encoding by \(\left|x\right\rangle\), defined as \[\left|x\right\rangle=\frac{1}{\left\|x\right\|}\sum_{i\in[n]}x_{i}\left|i\right\rangle.\] Observe that \(\left|x\right\rangle\) is a \(\log(n)\)-qubit state; for simplicity, we assume that the dimensions of all spaces are powers of \(2\). All logarithms are base \(2\).

Where appropriate, our analysis makes use of the _Schatten_ \(p\)-norm, defined for a bounded linear operator \(A\) as \[\left\|A\right\|_{p}=\left[\operatorname{tr}\left(\left|A\right|^{p}\right)\right]^{\frac{1}{p}},\] where \(\left|A\right|=(A^{\dagger}A)^{\frac{1}{2}}\) with \(A^{\dagger}\) denoting the conjugate transpose of \(A\). Notice that the trace and operator norms \(\left\|\cdot\right\|_{\operatorname{tr}}\) and \(\left\|\cdot\right\|\) are the Schatten-\(1\) and Schatten-\(\infty\) norms, respectively, and the Frobenius norm \(\left\|\cdot\right\|_{F}\) corresponds to the Schatten-\(2\) norm.

For a scalar \(x\in\mathbb{R}\), we define the _sign function_ \(\operatorname{sign}(x)\) as \[\operatorname{sign}(x)=\begin{cases}-1&\text{if }x<0\\ 0&\text{if }x=0\\ 1&\text{if }x>0.\end{cases}\] When \(x\in\mathbb{R}^{n}\), \(\operatorname{sign}(x)=(\operatorname{sign}(x_{1}),\ldots,\operatorname{sign}(x_{n}))^{\top}\). For any positive integer \(q\), and binary strings \(j,k\in\{0,1\}^{q}\), we denote by \(j\oplus k\) the bitwise modulo \(2\) addition of \(q\)-digit strings, defined as \[j\oplus k=h,\] where \(h\in\{0,1\}^{q}\) is the bitstring whose elements \(h_{p}\) are defined for \(p\in[q]\) as \[h_{p}=\begin{cases}0&\text{if }j_{p}=k_{p},\\ 1&\text{otherwise.}\end{cases}\]

### "Big-O" notation

We define \(\mathcal{O}(\cdot)\) as \[f(x)=\mathcal{O}(g(x))\iff\exists\ell\in\mathbb{R},c\in\mathbb{R}_{+},\text{ such that }f(x)\leq cg(x)\quad\forall x>\ell.\] We write \(f(x)=\Omega(g(x))\iff g(x)=\mathcal{O}(f(x))\). We also define \(\widetilde{\mathcal{O}}(f(x))=\mathcal{O}(f(x)\cdot\operatorname{polylog}(f(x)))\) and when the function depends poly-logarithmically on other variables we write \[\widetilde{\mathcal{O}}_{a,b}\ (f(x))=\mathcal{O}(f(x)\cdot\operatorname{polylog}(a,b,f(x))).\]

### Input models and subroutines

For our quantum algorithm, we provide analyses for two distinct models of input. One model considers a _quantum-read/classical-write_ RAM (QRAM), and the other is the _sparse-access model_, which we use to bound the running time without access to QRAM.

#### 2.1.1 Sparse-access model

In the _sparse-access model_, the input matrix \(C\) is assumed to be \(s\)-row sparse for some known bound \(s\in[n]\). In other words, \(C\) has at most \(s\) nonzero entries per row.
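For intuition, the classical counterpart of this access pattern is ordinary compressed-sparse-row (CSR) indexing. The following minimal Python sketch (the helper names `index` and `entry` are ours, purely illustrative) shows the two lookups to which the oracles defined next provide coherent access.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy classical analogue of sparse access: CSR storage supports exactly the
# two lookups the sparse-access model assumes, namely the column index of the
# j-th nonzero of row i, and the (normalized) value of that entry.
C = csr_matrix(np.array([[0.0, 2.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 3.0, 4.0]]))
frob = np.sqrt((C.data ** 2).sum())  # ||C||_F

def index(i: int, j: int) -> int:
    """Column of the j-th (0-based) nonzero entry in row i."""
    start, end = C.indptr[i], C.indptr[i + 1]
    assert j < end - start, "row i has fewer than j+1 nonzero entries"
    return int(C.indices[start + j])

def entry(i: int, j: int) -> float:
    """Normalized value C_ij / ||C||_F of the j-th nonzero entry in row i."""
    return float(C.data[C.indptr[i] + j]) / frob

print(index(2, 1), entry(2, 1))  # -> 2 and 4/||C||_F
```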
The sparse-access model is closely related to the classical notion, in that we assume access to an oracle \(O_{\text{sparse}}\), which upon being queried with input \((i,j)\) returns the index of the \(j\)-th nonzero entry of the \(i\)-th row of \(C\) by calculating the index function: \[\text{index}:[n]\times[s]\rightarrow[n].\] That is, for \(i\in[n]\) and \(j\in[s]\), \(O_{\text{sparse}}\) computes the position in place: \[O_{\text{sparse}}\left|i,j\right>=\left|i,\text{index}(i,j)\right>.\] We also assume access to an oracle that returns a bitstring representation of the individual entries of the normalized cost matrix \(C\|C\|_{F}^{-1}\) for every \(i,j\in[n]\): \[O_{C}\left|i,j,z\right>=\left|i,j,z\oplus(C_{ij}\|C\|_{F}^{-1})\right>.\]

#### 2.1.2 Quantum random access memory

We consider a _quantum-read/classical-write_ RAM (QRAM), which enables us to store classical data that our quantum algorithms can make oracle calls to. This type of storage is the direct quantum analog of classical RAM: it enables a quantum algorithm to access classical data in superposition. Accessing a QRAM of size \(n\) takes \(\mathcal{O}(n)\) gates [5, 24], but these gates can be arranged in parallel so that the circuit depth remains \(\mathcal{O}(\text{polylog}(n))\). Therefore we make the assumption (standard in the literature on quantum algorithms) that the cost of accessing a QRAM of size \(n\) is \(\mathcal{O}(\text{polylog}(n))\).

The next result from Chakraborty et al. [12] is adapted from an earlier result of Kerenidis and Prakash [38] and summarizes the aspects of the data structure we utilize.

**Theorem 1** (Theorem 1 in [12]).: _(Implementing quantum operators using an efficient data structure) Let \(A\in\mathbb{R}^{m\times n}\) be a matrix. If \(w\) is the number of non-zero entries of \(A\), then there exists a data structure of size \(\mathcal{O}\left(w\log^{2}(mn)\right)\) that, given the entries \((i,j,A_{ij})\) in an arbitrary order, stores them such that the time taken to store each entry of \(A\) is \(\mathcal{O}(\log(mn))\). Once this data structure has been initiated with all non-zero entries of \(A\), there exists a quantum algorithm that can perform the following maps with \(\xi\)-precision in time \(\mathcal{O}\left(\text{polylog}\left(\frac{mn}{\xi}\right)\right)\):_ \[\widetilde{U}:\left|i\right>\left|0\right> \mapsto\left|i\right>\frac{1}{\|A_{i,\cdot}\|}\sum_{j=1}^{n}A_{ij}\left|j\right>=\left|i,A_{i,\cdot}\right>,\] \[\widetilde{V}:\left|0\right>\left|j\right> \mapsto\frac{1}{\|A\|_{F}}\sum_{i=1}^{m}\|A_{i,\cdot}\|\left|i\right>\left|j\right>=\left|\widetilde{A},j\right>,\] _where \(\left|A_{i,\cdot}\right>\) is the normalized quantum state corresponding to the \(i\)-th row of \(A\) and \(\left|\widetilde{A}\right>\) is a normalized quantum state such that \(\left<i|\widetilde{A}\right>=\|A_{i,\cdot}\|\), i.e., the norm of the \(i\)-th row of \(A\)._

#### 2.1.3 Working with block-encoded matrices

We now give a formal definition of a block-encoding from [12].

**Definition 1** (Block-encoding).: _Let \(A\in\mathbb{C}^{2^{w}\times 2^{w}}\) be a \(w\)-qubit operator. Then, a \((w+a)\)-qubit unitary \(U\) is an \((\alpha,a,\xi)\)-block-encoding of \(A\) if \(U=\begin{pmatrix}\widetilde{A}&\cdot\\ \cdot&\cdot\end{pmatrix}\), with the property that_ \[\left\|\alpha\widetilde{A}-A\right\|\leq\xi.\]

It was shown by Kerenidis and Prakash [38] and Chakraborty et al.
[12] how to efficiently implement block-encodings of matrices that are stored in a QRAM data structure, which is formalized in the next result.

**Lemma 3** (Lemma 3.3.7 in [22]).: _Let \(A\in\mathbb{C}^{2^{w}\times 2^{w}}\) and \(\xi>0\)._

1. _Fix_ \(q\in[0,2]\) _and define_ \(\mu_{q}(A)=\sqrt{n_{q}(A)n_{(2-q)}(A^{\top})}\) _where_ \(n_{q}(A)=\max_{i}\|A_{i,\cdot}\|_{q}^{q}\) _is the_ \(q\)_-th power of the maximum_ \(q\)_-norm of the rows of_ \(A\)_. Defining_ \(A^{\{q\}}\) _to be the matrix with elements_ \(A_{ij}^{\{q\}}=\sqrt{A_{ij}^{q}}\)_, if_ \(A^{\{q\}}\) _and_ \((A^{\{2-q\}})^{\dagger}\) _are both stored in QRAM data structures, then there exist unitaries_ \(U_{R}\) _and_ \(U_{L}\) _that can be implemented in time_ \(\mathcal{O}(\operatorname{poly}(w\log\frac{1}{\xi}))\) _and such that_ \(U_{R}^{\dagger}U_{L}\) _is a_ \((\mu_{q}(A),w+2,\xi)\)_-block-encoding of_ \(A\)_._
2. _If_ \(A\) _is stored in a QRAM data structure, then there exist unitaries_ \(U_{R}\) _and_ \(U_{L}\) _that can be implemented in time_ \(\mathcal{O}(\operatorname{poly}(w\log\frac{1}{\xi}))\) _and such that_ \(U_{R}^{\dagger}U_{L}\) _is an_ \((\|A\|_{F},w+2,\xi)\)_-block-encoding of_ \(A\)_._

Linear combinations of block-encodings can also be constructed at a cost that is merely logarithmic in the dimension.

**Definition 2** (Definition 3.3.8 in [22]).: _(State preparation pair) Let \(y\in\mathbb{C}^{m}\) and \(\|y\|_{1}\leq\beta\). The pair of unitaries \((P_{L},P_{R})\) is called a \((\beta,p,\xi)\)-state-preparation-pair if \(P_{L}\ket{0}^{\otimes p}=\sum_{j=0}^{2^{p}-1}c_{j}\ket{j}\) and \(P_{R}\ket{0}^{\otimes p}=\sum_{j=0}^{2^{p}-1}d_{j}\ket{j}\) such that \(\sum_{j=0}^{m-1}\left|\beta(c_{j}^{*}d_{j})-y_{j}\right|\leq\xi\) and for all \(j\in\{m,\ldots,2^{p}-1\}\) we have \(c_{j}^{*}d_{j}=0\)._

**Proposition 1** (Lemma 52 in [23]).: _(Linear combination of block-encoded matrices, with weights given by a state preparation pair) Let \(A=\sum_{j=0}^{m-1}y_{j}A_{j}\) be a \(w\)-qubit operator, where the \(A_{j}\) are matrices. Suppose \((P_{L},P_{R})\) is a \((\beta,p,\xi_{1})\)-state-preparation pair for \(y\), and \(W=\sum_{j=0}^{m-1}\ket{j}\bra{j}\otimes U_{j}+\left(\left(I-\sum_{j=0}^{m-1}\ket{j}\bra{j}\right)\otimes I_{a}\otimes I_{s}\right)\) is a \((w+a+p)\)-qubit unitary with the property that \(U_{j}\) is an \((\alpha,a,\xi_{2})\)-block-encoding of \(A_{j}\). Then we can implement an \((\alpha\beta,a+p,\alpha\xi_{1}+\alpha\beta\xi_{2})\)-block-encoding of \(A\) with a single use of \(W\), \(P_{R}\) and \(P_{L}^{\dagger}\)._

It turns out that the sparse-access model reduces to the quantum operator model upon choosing \(\alpha=s\) (if row and column sparsity are the same). The next result from [23] describes how to implement block-encodings using the sparse-access input model, and the associated costs.

**Lemma 4** (Lemma 48 in [23]).: _Let \(A\in\mathbb{C}^{2^{w}\times 2^{w}}\) be a matrix that is \(s_{r}\)-row-sparse and \(s_{c}\)-column-sparse, and each element of \(A\) has absolute value at most 1._
_Suppose that we have access to the following sparse-access oracles acting on two \((w+1)\)-qubit registers:_ \[O_{r}:\ket{i}\ket{k}\mapsto\ket{i}\ket{r_{ik}} \forall i\in[2^{w}]-1,k\in[s_{r}],\text{ and}\] \[O_{c}:\ket{\ell}\ket{j}\mapsto\ket{c_{\ell j}}\ket{j} \forall\ell\in[s_{c}],j\in[2^{w}]-1,\text{ where}\] \(r_{ij}\) _is the index of the \(j\)-th non-zero entry of the \(i\)-th row of \(A\), or if there are fewer than \(j\) non-zero entries, then it is \(j+2^{w}\); similarly, \(c_{ij}\) is the index of the \(i\)-th non-zero entry of the \(j\)-th column of \(A\), or if there are fewer than \(i\) non-zero entries, then it is \(i+2^{w}\). Additionally, assume that we have access to an oracle \(O_{A}\) that returns the entries of \(A\) in a binary description:_ \[O_{A}:\ket{i}\ket{j}\ket{0}^{\otimes p}\mapsto\ket{i}\ket{j}\ket{a_{ij}},\quad\forall i,j\in[2^{w}]-1,\] _where \(a_{ij}\) is a \(p\)-bit binary description of the \(ij\)-matrix element of \(A\). Then, we can implement a \((\sqrt{s_{r}s_{c}},w+3,\xi)\)-block-encoding of \(A\) with a single use of \(O_{r}\), \(O_{c}\) and two uses of \(O_{A}\), and additionally using \(\mathcal{O}\left(w+\log^{2.5}\left(\frac{s_{r}s_{c}}{\xi}\right)\right)\) one and two qubit gates while using \(\mathcal{O}\left(p+\log^{2.5}\left(\frac{s_{r}s_{c}}{\xi}\right)\right)\) ancilla qubits._

The block-encoding framework will be useful in speeding up the overall running time found in [10], as it allows us to perform matrix computations and Hamiltonian simulation efficiently.

**Theorem 2** (Corollary 3.4.7 in [22]).: _(Optimal block-Hamiltonian simulation) Suppose that \(U\) is an \((\alpha,a,\xi/(2|t|))\)-block-encoding of the Hamiltonian \(H\). Then, we can implement a \(\xi\)-precise Hamiltonian simulation unitary \(V\) which is a \((1,a+2,\xi)\)-block-encoding of \(e^{iHt}\), with \(\mathcal{O}\left(|\alpha t|+\frac{\log(1/\xi)}{\log\log(1/\xi)}\right)\) uses of controlled-\(U\) or its inverse and with \(\mathcal{O}\left(a|\alpha t|+a\frac{\log(1/\xi)}{\log\log(1/\xi)}\right)\) two-qubit gates._

Additionally, one can easily take the product of block-encodings.

**Proposition 2** (Lemma 4 in [12]).: _(Product of block-encoded matrices) If \(U_{A}\) is an \((\alpha_{1},a_{1},\xi_{A})\)-block-encoding of an \(s\)-qubit operator \(A\), and \(U_{B}\) is an \((\alpha_{2},a_{2},\xi_{B})\)-block-encoding of an \(s\)-qubit operator \(B\), then \((I_{a_{2}}\otimes U_{A})(I_{a_{1}}\otimes U_{B})\) is an \((\alpha_{1}\alpha_{2},a_{1}+a_{2},\alpha_{1}\xi_{B}+\alpha_{2}\xi_{A})\)-block-encoding of \(AB\)._

Relevant to our work in the quantum operator input model is the idea of block-encoding the _Hadamard_ (or element-wise) product of two matrices. We will demonstrate how one can carry out the Hadamard product of block-encodings of matrices \(A\) and \(B\) as a reduction of the Kronecker product of block-encodings, which is straightforward to construct given block-encodings of \(A\) and \(B\).

**Proposition 3**.: _(Kronecker product of block-encoded matrices) Suppose that \(U_{A}\) is an \((\alpha_{1},a_{1},\xi_{A})\)-block-encoding of \(A\in\mathbb{R}^{n\times n}\), and \(U_{B}\) is an \((\alpha_{2},a_{2},\xi_{B})\)-block-encoding of \(B\in\mathbb{R}^{n\times n}\)._
_Then, taking the tensor product of \(U_{A}\) and \(U_{B}\), we obtain an \((\alpha_{1}\alpha_{2},a_{1}+a_{2},\xi_{A}+\xi_{B})\)-block-encoding of \(A\otimes B\)._

We do not give a formal proof here, as the result directly follows from the definition of a block-encoding; to obtain the tensor product of two block-encoded matrices, it suffices to take the tensor product of their block-encodings while keeping the ancilla qubits separate.

**Proposition 4** (Hadamard product of block-encoded matrices).: _Suppose that \(U_{A}\) is an \((\alpha_{1},a_{1},\xi_{A})\)-block-encoding of \(A\in\mathbb{R}^{n\times n}\), and \(U_{B}\) is an \((\alpha_{2},a_{2},\xi_{B})\)-block-encoding of \(B\in\mathbb{R}^{n\times n}\). Then, we can implement an \((\alpha_{1}\alpha_{2},a_{1}+a_{2}+8\log(n)+12,5(\xi_{A}+\xi_{B}))\)-block-encoding of \(A\circ B\) using one application each of \(U_{A}\) and \(U_{B}\), and \(\widetilde{\mathcal{O}}_{n}(1)\) additional gates._

Proof.: First, note that \[A\circ B=(A\otimes B)[\iota_{A},\iota_{B}],\] where \(\iota_{A}=\iota_{B}=\{1,n+2,2n+3,\ldots,n^{2}\}\) are index sets of cardinality \(n\) (see, e.g., Lemma 5.1.1 in [33]). Our goal is to use the index sets \(\iota_{A}\) and \(\iota_{B}\) along with a block-encoding of \(A\otimes B\) to construct a unitary which block-encodes \(\mathcal{M}\in\mathbb{R}^{n^{2}\times n^{2}}\), a matrix which contains the elements of \(A\circ B\) in its upper left-most \(n\times n\) block, while all other entries are \(0\): \[\mathcal{M}_{ij}=\begin{cases}A_{ij}\cdot B_{ij}&\text{for }i,j=1,\ldots,n,\\ 0&\text{otherwise},\end{cases}\] i.e., \[\mathcal{M}=\begin{pmatrix}A\circ B&\mathbf{0}^{n\times(n^{2}-n)}\\ \mathbf{0}^{(n^{2}-n)\times n}&\mathbf{0}^{(n^{2}-n)\times(n^{2}-n)}\end{pmatrix}.\] We will first show how one can use \(\iota_{A}\) and \(\iota_{B}\) to construct sparse matrices that map \(A\otimes B\) to \(\mathcal{M}\), and then subsequently analyze the cost of constructing the corresponding unitary block-encoding.

Consider the matrix \(Z\in\mathbb{R}^{n^{2}\times n^{2}}\), whose elements are defined as \[Z_{ij}=\begin{cases}1&\text{if }i=j=(k-1)n+k,\quad k=1,\ldots,n,\\ 0&\text{otherwise}.\end{cases}\] Multiplying \(A\otimes B\) on the left by \(Z\) sets the rows of \(A\otimes B\) which do not contain elements of \(A\circ B\) to zero, and subsequently multiplying \(Z(A\otimes B)\) on the right by \(Z\) will set the columns of \(Z(A\otimes B)\) which do not appear in \(A\circ B\) to zero. As a result, a block-encoding of \(Z(A\otimes B)Z\) corresponds to block-encoding \(A\otimes B\), and setting all terms not appearing in \(A\circ B\) to zero: \[[Z(A\otimes B)Z]_{ij}=\begin{cases}[A\otimes B]_{ij}&\text{if }i=(k-1)n+k\text{ and }j=(\ell-1)n+\ell,\quad k,\ell=1,\ldots,n,\\ 0&\text{otherwise.}\end{cases}\]

Next, let \(G\in\mathbb{R}^{n^{2}\times n^{2}}\) be the permutation matrix that swaps index \(k\) with index \((k-1)n+k\) for each \(k\in[n]\), and fixes all remaining indices; that is, its elements are defined as follows: \[G_{ij}=\begin{cases}1&\text{if }i\in[n]\text{ and }j=(i-1)n+i,\\ 1&\text{if }j\in[n]\text{ and }i=(j-1)n+j,\\ 1&\text{if }i=j,\text{ with }i\notin[n]\text{ and }i\neq(k-1)n+k\text{ for all }k\in[n],\\ 0&\text{otherwise.}\end{cases}\] We will now establish that \(G(Z(A\otimes B)Z)G^{\top}\) is precisely the matrix we seek to block-encode, by demonstrating that \(G(Z(A\otimes B)Z)G^{\top}=\mathcal{M}\).
First, observe that \(G\) is a permutation matrix: multiplying \(Z(A\otimes B)Z\) on the left by \(G\) performs the necessary row-exchanges, as the elements of \(G(Z(A\otimes B)Z)\) are given by \[[G\left(Z(A\otimes B)Z\right)]_{ik}=\begin{cases}A_{ij}\cdot B_{ij}&\text{for }k=(j-1)n+j,\quad i,j=1,\ldots,n,\\ 0&\text{otherwise.}\end{cases}\] On the other hand, multiplying \(Z(A\otimes B)Z\) on the right by \(G^{\top}\) performs this transformation with respect to the columns, such that \[[\left(Z(A\otimes B)Z\right)G^{\top}]_{kj}=\begin{cases}A_{ij}\cdot B_{ij}&\text{for }k=(i-1)n+i,\quad i,j=1,\ldots,n,\\ 0&\text{otherwise.}\end{cases}\] Hence, multiplying \(G\left(Z(A\otimes B)Z\right)\) on the right by \(G^{\top}\) conducts the column exchanges that move \(A\circ B\) to the top-left \(n\times n\) block, i.e., \[[G\left(Z(A\otimes B)Z\right)G^{\top}]_{ij}=\begin{cases}A_{ij}\cdot B_{ij}&\text{for }i,j=1,\ldots,n,\\ 0&\text{otherwise.}\end{cases}\] Therefore, \(G(Z(A\otimes B)Z)G^{\top}=\mathcal{M}\) as desired.

We now analyze the cost associated with block-encoding \(\mathcal{M}\). Under the stated hypothesis, we have access to an \((\alpha_{1},a_{1},\xi_{A})\)-block-encoding \(U_{A}\) of \(A\), and an \((\alpha_{2},a_{2},\xi_{B})\)-block-encoding \(U_{B}\) of \(B\), and thus applying Proposition 3 we can construct an \((\alpha_{1}\alpha_{2},a_{1}+a_{2},\xi_{A}+\xi_{B})\)-block-encoding \(U_{A\otimes B}\) of \(A\otimes B\) using one application of \(U_{A}\) and of \(U_{B}\), and no additional gates. Using the description of \(Z\), we can construct the sparse-access oracles \(O_{r}\) and \(O_{c}\) as defined in Lemma 4 (which act on two \((2\log n+1)\)-qubit registers). Additionally, from the definition of \(Z\), we can construct an oracle \(O_{Z}\), which returns the entries of \(Z\) in a binary description: \[O_{Z}:\ket{i}\ket{j}\ket{0}^{\otimes p}\mapsto\ket{i}\ket{j}\ket{z_{ij}},\quad\forall i,j\in[2^{2\log n}]-1,\] where \(z_{ij}\) is a \(p\)-bit binary description of the \(ij\)-matrix element of \(Z\). Note that the circuits computing the position and value of the nonzero elements of \(Z\) require only \(\widetilde{\mathcal{O}}_{n}(1)\) gates, because these elements admit an efficient description: their value is \(1\), and we have a compact description of their position. By construction, the matrix \(Z\) is \(1\)-row sparse and \(1\)-column sparse, and hence an application of Lemma 4 with \(s_{r}=s_{c}=1\) asserts that one can construct a \((1,2\log(n)+3,\xi_{Z})\)-block-encoding \(U_{Z}\) of \(Z\).

Given block-encodings \(U_{Z}\) and \(U_{A\otimes B}\), we can apply Proposition 2 with \[\xi_{Z}=\frac{\xi_{A}+\xi_{B}}{\alpha_{1}\alpha_{2}},\quad\ \xi_{A\otimes B}=\xi_{A}+\xi_{B},\] yielding an \((\alpha_{1}\alpha_{2},a_{1}+a_{2}+2\log(n)+3,2(\xi_{A}+\xi_{B}))\)-block-encoding of \(Z(A\otimes B)\). Applying Proposition 2 once more with \[\xi_{Z}=\frac{\xi_{A}+\xi_{B}}{\alpha_{1}\alpha_{2}},\quad\ \xi_{Z(A\otimes B)}=2(\xi_{A}+\xi_{B}),\] we obtain an \((\alpha_{1}\alpha_{2},a_{1}+a_{2}+4\log(n)+6,3(\xi_{A}+\xi_{B}))\)-block-encoding of \(Z(A\otimes B)Z\).
Just as was the case with \(Z\), we can use the description of \(G\) to construct the sparse-access oracles \(O_{r}\) and \(O_{c}\) as defined in Lemma 4 (which again, act on two \((2\log n+1)\)-qubit registers), as well as an oracle \(O_{G}\) using \(\widetilde{\mathcal{O}}_{n}(1)\) gates, that returns the entries of \(G\) in a binary description: \[O_{G}:\left|i\right\rangle\left|j\right\rangle\left|0\right\rangle^{\otimes p}\mapsto\left|i\right\rangle\left|j\right\rangle\left|g_{ij}\right\rangle,\quad\forall i,j\in[2^{2\log n}]-1,\] where \(g_{ij}\) is a \(p\)-bit binary description of \(G_{ij}\) (the \(ij\)-matrix element of \(G\)). Noting that \(G\) is 1-row sparse and 1-column sparse (and hence, so is its transpose), applying Lemma 4 twice more allows us to construct a \((1,2\log(n)+3,\xi_{G})\)-block-encoding \(U_{G}\) of \(G\), as well as a \((1,2\log(n)+3,\xi_{G^{\top}})\)-block-encoding \(U_{G^{\top}}\) of the transpose \(G^{\top}\). We can then use \(U_{G}\) and our \((\alpha_{1}\alpha_{2},a_{1}+a_{2}+4\log(n)+6,3(\xi_{A}+\xi_{B}))\)-block-encoding \(U_{Z(A\otimes B)Z}\) of \(Z(A\otimes B)Z\) to construct an \((\alpha_{1}\alpha_{2},a_{1}+a_{2}+6\log(n)+9,4(\xi_{A}+\xi_{B}))\)-block-encoding of \(G(Z(A\otimes B)Z)\) by applying Proposition 2 with \[\xi_{G}=\frac{\xi_{A}+\xi_{B}}{\alpha_{1}\alpha_{2}},\quad\ \xi_{Z(A\otimes B)Z}=3(\xi_{A}+\xi_{B}).\] Applying Proposition 2 a final time, with \[\xi_{G^{\top}}=\frac{\xi_{A}+\xi_{B}}{\alpha_{1}\alpha_{2}},\quad\ \xi_{G(Z(A\otimes B)Z)}=4(\xi_{A}+\xi_{B}),\] produces an \((\alpha_{1}\alpha_{2},a_{1}+a_{2}+8\log(n)+12,5(\xi_{A}+\xi_{B}))\)-block-encoding \(U_{\mathcal{M}}\) of \(\mathcal{M}=G(Z(A\otimes B)Z)G^{\top}\). The stated complexity result follows upon noting that constructing the unitary \[U_{\mathcal{M}}=U_{G}U_{Z}U_{A\otimes B}U_{Z}U_{G^{\top}}\] requires one application of \(U_{A\otimes B}\) and one application of each of the other unitaries. In turn, this amounts to one application each of \(U_{A}\) and \(U_{B}\), plus the \(\widetilde{\mathcal{O}}_{n}(1)\) gate cost of the remaining unitaries \(U_{G}\), \(U_{Z}\) and \(U_{G^{\top}}\), and the proof is complete.

We remark that a similar result to Proposition 4 was independently derived and discussed in the recent paper [13].

#### 2.1.4 Gibbs Samplers and Trace Estimators

For clarity, we begin with a formal definition of subnormalized density operators and their purifications.

**Definition 3** (Definition 6.3.1 in [22]).: _(Subnormalized density operators & Purification) A subnormalized density operator \(\rho\) is a positive semidefinite matrix of trace at most 1. A purification \(\varrho\) of a subnormalized density operator \(\rho\) is a 3-register pure state such that tracing out the third register and projecting on the subspace where the second register is \(\left|0\right\rangle\) yields \(\rho\)._

The frameworks introduced later in this paper require that we implement a Gibbs sampler and a trace estimator, which we define next.
**Definition 4** (Definition 4.11 in [58]).: _(Gibbs Sampler) A \(\theta\)-precise Gibbs-sampler for the input matrix \(H\) is a unitary that takes as input a data structure storing a Hamiltonian \(H\) and creates as output a purification of a \(\theta\)-approximation (in trace distance) of the Gibbs state_ \[\rho=\frac{\exp(-H)}{\operatorname{tr}(\exp(-H))}.\]

We will use these approximate Gibbs states in order to check the diagonal entries of our solutions, as well as to compute the trace inner products of matrices (or, expectation values), i.e., quantities of the form \(\operatorname{tr}(A\rho)\).

**Definition 5** (Definition 4.12 in [58]).: _(Trace Estimator) A \(\theta\)-precise trace estimator is a unitary that takes as input a state \(\rho\) and a matrix \(A\). It outputs a sample from a random variable \(x\in\mathbb{R}\) such that \(x\) is an estimator for \(\operatorname{tr}(A\rho)\) with bias at most \(\theta/4\)._

These implementations require polynomial approximations of the exponential function, which can be obtained using quantum singular value transformation techniques introduced in [22, 23].

**Lemma 5** (Lemma 4.14 in [58]).: _Let \(\xi\in(0,1/6]\) and \(\beta\geq 1\). There exists a polynomial \(P(x)\) such that_

* _For all_ \(x\in[-1,0]\)_, we have_ \(|P(x)-\exp(2\beta x)/4|\leq\xi\)_._
* _For all_ \(x\in[-1,1]\)_, we have_ \(|P(x)|\leq 1/2\)_._
* \(\deg(P)=\widetilde{\mathcal{O}}_{\frac{1}{\xi}}(\beta)\)_._

**Lemma 6** (Lemma 4.15 in [58]).: _Let \(\theta\in(0,1/3]\), \(\beta>1\), and let \(d\) be the degree of the polynomial from Lemma 5 when we let \(\xi=\frac{\theta}{128n}\). Let \(U\) be a \((\beta,a,\frac{\theta^{2}\beta}{1024^{2}d^{2}n^{2}})\)-block-encoding of a Hermitian operator \(H\in\mathbb{R}^{n\times n}\), i.e., a \((\beta,a,\widetilde{\mathcal{O}}(\theta/\beta n^{2}))\)-block-encoding. Then, we can create a purification of a state \(\tilde{\rho}\) such that_ \[\left\|\tilde{\rho}-\frac{\exp(H)}{\operatorname{tr}\left(\exp(H)\right)}\right\|_{\operatorname{tr}}\leq\theta\] _using \(\widetilde{\mathcal{O}}_{\frac{1}{\theta}}(\sqrt{n}\beta)\) applications of \(U\) and \(\widetilde{\mathcal{O}}_{\frac{1}{\theta}}(\sqrt{n}\beta a)\) elementary operations._

Provided access to a unitary that prepares a purification of a density operator, we can also construct a block-encoding of it. This is formalized in the following lemma from [22], which was based on ideas found in [45, Corollary 9].

**Lemma 7** (Lemma 6.4.4 in [22]).: _(Block-encoding of a (subnormalized) density operator) Let \(G\) be a \((w+a)\)-qubit unitary which on the input state \(\left|0\right\rangle^{w}\left|0\right\rangle^{a}\) prepares a purification \(\left|\varrho\right\rangle\) of the subnormalized \(w\)-qubit density operator \(\rho\). Then we can implement a \((1,w+a,0)\)-block-encoding of \(\rho\) with a single use of \(G\) and its inverse and with \(w+1\) two-qubit gates._

We are now in a position to define a trace estimator using the quantum operator input model.

**Lemma 8** (Lemma 4.18 in [58]).: _Let \(\rho\) be an \(n\)-dimensional quantum state and \(U\) an \((\alpha,a,\theta/2)\)-block-encoding of a matrix \(A\in\mathbb{R}^{n\times n}\) with \(\|A\|\leq 1\)._
_A trace estimator for \(\operatorname{tr}\left(A\rho\right)\) with bias at most \(\theta\) and \(\sigma=\mathcal{O}(1)\) can be implemented using \(\widetilde{\mathcal{O}}(\alpha)\) uses of \(U\) and \(U^{\dagger}\) and \(\widetilde{\mathcal{O}}_{\frac{1}{\theta}}(\alpha)\) elementary operations._

#### 2.1.5 Computational complexity

When discussing the computational complexity of quantum algorithms, we normally express the cost in terms of the number of calls to some input oracle. Unless otherwise specified, the gate complexity is at most a polylogarithmic factor larger than the stated oracle complexity. The meaning of "input oracle access" depends on the input model:

* For the sparse-access model, it refers to a query to the oracle describing \(C/\|C\|_{F}\).
* For the QRAM model, it refers to the number of accesses to QRAM. A QRAM of size \(\mathcal{O}\left(ns\log^{2}(n)\right)\) is sufficient for our algorithms, and in particular, we only need classical write access to the QRAM, i.e., we do not write in superposition.

It is straightforward to translate each of these oracle costs into a running time in the standard gate model without QRAM, by considering the cost of implementing each oracle.

## 3 Hamiltonian Updates

In this section, we present the algorithm from [10] and relevant results required to prove its convergence and analyze its cost.

### Convex Feasibility Problems

In order to avoid any normalization issues for the problems that arise over the course of our IR scheme, we deviate slightly from [10] and renormalize the problem (3) using the Frobenius norm of the cost matrix rather than its operator norm: \[\begin{array}{rl}\mbox{find}&X\\ \mbox{s.t.}&\mbox{tr}\left(\frac{C}{\|C\|_{F}}X\right)\geq\gamma-\epsilon\\ &\sum_{i\in[n]}\left|\langle i|X|i\rangle-\frac{1}{n}\right|\leq\epsilon\\ &\mbox{tr}\left(X\right)=1,\quad X\succeq 0.\end{array} \tag{5}\]

The relaxed renormalized SDO problem (5) is a specific example of the convex optimization problem \[\begin{array}{rl}\max&f(X)\\ \mbox{s.t.}&X\in\mathcal{P}_{1}\cap\mathcal{P}_{2}\cap\cdots\cap\mathcal{P}_{m},\\ &\mbox{tr}(X)=1,\ X\succeq 0,\end{array} \tag{6}\] where \(\mathcal{P}_{1},\ldots,\mathcal{P}_{m}\) are convex sets. In this context, the trace constraint enforces normalization, but also allows us to obtain a bound on the optimal objective value. Letting \(\widetilde{C}=C\|C\|_{F}^{-1}\) and invoking the tracial matrix Hölder inequality [8], it follows that any \(X^{*}\) that solves (6) satisfies the following relation: \[\left|\mbox{tr}(\widetilde{C}X^{*})\right|\leq\|\widetilde{C}\|\|X^{*}\|_{\mbox{tr}}=\|\widetilde{C}\|.\] It is well known in the optimization literature that, by performing binary search over the range of values \[\gamma\in\left[-\|\widetilde{C}\|,\|\widetilde{C}\|\right]\subseteq[-1,1]\] that the objective can take, the task of solving (6) reduces to solving a sequence of feasibility problems of the form: \[\begin{array}{rl}\mbox{find}&X\in\mathcal{S}_{+}^{n}\cap\{X:\mbox{tr}(X)=1\}\\ \mbox{s.t.}&\mbox{tr}(\widetilde{C}X)\geq\gamma\\ &X\in\mathcal{P}_{1}\cap\mathcal{P}_{2}\cap\cdots\cap\mathcal{P}_{m}.\end{array} \tag{7}\] In particular, \(\log(\|\widetilde{C}\|\epsilon^{-1})\) queries to (7) are sufficient to estimate the optimal objective value of (6) up to additive error \(\epsilon\).
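To make the reduction concrete, the following sketch implements the binary search, treating the feasibility problem (7) as a black box; `feasible` is a hypothetical stand-in for one call to the HU solver of the next subsection.

```python
def estimate_optimum(feasible, eps: float) -> float:
    """Bisection over gamma in [-1, 1]: O(log(1/eps)) feasibility queries
    estimate the optimal objective value of (6) up to additive error eps.

    `feasible(gamma)` is a hypothetical oracle for (7) that returns True
    when a state attaining objective value at least gamma exists (up to
    the oracle's internal tolerance).
    """
    lo, hi = -1.0, 1.0  # valid range since |tr(C~ X*)| <= ||C~|| <= 1
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid  # some state achieves value >= mid
        else:
            hi = mid  # the optimum lies strictly below mid
    return lo

# Toy usage with a stand-in oracle whose true optimum is 0.3:
print(estimate_optimum(lambda g: g <= 0.3, 1e-6))  # ~0.3
```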
### Solving Convex Feasibility Problems via Hamiltonian Updates

Hamiltonian Updates (HU) is a meta-algorithm for solving convex feasibility problems of the form (7), adapted from the work of Tsuda, Rätsch and Warmuth [57] as well as [4, 9, 32, 42]. At a high level, HU can be viewed as a mirror descent algorithm with the von Neumann entropy as the mirror map. In each iteration, the method uses certain subroutines to test \(\epsilon\)-closeness to convex sets \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{m}\), which we formally define next.

**Definition 6** (Definition 2.1 in [10]).: _Let \(\mathcal{P}\subset\{X\in\mathcal{S}_{+}^{n}:\operatorname{tr}(X)=1\}\) be a closed, convex subset of quantum states, and \(\widetilde{\mathcal{P}}\subset\{X\in\mathbb{C}^{n\times n}:X=X^{\dagger},\|X\|\leq 1\}\) be a closed, convex subset of observables of operator norm at most 1. For \(\epsilon>0\), an \(\epsilon\)-separation oracle with respect to \(\widetilde{\mathcal{P}}\) is a subroutine that either accepts a state \(\rho\) (in the sense that observables from \(\widetilde{\mathcal{P}}\) cannot distinguish \(\rho\) from the elements of \(\mathcal{P}\)), or provides a normal vector (in the matrix space) \(P\) of a hyperplane that separates \(\rho\) from the set \(\mathcal{P}\) using a test from \(\widetilde{\mathcal{P}}\):_ \[O_{\mathcal{P},\epsilon}(\rho)=\begin{cases}\text{accept }\rho&\text{if }\min_{Y\in\mathcal{P}}\max_{P\in\widetilde{\mathcal{P}}}\operatorname{tr}(P(\rho-Y))\leq\epsilon,\\ \text{output }P\in\widetilde{\mathcal{P}}\text{ s.t. }\operatorname{tr}(P(\rho-Y))\geq\frac{\epsilon}{2}\text{ for all }Y\in\mathcal{P}&\text{otherwise}.\end{cases}\]

The authors in [10] point out that the above oracle construction is well defined, as we can always choose some hyperplane \(P\in\widetilde{\mathcal{P}}\) such that \[\operatorname{tr}\left(P(\rho-Y)\right)\geq\frac{\epsilon}{2}\] holds for all \(Y\in\mathcal{P}\) whenever \[\min_{Y\in\mathcal{P}}\max_{P\in\widetilde{\mathcal{P}}}\operatorname{tr}(P(\rho-Y))>\epsilon.\] From Sion's min-max theorem [54], it follows that \[\max_{P\in\widetilde{\mathcal{P}}}\min_{Y\in\mathcal{P}}\operatorname{tr}(P(\rho-Y))=\min_{Y\in\mathcal{P}}\max_{P\in\widetilde{\mathcal{P}}}\operatorname{tr}(P(\rho-Y))>\epsilon,\] and hence there exists a hyperplane which separates \(\rho\) from \(\mathcal{P}\) by \(\epsilon\). By relaxing the requirement to \(\frac{\epsilon}{2}\)-separation, the algorithm is able to tolerate the errors that result from approximating quantities computed with \(\rho\), or from estimating its entries.

The Hamiltonian Updates (HU) algorithm of Brandão et al. [10] is provided in full detail in Algorithm 1. The algorithm takes as input the precision parameter \(\epsilon\), and \(m\) \(\epsilon\)-separation oracles \(O_{1,\epsilon},O_{2,\epsilon},\ldots,O_{m,\epsilon}\). In the initialization steps, the starting point is defined to be the maximally mixed state \(\rho\gets n^{-1}I\). This is critical to ensuring the convergence of mirror descent-based approaches such as Algorithm 1 and the works in [4, 9, 32, 42, 57]; initialization to the maximally mixed state ensures that the quantum relative entropy between any feasible state and the initial state is bounded by \(\log(n)\) (see, e.g., Theorem 11.8 pt. 2 [51]), and is reduced at every iteration. Consequently, Algorithm 1 terminates in a finite number of iterations.
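For concreteness, a compact classical rendering of this loop is sketched below. It is illustrative only: it stores \(H\) densely and computes the Gibbs state exactly via `scipy.linalg.expm`, whereas Algorithm 1 tracks \(H\) through structured data; the step size \(\epsilon/16\) and the iteration cap anticipate the update rule and Theorem 3 discussed next.

```python
import numpy as np
from scipy.linalg import expm

def hamiltonian_updates(oracles, n: int, eps: float):
    """Sketch of the HU loop (mirror descent on Gibbs states).

    Each element of `oracles` is an eps-separation oracle: given rho, it
    returns None to accept, or a Hermitian P with ||P|| <= 1 witnessing
    infeasibility. All names here are illustrative.
    """
    H = np.zeros((n, n))
    max_iters = int(np.ceil(64 * np.log(n) / eps**2)) + 1  # cf. Theorem 3
    for _ in range(max_iters):
        rho = expm(-H)
        rho /= np.trace(rho)          # Gibbs state exp(-H)/tr(exp(-H))
        P = None
        for oracle in oracles:
            P = oracle(rho)
            if P is not None:
                break
        if P is None:
            return rho, H             # every oracle accepts: eps-feasible
        H = H + (eps / 16.0) * P      # penalize the infeasible direction
    return None, H                    # report infeasibility
```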
As noted in [10], how we define \(\widetilde{\mathcal{P}}\) determines the number of closeness conditions that need to be tested. By using the Gibbs state change of variables, we do not need to test whether our candidate solution is trace normalized or positive semidefinite; any Gibbs state \[\rho_{H}=\frac{\exp(-H)}{\operatorname{tr}(\exp(-H))}\] is an element of the set \(\{X\in\mathcal{S}_{+}^{n}:\operatorname{tr}(X)=1\}\) by definition. Our task therefore reduces to finding a \(\log(n)\)-qubit mixed state \(\rho\) which is \(\epsilon\)-close to the convex sets \(\mathcal{P}_{i}\) that arise from any other constraints included in the feasibility problem. At each iteration, \(\epsilon\)-closeness is tested by querying \(\epsilon\)-separation oracles which are constructed using observables in \(\widetilde{\mathcal{P}}_{i}\). If each of our oracles accepts the candidate state, the algorithm terminates and reports \((\rho,H)\) as an \(\epsilon\)-precise solution. Otherwise, upon detecting infeasibility, the matrix exponent is updated to penalize the infeasible directions using the rule \[H\gets H+\frac{\epsilon}{16}P,\] where \(P\) is a normal vector in the matrix space of a hyperplane that witnesses infeasibility.

The following result establishes the iteration complexity of Algorithm 1.

**Theorem 3** (Theorem 2.1 in [10]).: _Algorithm 1 requires at most \(T=\lceil 64\log(n)\epsilon^{-2}\rceil+1\) iterations to certify that (7) is infeasible or output a state \(\rho\) satisfying_ \[\text{for all }1\leq i\leq m:\ \max_{P_{i}\in\widetilde{\mathcal{P}}_{i}}\min_{Y_{i}\in\mathcal{P}_{i}}\operatorname{tr}(P_{i}(\rho-Y_{i}))\leq\epsilon.\]

Note that Theorem 3 applies to _any_ convex feasibility problem (on density operators, i.e., trace-normalized positive semidefinite matrices) for which we have separation oracles as outlined in Definition 6. This is crucial for the development of an iterative refinement scheme.

There is an important distinction with respect to output across the models of computation we study. A classical implementation of Algorithm 1 outputs an explicit description of an \(\epsilon\)-precise solution \(\rho^{*}\) to (5) and its associated Hamiltonian \(H^{*}\), whereas a quantum implementation reports a real-valued vector \(y\in\mathbb{R}^{2}\) along with a diagonal matrix \(D\) (with \(\|D\|\leq 1\)) such that \(H^{*}=y_{1}\frac{C}{\|C\|_{F}}+y_{2}D\). The vector \(y=(y_{1},y_{2})^{\top}\) is the _state preparation pair_ of \(\rho^{*}\); in particular, \[\rho^{*}=\frac{\exp\left(-\left(y_{1}\frac{C}{\|C\|_{F}}+y_{2}D\right)\right)}{\operatorname{tr}\left[\exp\left(-\left(y_{1}\frac{C}{\|C\|_{F}}+y_{2}D\right)\right)\right]},\] and we refer to this type of output as a _state preparation pair description_ of \(\rho\). This choice of output is used in all quantum SDO solvers based on Gibbs sampling techniques (see, e.g., [9, 10, 11, 60, 61]), and is motivated by the fact that it is difficult to develop quantum algorithms that are substantially faster than classical algorithms if we still have to output each entry of the solution (an \(n\times n\) matrix).

The Gibbs sampling approaches that we apply later exhibit a cost that depends on a norm bound for \(y\). Observe that we initialize \(y\) to the all-zeros vector of appropriate dimension, and in every iteration, at most one entry of \(y\) changes by a magnitude of \(\frac{\epsilon}{16}\) (specifically, an entry \(y_{i}\), where the oracle \(O_{i,\epsilon}\) has detected infeasibility).
As a consequence, the vector \(y\) satisfies the inequality \[\left\|y^{(t+1)}-y^{(t)}\right\|\leq\frac{\epsilon}{16} \tag{8}\] for each iteration \(t\). In view of the iteration bound for Algorithm 1 provided in Theorem 3, it is easy to see that for any \(y\) obtained from Algorithm 1 we have \[\|y\|_{1}\leq\lceil 64\log(n)\epsilon^{-2}\rceil\left\|y^{(t+1)}-y^{(t)}\right\|\leq\lceil 64\log(n)\epsilon^{-2}\rceil\frac{\epsilon}{16}\leq 4\log(n)\epsilon^{-1}. \tag{9}\]

To instantiate the algorithm to solve problem (3), we need to choose the sets \(\mathcal{P}_{i}\) and provide separation oracles for them. This is what we do in the following section.

#### 3.2.1 Oracle Construction

The goal of Hamiltonian Updates is to solve, for fixed \(\gamma\in[-1,1]\), the following feasibility problem: \[\begin{array}{ll}\mbox{find}&\rho\in\left\{X\in\mathcal{S}_{+}^{n}:\mbox{tr}(X)=1\right\}\cap\mathcal{C}_{\gamma}\cap\mathcal{D}_{n}\\ \mbox{where}&\mathcal{C}_{\gamma}=\left\{X:\mbox{tr}\left(\frac{C}{\|C\|_{F}}X\right)\geq\gamma\right\},\\ &\mathcal{D}_{n}=\left\{X:\langle i|X|i\rangle=\frac{1}{n},i\in[n]\right\}.\end{array} \tag{10}\] One can observe that the set \(\mathcal{C}_{\gamma}\) constitutes a halfspace, while \(\mathcal{D}_{n}\) is an affine space of codimension \(n\). The sets of observables for \(\mathcal{C}_{\gamma}\) and \(\mathcal{D}_{n}\) are given by \(\widetilde{\mathcal{C}}_{\gamma}\) and \(\widetilde{\mathcal{D}}_{n}\) respectively, with \[\widetilde{\mathcal{C}}_{\gamma}=\{-C\|C\|_{F}^{-1}\},\text{ and }\widetilde{\mathcal{D}}_{n}=\{D\in\mathbb{R}^{n\times n}:\|D\|\leq 1,\ D\text{ is diagonal}\}.\] As noted in [10], it follows that \[\max_{P\in\widetilde{\mathcal{C}}_{\gamma}}\min_{Y\in\mathcal{C}_{\gamma}}\operatorname{tr}(P(\rho-Y))\leq\epsilon\iff-\operatorname{tr}\left(C\|C\|_{F}^{-1}(\rho-Y)\right)\leq\epsilon\quad\text{for some }Y\in\mathcal{C}_{\gamma},\] which in turn implies \(\operatorname{tr}\left(C\|C\|_{F}^{-1}\rho\right)\geq\gamma-\epsilon\). Given the structure of \(\mathcal{C}_{\gamma}\) and \(\mathcal{D}_{n}\), the authors in [10] suggest the following two separation oracles:

* \(O_{\mathcal{C}_{\gamma}}\): compute an approximation \(\tilde{c}\) of \(\operatorname{tr}\left(C\|C\|_{F}^{-1}\rho\right)\) up to additive error \(\frac{\epsilon}{4}\). Check whether \(\tilde{c}\geq\gamma-\frac{3\epsilon}{4}\), and output \(P=-C\|C\|_{F}^{-1}\) if the inequality is violated.
* \(O_{\mathcal{D}_{n}}\): compute an approximation \(\tilde{p}\in\mathbb{R}^{n}\) of \(p_{i}=\langle i|\rho|i\rangle\) satisfying \(\sum_{i=1}^{n}|p_{i}-\tilde{p}_{i}|\leq\frac{\epsilon}{4}\). Check whether \(\sum_{i=1}^{n}\left|\tilde{p}_{i}-\frac{1}{n}\right|\leq\frac{3\epsilon}{4}\), and output \(P=\sum_{i=1}^{n}\left(\mathbb{I}\left\{\tilde{p}_{i}>\frac{1}{n}\right\}-\mathbb{I}\left\{\tilde{p}_{i}<\frac{1}{n}\right\}\right)|i\rangle\langle i|\) if the inequality is violated.

For any given \[\rho_{H}=\frac{\exp(-H)}{\operatorname{tr}(\exp(-H))},\] the required separation oracles are straightforward to implement on a classical computer that has access to \(\rho_{H}\). Thus, classically we only need to prepare \(\rho_{H}\) once and store it to build the separation oracles. The next result from [10] establishes that computing an \(\mathcal{O}(\log(n)\epsilon^{-1})\)-degree Taylor series suffices to produce accurate approximations.
**Lemma 9** (Lemma 3.2 in [10]).: _Fix a Hermitian \(n\times n\) matrix \(H\), an accuracy \(\epsilon\), and let \(\ell\) be the smallest even number satisfying \((\ell+1)(\log(\ell+1)-1)\geq 2\|H\|+\log(n)+\log\left(\frac{1}{\epsilon}\right)\). Then, the truncated matrix exponential \(T_{\ell}=\sum_{k=0}^{\ell}\frac{1}{k!}(-H)^{k}\) satisfies_ \[\left\|\frac{\exp(-H)}{\mbox{tr}\left(\exp(-H)\right)}-\frac{T_{\ell}}{\mbox{tr}(T_{\ell})}\right\|_{\mbox{tr}}\leq\epsilon.\]

In the quantum setting, the algorithm instead works with approximately prepared Gibbs states [10], which are used to test closeness to the sets \(\mathcal{C}_{\gamma}\) and \(\mathcal{D}_{n}\) via quantum measurements. While in Lemma 9 we bound the number of required Taylor series steps for computing \(\rho\) via a matrix exponential, in the quantum case we bound the number of copies of \(\rho\) required to estimate its diagonal entries and expectation values \(\operatorname{tr}(A\rho)\).

**Lemma 10**.: _Fix \(\epsilon\in(0,1)\). Let \(\rho\) be a \(\log(n)\)-qubit quantum state and \(U\) a \((1,\log(n)+2,\epsilon/(2n))\)-block-encoding of \(C\|C\|_{F}^{-1}\). Then, we can implement the oracle \(O_{\mathcal{C}_{\gamma}}\) on a quantum computer given access to \(\mathcal{O}(\epsilon^{-1})\) copies of a state that is an \(\frac{\epsilon}{8}\)-approximation of the input state \(\rho\) in trace distance and \(\mathcal{O}(\epsilon^{-1})\) applications of \(U\) and \(U^{\dagger}\). The oracle \(O_{\mathcal{D}_{n}}\) can be implemented using \(\mathcal{O}(n\epsilon^{-2})\) \(\frac{\epsilon}{8}\)-approximate copies of the input, and the classical post-processing time needed to implement the oracle is \(\mathcal{O}(n\epsilon^{-2})\)._

Proof.: First, note that we can obtain an estimate \(\tilde{p}\) of the diagonal elements of \(\rho\) whose total variation distance from \(p\) is no more than \(\frac{\epsilon}{8}\) using \(\widetilde{\mathcal{O}}_{n}\left(n\epsilon^{-2}\right)\) copies of \(\rho\) to measure \(\rho\) in the computational basis. Further, provided access to \(\rho\) and a \((1,\log(n)+2,\epsilon/(2n))\)-block-encoding \(U\) of \(C\|C\|_{F}^{-1}\), by Lemma 8, a trace estimator for \(\operatorname{tr}\left(C\|C\|_{F}^{-1}\rho\right)\) with bias at most \(\frac{\epsilon}{n}\) can be implemented using \(\widetilde{\mathcal{O}}(1)\) uses of \(U\) and \(U^{\dagger}\) and \(\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}(1)\) elementary operations. From here, applying amplitude estimation with \(\mathcal{O}(\epsilon^{-1})\) quantum samples (i.e., state preparation unitaries) from the trace estimator suffices to compute an approximation of \(\operatorname{tr}\left(C\|C\|_{F}^{-1}\rho\right)\) up to additive error \(\frac{\epsilon}{8}\), as needed to implement \(O_{\mathcal{C}_{\gamma}}\). The rest of the proof exactly follows the proof of [10, Lemma 3.3].

We remark that multidimensional phase estimation techniques from [59] could improve the dependence on \(\epsilon^{-1}\) for estimating the diagonal elements of \(\rho\) to linear, which is a factor \(\epsilon^{-1}\) better than a naive application of computational basis measurements. However, in the context of the iterative refinement scheme we present later, the improvement would only reduce the amount of constant overhead in the overall running time, and multidimensional phase estimation has a larger gate complexity (which can be reduced with QRAM). There are also numerous ways to prepare Gibbs states using a quantum computer [16, 20, 37, 53, 60, 61, 63].
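Returning briefly to the classical side, the sketch below illustrates Lemma 9: it picks the smallest even truncation degree \(\ell\) satisfying the stated inequality and forms the normalized Taylor truncation (here \(\|H\|\) is computed directly; in practice a norm bound suffices).

```python
import numpy as np

def truncated_gibbs(H: np.ndarray, eps: float) -> np.ndarray:
    """Approximate exp(-H)/tr(exp(-H)) by the normalized Taylor truncation
    T_ell = sum_{k<=ell} (-H)^k / k!, with ell chosen as in Lemma 9."""
    n = H.shape[0]
    target = 2 * np.linalg.norm(H, 2) + np.log(n) + np.log(1.0 / eps)
    ell = 2
    while (ell + 1) * (np.log(ell + 1) - 1) < target:
        ell += 2                  # smallest even degree meeting the bound
    T, term = np.eye(n), np.eye(n)
    for k in range(1, ell + 1):
        term = term @ (-H) / k    # accumulates (-H)^k / k! incrementally
        T = T + term
    return T / np.trace(T)
```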
Following [10], we utilize the Gibbs sampler from [53] when working with the sparse-access input model, and for the QRAM input model we consider Gibbs sampling techniques introduced in [60].

### Complexity

Having understood the cost of constructing the oracles in both the classical and quantum settings, we are now in a position to analyze the complexity associated with using Algorithm 1 to obtain solutions to (5) and approximations to (3). Relevant to this discussion is the following result, which imposes precision requirements on solving (3) to an additive error of the order \(\mathcal{O}\left(n\|C\|_{F}\epsilon\right)\) using Algorithm 1.

**Proposition 5** (Proposition 3.1 in [10]).: _Let \(\rho\) be an \(\epsilon^{4}\)-accurate solution to the relaxed SDO problem (5) with input matrix \(C\). Let \(\gamma_{\epsilon^{4}}=\operatorname{tr}\left(C\|C\|_{F}^{-1}\rho\right)\) be the value attained by \(\rho\). Then, there is a quantum state \(\rho^{*}\) at trace distance \(\mathcal{O}(\epsilon)\) of \(\rho\) such that \(n\rho^{*}\) is a feasible point of the SDO problem (3). In particular,_ \[|\gamma_{\epsilon^{4}}n\|C\|_{F}-\operatorname{tr}\left(n\rho^{*}C\right)|=\mathcal{O}\left(n\|C\|_{F}\epsilon\right).\] _Moreover, it is possible to construct \(\rho^{*}\) in time \(\mathcal{O}(n^{2})\) given the entries of \(\rho\)._

We do not provide a proof of this result here, as later we will provide an improved approximation guarantee and a proof of the improved statement.

#### 3.3.1 Classical running time

Using Lemma 9 in combination with Theorem 3, we can bound the running time required to solve (5) to additive error \(\epsilon\) using a classical implementation of Algorithm 1.

**Proposition 6**.: _Suppose that \(C\) has row sparsity \(s\). Then, the classical cost of solving (5) up to additive error \(\epsilon\) using Algorithm 1 is \(\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\epsilon^{-3}\right)\)._

Proof.: The result follows directly from the proof of Corollary 3.1 in [10], but we repeat the argument here for completeness. First, observe that over the course of the iterations \(t=0,\ldots,T\), the operator norms \(\|H^{(t)}\|\) do not become prohibitively large. This follows from initializing \(H^{(0)}=\mathbf{0}^{n\times n}\), and the fact that, by (8), the inequality \[\left\|H^{(t+1)}-H^{(t)}\right\|\leq\frac{\epsilon}{16}\left\|P^{(t)}\right\|\leq\frac{\epsilon}{16}\] holds for all \(t\). By Theorem 3, Algorithm 1 requires at most \(T=\lceil 64\log(n)\epsilon^{-2}\rceil\) iterations, which implies \(\|H^{(t)}\|\leq 4\log(n)\epsilon^{-1}\) for all \(t\). By Lemma 9, it suffices to compute \(\mathcal{O}(\log(n)\epsilon^{-1})\) steps of the Taylor series corresponding to \(\exp(-H^{(t)})\) in order to obtain a matrix \(\tilde{\rho}^{(t)}\) that is at most a trace distance of \(\frac{\epsilon}{4}\) from \(\rho^{(t)}\). Moreover, given that \(H^{(t)}\) is defined as a linear combination of \(C\|C\|_{F}^{-1}\) with a diagonal matrix, matrix multiplication involving \(H^{(t)}\) can be carried out in \(\mathcal{O}(\min\{n^{2}s,n^{\omega}\})\) arithmetic operations. Given classical access to \(\tilde{\rho}^{(t)}\), the diagonal constraints comprising \(\mathcal{D}_{n}\) can be checked in time \(\mathcal{O}(n)\), whereas computing \(\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\tilde{\rho}^{(t)}\right)\) requires \(\mathcal{O}(ns)\) arithmetic operations.
Thus, the dominant operation at each iteration is computing the matrix exponential, and the classical per-iteration cost of Algorithm 1 is given by \[\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log(n)\epsilon^{-1}\right).\] Taking into account the iteration bound \(\mathcal{O}(\log(n)\epsilon^{-2})\) provided in Theorem 3, we arrive at an overall running time of \[\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\epsilon^{-3}\right).\] The proof is complete.

The next corollary from [10] follows from Proposition 5 in the context of the previous result, and provides the overall running time of Algorithm 1 to solve (3) to additive error \(\mathcal{O}\left(n\|C\|_{F}\epsilon\right)\) in the classical setting.

**Corollary 2**.: _Suppose that \(C\) has row-sparsity \(s\). Then, the classical cost of solving (3) up to an additive error \(\mathcal{O}\left(n\|C\|_{F}\epsilon\right)\) using Algorithm 1 is \(\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\epsilon^{-12}\right)\)._

Proof.: By Proposition 6, Algorithm 1 requires time \[\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\tilde{\epsilon}^{-3}\right)\] to solve (5) up to additive error \(\tilde{\epsilon}\). In order to satisfy the approximation guarantee for (3) given in Proposition 5, it suffices to solve (5) to error \(\tilde{\epsilon}=\epsilon^{4}\). Plugging in this value for the precision parameter, the total cost required to solve (3) up to an additive error \(\mathcal{O}\left(n\|C\|_{F}\epsilon\right)\) using Algorithm 1 is \[\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\tilde{\epsilon}^{-3}\right)=\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)(\epsilon^{4})^{-3}\right)=\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\epsilon^{-12}\right).\]

#### 3.3.2 Quantum running time

Combining the sampling requirements provided in Lemma 10 with the cost of preparing a single Gibbs state and the iteration bound from Theorem 3 gives the complexity of Algorithm 1 when run on a quantum computer. However, Gibbs samplers based on the block-encoding framework depend only poly-logarithmically on the inverse precision, and are therefore exponentially faster (in the parameter \(\epsilon^{-1}\)) than the Gibbs sampling algorithm from [53] utilized in [10]. It thus makes sense to analyze the running time in the more efficient model. This will require an efficient data structure for storing \(y\), so that we can efficiently prepare linear combinations of block-encodings.

**Lemma 11** (Lemma 15 in [60]).: _There is a data structure that can store an \(m\)-dimensional \(\chi\)-sparse vector \(y\) with \(\theta\)-precision using a QRAM of size \(\widetilde{\mathcal{O}}_{\frac{m}{\theta}}(\chi)\). Furthermore:_

* _Given a classical_ \(\mathcal{O}(1)\)_-sparse vector, adding it to the stored vector has classical cost_ \(\widetilde{\mathcal{O}}_{\frac{m}{\theta}}(1)\)_._
* _Given that_ \(\beta\geq\|y\|_{1}\)_, we can implement a (symmetric)_ \((\beta,\widetilde{\mathcal{O}}_{\frac{m}{\theta}}(1),\theta)\)_-state preparation pair for_ \(y\) _with_ \(\widetilde{\mathcal{O}}_{\frac{m}{\theta}}(1)\) _queries to the QRAM._

**Corollary 3** (Corollary 16 in [60]).: _Suppose \(A_{1},\ldots,A_{m}\) are Hermitian matrices with operator norm at most 1, and that \(y\in\mathbb{R}^{m}\) satisfies \(\|y\|_{1}\leq\beta\)._
_Having access to the above data structure for \(y\), we can prepare one copy of the Gibbs state_ \[\rho=\frac{\exp\left(-\sum_{i=1}^{m}y_{i}A_{i}\right)}{\operatorname{tr}\left(\exp\left(-\sum_{i=1}^{m}y_{i}A_{i}\right)\right)}\] _using \(\widetilde{\mathcal{O}}_{\frac{1}{\theta}}(\sqrt{n}\alpha\beta)\) accesses to the data structure for \(y\) and block-encodings of \(A_{1},\ldots,A_{m}\)._

We can now use Corollary 3 in combination with results from Sections 2.1.3 and 2.1.4 to establish the running time of Algorithm 1 in the QRAM input model.

**Proposition 7**.: _Let \(\frac{C}{\|C\|_{F}}\in\mathcal{S}^{n}\) be stored in QRAM. Then, the complexity of solving (5) up to additive error \(\epsilon\) with Algorithm 1 using the QRAM input model is_ \[\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}\left(n^{1.5}\epsilon^{-5}\right).\] _Here, the complexity corresponds to the number of accesses to the QRAM._

Proof.: Given that \(\frac{C}{\|C\|_{F}}\) is stored in QRAM, Lemma 3 _(ii)_ asserts that when constructing a block-encoding of \(\frac{C}{\|C\|_{F}}\), one can set the subnormalization factor to be \(\alpha_{C}=\left\|\frac{C}{\|C\|_{F}}\right\|_{F}=1\). Hence, one can construct a \((1,\log(n)+2,\epsilon/(2n))\)-block-encoding of \(C\|C\|_{F}^{-1}\) in time \(\widetilde{\mathcal{O}}_{n}(1)\). Next, recall that in iteration \(t\in[T]\) of Algorithm 1, our Hamiltonian is defined as \[H^{(t)}=y_{1}^{(t)}\frac{C}{\|C\|_{F}}+y_{2}^{(t)}D^{(t)},\] where \(D^{(t)}\) is a diagonal matrix whose diagonal entries take value \(-1\), \(0\) or \(1\). The diagonal elements of \(D\) change in each iteration, and therefore, a new \(D\) must be block-encoded in each iteration. For this, we use the QRAM model described in Section 2.1.2, which allows for insertions to be made in time \(\widetilde{\mathcal{O}}_{n}(1)\) to keep the cost of this step negligible. Provided a classical description of \(D\), we can store \(D\) in the QRAM in time \(\mathcal{O}(n\log(n))\). Applying Lemma 4, a \((1,\log(n)+3,\epsilon)\)-block-encoding of \(D^{(t)}\) can be constructed in time \(\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}(1)\).

In an earlier discussion we saw that any \(y\) obtained from a call to Algorithm 1 will satisfy \(\|y\|_{1}=\widetilde{\mathcal{O}}_{n}(\epsilon^{-1})\) if we call Algorithm 1 using precision \(\epsilon\) (see, e.g., equation (9)). Hence, an application of Corollary 3 with \(\beta=\widetilde{\mathcal{O}}_{n}(\epsilon^{-1})\) implies that we can prepare one copy of our Gibbs state using \[\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}\left(\sqrt{n}\alpha\epsilon^{-1}\right)\] accesses to the data structure for \(y\) and the block-encodings of \(C\|C\|_{F}^{-1}\) and \(D\), where \(\alpha\) is defined as the maximum over the subnormalization factors used to block-encode \(\frac{C}{\|C\|_{F}}\) and \(D\). Since \(\alpha=\max\{\alpha_{C},\alpha_{D}\}=1\), it follows that \[\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}\left(\sqrt{n}\alpha\epsilon^{-1}\right)=\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}\left(\sqrt{n}\epsilon^{-1}\right).\] Now, one can see from Lemma 10 that the cost of constructing \(O_{\mathcal{D}_{n}}\) dominates that of constructing \(O_{\mathcal{C}_{\gamma}}\).
Noting that \(O_{\mathcal{D}_{n}}\) can be implemented using \(\mathcal{O}(n\epsilon^{-2})\) copies of a state that is an \(\frac{\epsilon}{8}\)-approximation of the input state \(\rho\) in trace distance (each prepared using the Gibbs sampling unitary and its inverse), the per-iteration cost of Algorithm 1 in the QRAM input model is given by \[\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}\left(n^{1.5}\epsilon^{-3}\right).\] Factoring in the iteration bound of \(\widetilde{\mathcal{O}}_{n}(\epsilon^{-2})\) from Theorem 3, it follows that when provided access to QRAM, Algorithm 1 solves (5) up to additive error \(\epsilon\) using \[\mathcal{T}_{HU}^{\text{quantum}}=\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}\left(n^{1.5}\epsilon^{-5}\right)\] accesses to the QRAM. The proof is complete.

**Corollary 4**.: _Let \(\frac{C}{\|C\|_{F}}\in\mathcal{S}^{n}\) be stored in QRAM. Then, the complexity of solving (3) up to additive error \(\mathcal{O}(n\|C\|_{F}\epsilon)\) with Algorithm 1 using the QRAM input model is_ \[\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}\left(n^{1.5}\epsilon^{-20}\right).\] _Here, the complexity corresponds to the number of accesses to the QRAM._

Proof.: By Proposition 7, Algorithm 1 requires \[\widetilde{\mathcal{O}}_{\frac{n}{\tilde{\epsilon}}}\left(n^{1.5}\tilde{\epsilon}^{-5}\right)\] accesses to the QRAM to solve (5) up to additive error \(\tilde{\epsilon}\). In order to satisfy the approximation guarantee for (3) given in Proposition 5, it suffices to solve (5) to error \(\tilde{\epsilon}=\epsilon^{4}\). Plugging in this value for the precision parameter, the total cost required to solve (3) up to an additive error \(\mathcal{O}\left(n\|C\|_{F}\epsilon\right)\) using Algorithm 1 is \[\widetilde{\mathcal{O}}_{\frac{n}{\tilde{\epsilon}}}\left(n^{1.5}\tilde{\epsilon}^{-5}\right)=\widetilde{\mathcal{O}}_{\frac{n}{\tilde{\epsilon}}}\left(n^{1.5}(\epsilon^{4})^{-5}\right)=\widetilde{\mathcal{O}}_{\frac{n}{\tilde{\epsilon}}}\left(n^{1.5}\epsilon^{-20}\right).\] The proof is complete.

Corollary 4 establishes that utilizing Gibbs samplers and trace estimators based on the block-encoding framework for our oracle construction in Algorithm 1 leads to an \[\mathcal{O}\left(\sqrt{s}^{1+o(1)}\epsilon^{-8+o(1)}\exp\left(1.6\sqrt{\log(\epsilon^{-4})}\right)\right)\] speedup over the running time result provided in [10, Corollary 3.2] when applied to solving (3). Yet, the costly accuracy requirements for the rounding procedure (see, e.g., Proposition 5) lead to a prohibitive scaling in the inverse precision for the overall running time. Given the advantageous dependence on the dimension, as compared to classical algorithms, we study how to improve the dependence on the precision parameter. This is discussed next.

## 4 Iterative Refinement for SDO approximations of QUBOs

In this section, we introduce an iterative refinement method for obtaining accurate solutions to the renormalized relaxed SDO problem (5), which at a high level can be viewed as solving a series of problems related to the _feasibility problem_ (10) associated with (5). We then discuss how to test \(\epsilon\)-closeness to the convex sets which comprise the feasible regions of the intermediate refining problems, before presenting our algorithm in full detail. We conclude the section by proving our algorithm's correctness and iteration complexity, and use these results to provide an improved approximation guarantee.
### The refining problem

To develop an iterative refinement scheme for (5), we need to design a problem whose solution can be used to improve the quality of solutions to (5). Suppose we run Algorithm 1 and obtain an \(\epsilon\)-precise solution \(\hat{\rho}\) to (5). Letting \(\hat{\gamma}=\operatorname{tr}\left(C\|C\|_{F}^{-1}\hat{\rho}\right)\), \(\hat{\rho}\) must satisfy \[\operatorname{tr}\left(C\|C\|_{F}^{-1}\hat{\rho}\right)=\hat{\gamma}\geq\gamma-\epsilon,\] \[\sum_{i=1}^{n}\left|\langle i|\hat{\rho}|i\rangle-\frac{1}{n}\right|\leq\epsilon.\] In _refining_ our solution to (5), we should aim to reduce the trace distance to the maximally mixed state \(n^{-1}I\), while also improving the precision to which the optimal objective value is approximated. Thus, an improved solution \(\rho^{\prime}\) should obey \[\operatorname{tr}\left(C\|C\|_{F}^{-1}\rho^{\prime}\right)\geq\gamma-\epsilon^{\prime},\] \[\sum_{i=1}^{n}\left|\langle i|\rho^{\prime}|i\rangle-\frac{1}{n}\right|\leq\epsilon^{\prime},\] with \(\epsilon^{\prime}<\epsilon\).

The basic idea behind constructing the refining problem is to use our current solution \(\hat{\rho}\) to first shift the renormalized relaxed SDO problem (5) to the origin, and then scale the shifted problem back to the domain of the original problem. In particular, we solve a series of problems related to the feasibility problem (10). Let \(\varepsilon\in\mathbb{R}^{n}\) be a vector whose elements are the residuals along the diagonal, \(\varepsilon_{i}=\hat{\rho}_{ii}-\frac{1}{n}\) for \(i\in[n]\), and let \(\eta\geq 1\) be a scalar defined as \[\eta=\frac{1}{\max\left\{\gamma-\operatorname{tr}\left(C\|C\|_{F}^{-1}\hat{\rho}\right),\sum_{i=1}^{n}|\varepsilon_{i}|\right\}}=\frac{1}{\max\left\{\gamma-\operatorname{tr}\left(C\|C\|_{F}^{-1}\hat{\rho}\right),\|\hat{\rho}-n^{-1}I\|_{\operatorname{tr}}\right\}}.\] Using these quantities, the _refining problem_ is given by: \[\begin{array}{rl}\text{find}&\rho^{r}\in\left\{X\in\mathcal{S}_{+}^{n}:\operatorname{tr}(X)=1\right\}\cap\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\cap\mathcal{D}_{\eta\varepsilon}\\ \text{where}&\mathcal{C}_{\eta(\gamma-\hat{\gamma})}=\left\{X:\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\left(Q\circ X\right)\right)\geq\eta(\gamma-\hat{\gamma})\right\},\\ &\mathcal{D}_{\eta\varepsilon}=\left\{X:\langle i|X|i\rangle=\eta|\varepsilon_{i}|,\ \forall i\in[n]\right\},\end{array} \tag{11}\] where \(Q\in\mathcal{S}^{n}\) is a matrix whose diagonal elements are chosen such that for any \(X\in\mathcal{D}_{\eta\varepsilon}\), we have \[(Q\circ X)_{ii}=\operatorname{sign}(-\varepsilon_{i})\eta|\varepsilon_{i}|\] for \(i\in[n]\). Further details and requirements on the structure of \(Q\) are specified later in this section. We refer to solutions \(\rho^{r}\) of (11) as _refining solutions_, which we use to update our current solution \(\hat{\rho}\) to (5).

The set \(\mathcal{D}_{\eta\varepsilon}\) consists of the diagonal constraints \[\langle i|X|i\rangle=\eta|\varepsilon_{i}|,\quad\forall i\in[n],\] and, similar to \(\mathcal{D}_{n}\), is an affine space with codimension \(n\). Our use of the absolute value function of the residuals and scaling by \(\eta\) ensures the viability of applying Gibbs sampling techniques to solve the refining problem (11); the diagonal terms of any density matrix must be nonnegative and sum to \(1\).
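To make these quantities concrete, the small numerical sketch below computes, for an explicit \(\hat{\rho}\), the residual vector \(\varepsilon\), the scaling \(\eta\), the diagonal of \(Q\) (whose explicit form is given in (13) below), and the diagonal targets \(\eta|\varepsilon_{i}|\) defining \(\mathcal{D}_{\eta\varepsilon}\). It is an illustrative sketch only, and assumes at least one of the two residuals is strictly positive.

```python
import numpy as np

def refining_data(rho_hat, C_tilde, gamma):
    """Residual data for the refining problem (11), sketched for an explicit
    density matrix rho_hat and normalized cost C_tilde = C / ||C||_F."""
    n = rho_hat.shape[0]
    eps_vec = np.diag(rho_hat) - 1.0 / n             # diagonal residuals
    obj_gap = gamma - np.trace(C_tilde @ rho_hat)    # objective residual
    eta = 1.0 / max(obj_gap, np.abs(eps_vec).sum())  # assumes a positive max
    q_diag = np.sign(-eps_vec)                       # diag(Q), cf. (13)
    targets = eta * np.abs(eps_vec)                  # <i|X|i> = eta*|eps_i|
    return eps_vec, eta, q_diag, targets
```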
Whenever

\[\sum_{i=1}^{n}|\varepsilon_{i}|>\gamma-\operatorname{tr}\left(C\|C\|_{F}^{-1}\hat{\rho}\right),\]

then \(\eta\|\varepsilon\|_{1}=1\), and the parameter \(\eta\) therefore scales the shifted problem back to the space of the \(\log(n)\)-qubit mixed states, ensuring that any solution \(\rho^{r}\) to (11) is indeed a (trace normalized) Gibbs state. On the other hand, should it be the case that

\[\sum_{i=1}^{n}|\varepsilon_{i}|\leq\gamma-\operatorname{tr}\left(C\|C\|_{F}^{-1}\hat{\rho}\right),\]

then for any \(X\in\mathcal{D}_{\eta\varepsilon}\) we have \(\operatorname{tr}(X)\leq 1\), rather than \(\operatorname{tr}(X)=1\). Our primal SDO oracle in Algorithm 1 solves feasibility problems in which the trace upper bound is tight, i.e., \(\operatorname{tr}(X)=1\). The authors in [60] note that this can be dealt with by adding one extra variable \(w\) such that

\[\bar{\rho}^{r}:=\begin{bmatrix}\rho^{r}&0\\ 0&w\end{bmatrix}.\]

Then, \(\operatorname{tr}\left(\bar{\rho}^{r}\right)=1\) and \(\bar{\rho}^{r}\succeq 0\) imply that \(\operatorname{tr}(\rho^{r})\leq 1\), and as a result we obtain an SDO problem that is equivalent to (11). Since we know exactly the amount of subnormalization, we can also get rid of the extra variable in subsequent calculations and rescale the trace back to \(1\) when necessary (e.g., when combining solutions from multiple iterative refinement iterations for trace estimations). Crucially, using the input models described in Section 2.1, these modifications do not introduce more than constant overhead in the overall complexity, as the problem data in this case is simply given by

\[\overline{C}=\begin{bmatrix}\frac{C}{\|C\|_{F}}&0\\ 0&0\end{bmatrix},\quad\overline{Q}=\begin{bmatrix}Q&0\\ 0&0\end{bmatrix},\]

with \((\overline{C},\overline{Q})\in\mathcal{S}^{n+1}\times\mathcal{S}^{n+1}\). The Hadamard product \(Q\circ\rho^{r}\) that appears in the definition of \(\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\) is required for similar reasons; properly setting \(Q\) allows us to drive the trace distance to the maximally mixed state to zero using the solutions to the refining problem. Later, in Section 4.3 we demonstrate that this can be achieved by updating the current solution \(\hat{\rho}\) using the rule

\[\hat{\rho}=\hat{\rho}+\frac{1}{\eta}Q\circ\rho^{r}, \tag{12}\]

with a suitable choice for \(Q\) being

\[Q=(ee^{\top}-I)+\operatorname{diag}\left(\operatorname{sign}(-\varepsilon)\right)=\begin{pmatrix}\operatorname{sign}(-\varepsilon_{1})&1&\cdots&1\\ 1&\operatorname{sign}(-\varepsilon_{2})&\ddots&\vdots\\ \vdots&\ddots&\ddots&1\\ 1&\cdots&1&\operatorname{sign}(-\varepsilon_{n})\end{pmatrix}. \tag{13}\]

Choosing \(Q\) in this manner also implies that the Hadamard product \(Q\circ A\) can be carried out classically using \(\mathcal{O}(n)\) arithmetic operations for any \(A\in\mathbb{R}^{n\times n}\), as the element-wise products \(Q_{ij}A_{ij}=A_{ij}\) for \(i\neq j\). Similarly, updating \(Q\) at each iterate only requires updating its diagonal elements, an \(\mathcal{O}(n)\) operation.
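Because the off-diagonal entries of \(Q\) in (13) all equal one, the update (12) only needs special handling on the diagonal, which the following sketch makes explicit. The function name and interface are ours, for illustration only.

```python
import numpy as np

def apply_refining_update(rho_hat, rho_r, eps, eta):
    """Update rule (12) with Q chosen as in (13): off-diagonal entries of
    Q o rho_r coincide with those of rho_r, so only the diagonal changes."""
    update = rho_r.copy()
    np.fill_diagonal(update, np.sign(-eps) * np.diag(rho_r))  # Q_ii = sign(-eps_i)
    return rho_hat + update / eta
```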
It is important to note that the update we propose in (12) does not preserve positive semidefiniteness in general. However, later in our analysis, we demonstrate that the eigenvalues of the final solution \(\hat{\rho}\) are only slightly negative in the worst case, i.e., \(\lambda_{\min}(\hat{\rho})\geq-\delta\) for a small constant \(\delta\); one can restore positive semidefiniteness by adding \(\delta\) to the diagonal elements of the final solution and renormalizing by \((1+n\delta)\) to obtain unit trace. It turns out that the trace distance from the resulting matrix to \(\hat{\rho}\) is bounded by the final precision tolerance parameter of our refining scheme. We show that these modifications required to restore positive semidefiniteness have only a mild (in fact, constant) impact on feasibility. To do this, we will bound the eigenvalues of \(Q\). We first state a special instance of Weyl's inequality.

**Lemma 12**.: _Suppose that \(A\in\mathbb{R}^{n\times n}\) and \(B\in\mathbb{R}^{n\times n}\) are Hermitian matrices. Then_

\[\lambda_{\min}(A+B)\geq\lambda_{\min}(A)+\lambda_{\min}(B).\]

Using the preceding lemma, the following result bounds the minimum eigenvalue of \(Q\).

**Lemma 13**.: _Suppose that \(Q\in\mathcal{S}^{n}\) is defined according to Equation (13). Then, \(\lambda_{\min}(Q)\geq-2\)._

Proof.: Let \(A=(ee^{\top}-I)\) and \(B=\operatorname{diag}\left(\operatorname{sign}(-\varepsilon)\right)\), such that \(Q=A+B\). Now, it can be easily seen from the definition of \(A\) that \(A+I\) is an all-ones matrix of dimension \(n\). Upon performing row-reduction (via, e.g., Gaussian elimination) on \(A+I\), it is easy to see that the resulting row-echelon form has \(n-1\) zero rows; hence \(A+I\) has the eigenvalue \(0\) with multiplicity (at least) \(n-1\), and as a consequence, \(A\) has the eigenvalue \(-1\), repeated (at least) \(n-1\) times. Further, since \(\operatorname{tr}\left(A\right)=0\), the other eigenvalue is \(n-1\). Therefore, we have \(\lambda_{\min}(A)\geq-1\). On the other hand, \(B\) is a diagonal matrix whose diagonal elements can take value \(-1\), \(0\), or \(1\), from which \(\lambda_{\min}(B)\geq-1\) readily follows. Applying Lemma 12, we obtain

\[\lambda_{\min}(Q)=\lambda_{\min}(A+B)\geq\lambda_{\min}(A)+\lambda_{\min}(B)\geq-2.\]

The proof is complete.
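A quick numerical experiment is consistent with Lemma 13; the snippet below (ours, purely illustrative) checks the eigenvalue bound over random sign patterns on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (4, 16, 64):
    s = rng.choice([-1.0, 0.0, 1.0], size=n)        # possible values of sign(-eps_i)
    Q = (np.ones((n, n)) - np.eye(n)) + np.diag(s)  # Q as in equation (13)
    assert np.linalg.eigvalsh(Q).min() >= -2 - 1e-12  # Lemma 13: lambda_min(Q) >= -2
```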
### Oracle construction for the refining problem

In order to construct separation oracles for testing closeness to \(\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\), we rely on the following result.

**Lemma 14**.: _Let \(E\), \(F\) and \(G\in\mathcal{S}^{n}\). We have_

\[\operatorname{tr}\left(G(E\circ F)\right)=\operatorname{tr}\left((E\circ G)F\right).\]

Proof.: Applying Lemma 1 with \(m=n\), we have

\[\left[\left(E\circ F\right)G\right]_{ii}=\left[\left(E\circ G\right)F\right]_{ii}\quad\forall i\in[n].\]

Note that we have dropped the transpose terms, as \(E\), \(F\) and \(G\) are symmetric matrices, and hence, so are \(E\circ F\) and \(E\circ G\). It follows

\[\operatorname{tr}\left(G(E\circ F)\right)=\operatorname{tr}\left(\left(E\circ F\right)G\right)=\sum_{i\in[n]}\left[\left(E\circ F\right)G\right]_{ii}=\sum_{i\in[n]}\left[\left(E\circ G\right)F\right]_{ii}=\operatorname{tr}\left(\left(E\circ G\right)F\right).\]

In addition to \(Q\in\mathcal{S}^{n}\), we also require \(\max_{i,j\in[n]}\{\left|Q_{ij}\right|\}\leq 1\) to avoid any normalization issues with respect to \(Q\circ\frac{C}{\left\|C\right\|_{F}}\). Note that defining \(Q\) according to equation (13) trivially satisfies both of these properties, as each of the diagonal elements is \(1\), \(0\), or \(-1\), while the off-diagonal elements are all set to \(1\). This idea is formalized next.

**Lemma 15**.: _Let \(A,Q\in\mathcal{S}^{n}\) be matrices satisfying \(\max_{i,j\in[n]}\{\left|Q_{ij}\right|\}\leq 1\) and \(\|A\|_{F}\leq 1\). Then,_

\[\left\|Q\circ A\right\|\leq\left\|Q\circ A\right\|_{F}\leq 1.\]

Proof.: Under the stated conditions for \(Q\), it follows

\[\left\|Q\circ A\right\|_{F}^{2}=\sum_{i\in[n]}\sum_{j\in[n]}\left(\left[Q\circ A\right]_{ij}\right)^{2}=\sum_{i\in[n]}\sum_{j\in[n]}\left(Q_{ij}\cdot A_{ij}\right)^{2}=\sum_{i\in[n]}\sum_{j\in[n]}\left(Q_{ij}\right)^{2}\left(A_{ij}\right)^{2}\leq\sum_{i\in[n]}\sum_{j\in[n]}\left(A_{ij}\right)^{2}=\left\|A\right\|_{F}^{2},\]

and applying the square root throughout the above we obtain \(\left\|Q\circ A\right\|_{F}\leq\|A\|_{F}\). From here, the result follows upon noting \(\left\|A\right\|_{F}\leq 1\) and \(\|A\|\leq\|A\|_{F}\) is true for any \(A\in\mathbb{R}^{n\times n}\).

Although the sets \(\mathcal{C}_{\gamma}\) and \(\mathcal{D}_{n}\) differ from their refining counterparts \(\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\) and \(\mathcal{D}_{\eta\varepsilon}\), their dissimilarity merely affects the right-hand side of the (in)equalities defining the sets, and separation oracles for them are thus no more difficult to construct. Just as in the case of (10), the task of obtaining separation oracles for the refining problem (11) in the quantum regime reduces to preparing many copies of Gibbs states. Likewise, these oracles can also be implemented on a classical computer, given access to \(\rho^{r}\). The similarities between (10) and (11) become transparent when we demonstrate that they are specific instances of the same problem. In particular, it is easy to see that solving (10) corresponds to solving

\[\begin{split}\text{find}&\quad\rho\in\left\{X\in\mathcal{S}_{+}^{n}:\operatorname{tr}(X)=1\right\}\cap\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\cap\mathcal{D}_{\eta\varepsilon}\\ \text{where}&\quad\mathcal{C}_{\eta(\gamma-\hat{\gamma})}=\left\{X:\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\left(Q\circ X\right)\right)\geq\eta(\gamma-\hat{\gamma})\right\},\\ &\quad\mathcal{D}_{\eta\varepsilon}=\left\{X:\langle i|X|i\rangle=\eta|\varepsilon_{i}|,\ \forall i\in[n]\right\},\end{split} \tag{14}\]

with \(\varepsilon_{i}=\frac{1}{n}\), \(\eta=1\), \(Q=ee^{\top}\), and \(\hat{\gamma}=0\).
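This reduction is easy to confirm numerically. The sketch below (ours, illustrative only) checks that, with these parameter choices, the objective and diagonal constraints of (14) coincide with those of (10).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
C_norm = C / np.linalg.norm(C, "fro")
X = rng.standard_normal((n, n)); X = X @ X.T; X /= np.trace(X)  # a density matrix
Q, eta, eps = np.ones((n, n)), 1.0, np.full(n, 1.0 / n)
# With Q = ee^T, the Hadamard product is the identity map, so the objective
# constraint of (14) is exactly that of (10) ...
assert np.isclose(np.trace(C_norm @ (Q * X)), np.trace(C_norm @ X))
# ... and the diagonal targets eta*|eps_i| are exactly the 1/n targets of D_n.
assert np.allclose(eta * np.abs(eps), 1.0 / n)
```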
In view of this relationship, we can unify the oracle construction for (10) and (11) as follows:

\[\begin{split} O_{\mathcal{C}_{\eta(\gamma-\hat{\gamma})}}&:\text{Compute an approximation $\tilde{c}$ of }\operatorname{tr}\left(Q\circ C\|C\|_{F}^{-1}\rho\right)\text{ up to additive error }\frac{\epsilon}{4}.\\ &\text{Check if $\tilde{c}\geq\eta(\gamma-\hat{\gamma})+\frac{3\epsilon}{4}$ and output $P=-Q\circ C\|C\|_{F}^{-1}$ if the inequality is violated.}\end{split}\]

\[\begin{split} O_{\mathcal{D}_{\eta\varepsilon}}&:\text{Compute an approximation $\tilde{p}\in\mathbb{R}^{n}$ of }p_{i}=\langle i|\rho|i\rangle\text{ satisfying }\sum_{i\in[n]}|p_{i}-\tilde{p}_{i}|\leq\frac{\epsilon}{4}.\\ &\text{Check if }\sum_{i\in[n]}\left|\tilde{p}_{i}-\eta|\varepsilon_{i}|\right|\leq\frac{3\epsilon}{4}\text{ and output }P=\sum_{i\in[n]}\left(\mathbb{I}\{\tilde{p}_{i}>\eta|\varepsilon_{i}|\}-\mathbb{I}\{\tilde{p}_{i}<\eta|\varepsilon_{i}|\}\right)\left|i\right\rangle\left\langle i\right|\\ &\text{if the inequality is violated.}\end{split}\]

Again, the sets of observables for \(\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\) and \(\mathcal{D}_{\eta\varepsilon}\) are given by

\[\widetilde{\mathcal{C}}_{\eta(\gamma-\hat{\gamma})}=\{-Q\circ C\|C\|_{F}^{-1}\},\text{ and }\widetilde{\mathcal{D}}_{\eta\varepsilon}=\{D\in\mathbb{R}^{n\times n}:\|D\|\leq 1,D\text{ is diagonal}\}.\]

Although these observations are straightforward, they justify our use of Algorithm 1 as a semidefinite optimization oracle that solves the convex feasibility problem at hand in every iteration for different values of \(Q\). In particular, these facts, along with Lemmas 14 and 15 ensure that the complexity results in Propositions 6 and 7 hold when applying Algorithm 1 to solve (14).

**Proposition 8**.: _Let \(Q\circ\frac{C}{\|C\|_{F}}\in\mathcal{S}^{n}\) be stored in QRAM. Algorithm 1 solves (14) up to additive error \(\epsilon\) using_

\[\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}\left(n^{1.5}\epsilon^{-5}\right)\]

_accesses to the QRAM._

Proof.: Given that \(Q\circ\frac{C}{\|C\|_{F}}\) is stored in QRAM, Lemma 3_(ii)_ asserts that when constructing a block-encoding of \(Q\circ\frac{C}{\|C\|_{F}}\), one can set the subnormalization factor to be \(\alpha_{C}=\left\|Q\circ\frac{C}{\|C\|_{F}}\right\|_{F}\). In particular, one can always choose \(\alpha_{C}=1\), as it can be seen from the proof of Lemma 15 that the inequality

\[\left\|Q\circ\frac{C}{\|C\|_{F}}\right\|_{F}\leq\left\|\frac{C}{\|C\|_{F}}\right\|_{F}=1\]

always holds for any \(Q\) defined according to equation (13). Collecting these facts, one can construct a \((1,\mathcal{O}(\log(n)),\epsilon/(2n))\)-block-encoding of \(Q\circ C\|C\|_{F}^{-1}\) in time \(\widetilde{\mathcal{O}}_{\frac{n}{\epsilon}}(1)\). Note that the quantity \(Q\circ\frac{C}{\|C\|_{F}}\) remains unchanged for the duration of Algorithm 1. From here, the rest of the proof follows exactly that of Proposition 7 upon replacing \(\frac{C}{\|C\|_{F}}\), \(O_{\mathcal{C}_{\gamma}}\) and \(O_{\mathcal{D}_{n}}\) with \(Q\circ\frac{C}{\|C\|_{F}}\), \(O_{\mathcal{C}_{\eta(\gamma-\hat{\gamma})}}\) and \(O_{\mathcal{D}_{\eta\varepsilon}}\), respectively, in what remains.
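Classically, given the candidate \(\rho\) as an explicit matrix, these unified oracles reduce to a few lines of code. In the sketch below (ours, illustrative only) we evaluate the tested quantities exactly and fold the various \(\epsilon/4\) offsets into a single tolerance parameter, so the acceptance thresholds are a simplification of the quantum definitions above.

```python
import numpy as np

def oracle_C(rho, C_norm, Q, threshold, tol):
    """Classical analogue of the objective oracle: accept when the tested
    trace meets the threshold up to the tolerance, else return the separating
    observable P = -(Q o C/||C||_F).  By Lemma 14 the tested trace equals
    tr(C/||C||_F (Q o rho))."""
    if np.trace((Q * C_norm) @ rho) >= threshold - tol:
        return None
    return -(Q * C_norm)

def oracle_D(rho, targets, tol):
    """Classical analogue of the diagonal oracle for <i|rho|i> = targets_i."""
    p = np.diag(rho)
    if np.abs(p - targets).sum() <= tol:
        return None
    return np.diag(np.sign(p - targets))  # +1/-1 pattern from the definition of O_D
```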
### Iterative Refinement using Hamiltonian Updates

We are now in a position to provide our iterative refinement method for SDO approximations of QUBOs, presented in full detail in Algorithm 2. The algorithm takes three parameters as input: _(i)_ \(\xi\), the fixed precision used to test closeness to the sets \(\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\) and \(\mathcal{D}_{\eta\varepsilon}\) in every iteration, _(ii)_ \(\zeta\), the precision to which the final solution satisfies the functional constraints of (5), and _(iii)_ \(\epsilon\), the additive error to which we seek to solve (3). In our initialization steps we set the values of \(Q\), \(\varepsilon\) and \(\eta\) such that the first iteration corresponds to solving the feasibility problem (10). In each iteration \(k\), Algorithm 2 calls Algorithm 1 with separation oracles \(O_{\mathcal{C}_{\eta(\gamma-\hat{\gamma})}}\) and \(O_{\mathcal{D}_{\eta\varepsilon}}\) using fixed precision \(\xi\), such that every call to Algorithm 1 produces a \(\xi\)-precise classical solution \(\rho^{(k)}\) to (14). If \(\hat{\rho}\) is indistinguishable up to precision \(\zeta\) from the maximally mixed state \(n^{-1}I\) upon measurement in the computational basis, and satisfies \(\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\hat{\rho}\right)\geq\gamma-\zeta\), the algorithm terminates and reports \(\hat{\rho}\). Otherwise, we construct the refining problem associated with our current solution, and proceed to the next iteration. To define the parameters for the next refining problem, we first calculate the deviation of the diagonal elements from \(\frac{1}{n}\), and the violation with respect to satisfying our objective value. Then, we define our scaling factor to be the maximum over the \(\ell_{1}\)-norm of the diagonal deviations, and the objective violation. We stress that \(\xi\) is a (chosen) constant, and does not change throughout the algorithm.

**Algorithm 2** Iterative Refinement for SDO Approximations of QUBOs

**Input:** Error tolerances \(\epsilon\in(0,1)\) and \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\), upper bound on objective value \(\gamma\in[-1,1]\)

**Output:** A matrix \(\hat{\rho}\in\mathcal{S}^{n}\) satisfying \(\max\left\{\gamma-\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\hat{\rho}\right),\|\hat{\rho}-n^{-1}I\|_{\operatorname{tr}}\right\}\leq\zeta\)

**Initialize:** \(\hat{\rho}\leftarrow\mathbf{0}^{n\times n}\), \(Q\gets ee^{\top}\), \(\varepsilon_{i}=\frac{1}{n}\) for \(i\in[n]\), \(\hat{\gamma}\gets 0\), \(\eta^{(0)}\gets 1\), \(k\gets 0\)

**while** \(\max\left\{\gamma-\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\hat{\rho}\right),\|\varepsilon\|_{1}\right\}>\zeta\) **do**

1. Solve (14) to precision \(\frac{\xi}{4}\) for \(\rho^{(k)}\) using Algorithm 1 with oracles \(O_{\mathcal{C}_{\eta(\gamma-\hat{\gamma})}}\) and \(O_{\mathcal{D}_{\eta\varepsilon}}\)
2. Update solution: \[\hat{\rho}\leftarrow\hat{\rho}+\frac{1}{\eta^{(k)}}Q\circ\rho^{(k)},\quad\hat{\gamma}\leftarrow\hat{\gamma}+\frac{1}{\eta^{(k)}}\operatorname{tr}\left(\frac{C}{\|C\|_{F}}Q\circ\rho^{(k)}\right)\]
3. Compute element-wise deviations from the maximally mixed state: \[\varepsilon_{i}\leftarrow\hat{\rho}_{ii}-\frac{1}{n}\text{ for }i\in[n]\]
4. Update refining problem parameters: \[Q_{ii}\leftarrow\operatorname{sign}(-\varepsilon_{i})\text{ for }i\in[n],\quad\eta^{(k+1)}\leftarrow\frac{1}{\max\left\{\gamma-\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\hat{\rho}\right),\|\varepsilon\|_{1}\right\}}\]
5. \(k\gets k+1\)

**end**
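Read classically, Algorithm 2 is a short loop around the feasibility subroutine. The following sketch mirrors its steps with a placeholder `solve_feasibility` standing in for Algorithm 1; the placeholder and all names are ours, and this dense emulation is for intuition only.

```python
import numpy as np

def iterative_refinement(C, gamma, solve_feasibility, zeta, xi=1e-2):
    """Classical sketch of Algorithm 2.  `solve_feasibility` must return a
    xi-precise solution rho_k of the feasibility problem (14)."""
    n = C.shape[0]
    C_norm = C / np.linalg.norm(C, "fro")
    rho_hat = np.zeros((n, n))
    Q = np.ones((n, n))                       # Q = ee^T initially
    eps = np.full(n, 1.0 / n)
    gamma_hat, eta = 0.0, 1.0
    while max(gamma - gamma_hat, np.abs(eps).sum()) > zeta:
        rho_k = solve_feasibility(C_norm, Q, eta * (gamma - gamma_hat),
                                  eta * np.abs(eps), xi)
        rho_hat += (Q * rho_k) / eta          # update rule (12)
        gamma_hat = np.trace(C_norm @ rho_hat)
        eps = np.diag(rho_hat) - 1.0 / n      # step 3: diagonal deviations
        np.fill_diagonal(Q, np.sign(-eps))    # step 4: refresh diagonal of Q
        eta = 1.0 / max(gamma - gamma_hat, np.abs(eps).sum())
    return rho_hat
```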
We now state a series of results in order to bound the iteration complexity of Algorithm 2, and use our findings to improve the approximation guarantee given in Proposition 5. We begin by proving that the iterates generated by Algorithm 2 satisfy the constraints in (5) with increasing accuracy.

**Theorem 4**.: _Let \(\hat{\rho}\) be the current overall solution, and let \(\rho^{(k)}\) be a solution to (14) obtained from running Algorithm 1 using fixed precision \(\xi\in(0,1)\) in iteration \(k\) of Algorithm 2. Then, the following hold:_

1. _For \(k\geq 0\), \(\eta^{(k)}\geq\frac{1}{\xi^{k}}\)._
2. _For \(k\geq 0\), \(\rho=\hat{\rho}+\frac{1}{\eta^{(k)}}Q\circ\rho^{(k)}\) satisfies_ \[\max\left\{\gamma-\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\rho\right),\|\rho-n^{-1}I\|_{\operatorname{tr}}\right\}\leq\xi^{k+1}.\]

Proof.: We begin by establishing that for \(k\geq 0\), the updated solution \(\rho=\hat{\rho}+\frac{1}{\eta^{(k)}}Q\circ\rho^{(k)}\) satisfies

\[\max\left\{\gamma-\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\rho\right),\|\rho-n^{-1}I\|_{\operatorname{tr}}\right\}\leq\frac{\xi}{\eta^{(k)}}. \tag{15}\]

First, observe that when \(k=0\), we have \(\varepsilon_{i}=\frac{1}{n}\) for \(i\in[n]\), \(\hat{\rho}=\mathbf{0}^{n\times n}\), \(\eta^{(0)}=1\) and \(Q=ee^{\top}\). Under these conditions, one can observe that if \(\rho^{(0)}\) solves (14) to precision \(\xi\), then

\[\sum_{i=1}^{n}\left|\langle i|\rho|i\rangle-\frac{1}{n}\right|=\sum_{i=1}^{n}\left|\langle i|\hat{\rho}+\frac{1}{\eta^{(0)}}\rho^{(0)}|i\rangle-\frac{1}{n}\right|=\sum_{i=1}^{n}\left|\frac{1}{\eta^{(0)}}\rho^{(0)}_{ii}-\varepsilon_{i}\right|=\frac{1}{\eta^{(0)}}\sum_{i=1}^{n}\left|\rho^{(0)}_{ii}-\eta^{(0)}\varepsilon_{i}\right|\leq\frac{\xi}{\eta^{(0)}}.\]

In other words, \(\rho=\rho^{(0)}\) satisfies

\[\|\rho-n^{-1}I\|_{\operatorname{tr}}\leq\frac{\xi}{\eta^{(0)}}.\]

Next, by the definition of \(O_{\mathcal{C}_{\eta(\gamma-\hat{\gamma})}}\) we have

\[\operatorname{tr}\left(C\|C\|_{F}^{-1}\rho\right) =\operatorname{tr}\left(C\|C\|_{F}^{-1}\left[\hat{\rho}+\frac{1}{\eta^{(0)}}Q\circ\rho^{(0)}\right]\right)=\operatorname{tr}\left(C\|C\|_{F}^{-1}\hat{\rho}\right)+\frac{1}{\eta^{(0)}}\operatorname{tr}\left(C\|C\|_{F}^{-1}\left(Q\circ\rho^{(0)}\right)\right)\] \[=\underbrace{\operatorname{tr}\left(C\|C\|_{F}^{-1}\mathbf{0}^{n\times n}\right)}_{=0}+\frac{1}{\eta^{(0)}}\operatorname{tr}\left(C\|C\|_{F}^{-1}\left(Q\circ\rho^{(0)}\right)\right)=\frac{1}{\eta^{(0)}}\operatorname{tr}\left(C\|C\|_{F}^{-1}\left(Q\circ\rho^{(0)}\right)\right)\] \[=\frac{1}{\eta^{(0)}}\operatorname{tr}\left(C\|C\|_{F}^{-1}\rho^{(0)}\right)\geq\frac{1}{\eta^{(0)}}\left([\eta^{(0)}(\gamma-\hat{\gamma}^{(0)})]-\xi\right)=\gamma-\frac{\xi}{\eta^{(0)}},\]

where we used the fact that \(\hat{\gamma}^{(0)}=0\). Next, let \(\hat{\rho}\) be our current solution, and let \(\hat{\gamma}=\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\hat{\rho}\right)\) be the objective value attained at this solution. When \(k\geq 1\), we have \(\varepsilon_{i}=\hat{\rho}_{ii}-\frac{1}{n}\) for \(i\in[n]\) and \(Q=(ee^{\top}-I)+\operatorname{diag}(\operatorname{sign}(-\varepsilon))\). For this choice of parameters, the general feasibility problem (14) reduces to the refining problem (11), and the solution \(\rho^{(k)}\) obtained via Algorithm 1 is therefore a \(\xi\)-precise solution to (11).
Accordingly, for \(k\geq 1\), setting \(\rho=\hat{\rho}+\frac{1}{\eta^{(k)}}Q\circ\rho^{(k)}\) improves the precision to which the maximally mixed state is approximated:

\[\sum_{i=1}^{n}\left|\langle i|\rho|i\rangle-\frac{1}{n}\right| =\sum_{i=1}^{n}\left|\langle i|\hat{\rho}+\frac{1}{\eta^{(k)}}Q\circ\rho^{(k)}|i\rangle-\frac{1}{n}\right|\] \[=\sum_{i=1}^{n}\left|\left(\hat{\rho}_{ii}+\frac{1}{\eta^{(k)}}\left(\operatorname{sign}(-\varepsilon_{i})\cdot\rho_{ii}^{(k)}\right)\right)-\frac{1}{n}\right|\] \[=\sum_{i=1}^{n}\left|\left(\hat{\rho}_{ii}-\frac{1}{n}\right)+\frac{1}{\eta^{(k)}}\operatorname{sign}(-\varepsilon_{i})\rho_{ii}^{(k)}\right|\] \[=\sum_{i=1}^{n}\left|\varepsilon_{i}+\frac{1}{\eta^{(k)}}\operatorname{sign}(-\varepsilon_{i})\rho_{ii}^{(k)}\right|=\frac{1}{\eta^{(k)}}\sum_{i=1}^{n}\left|\eta^{(k)}\varepsilon_{i}+\operatorname{sign}(-\varepsilon_{i})\rho_{ii}^{(k)}\right|\leq\frac{\xi}{\eta^{(k)}}.\]

Consequently, we can conclude that at iteration \(k\geq 1\), the trace distance from our solution to the maximally mixed state is

\[\|\rho-n^{-1}I\|_{\operatorname{tr}}\leq\frac{\xi}{\eta^{(k)}}. \tag{16}\]

Next, letting \(\tilde{\gamma}^{(k)}=\operatorname{tr}\left(\frac{C}{\|C\|_{F}}Q\circ\rho^{(k)}\right)\), one can observe

\[\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\rho\right)=\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\left(\hat{\rho}+\frac{1}{\eta^{(k)}}Q\circ\rho^{(k)}\right)\right)=\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\hat{\rho}\right)+\frac{1}{\eta^{(k)}}\operatorname{tr}\left(\frac{C}{\|C\|_{F}}Q\circ\rho^{(k)}\right)=\hat{\gamma}+\frac{\tilde{\gamma}^{(k)}}{\eta^{(k)}}.\]

For any \(\rho^{(k)}\) which is \(\xi\)-close to the set \(\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\) we must have

\[\tilde{\gamma}^{(k)}\geq\eta^{(k)}(\gamma-\hat{\gamma})-\xi.\]

It follows:

\[\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\rho\right)=\hat{\gamma}+\frac{\tilde{\gamma}^{(k)}}{\eta^{(k)}}\geq\hat{\gamma}+\frac{1}{\eta^{(k)}}\left[\eta^{(k)}(\gamma-\hat{\gamma})-\xi\right]=\gamma-\frac{\xi}{\eta^{(k)}}. \tag{17}\]

It therefore follows from (16) and (17) that \(\rho=\hat{\rho}+\frac{1}{\eta^{(k)}}Q\circ\rho^{(k)}\) satisfies inequality (15) for all \(k\geq 0\), and we can now use this fact to establish the lower bound on \(\eta^{(k)}\), which we prove by induction. For \(k=0\), we have \(\eta^{(0)}=1\), for which \(\eta^{(k)}\geq\frac{1}{\xi^{k}}\) trivially holds. By the induction hypothesis, it is assumed that \(\eta^{(\ell)}\geq\frac{1}{\xi^{\ell}}\) is true for \(\ell=1,\ldots,k\). From here, applying (15) yields

\[\eta^{(k+1)}=\frac{1}{\max\left\{\gamma-\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\rho\right),\|\rho-n^{-1}I\|_{\operatorname{tr}}\right\}}\geq\frac{1}{\frac{\xi}{\eta^{(k)}}}\geq\frac{1}{\xi^{k+1}},\]

which completes the proof of _(a)_. Having demonstrated that _(a)_ holds, to prove _(b)_, we can simply combine inequality (15) with the lower bound \(\eta^{(k)}\geq\frac{1}{\xi^{k}}\), which together imply

\[\max\left\{\gamma-\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\rho\right),\|\rho-n^{-1}I\|_{\operatorname{tr}}\right\}\leq\frac{\xi}{\eta^{(k)}}\leq\xi^{k+1}.\]

That is, _(b)_ holds, and the proof is complete.

The next result establishes that Algorithm 2 converges in a number of iterations that is only logarithmic in \(\zeta^{-1}\).

**Corollary 5**.: _Let \(0<\zeta\ll\xi<1\), and \(\eta^{(0)}=1\). Then, Algorithm 2 terminates in at most_

\[K=\mathcal{O}\left(\log\left(\frac{1}{\zeta}\right)\right)\]

_iterations._

Proof.: The result follows from Theorem 4_(b)_.
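To see Corollary 5 in numbers: Theorem 4_(b)_ says the constraint violation contracts geometrically at rate \(\xi\), so reaching final precision \(\zeta\) takes roughly \(\log(1/\zeta)/\log(1/\xi)\) rounds. The arithmetic below (a back-of-the-envelope illustration, assuming the fixed precision \(\xi=\tilde{\xi}/4\) with \(\tilde{\xi}=10^{-2}\) considered in Corollary 6 below) makes the point.

```python
import math

xi = 1e-2 / 4  # fixed per-iteration precision, as in Corollary 6(a)
for zeta in (1e-8, 1e-16, 1e-32):
    # geometric contraction: error after k iterations is at most xi^(k+1)
    K = math.ceil(math.log(1 / zeta) / math.log(1 / xi))
    print(f"zeta = {zeta:.0e} -> at most {K} refinement iterations")
```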
Observe that in Theorem 4, we do not consider whether the updated solution remains positive semidefinite. It turns out that, by the nature of our update scheme, the minimum eigenvalue of the final solution obtained via Algorithm 2 will never fall significantly below zero. Moreover, we employ a rounding procedure to modify the entries of the solution we obtain from Algorithm 2 so that we arrive at an exactly feasible solution to (3). Before proceeding further, we formalize this notion, and derive a lower bound on the smallest eigenvalue of the final solution. We utilize Lemma 13 to bound the minimum eigenvalue of the terms \(\frac{1}{\eta^{(k)}}Q^{(k)}\circ\rho^{(k)}\) that are used to update the overall solution in each iteration, from which a lower bound on the minimum eigenvalue of the final solution obtained by Algorithm 2 readily follows upon applying Lemma 12.

**Proposition 9**.: _Let \(\rho^{(k)}\) be a solution to (14) obtained from running Algorithm 1 using fixed precision \(\xi\in(0,1)\) in iteration \(k\) of Algorithm 2. Then:_

1. _For_ \(k\geq 1\)_,_ \(\frac{1}{\eta^{(k)}}Q^{(k)}\circ\rho^{(k)}\succeq-2\cdot\xi^{k}I\)_._
2. _Suppose Algorithm_ 2 _is run with final precision_ \(\zeta\)_, and terminates after_ \(K\) _iterations. Then, the solution_ \(\rho\) _output by Algorithm_ 2 _satisfies_ \[\lambda_{\min}(\rho)\geq\lambda_{\min}\left(\rho^{(0)}\right)-2\cdot\sum_{k=1}^{K}\xi^{k}.\]

Proof.: We begin with a proof of _(a)_. In what follows, we assume without loss of generality that \(Q^{(k)}\) has at least one negative eigenvalue (otherwise, \(Q^{(k)}\circ\rho^{(k)}\succeq 0\) trivially holds). Hence, a combined application of Corollary 1 and Lemma 13 yields

\[\lambda_{\min}\left(Q^{(k)}\circ\rho^{(k)}\right)\geq\lambda_{\max}\left(\rho^{(k)}\right)\cdot\lambda_{\min}\left(Q^{(k)}\right)\geq-2.\]

Therefore, recalling that by Theorem 4_(a)_ we have \(\eta^{(k)}\geq\frac{1}{\xi^{k}}\), it follows

\[\lambda_{\min}\left(\frac{1}{\eta^{(k)}}Q\circ\rho^{(k)}\right)=\frac{1}{\eta^{(k)}}\lambda_{\min}\left(Q\circ\rho^{(k)}\right)\geq\frac{-2}{\eta^{(k)}}\geq-2\cdot\xi^{k},\]

which completes the proof of _(a)_. Noting that the final solution can be expressed as

\[\rho=\sum_{k=0}^{K}\frac{1}{\eta^{(k)}}Q^{(k)}\circ\rho^{(k)}=\rho^{(0)}+\sum_{k=1}^{K}\frac{1}{\eta^{(k)}}Q^{(k)}\circ\rho^{(k)},\]

the result in _(b)_ follows from _(a)_ and repeated application of Lemma 12.

The next corollary bounds the geometric series that appears in Proposition 9_(b)_ for different values of the fixed precision parameter \(\xi\), by evaluating the series in the limit \(K\to\infty\).

**Corollary 6**.: _Let \(\xi=\frac{\tilde{\xi}}{4}\). Then, for any positive integer \(K\), the following hold._

1. _If_ \(\tilde{\xi}=10^{-2}\)_, then_ \(-2\cdot\sum_{k=1}^{K}\xi^{k}\geq-0.005.\)__
2. _If_ \(\tilde{\xi}=10^{-4}\)_, then_ \(-2\cdot\sum_{k=1}^{K}\xi^{k}\geq-0.00005.\)__

The goal of Corollary 6 is simply to show that with fixed precision we obtain matrices with eigenvalues that are, in the worst case, slightly negative. A shift of the spectrum suffices to restore positive semidefiniteness, and it does not change the constraint violation or the objective function value by a large amount, as we show next.

**Proposition 10**.: _Suppose Algorithm 2 is run with final precision \(\zeta\), and terminates after \(K\) iterations. Let \(\rho\) be the solution output by Algorithm 2, and let \(\xi\) be the fixed precision used in every iteration._
_Then, letting_

\[\delta=2\cdot\sum_{k=1}^{K}\xi^{k}, \tag{18}\]

_it follows that_

\[\tilde{\rho}=\frac{1}{1+n\delta}\left(\rho+\delta I\right)\]

_is a positive semidefinite matrix at trace distance \(\zeta\) from \(\rho\). Moreover, \(\tilde{\rho}\) satisfies:_

\[\max\left\{\gamma-\operatorname{tr}\left(\frac{C}{\|C\|_{F}}\tilde{\rho}\right),\left\|\tilde{\rho}-n^{-1}I\right\|_{\operatorname{tr}}\right\}\leq 2\zeta.\]

_In other words, \(\tilde{\rho}\) is a \(2\zeta\)-precise solution to (5)._

Proof.: First, observe that \(\tilde{\rho}\succeq 0\) by the definition of \(\tilde{\rho}\); applying Proposition 9_(b)_, we have \(\lambda_{\min}(\rho)\geq-\delta\), which implies that \(\rho+\delta I\succeq 0\). From the definition of \(\tilde{\rho}\), we also have:

\[\left\|\rho-\tilde{\rho}\right\|_{\mathrm{tr}}=\left\|\rho-\left[\frac{1}{1+n\delta}\left(\rho+\delta I\right)\right]\right\|_{\mathrm{tr}} =\left\|\frac{1+n\delta-1}{1+n\delta}\rho-\frac{\delta}{1+n\delta}I\right\|_{\mathrm{tr}}\] \[=\frac{\delta}{1+n\delta}\left\|n\rho-I\right\|_{\mathrm{tr}}\] \[=\frac{n\delta}{1+n\delta}\left\|\rho-n^{-1}I\right\|_{\mathrm{tr}}\] \[\leq\frac{n\delta}{1+n\delta}\zeta\] \[<\zeta,\]

where the second to last inequality follows from the fact that \(\rho\) is obtained from running Algorithm 2 with \(\zeta\) as the final precision parameter. Next, we leverage our bound on the trace distance from \(\tilde{\rho}\) to \(\rho\) to establish that \(\tilde{\rho}\) is indeed an accurate solution to the renormalized SDO problem (5). First, note that

\[\left\|\tilde{\rho}-n^{-1}I\right\|_{\mathrm{tr}} =\left\|\tilde{\rho}-n^{-1}I+\left(\rho-\rho\right)\right\|_{\mathrm{tr}}\] \[=\left\|(\tilde{\rho}-\rho)+\left(\rho-n^{-1}I\right)\right\|_{\mathrm{tr}}\leq\left\|\tilde{\rho}-\rho\right\|_{\mathrm{tr}}+\left\|\rho-n^{-1}I\right\|_{\mathrm{tr}}\leq 2\zeta.\]

Further, applying a matrix Hölder inequality, one can observe:

\[\left|\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\rho\right)-\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\tilde{\rho}\right)\right|\leq\left\|\frac{C}{\|C\|_{F}}\right\|\|\rho-\tilde{\rho}\|_{\mathrm{tr}}\leq\|\rho-\tilde{\rho}\|_{\mathrm{tr}}<\zeta,\]

from which we can conclude

\[\gamma-\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\tilde{\rho}\right) =\gamma-\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\tilde{\rho}\right)+\left[\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\rho\right)-\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\rho\right)\right]\] \[=\left[\gamma-\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\rho\right)\right]+\left[\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\rho\right)-\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\tilde{\rho}\right)\right]\leq 2\zeta,\]

as \(\gamma-\mathrm{tr}\left(\frac{C}{\|C\|_{F}}\rho\right)\leq\zeta\).

It is important at this point for us to remark that fixing \(\xi\in(0,1)\) does not limit us with respect to how accurately we can solve (3). We can always make the final precision parameter arbitrarily small using only \(\widetilde{\mathcal{O}}_{\frac{1}{\zeta}}(1)\) iterations, as the overall running time depends only poly-logarithmically on \(\zeta^{-1}\). Accordingly, we take advantage of this fact and revisit the approximation guarantee provided in Proposition 5.
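In code, the shift-and-renormalize step of Proposition 10 is a one-liner; the sketch below (ours, illustrative) also verifies that the unit trace is preserved.

```python
import numpy as np

def restore_psd(rho, delta):
    """Proposition 10: shift the spectrum by delta and renormalize the trace.
    Assuming lambda_min(rho) >= -delta and tr(rho) = 1, the result is PSD
    with unit trace, at trace distance less than zeta from rho."""
    n = rho.shape[0]
    rho_tilde = (rho + delta * np.eye(n)) / (1.0 + n * delta)
    assert np.isclose(np.trace(rho_tilde), np.trace(rho))  # holds when tr(rho) = 1
    return rho_tilde
```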
**Proposition 11**.: _Let \(\rho\) be a \(\zeta\)-accurate solution to the renormalized and relaxed SDO problem (5) with input matrix \(C\) and \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\). Let \(\gamma_{\zeta}=\mathrm{tr}\left(C\rho\right)\) be the value attained by \(\rho\). Then, there is a quantum state \(\rho^{*}\) at trace distance \(\mathcal{O}\left(\frac{\epsilon}{n\|C\|_{F}}\right)\) of \(\rho\) such that \(n\rho^{*}\) is a feasible point of SDO problem (3). In particular_

\[|\gamma_{\zeta}n\|C\|_{F}-\mathrm{tr}\left(n\rho^{*}C\right)|=\mathcal{O}\left(\epsilon\right).\]

_Moreover, it is possible to construct \(\rho^{*}\) in time \(\mathcal{O}(n^{2})\) given the entries of \(\rho\)._

Proof.: The proof follows almost exactly that of Proposition 3.1 in [10]; nevertheless, we present the adjusted proof for completeness. Our aim is to show that a \(\zeta\)-precise solution \(\rho\) to (5) obtained using Algorithm 2 can be used to construct \(\rho^{*}\) such that \(n\rho^{*}\) is an exactly feasible solution to (3). We begin by shifting \(\rho\) in order to ensure that our solution is positive semidefinite. In particular, choosing \(\delta\) according to (18), we set

\[\tilde{\rho}=\frac{1}{1+n\delta}\left(\rho+\delta I\right).\]

It then follows from Proposition 10 that \(\tilde{\rho}\) satisfies

\[\tilde{\rho}\succeq 0,\quad\left\|\rho-\tilde{\rho}\right\|_{\mathrm{tr}}\leq\zeta,\quad\max\left\{\gamma-\mathrm{tr}\left(\frac{C}{\left\|C\right\|_{F}}\tilde{\rho}\right),\left\|\tilde{\rho}-n^{-1}I\right\|_{\mathrm{tr}}\right\}\leq 2\zeta. \tag{19}\]

Next, we examine the diagonal elements of \(\tilde{\rho}\) and check whether modifications need to be made to ensure that our solution is an exactly feasible point to the renormalized SDO problem (5). Namely, if \(|\langle i|\tilde{\rho}|i\rangle-\frac{1}{n}|>\frac{\sqrt{2\zeta}}{n}\) for \(i\in[n]\), we replace \(\tilde{\rho}_{ii}\) with \(\frac{1}{n}\) and set all elements in the \(i\)-th row and the \(i\)-th column to \(0\), and denote the resulting matrix by \(\rho^{\prime}\). From here we introduce another matrix \(W\) which we obtain by replacing each diagonal entry of \(\rho^{\prime}\) with \(\frac{1}{n}\). In general we may not have \(W\succeq 0\), so the authors in [10] suggest using the convex combination:

\[\rho^{*}=\frac{1}{1+\sqrt{2\zeta}}\left(W+\frac{\sqrt{2\zeta}}{n}I\right).\]

Then, \(\rho^{*}\succeq 0\) and by construction \(\langle i|\rho^{*}|i\rangle=\frac{1}{n}\) for all \(i\in[n]\). Hence, \(\rho^{*}\) is a feasible solution to the renormalized SDO problem (5). What remains is to show that the above reformulations yield the desired approximation. Denote by \(\mathcal{B}=\{i:|n\langle i|\tilde{\rho}|i\rangle-1|>\sqrt{2\zeta}\}\subset[n]\) the set of diagonal entries that deviate substantially from \(\frac{1}{n}\). Without loss of generality, it suffices to assume that such elements are found in the first \(|\mathcal{B}|\) rows of \(\tilde{\rho}\), in which case

\[\left\|\rho^{\prime}-\tilde{\rho}\right\|_{\mathrm{tr}}=\left\|\begin{pmatrix}n^{-1}I_{\mathcal{B}}&0\\ 0&\tilde{\rho}_{22}\end{pmatrix}-\begin{pmatrix}\tilde{\rho}_{11}&\tilde{\rho}_{12}\\ \tilde{\rho}_{21}&\tilde{\rho}_{22}\end{pmatrix}\right\|_{\mathrm{tr}}=\left\|\begin{pmatrix}n^{-1}I_{\mathcal{B}}-\tilde{\rho}_{11}&-\tilde{\rho}_{12}\\ -\tilde{\rho}_{21}&0\end{pmatrix}\right\|_{\mathrm{tr}}\leq\left\|\tilde{\rho}_{11}\right\|_{\mathrm{tr}}+2\|\tilde{\rho}_{12}\|_{\mathrm{tr}}+\|n^{-1}I_{\mathcal{B}}\|_{\mathrm{tr}}. \tag{20}\]
Since \(\tilde{\rho}\) is a \(2\zeta\)-precise solution to (5), \(\tilde{\rho}\) obeys

\[\sum_{i=1}^{n}\left|\langle i|\tilde{\rho}|i\rangle-\frac{1}{n}\right|\leq 2\zeta.\]

Therefore, we must have

\[|\mathcal{B}|\frac{\sqrt{2\zeta}}{n}\leq 2\zeta,\]

which equates to \(|\mathcal{B}|\leq n\sqrt{2\zeta}\). Now, by the definition of \(\mathcal{B}\), it follows

\[\|\tilde{\rho}_{22}\|_{\mathrm{tr}}\geq(n-|\mathcal{B}|)\frac{1-\sqrt{2\zeta}}{n}\geq(n-n\sqrt{2\zeta})\frac{1-\sqrt{2\zeta}}{n}=(1-\sqrt{2\zeta})^{2}.\]

Following [10], we invoke a result from [40], which states

\[\left\|\begin{bmatrix}\left\|\tilde{\rho}_{11}\right\|_{\mathrm{tr}}&\left\|\tilde{\rho}_{12}\right\|_{\mathrm{tr}}\\ \left\|\tilde{\rho}_{12}^{\top}\right\|_{\mathrm{tr}}&\left\|\tilde{\rho}_{22}\right\|_{\mathrm{tr}}\end{bmatrix}\right\|\leq\left\|\begin{bmatrix}\tilde{\rho}_{11}&\tilde{\rho}_{12}\\ \tilde{\rho}_{12}^{\top}&\tilde{\rho}_{22}\end{bmatrix}\right\|_{\mathrm{tr}}=\|\tilde{\rho}\|_{\mathrm{tr}}=\mathrm{tr}\left(\tilde{\rho}\right)=1.\]

Using the fact that \(\|\cdot\|_{\mathrm{tr}}\geq\|\cdot\|_{2}\), where \(\|\cdot\|_{2}\) is the Frobenius, or Schatten-2 norm, the above implies

\[\|\tilde{\rho}_{11}\|_{\mathrm{tr}}^{2}+2\|\tilde{\rho}_{12}\|_{\mathrm{tr}}^{2}+\|\tilde{\rho}_{22}\|_{\mathrm{tr}}^{2}\leq 1.\]

As \(\|\tilde{\rho}_{22}\|_{\mathrm{tr}}\geq(1-\sqrt{2\zeta})^{2}\), it can be seen trivially that \(\|\tilde{\rho}_{22}\|_{\mathrm{tr}}^{2}\geq(1-\sqrt{2\zeta})^{4}\), and thus

\[\|\tilde{\rho}_{11}\|_{\mathrm{tr}}^{2}+2\|\tilde{\rho}_{12}\|_{\mathrm{tr}}^{2}\leq 1-(1-\sqrt{2\zeta})^{4}=\mathcal{O}(\sqrt{\zeta}).\]

Consequently \(\|\tilde{\rho}_{11}\|_{\mathrm{tr}}+2\|\tilde{\rho}_{12}\|_{\mathrm{tr}}=\mathcal{O}\left(\zeta^{\frac{1}{4}}\right)\), and plugging this into equation (20) asserts

\[\|\rho^{\prime}-\tilde{\rho}\|_{\mathrm{tr}}=\mathcal{O}\left(\zeta^{\frac{1}{4}}\right). \tag{21}\]

Let \(R\) be a diagonal matrix whose elements are \(R_{ii}\in\left[-\frac{\sqrt{2\zeta}}{n},\frac{\sqrt{2\zeta}}{n}\right]\) for \(i\in[n]\), such that

\[W=\rho^{\prime}+R,\]

and note that \(R+\sqrt{2\zeta}n^{-1}I\succeq 0\). Upon normalizing the trace, one can observe

\[\rho^{*}=\frac{1}{1+\sqrt{2\zeta}}\left(\rho^{\prime}+R+\sqrt{2\zeta}n^{-1}I\right)\succeq 0,\]

with \(\rho^{*}_{ii}=\frac{1}{n}\) for all \(i\in[n]\). Thus, \(n\rho^{*}\) is a feasible solution to the SDO problem (3). Further, by a triangle inequality we have

\[\|\rho^{\prime}-\rho^{*}\|_{\mathrm{tr}}=\frac{1}{1+\sqrt{2\zeta}}\left\|\sqrt{2\zeta}\rho^{\prime}-R-\sqrt{2\zeta}n^{-1}I\right\|_{\mathrm{tr}}=\mathcal{O}(\sqrt{\zeta}). \tag{22}\]
Combining equations (21) and (22) and noting \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\), applying another triangle inequality yields

\[\|\tilde{\rho}-\rho^{*}\|_{\mathrm{tr}}=\mathcal{O}\left(\zeta^{\frac{1}{4}}\right)=\mathcal{O}\left(\left[\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\right]^{\frac{1}{4}}\right)=\mathcal{O}\left(\frac{\epsilon}{n\|C\|_{F}}\right).\]

Then, the result follows from a matrix Hölder inequality:

\[|\mathrm{tr}\left(nC\rho\right)-\mathrm{tr}\left(nC\rho^{*}\right)| \leq n\|C\|\|\rho-\rho^{*}\|_{\mathrm{tr}} \leq n\|C\|_{F}\left(\|\rho-\tilde{\rho}\|_{\mathrm{tr}}+\|\tilde{\rho}-\rho^{*}\|_{\mathrm{tr}}\right)\] \[=\mathcal{O}\left(n\|C\|_{F}\left(\zeta+\zeta^{\frac{1}{4}}\right)\right)\] \[=\mathcal{O}\left(n\|C\|_{F}\left[\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}+\frac{\epsilon}{n\|C\|_{F}}\right]\right)\] \[=\mathcal{O}\left(\frac{\epsilon^{4}}{\left(n\|C\|_{F}\right)^{3}}+\epsilon\right)\] \[=\mathcal{O}\left(\epsilon\right).\]
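The rounding procedure used in the preceding proof is also simple to express classically. The sketch below (ours, dense and illustrative only) performs the three steps in order: prune badly violated rows, reset the diagonal, and mix with the identity.

```python
import numpy as np

def round_to_feasible(rho_tilde, zeta):
    """Rounding from the proof of Proposition 11: returns rho_star with
    diag(rho_star) = 1/n and rho_star >= 0, so n*rho_star is feasible for (3)."""
    n = rho_tilde.shape[0]
    t = np.sqrt(2 * zeta)
    rho_p = rho_tilde.copy()
    bad = np.abs(np.diag(rho_p) - 1.0 / n) > t / n   # the index set B
    rho_p[bad, :] = 0.0
    rho_p[:, bad] = 0.0
    W = rho_p.copy()
    np.fill_diagonal(W, 1.0 / n)                     # W need not be PSD on its own
    return (W + (t / n) * np.eye(n)) / (1.0 + t)     # convex combination; diag = 1/n
```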
## 5 Complexity

We now analyze the worst case overall running time of our Iterative Refinement Method given in Algorithm 2 in both the classical and quantum settings.

### Classical running time

As we saw in Section 3, the complexity of using Algorithm 1 to solve the SDO problem (3) scales poorly in the inverse precision, with the classical algorithm exhibiting an \(\mathcal{O}(\epsilon^{-12})\) dependence. In both the classical and quantum cases, our iterative refinement scheme remedies the poor scaling in \(\epsilon\) because it possesses the following two properties. First, we can obtain an arbitrarily precise solution to (5) in at most \(\widetilde{\mathcal{O}}_{\frac{1}{\zeta}}(1)\) iterations. Second, it suffices to treat \(\xi\) as fixed for the oracle calls that occur in each iteration, as the precision of the final solution is a byproduct of how we use these solutions of the refining problems to produce a solution to (5). The next result formalizes the above argument, and establishes the complexity of Algorithm 2 for the classical case.

**Theorem 5**.: _Let \(C\in\mathcal{S}^{n}\) with row sparsity \(s\) and \(\epsilon\in(0,1)\). Then, fixing \(\xi=10^{-2}\), and setting \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\), a classical implementation of Algorithm 2 solves (3) up to additive error \(\mathcal{O}(\epsilon)\) in time_

\[\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\cdot\mathrm{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon}\right)\right).\]

_The output of the algorithm is a classical description of a matrix \(\hat{\rho}\in\mathcal{S}^{n}\) such that_

\[\tilde{\rho}=\frac{1}{1+n\delta}\left(\hat{\rho}+\delta I\right),\]

_is a \(2\zeta\)-precise solution to (5), where \(\delta\) is defined according to (18). The entries of \(\tilde{\rho}\) can be modified to construct a matrix \(\rho^{*}\) at trace distance \(\mathcal{O}\left(\frac{\epsilon}{n\|C\|_{F}}\right)\) of \(\tilde{\rho}\) in time \(\mathcal{O}(n^{2})\), such that \(n\rho^{*}\) is a feasible point of the SDO problem (3)._

Proof.: Given that \(C\) is an \(s\)-sparse matrix, we can load \(C\) in \(\mathcal{O}(ns)\) time, and from here we must compute \(\|C\|_{F}\), which requires \(\mathcal{O}(ns)\) arithmetic operations. In every iteration of Algorithm 2, we make a call to our subroutine in Algorithm 1, before updating the solution and preparing the next refining problem. Updating the solution involves matrix addition between two \(n\times n\) matrices and requires \(\mathcal{O}(n^{2})\) arithmetic operations, whereas updating \(Q\) and \(\varepsilon\) for the next refining problem can be accomplished using \(\mathcal{O}(n)\) arithmetic operations, as only the diagonal entries of \(Q\) need to be stored and maintained. The dominant operation at each iteration is therefore the use of Algorithm 1 to solve the SDO problem at hand: by Proposition 6, Algorithm 1 can be used to solve (14) to additive error \(\xi\) in time

\[\mathcal{T}^{\mathrm{classical}}_{HU}=\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\xi^{-3}\right).\]

If every call to Algorithm 1 is made using precision \(\xi\), then by Corollary 5, Algorithm 2 converges in at most \(\mathcal{O}\left(\log(\zeta^{-1})\right)\) iterations, and we can thus express the overall running time of Algorithm 2 as

\[\mathcal{O}\left(\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\xi^{-3}\right)\log(\zeta^{-1})\right).\]

In the context of Algorithm 2, it suffices to carry out each of the calls to the SDO subroutine (i.e., calls to Algorithm 1) using fixed precision \(\xi\) to obtain a \(2\zeta\)-precise solution to (5) (see, e.g., Proposition 10). The above complexity thus reduces to

\[\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\log(\zeta^{-1})\right).\]

For our choice of \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\), one can observe

\[\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\log^{2}(n)\log(\zeta^{-1})\right)=\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\cdot\mathrm{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon}\right)\right).\]

Proposition 11 certifies that the above running time suffices to obtain a \(\rho\) from which we can construct \(\rho^{*}\) in time \(\mathcal{O}(n^{2})\), such that \(n\rho^{*}\) is a feasible point of the SDO problem (3) satisfying

\[|\gamma_{\zeta}n\|C\|_{F}-\mathrm{tr}\left(n\rho^{*}C\right)|=\mathcal{O}(\epsilon),\]

and the proof is complete.

### Quantum running time

Just as in the classical case, we show that a quantum implementation of Algorithm 2 mitigates the poor scaling in the running time with respect to the inverse precision. Our quantum implementation of Algorithm 2 is provided in Algorithm 3. The relevant error parameters are the same as those appearing in Algorithm 2: _(i)_ \(\xi\), the fixed precision used to test closeness to the sets \(\mathcal{C}_{\eta(\gamma-\hat{\gamma})}\) and \(\mathcal{D}_{\eta\varepsilon}\) in every iteration, _(ii)_ \(\zeta\), the precision to which the final solution satisfies the functional constraints in (5), and _(iii)_ \(\epsilon\), the additive error to which we seek to solve (3). In our initialization steps we set the values of \(Q\), \(\varepsilon\) and \(\eta\) such that the first iteration corresponds to solving the feasibility problem (10). We also create a vector \(p=\mathbf{0}^{n\times 1}\) that will be used to maintain a classical description of the diagonal elements of our solution over the course of the algorithm. At every iteration \(k\), a call is made to Algorithm 1 with separation oracles \(O_{\mathcal{C}_{\eta(\gamma-\hat{\gamma})}}\) and \(O_{\mathcal{D}_{\eta\varepsilon}}\) to solve (14) using fixed precision \(\xi\).
If the oracles accept the candidate state, then Algorithm 1 returns a real-valued vector \(y^{(k)}\in\mathbb{R}^{2}\) along with a diagonal matrix \(D^{(k)}\) such that the Hamiltonian associated with the Gibbs state that solves the refining problem is

\[H^{(k)}=y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}D^{(k)},\]

with \(\|y^{(k)}\|_{1}\leq 4\log(n)\xi^{-1}\) and \(\|D^{(k)}\|\leq 1\) for every \(k\geq 0\). This allows us to efficiently describe the solution to each refining problem, and once the algorithm has terminated, it facilitates an efficient way to describe the final solution as well.5

Footnote 5: Requiring an explicit classical description of the solution would in fact lead to a worse running time overall when compared to the classical implementation we studied in Section 5.1.

First, observe that the matrices \(Q^{(k)}\) and \(D^{(k)}\) can be completely described by their diagonal elements; letting \(q^{(k)}\in\mathbb{R}^{n}\) and \(d^{(k)}\in\mathbb{R}^{n}\) be the vectors that store the diagonal elements of \(Q^{(k)}\) and \(D^{(k)}\), respectively, we have

\[Q^{(k)} =(ee^{\top}-I)+\operatorname{diag}\left(q^{(k)}\right),\] \[D^{(k)} =\operatorname{diag}\left(d^{(k)}\right).\]

Therefore, we store the solution to the refining problem at iteration \(k\) as the tuple

\[(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)}),\]

and the final solution to (5) is defined as

\[\tilde{\rho}=\frac{1}{1+n\delta}\left[\left(\sum_{k=0}^{K}\frac{1}{\eta^{(k)}}Q^{(k)}\circ\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)}{\operatorname{tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)\right)}\right)+\delta I\right], \tag{23}\]

where \(\delta\) is defined according to (18) (see, e.g., Proposition 10). We point out that this marks a key difference between the output of our algorithm and other quantum SDO solvers based on Gibbs sampling [9, 10, 11, 60, 61], which need only return a single state preparation pair. This however does not increase the cost of the method; the iteration bound in Corollary 5 ensures that there are only at most \(\widetilde{\mathcal{O}}_{\frac{1}{\zeta}}(1)\) (i.e., a poly-logarithmic number) of these tuples to be stored over the course of the algorithm. Using the QRAM input model, one can use the stored tuples to construct a block-encoding of the final solution up to error \(\theta\) using \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon},\frac{1}{\theta}}(\sqrt{n})\) queries to the QRAM and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon},\frac{1}{\theta}}(n)\) classical operations. This construction, and the associated time complexity are analyzed later in Proposition 12. We further demonstrate that provided classical access to an \(s\)-sparse matrix \(A\in\mathbb{R}^{n\times n}\) (with subnormalization factor 1) and access to QRAM, one can estimate \(\operatorname{tr}(A\tilde{\rho})\) to additive error \(\theta\) using \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{\sqrt{n}}{\theta}\right)\) queries to the QRAM and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}(ns)\) classical operations. If \(A\) has a subnormalization factor \(\alpha_{A}>1\), then \(\theta\) must be scaled down by \(\alpha_{A}\), increasing the cost.
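Classically, equation (23) can be evaluated directly from the stored tuples by exponentiating each refining Hamiltonian; on a quantum device the corresponding Gibbs states are of course prepared rather than computed. The sketch below (ours) is a dense emulation for small \(n\), intended only to make the bookkeeping concrete.

```python
import numpy as np
from scipy.linalg import expm

def assemble_solution(tuples, C_norm, delta):
    """Evaluate equation (23) from the stored tuples (eta_k, y_k, q_k, d_k)."""
    n = C_norm.shape[0]
    acc = np.zeros((n, n))
    for eta_k, y_k, q_k, d_k in tuples:
        Q_k = (np.ones((n, n)) - np.eye(n)) + np.diag(q_k)
        H_k = y_k[0] * (Q_k * C_norm) + y_k[1] * np.diag(d_k)
        gibbs = expm(-H_k)
        gibbs /= np.trace(gibbs)               # Gibbs state of the Hamiltonian H_k
        acc += (Q_k * gibbs) / eta_k
    return (acc + delta * np.eye(n)) / (1.0 + n * delta)
```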
Additionally, we require Algorithm 1 to return the estimates \(\tilde{p}^{(k)}\in\mathbb{R}^{n}\) (a classical estimate of the diagonal elements of the solution to the refining problem) and \(\tilde{c}^{(k)}\in\mathbb{R}\) (a classical estimate of the objective value attained by the solution of the refining problem) that are used to test \(\xi\)-closeness for the accepted state. In this fashion, we can (classically) prepare the refining problem data for the next iteration without increasing the cost of the algorithm with respect to \(n\); the objective value can be updated using \(\mathcal{O}(1)\) arithmetic operations using \(\tilde{c}^{(k)}\), while updating the residuals along the diagonal of \(\rho\) requires \(\mathcal{O}(n)\) arithmetic operations provided classical access to \(\tilde{p}^{(k)}\). If the current solution is indistinguishable up to precision \(\zeta\) from the maximally mixed state \(n^{-1}I\), and provides an objective value of at least \(\gamma-\zeta\), the algorithm terminates and reports the current solution. Otherwise, we construct the refining problem associated with our current solution and proceed to the next iteration.

**Algorithm 3** Iterative Refinement for SDO Approximations of QUBOs using a quantum computer

**Input:** Error tolerances \(\epsilon\in(0,1)\) and \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\), upper bound on objective value \(\gamma\in[-1,1]\)

**Output:** Tuples \((\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\) that define a \(2\zeta\)-precise solution to (5) using Equation (23)

**Initialize:** \(p\leftarrow\mathbf{0}^{n}\), \(Q\gets ee^{\top}\), \(\varepsilon_{i}=\frac{1}{n}\) for \(i\in[n]\), \(\hat{\gamma}\gets 0\), \(\eta^{(0)}\gets 1\), \(k\gets 0\)

**while** \(\max\left\{\gamma-\hat{\gamma},\|\varepsilon\|_{1}\right\}>\zeta\) **do**

1. \((y^{(k)},D^{(k)},\tilde{p}^{(k)},\tilde{c}^{(k)})\leftarrow\) Solve (14) to precision \(\xi\) using Algorithm 1
2. Store diagonal elements of \(Q\) and \(D^{(k)}\): \[q_{i}^{(k)}\gets Q_{ii},\quad d_{i}^{(k)}\gets D_{ii}^{(k)}\quad\text{for }i\in[n]\]
3. Store description of solution to the refining problem \((\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\)
4. Update the objective value of solution: \[\hat{\gamma}\leftarrow\hat{\gamma}+\frac{1}{\eta^{(k)}}\tilde{c}^{(k)}\]
5. Update estimate of diagonal entries: \[p_{i}\gets p_{i}+\frac{Q_{ii}}{\eta^{(k)}}\tilde{p}_{i}^{(k)}\quad\text{for }i\in[n]\]
6. Compute element-wise deviations from the maximally mixed state: \[\varepsilon_{i}\gets p_{i}-\frac{1}{n}\quad\text{for }i\in[n]\]
7. Classically update: \[Q_{ii}\leftarrow\operatorname{sign}(-\varepsilon_{i})\text{ for }i\in[n],\quad\eta^{(k+1)}\leftarrow\frac{1}{\max\left\{\gamma-\hat{\gamma},\|\varepsilon\|_{1}\right\}}\]
8. \(k\gets k+1\)

**end**

The next result gives the overall running time required to solve (3) to additive error \(\mathcal{O}(\epsilon)\) using the QRAM input model.

**Theorem 6**.: _Let \(C\in\mathcal{S}^{n}\), \(\epsilon\in(0,1)\), and set \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\). Assume we have classical access to \(C\)._
_Then, in the QRAM input model, Algorithm 3 solves (3) up to additive error \(\mathcal{O}(\epsilon)\) using_

\[\mathcal{O}\left(n^{1.5}\cdot\mathrm{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon}\right)\right)\]

_accesses to the QRAM and \(\mathcal{O}(ns)\) classical arithmetic operations._

_The output of the algorithm is a collection of tuples \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\) such that_

\[\tilde{\rho}=\frac{1}{1+n\delta}\left[\left(\sum_{k=0}^{K}\frac{1}{\eta^{(k)}}Q^{(k)}\circ\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)}{\operatorname{tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)\right)}\right)+\delta I\right],\]

_is a \(2\zeta\)-precise solution to (5), where \(\delta\) is defined according to (18). The entries of \(\tilde{\rho}\) can be modified to construct a matrix \(\rho^{*}\) at trace distance \(\mathcal{O}\left(\frac{\epsilon}{n\|C\|_{F}}\right)\) of \(\tilde{\rho}\) in time \(\mathcal{O}(n^{2})\), such that \(n\rho^{*}\) is a feasible point of the SDO problem (3)._

Proof.: Given that \(C\) is an \(s\)-sparse matrix, we can classically load \(C\) in \(\mathcal{O}(ns)\) time. Similarly, for normalization purposes we classically compute \(\|C\|_{F}\), which requires \(\mathcal{O}(ns)\) arithmetic operations. In each iteration we use Algorithm 1 to solve (14), and use classical estimates of the diagonal elements of the refining solution, together with a classical estimate of the objective value it attains, to update the solution and the data of the refining problem to be solved in the next iteration. Letting \(\mathcal{T}_{HU}^{\text{quantum}}\) be the cost of using Algorithm 1 as an approximate SDO subroutine, by Proposition 8, Algorithm 1 solves (14) to additive error \(\xi\) using at most

\[\mathcal{T}_{HU}^{\text{quantum}}=\widetilde{\mathcal{O}}_{\frac{n}{\xi}}\left(n^{1.5}\xi^{-5}\right)\]

accesses to the QRAM. Classically updating the objective value requires \(\mathcal{O}(1)\) arithmetic operations, while updating the vector \(p\) which stores a classical description of the diagonal elements of our solution as

\[p_{i}\gets p_{i}+\frac{Q_{ii}}{\eta^{(k)}}\tilde{p}_{i}^{(k)}\]

requires \(\mathcal{O}(n)\) classical arithmetic operations. Likewise, \(\varepsilon\) and \(Q\) can each be updated using \(\mathcal{O}(n)\) classical arithmetic operations, as we only need to store the diagonal elements of \(Q\). This also implies that we can update \(Q\circ\frac{C}{\|C\|_{F}}\) using \(\widetilde{\mathcal{O}}_{n}(n)\) operations, for only the diagonal elements need to be updated. When compared to loading and normalizing the coefficient matrix \(C\), or our use of Algorithm 1 as a subroutine for solving (14), these intermediate computation steps are negligible and do not factor into the overall running time using \(\mathcal{O}\) notation. By Corollary 5, Algorithm 3 terminates in at most \(\widetilde{\mathcal{O}}_{\frac{1}{\zeta}}\left(1\right)\) iterations. Therefore, the worst case complexity of Algorithm 3 can be bounded by

\[\mathcal{O}\left(n^{1.5}\xi^{-5}\cdot\operatorname{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon}\right)\right)\]

accesses to the QRAM, and \(\mathcal{O}\left(ns\right)\) classical arithmetic operations to load and normalize \(C\).
Further, it suffices to use fixed precision \(\xi\) for every call to Algorithm 1 to reach a final solution that solves (5) to additive error \(\zeta\), as the final solution can always be made arbitrarily precise using \(\widetilde{\mathcal{O}}_{\frac{1}{\zeta}}(1)\) calls to Algorithm 1. Since \(\xi\) is a fixed constant in the context of Algorithm 3, the overall running time of Algorithm 3 simplifies to

\[\mathcal{O}\left(n^{1.5}\cdot\operatorname{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon}\right)\right)\]

accesses to the QRAM, and \(\mathcal{O}\left(ns\right)\) classical arithmetic operations. Just as in the proof of Theorem 5, applying Proposition 11 with our choice of \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\) implies that the above running time is sufficient to obtain a solution that can be used to solve (3) up to additive error \(\mathcal{O}(\epsilon)\), and the proof is complete.

We analyze the cost of Algorithm 3 without access to QRAM in Appendix A. Using the sparse-access input model, one can show that the resulting scheme exhibits an oracle complexity of

\[\mathcal{O}\left(n^{1.5}s^{0.5+o(1)}\cdot\mathrm{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon}\right)\right),\]

and requires \(\mathcal{O}\left(n^{2.5}s^{0.5+o(1)}\cdot\mathrm{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon}\right)\right)\) additional gates. To summarize, in the absence of QRAM, the number of oracle accesses is a factor \(\sqrt{s}\) larger due to the Hamiltonian simulation, and the gate complexity increases by a factor \(n\) due to the cost of constructing \(O_{D}\) without QRAM. We conclude this section by establishing the costs of preparing a block-encoding of the final solution, and estimating trace inner products of the form \(\mathrm{tr}(A\tilde{\rho})\) for a given matrix \(A\).

**Proposition 12**.: _Suppose that Algorithm 3 is run with \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\) for some \(\epsilon\in(0,1)\), and terminates after \(K\) iterations, classically outputting the tuples \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\). Then, letting \(C\|C\|_{F}^{-1}\) be stored in QRAM, and denoting the solution of the refining problem at iteration \(k\) by \(\rho^{(k)}\), one can use \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\) to implement an \((n,\mathcal{O}(\log(n)),\theta)\)-block-encoding of_

\[\tilde{\rho}=\frac{1}{1+n\delta}\left[\left(\sum_{k=0}^{K}\frac{1}{\eta^{(k)}}Q^{(k)}\circ\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)}{\mathrm{tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)\right)}\right)+\delta I\right],\]

_with at most \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon},\frac{1}{\theta}}\left(\sqrt{n}\right)\) queries to the QRAM and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon},\frac{1}{\theta}}\left(n\right)\) classical operations._

Proof.: First, note that

\[\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)}{\mathrm{tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)\right)}=n^{-1}I\]

whenever \(y=(0,0)^{\top}\).
Thus, by choosing \(y^{(K+1)}=(0,0)^{\top}\), \(\eta^{(K+1)}=\frac{1}{n\delta}\), and \(Q^{(K+1)}=ee^{\top}\) we can simplify the expression of the final solution to

\[\frac{1}{1+n\delta}\left[\sum_{k=0}^{K+1}\frac{1}{\eta^{(k)}}Q^{(k)}\circ\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)}{\mathrm{tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)\right)}\right].\]

To ensure that the stated complexity holds, for each \(k\in[K+1]\), we block-encode

\[A^{(k)}=\frac{Q^{(k)}\circ C}{\|C\|_{F}}+D^{(k)}.\]

First, note that with classical access to \(C\) and \(q^{(k)}\), one can store \(\frac{Q^{(k)}\circ C}{\|C\|_{F}}\) in the QRAM by properly updating \(C\|C\|_{F}^{-1}\) in the QRAM. This step requires \(\mathcal{O}(n)\) classical operations, as the only non-trivial computation that is performed is limited to the diagonal elements of the involved matrices. Then, with \(\frac{Q^{(k)}\circ C}{\|C\|_{F}}\) stored in QRAM, noting that \(\left\|\frac{Q^{(k)}\circ C}{\|C\|_{F}}\right\|_{F}\leq 1\) holds for every \(k\in[K+1]\), we apply Lemma 3 to construct a \((1,\log(n)+2,\theta_{1})\)-block-encoding of \(\frac{Q^{(k)}\circ C}{\|C\|_{F}}\) in time \(\mathcal{O}\left(\mathrm{polylog}\left(\frac{n}{\theta_{1}}\right)\right)\). Similarly, as we saw in the proof of Proposition 7, classical access to \(d^{(k)}\) and access to QRAM imply that a \((1,\log(n)+3,\theta_{1})\)-block-encoding of \(D^{(k)}\) can be constructed in time \(\widetilde{\mathcal{O}}_{\frac{n}{\theta_{1}}}(1)\). Again following the proof of Proposition 7, applying Corollary 3 with \(y^{(k)}\) satisfying \(\|y^{(k)}\|_{1}=\widetilde{\mathcal{O}}_{n}(\xi^{-1})\) implies that we can construct a unitary which prepares a copy of the Gibbs state \(\rho^{(k)}\) encoding the solution to the refining problem at iteration \(k\) with at most

\[\widetilde{\mathcal{O}}_{\frac{n}{\theta}}\left(\sqrt{n}\alpha\xi^{-1}\right)=\widetilde{\mathcal{O}}_{n}\left(\sqrt{n}\right),\]

accesses to the QRAM, as \(\alpha=1\) and \(\xi\) is a fixed constant. Therefore, by Lemma 7, preparing a \((1,\log(n)+a,\theta_{1})\)-block-encoding of a purification of \(\rho^{(k)}\) requires \(\widetilde{\mathcal{O}}_{\frac{n}{\theta_{1}}}(\sqrt{n})\) queries to the QRAM. Next, provided classical access to the vector \(q^{(k)}\) that stores the diagonal elements of \(Q^{(k)}\), access to QRAM implies that we can efficiently implement an oracle \(O_{Q^{(k)}}\) that returns the entries of \(Q^{(k)}\) in a binary description:

\[O_{Q^{(k)}}:\left|i\right\rangle\left|j\right\rangle\left|0\right\rangle^{\otimes p}\mapsto\left|i\right\rangle\left|j\right\rangle\left|q_{ij}^{(k)}\right\rangle,\quad\forall i,j\in\{0,1,\ldots,2^{\log(n)}-1\},\]

where \(q_{ij}^{(k)}\) is a \(p\)-bit binary description of the \(ij\)-matrix element of \(Q^{(k)}\) for \(k=0,\ldots,K+1\). By construction, each matrix \(Q^{(k)}\) may be fully dense, and hence an application of Lemma 4 with \(s_{r}=s_{c}=n\) asserts that in the presence of QRAM, one can construct a \((n,\log(n)+3,\theta_{2})\)-block-encoding of \(Q^{(k)}\) in time \(\widetilde{\mathcal{O}}_{\frac{n}{\theta_{2}}}(1)\).
From here, we can utilize Proposition 4 with \(\theta_{1}=\theta_{2}=\frac{\tilde{\theta}}{10}\) to construct an \((n,a+4\log(n^{2})+12,\tilde{\theta})\)-block-encoding of \(Q^{(k)}\circ\rho^{(k)}\) in time \(\widetilde{\mathcal{O}}_{\frac{n}{\theta}}(1)\). Repeating the above steps for \(k=0,\ldots,K+1\), it follows that we can block-encode each of the terms \(Q^{(k)}\circ\rho^{(k)}\) using at most \[\widetilde{\mathcal{O}}_{n,\frac{1}{\theta}}\left(K\sqrt{n}\right)=\widetilde {\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon},\frac{1}{\theta}}\left(\sqrt{n}\right)\] queries to the QRAM and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon},\frac{1}{\theta}} \left(n\right)\) classical operations, as \(K=\mathcal{O}\left(\operatorname{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon} \right)\right)\) by Corollary 5. Finally, what remains is to take the linear combination of these terms. To do so, we choose our weights to be \(w_{k}=\frac{1}{2(1+n\delta)\eta^{(k)}}\), which indeed satisfies \(\|w\|_{1}\leq 1\). Then, we construct a \((K+2,\log(K+2),0)\)-state-preparation pair \(P_{L}\), \(P_{R}\) for \(w\) by taking a \(\log(K+2)\)-fold tensor product of the Hadamard gate, i.e., \[P_{L}=P_{R}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}^{\otimes\log(K+2)}.\] We are now in a position to apply Proposition 1, and choosing \(\tilde{\theta}=\frac{\theta}{n}\), we can obtain \(W\) upon adding a control qubit to the circuits used to construct the block-encoding of each \(Q^{(k)}\circ\rho^{(k)}\). As a result, we obtain an \((n,\mathcal{O}(\log(n)),\theta)\)-block-encoding of \(\tilde{\rho}\) with a single use of \(W,P_{R}\) and \(P_{L}^{\dagger}\). Summing the cost of each step in the construction, we arrive at a total cost of \[\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon},\frac{1}{\theta}} \left(\sqrt{n}\right)\] queries to the QRAM and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon},\frac{1}{\theta}} \left(n\right)\) classical operations, and the proof is complete. **Proposition 13**.: _Suppose that Algorithm 3 is run with \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\) for some \(\epsilon\in(0,1)\), and terminates after \(K\) iterations, classically outputting the tuples \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\). Let \(A\in\mathbb{R}^{n\times n}\) be a matrix with \(\|A\|_{F}\leq 1\) and assume classical access to \(A\) and \(C/\|C\|_{F}\). Then, with access to QRAM, one can compute a \(\theta\)-precise estimate of \(\operatorname{tr}(A\tilde{\rho})\) using at most \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{\sqrt{n} }{\theta}\right)\) queries to the QRAM and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(ns\right)\) classical operations._ Proof.: See the proof of Theorem 8 in Appendix B. A QRAM-free version of Proposition 13 is also analyzed in Appendix B, and the cost is summarized in Corollary 7. Without access to QRAM, the cost increases with respect to \(n\) because computing the Hadamard product of block-encodings introduces \(n\) as a subnormalization factor. This is compounded in the running time, upon noting that we then have to scale down the error for the amplitude estimation steps by \(n\), and constructing sparse-access oracles for the intermediate block-encodings of \(Q\) and \(D\) that arise in the trace estimation procedure requires \(\widetilde{\mathcal{O}}_{n}(n)\) gates. 
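Although Propositions 12 and 13 only ever access \(\tilde{\rho}\) through block-encodings, it can be instructive (and useful for testing at small \(n\)) to materialize the same object classically. The sketch below is a minimal dense reference, not the quantum procedure: the names are ours, each Gibbs state is formed by an \(\mathcal{O}(n^{3})\) matrix exponential, and the masks \(Q^{(k)}\) are passed as explicit matrices rather than reconstructed from the stored diagonals \(q^{(k)}\) via (13).

```
import numpy as np
from scipy.linalg import expm

def reconstruct_solution(C, tuples, delta):
    """Classically assemble the final solution rho_tilde of Proposition 12.

    `tuples` is the list [(eta_k, y_k, Q_k, d_k)] for k = 0, ..., K output by
    Algorithm 3; each Q_k is given here as an explicit matrix (the algorithm
    itself only stores the diagonal q_k and rebuilds Q_k via (13)).
    Dense O(n^3)-per-term reference for small n -- NOT the quantum routine,
    which only touches rho_tilde through block-encodings.
    """
    n = C.shape[0]
    C_hat = C / np.linalg.norm(C, "fro")            # normalized cost matrix C/||C||_F
    acc = delta * np.eye(n)                         # the +delta*I spectrum shift
    for eta, y, Q, d in tuples:
        H = y[0] * (Q * C_hat) + y[1] * np.diag(d)  # Hamiltonian of refining problem k
        G = expm(-H)
        rho_k = G / np.trace(G)                     # Gibbs state rho^(k)
        acc += (Q * rho_k) / eta                    # Hadamard mask, 1/eta weighting
    return acc / (1 + n * delta)                    # overall normalization
```

Trace inner products \(\operatorname{tr}(A\tilde{\rho})\) can then be evaluated directly on the output, which makes the routine a convenient check against the quantum estimates of Proposition 13.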
### Comparison to existing SDO algorithms Table 1 presents a comparison of the running times of the algorithms we have proposed with those of the best performing methods from both the classical and quantum literature when applied to solving (3). Note that when directly solving (3), \(m=n\), and any feasible solution \(X\) to (3) satisfies \(\operatorname{tr}\left(X\right)=n\), implying \(R=n\) for the algorithms based on the (Q)MMWU framework. We also point out that the running times in Table 1 take into account the role of sparsity in the context of the algorithms, which is measured as the maximum number of nonzero entries per row of the constraint matrices \(A_{1},\ldots,A_{n}\). When using either an IPM or CPM to solve (3), the \(n\) constraint matrices are \(A_{i}=e_{i}e_{i}^{\top}\) (with row sparsity one), enforcing \(X_{ii}=1\) for each diagonal element. On the other hand, algorithms based on the (Q)MMWU or HU frameworks solve (3) by reducing the problem to a feasibility problem; \(C\) enters into the resulting formulation as another constraint matrix, and as a result, the relevant sparsity parameter is the maximum number of non-zeroes per row of \(C\), which we denote by \(s\) in Table 1. There are additional considerations that need to be taken into account when making comparisons across the methodologies listed in Table 1. Broadly speaking, both (Q)MMWUs and HU require normalizing the problem by an upper bound on the trace of a primal solution, and in the case of (3), we have the natural bound \(\operatorname{tr}(X)=n\). Moreover, (Q)MMWUs and HU additionally normalize the cost matrix so that it exhibits unit norm with respect to some norm. While these modifications amount to scaling the optimal objective value of (3) by a fixed quantity, without employing any safeguards such as IR, these modifications impact the scaling of the error, as reflected in the fourth column of Table 1. On the contrary, (Q)IPMs do not require the SDO problem to be normalized in any way. Finally, there is a distinction with regard to output: (Q)IPMs explicitly report a classical description of the solution \(X\), whereas only the classical HU algorithm of [10] and our own classical IR-HU method do so; the primal QMMWU of [60] reports a state-preparation pair \(y\), and the MMWU algorithm found in [41] reports a "gradient" \(G\in\mathcal{S}^{n}\) such that \(X=W\exp(G)W\) for a diagonal matrix \(W\). As we noted earlier, (Q)IPMs and (Q)MMWUs also utilize different definitions of optimality. It can be easily seen that both the classical and quantum implementations of our proposed methodology outperform all existing algorithms that exhibit poly-logarithmic dependence on the precision \(\epsilon\). Our classical algorithm is only outperformed with respect to dimension by our own quantum algorithms, and by the algorithm from [41], which has an exponentially worse dependence on the inverse precision. Moreover, to achieve the same error scaling as our algorithms, the algorithm from [41] would require time \(\widetilde{\mathcal{O}}_{n,\frac{1}{\epsilon}}\left(n^{4.5}s\epsilon^{-3.5}\right)\) (as it is assumed \(\|C\|_{\ell_{1}}=n\) in [41]). 
\begin{table} \begin{tabular}{l l l l} \hline \hline **References** & **Method** & **Runtime** & **Error Scaling** \\ \hline [34] & IPM & \(\widetilde{\mathcal{O}}_{n,\frac{1}{\epsilon}}\left(n^{\omega+0.5}\right)\) & \(\epsilon\) \\ [6] & QIPM & \(\widetilde{\mathcal{O}}_{n,\kappa,\frac{1}{\epsilon}}\left(\sqrt{n}(n^{3}\kappa \epsilon^{-1}+n^{4})\right)\) & \(\epsilon\) \\ [41] & MMWU & \(\widetilde{\mathcal{O}}_{n,\frac{1}{\epsilon}}\left(ns\epsilon^{-3.5}\right)\) & \(\|C\|_{\ell_{1}}\epsilon\) \\ [60] & QMMWU & \(\widetilde{\mathcal{O}}_{n,\frac{1}{\epsilon}}\left(n^{5.5}s\epsilon^{-4}\right)\) & \(n\|C\|_{\ell_{1}}\epsilon\) \\ [10] (Classical) & HU & \(\widetilde{\mathcal{O}}_{n,\|C\|}\left(\min\{n^{2}s,n^{\omega}\}\epsilon^{-12}\right)\) & \(n\|C\|\epsilon\) \\ [10] (Quantum) & HU & \(\widetilde{\mathcal{O}}_{n,\|C\|,\frac{1}{\epsilon}}\left(n^{2.5}s^{0.5+o(1)} \epsilon^{-28+o(1)}\exp\left(1.6\sqrt{\log(\epsilon^{-1})}\right)\right)\) & \(n\|C\|\epsilon\) \\ [10] (Quantum) & HU-QRAM & \(\widetilde{\mathcal{O}}_{n,\|C\|,\frac{1}{\epsilon}}\left(n^{1.5}s^{0.5+o(1)} \epsilon^{-28+o(1)}\exp\left(1.6\sqrt{\log(\epsilon^{-1})}\right)\right)\) & \(n\|C\|\epsilon\) \\ This work (Classical) & IR-HU & \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\min\{n^{2}s,n^{\omega}\}\right)\) & \(\epsilon\) \\ This work (Quantum) & IR-HU & \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(n^{2.5}s^{0.5+o(1)}\right)\) & \(\epsilon\) \\ This work (Quantum) & IR-HU-QRAM & \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(n^{1.5}\right)+ns\) & \(\epsilon\) \\ \hline \hline \end{tabular} \end{table} Table 1: Total running times for classical and quantum algorithms to solve (3). Up to poly-logarithmic factors, our quantum algorithms outperform each of the classical and quantum solvers in every parameter, suggesting the first evidence of quantum advantage for solving a special class of SDO problems. Moreover, our implementation with access to QRAM dominates all other algorithms. We therefore conclude that our proposed algorithms are, respectively, the fastest in both the classical and quantum regimes. ## 6 Conclusion In this work we devised an iterative refinement scheme for a particular class of semidefinite optimization problems. The key idea behind our speedup is to solve a sequence of related SDO problems to fixed low precision, rather than solve one SDO problem under high accuracy requirements. Moreover, our solutions satisfy a far stronger approximation guarantee compared to previous quantum solution methodologies for this class of problem. We show that, provided access to QRAM, a quantum implementation of our algorithm can produce accurate solutions to SDO approximations of QUBO problems in time \(\mathcal{O}\left(ns+n^{1.5}\cdot\operatorname{polylog}\left(n,\|C\|_{F},\frac{1 }{\epsilon}\right)\right)\) in the worst case. In the absence of QRAM, one can bound the running time of the quantum algorithm using the sparse-access input model, in which case the algorithm exhibits an oracle complexity of \(\mathcal{O}\left(n^{2.5}s^{0.5+o(1)}\cdot\operatorname{polylog}\left(n,\|C\|_{ F},\frac{1}{\epsilon}\right)\right)\). A classical implementation of the algorithm exhibits a worst case running time of \(\mathcal{O}\left(\min\{n^{2}s,n^{\omega}\}\cdot\operatorname{polylog}\left(n, \|C\|_{F},\frac{1}{\epsilon}\right)\right)\), which is at least a \(\sqrt{n}\) factor better than classical IPMs. When compared to the best performing algorithms in the literature, our algorithms are the fastest in both the quantum and classical regimes, respectively. 
This work indicates that there could be a genuine quantum advantage (in the QRAM model) for this specific class of SDO problems; to establish such an advantage, one would have to show that no classical algorithm can beat the quantum running time. At the moment, we can only make the weaker claim that our quantum algorithm is faster than any currently known classical algorithm. We believe one can improve the theoretical performance of our classical algorithm by not explicitly computing the density operator in our subroutines. In particular, it may be possible to construct the separation oracles as we do in the quantum setting using techniques to classically estimate trace inner products of the form \(\operatorname{tr}(A\rho)\) (see, e.g., Appendix A in [61]), and applying ideas developed in [3, 41] to estimate the diagonal elements of matrix exponentials via randomized projection [36]. It remains an open question as to whether our techniques can be applied to general SDO problems using the matrix-multiplicative weights update framework as a subroutine. ## Acknowledgements This project has been carried out thanks to funding by the Defense Advanced Research Projects Agency (DARPA), ONISQ grant W911NF2010022, titled The Quantum Computing Revolution and Optimization: Challenges and Opportunities. Giacomo Nannicini is partially supported by the Army Research Office under grant number W911NF-20-1-0014. ## Appendix A Running time of Algorithm 3 without QRAM The following result from [10] gives the sample complexity of implementing the oracles in the sparse-access model. **Lemma 16** (see the proof of Lemma 3.3 in [10]).: _We can implement the oracle \(O_{\mathcal{C}_{\gamma}}\) on a quantum computer given access to \(\mathcal{O}(\epsilon^{-2})\) copies of a state that is an \(\frac{\epsilon}{8}\)-approximation of the input state \(\rho\) in trace distance. The oracle \(O_{\mathcal{D}_{n}}\) can be implemented using \(\mathcal{O}(n\epsilon^{-2})\) copies that are \(\frac{\epsilon}{8}\)-approximations of the input, and the classical post-processing time needed to implement the oracle is \(\mathcal{O}(n\epsilon^{-2})\)._ Next, we bound the overall complexity of Algorithm 1 without access to QRAM. **Proposition 14**.: _Suppose that \(C\in\mathcal{S}^{n}\) has row sparsity \(s\) and \(\xi\in(0,1)\). Then, in the sparse-access input model, the complexity of solving (5) up to additive error \(\xi\) using Algorithm 1 on a quantum computer requires_ \[\widetilde{\mathcal{O}}_{n}\left(n^{1.5}\sqrt{s}^{1+o(1)}\xi^{-7+o(1)}\exp\left( 1.6\sqrt{\log(\xi^{-1})}\right)\right)\] _queries to the input oracle \(O_{C}\) and \(\widetilde{\mathcal{O}}_{n}\left(n^{2.5}\sqrt{s}^{1+o(1)}\xi^{-7+o(1)}\exp \left(1.6\sqrt{\log(\xi^{-1})}\right)\right)\) additional gates._ Proof.: Our proof can be viewed as the QRAM-free analogue of the discussion found in [10, Section 3.4], and we repeat it here for completeness. In order to derive an appropriate bound on the per-iteration cost, we need to evaluate the cost of constructing our separation oracles. By Lemma 16, we can conclude that the time to construct the oracle \(O_{\mathcal{D}_{n}}\) for the diagonal elements dominates that of constructing the oracle \(O_{\mathcal{C}_{\gamma}}\) to test the objective value. We now turn our attention to the cost of simulating our Hamiltonian \(H\). 
From the results in [53, Appendix] it follows that we can produce a state that is \(\frac{\xi}{8}\) close to \(\rho\) using \(\widetilde{\mathcal{O}}(\sqrt{n}\xi^{-3})\) invocations of a controlled \(U\) which satisfies \[\left\|U-e^{it_{0}H}\right\|\leq\mathcal{O}\left(\xi^{3}\right),\] with \(t_{0}=\frac{\pi}{4\left\|H\right\|}\). Further, the authors in [10] note that each of the Hamiltonians we seek to simulate is of the form \(H=y_{1}C\|C\|_{F}^{-1}+y_{2}D\) where \(y_{1},y_{2}=\mathcal{O}(\log(n)\xi^{-1})\) and \(D\) is a diagonal matrix which satisfies \(\left\|D\right\|\leq 1\). Invoking [15, Theorem 1], we can simulate \(H\) for time \(t\) up to error \(\xi^{3}\) using \[\widetilde{\mathcal{O}}\left(t(a+b)\exp\left(1.6\sqrt{\log\left(\log(n)t\xi^{ -3}\right)}\right)\right)\] separate simulations of \(y_{1}C\|C\|_{F}^{-1}\) and \(y_{2}D\). As noted in [10], access to the oracles \(O_{\text{sparse}}\) and \(O_{C}\) we described in Section 2.1.1 allows us to simulate \(\exp(itC\|C\|_{F}^{-1})\) in time \(\mathcal{O}\left((t\sqrt{s})^{1+o(1)}\xi^{o(1)}\right)\) if we utilize the algorithm in [44]. Similarly, we follow [10] in constructing an oracle \(O_{D}\) acting on \(\mathbb{C}^{n}\otimes(\mathbb{C}^{2})^{\otimes a}\), where \(a\) is a sufficiently large constant such that we can represent the diagonal elements of \(D\) as \[O_{D}\left|i,z\right\rangle\mapsto\left|i,z\oplus D_{ii}\right\rangle\] to the desired level of precision in binary. Accordingly, we can simulate \(e^{iDt}\) for \(t=\widetilde{\mathcal{O}}(\xi^{-1})\) using \(\widetilde{\mathcal{O}}_{n}(1)\) queries to \(O_{D}\) and \(\widetilde{\mathcal{O}}_{n}(1)\) elementary operations [7], and we can implement \(O_{D}\) using \(\widetilde{\mathcal{O}}_{n}(n)\) gates. To summarize, the Gibbs sampler from [53] requires \(\widetilde{\mathcal{O}}(\sqrt{n}\xi^{-3})\) Hamiltonian simulation steps, each of which requires time \[\widetilde{\mathcal{O}}\left(\sqrt{s}^{1+o(1)}\xi^{o(1)}\exp\left(1.6\sqrt{ \log(\xi^{-1})}\right)\right).\] Hence, each iteration of Algorithm 1 requires a total of \[\widetilde{\mathcal{O}}_{n}\left(n^{1.5}\sqrt{s}^{1+o(1)}\xi^{-5+o(1)}\exp \left(1.6\sqrt{\log(\xi^{-1})}\right)\right)\] sparse-access oracle queries. Combining the above per-iteration cost with the iteration bound \(\mathcal{O}(\log(n)\xi^{-2})\) provided in Theorem 3, it follows that Algorithm 1 solves (5) up to additive error \(\xi\) with at most \[\widetilde{\mathcal{O}}_{n}\left(n^{1.5}\sqrt{s}^{1+o(1)}\xi^{-7+o(1)}\exp \left(1.6\sqrt{\log(\xi^{-1})}\right)\right)\] queries to the input oracle \(O_{C}\) and \(\widetilde{\mathcal{O}}_{n}\left(n^{2.5}\sqrt{s}^{1+o(1)}\xi^{-7+o(1)}\exp \left(1.6\sqrt{\log(\xi^{-1})}\right)\right)\) additional gates. Theorem 7 formalizes the complexity of Algorithm 3 in the quantum setting without access to QRAM. In our analysis, we employ the same Hamiltonian simulation subroutines and Gibbs sampler used in [10] to construct our separation oracles. **Theorem 7**.: _Let \(C\in\mathcal{S}^{n}\) with row sparsity \(s\) and \(\epsilon\in(0,1)\). 
Then, setting \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\) and fixing \(\xi=10^{-2}\), a quantum implementation of Algorithm 3 using the sparse-access input model solves (3) up to additive error \(\mathcal{O}(\epsilon)\) using_ \[\mathcal{O}\left(n^{1.5}s^{0.5+o(1)}\cdot\mathrm{polylog}\left(n,\|C\|_{F}, \frac{1}{\epsilon}\right)\right)\] _queries to the input oracle \(O_{C}\) and \(\mathcal{O}\left(n^{2.5}s^{0.5+o(1)}\cdot\mathrm{polylog}\left(n,\|C\|_{F}, \frac{1}{\epsilon}\right)\right)\) additional gates._ _The output of the algorithm is a collection of tuples \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\) such that_ \[\tilde{\rho}=\frac{1}{1+n\delta}\left[\left(\sum_{k=0}^{K}\frac{1}{\eta^{(k)} }Q^{(k)}\circ\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}} +y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)}{\operatorname {tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{( k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)\right)}\right)+\delta I \right],\] _is a \(2\zeta\)-precise solution to (5), where \(\delta\) is defined according to (18). The entries of \(\tilde{\rho}\) can be modified to construct a matrix \(\rho^{*}\) at trace distance \(\mathcal{O}\left(\frac{\epsilon}{n\|C\|_{F}}\right)\) of \(\tilde{\rho}\) in time \(\mathcal{O}(n^{2})\), such that \(n\rho^{*}\) is a feasible point of the SDO problem (3)._ Proof.: Given that \(C\) is an \(s\)-sparse matrix, we can load \(C\) in \(\mathcal{O}(ns)\) time. Similarly, for normalization purposes we classically compute \(\|C\|_{F}\), which requires \(\mathcal{O}(ns)\) arithmetic operations. In each iteration we use Algorithm 1 to solve (14); classical estimates of the diagonal elements of the refining solution, together with a classical estimate of the objective value it attains, are then used to update the solution and the data of the refining problem solved in the next iteration. Letting \(\mathcal{T}_{HU}^{\mathrm{sparse}}\) be the cost of using Algorithm 1 as an approximate SDO subroutine, as we saw in Proposition 14, Algorithm 1 solves (14) to additive error \(\xi\) using \[\mathcal{T}_{HU}^{\mathrm{sparse}}=\widetilde{\mathcal{O}}_{n}\left(n^{1.5} \sqrt{s}^{1+o(1)}\xi^{-7+o(1)}\exp\left(1.6\sqrt{\log(\xi^{-1})}\right)\right)\] queries to the oracle describing the problem data and \(\widetilde{\mathcal{O}}_{n}\left(n^{2.5}\sqrt{s}^{1+o(1)}\xi^{-7+o(1)}\exp \left(1.6\sqrt{\log(\xi^{-1})}\right)\right)\) additional gates. In the context of Algorithm 3, \(\xi\) is a fixed constant, so the cost of our oracle call to Algorithm 1 simplifies to \[\mathcal{T}_{HU}^{\mathrm{sparse}}=\widetilde{\mathcal{O}}_{n}\left(n^{1.5} \sqrt{s}^{1+o(1)}\right)\] queries to the oracle describing the problem data and \(\widetilde{\mathcal{O}}_{n}\left(n^{2.5}\sqrt{s}^{1+o(1)}\right)\) additional gates. Classically updating the objective value requires \(\mathcal{O}(1)\) arithmetic operations, while updating the vector \(p\), which stores a classical description of the diagonal elements of our solution, as \[p_{i}\gets p_{i}+\frac{Q_{ii}}{\eta^{(k)}}\tilde{p}_{i}^{(k)}\] requires \(\mathcal{O}(n)\) arithmetic operations. Again, \(\varepsilon\) and \(Q\) can each be updated using \(\mathcal{O}(n)\) arithmetic operations, as we only need to store the diagonal elements of \(Q\). This also implies that we can calculate \(Q\circ\frac{C}{\|C\|_{F}}\) in time \(\mathcal{O}(n)\), since only the element-wise products along the diagonal are non-trivial. 
When compared to loading and normalizing the data, or to our use of Algorithm 1 as a subroutine for solving (14), these intermediate computation steps are negligible and do not factor into the overall running time in \(\mathcal{O}\) notation. Factoring in the \(\mathcal{O}\left(\mathrm{polylog}\left(\frac{1}{\zeta}\right)\right)=\mathcal{ O}\left(\mathrm{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon}\right)\right)\) iteration bound from Corollary 5, it follows that a quantum implementation of Algorithm 3 requires at most \[\mathcal{O}\left(n^{1.5}s^{0.5+o(1)}\cdot\mathrm{polylog}\left(n,\|C\|_{F}, \frac{1}{\epsilon}\right)\right)\] queries to the input oracle \(O_{C}\) and \(\mathcal{O}\left(n^{2.5}s^{0.5+o(1)}\cdot\mathrm{polylog}\left(n,\|C\|_{F}, \frac{1}{\epsilon}\right)\right)\) additional gates. Just as in the proof of Theorem 5, applying Proposition 11 with our choice of \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\) implies that the above running time is sufficient to obtain a solution that can be used to solve (3) up to additive error \(\mathcal{O}(\epsilon)\), and the proof is complete. ## Appendix B Estimating trace inner products with the final solution Given that we do not explicitly report a classical description of the final solution \(\tilde{\rho}\) defined in equation (23), it may be of interest to understand how, for a user-specified matrix \(A\), one can compute the trace inner product \(\mathrm{tr}(A\tilde{\rho})\). We outline a procedure for doing so, using the state preparation pair description of the solution \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\), in Algorithm 4, and subsequently analyze its complexity. ``` Input: Access to an \(s\)-sparse matrix \(A\in\mathbb{R}^{n\times n}\) with \(\|A\|_{F}\leq 1\), state preparation pair description of the solution \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\), precision \(\theta\in(0,1)\), \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\) Output: A \(\theta\)-precise classical estimate of \(\operatorname{tr}(A\tilde{\rho})\) Initialize: \(a\gets 0\), \(y^{(K+1)}\leftarrow(0,0)^{\top}\), \(\eta^{(K+1)}\leftarrow\frac{1}{n\delta}\), \(Q^{(K+1)}\gets ee^{\top}\) for \(k=0,\dots,K+1\) do 1. Implement an \((\alpha,a,\zeta/2(K+2))\)-block-encoding of \(Q^{(k)}\circ A\) 2. Use the block-encoding of \(Q^{(k)}\circ A\) to implement a trace estimator for \[a^{(k)}=\operatorname{tr}\left[\left(Q^{(k)}\circ A\right)\left(\frac{\exp \left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname {diag}\left(d^{(k)}\right)\right]\right)}{\operatorname{tr}\left(\exp\left(- \left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag }\left(d^{(k)}\right)\right]\right)\right)}\right)\right]\] 3. Use \(\mathcal{O}\left(\frac{K}{\theta}\right)\) samples from the trace estimator to produce a \(\frac{\theta}{K+2}\)-precise estimate \(\tilde{a}^{(k)}\) of \(a^{(k)}\) 4. Update the estimate: \[a\gets a+\frac{1}{\eta^{(k)}}\tilde{a}^{(k)}\] end for Scale down the estimate to account for the spectrum shift: \[a\leftarrow\frac{1}{1+n\delta}a\] ``` **Theorem 8**.: _Let \(A\in\mathbb{R}^{n\times n}\), and \(\frac{C}{\|C\|_{F}}\in\mathcal{S}^{n}\) be stored in QRAM, \(\theta\in(0,1)\), and \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\) be a state preparation pair description of the solution obtained from running Algorithm 3 to final precision \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\). 
Suppose \(A\) is an \(s\)-sparse matrix with \(\|A\|_{F}\leq 1\), and assume classical access to \(A\) and \(\frac{C}{\|C\|_{F}}\in\mathcal{S}^{n}\). Then, Algorithm 4 outputs a \(\theta\)-precise estimate of_ \[\operatorname{tr}(A\tilde{\rho})=\frac{1}{1+n\delta}\operatorname{tr}\left(A \left[\left(\sum_{k=0}^{K}\frac{1}{\eta^{(k)}}Q^{(k)}\circ\frac{\exp\left(- \left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag }\left(d^{(k)}\right)\right]\right)}{\operatorname{tr}\left(\exp\left(-\left[ y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag} \left(d^{(k)}\right)\right]\right)\right)}\right)+\delta I\right]\right)\] _using at most_ \[\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{\sqrt{n }}{\theta}\right)\] _queries to the QRAM and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(ns\right)\) classical operations._ Proof.: We begin by establishing the correctness of Algorithm 4. First, note that following the proof of Proposition 12, we can simplify the expression of the final solution to \[\tilde{\rho}=\frac{1}{1+n\delta}\left[\sum_{k=0}^{K+1}\frac{1}{\eta^{(k)}}Q^{(k )}\circ\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^ {(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)}{\mathrm{tr}\left(\exp \left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)} \operatorname{diag}\left(d^{(k)}\right)\right]\right)\right)}\right]\] by setting \(y^{(K+1)}=(0,0)^{\top}\), \(\eta^{(K+1)}=\frac{1}{n\delta}\), and \(Q^{(K+1)}=ee^{\top}\). Then, by linearity of the trace and Lemma 1, one has: \[\operatorname{tr}(A\tilde{\rho}) =\operatorname{tr}\left(A\frac{1}{1+n\delta}\left[\sum_{k=0}^{K+1 }\frac{1}{\eta^{(k)}}Q^{(k)}\circ\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)} \circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right) \right]\right)}{\operatorname{tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{(k)} \circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right) \right]\right)\right)}\right]\right)\] \[=\frac{1}{1+n\delta}\sum_{k=0}^{K+1}\frac{1}{\eta^{(k)}} \operatorname{tr}\left(A\left[Q^{(k)}\circ\frac{\exp\left(-\left[y_{1}^{(k)}Q ^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)} \right)\right]\right)}{\operatorname{tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{( k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)} \right)\right]\right)\right)}\right]\right)\] \[=\frac{1}{1+n\delta}\sum_{k=0}^{K+1}\frac{1}{\eta^{(k)}} \operatorname{tr}\left(\left(Q^{(k)}\circ A\right)\frac{\exp\left(-\left[y_{1} ^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{( k)}\right)\right]\right)}{\operatorname{tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{( k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag}\left(d^{(k)} \right)\right]\right)\right)}\right).\] In other words, the output of Algorithm 4 is indeed an estimate of \(\operatorname{tr}(A\tilde{\rho})\). Next, we analyze the complexity of the procedure. If \(A\) is classically known, one can store \(Q^{(k)}\circ A\) in the QRAM using \(\mathcal{O}(ns)\) classical operations, as \(A\) is \(s\)-sparse. With \(Q\circ A\) stored in a QRAM data structure, one can apply Lemma 3 to implement a \((1,\log(n)+2,\zeta/2(K+2))\)-block-encoding of \(Q\circ A\) in time \(\widetilde{\mathcal{O}}_{\frac{nK}{\zeta}}(1)\) (as \(\|Q\circ A\|_{F}\leq\|A\|_{F}\leq 1\) for any \(Q\) defined according to (13)). 
As we saw in the proof of Proposition 12, with \(\frac{C}{\|C\|_{F}}\) stored in QRAM, one can implement the state \[\rho^{(k)}=\frac{\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y _{2}^{(k)}\operatorname{diag}\left(d^{(k)}\right)\right]\right)}{\operatorname{ tr}\left(\exp\left(-\left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)} \operatorname{diag}\left(d^{(k)}\right)\right]\right)\right)}\] using at most \[\widetilde{\mathcal{O}}_{n}\left(\sqrt{n}\right),\] accesses to the QRAM and \(\mathcal{O}(n)\) classical operations. Having prepared the state \(\rho^{(k)}\) and a \((1,\log(n)+2,\zeta/2(K+2))\)-block-encoding \(U_{k}\) of \(Q^{(k)}\circ A\), Lemma 8 asserts that one can implement a trace estimator for \[\operatorname{tr}\left[\left(Q^{(k)}\circ A\right)\rho^{(k)}\right]\] with bias at most \(\frac{\zeta}{K+2}\) using \(\widetilde{\mathcal{O}}(1)\) applications of \(U_{k}\) and \(U_{k}^{\dagger}\). Applying amplitude estimation using \(\mathcal{O}\left(\frac{K}{\theta}\right)=\widetilde{\mathcal{O}}_{n,\|C\|_{F}, \frac{1}{\epsilon}}\left(\frac{1}{\theta}\right)\) samples from the estimator, we obtain a \(\frac{\theta}{K+2}\)-precise classical estimate \(\tilde{a}^{(k)}\) of \(a^{(k)}\), as \(K=\mathcal{O}\left(\operatorname{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon} \right)\right)\). From here, we classically update \(a\) using \(\mathcal{O}(1)\) arithmetic operations. Therefore, each iteration of Algorithm 4 requires at most \[\widetilde{\mathcal{O}}_{n,\frac{K}{\zeta}}\left(\frac{\sqrt{n}}{\theta}\right)\] accesses to the QRAM and \(\mathcal{O}(ns)\) classical operations. Summing over \(K+2\) iterations implies a total of \[\widetilde{\mathcal{O}}_{n,\frac{K}{\zeta}}\left(K\left(\frac{\sqrt{n}}{\theta} \right)\right)=\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left( \frac{\sqrt{n}}{\theta}\right)\] accesses to the QRAM and \[\mathcal{O}\left(Kns\right)=\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{ \epsilon}}\left(ns\right)\] classical operations. The proof is complete. Note that if \(\|A\|_{F}>1\), because of the subnormalization needed to block-encode \(A\), we need to increase the precision of the estimation procedure: the cost increases by a factor proportional to \(\|A\|_{F}\). **Corollary 7**.: _Let \(A\in\mathbb{R}^{n\times n}\), \(\theta\in(0,1)\), and \(\{(\eta^{(k)},y^{(k)},q^{(k)},d^{(k)})\}_{k=0}^{K}\) be a state preparation pair description of the solution obtained from running Algorithm 3 to final precision \(\zeta=\left(\frac{\epsilon}{n\|C\|_{F}}\right)^{4}\). Suppose \(A\) is an \(s\)-sparse matrix with \(\|A\|_{F}\leq 1\), and assume sparse oracle access to \(A\) and \(\frac{C}{\|C\|_{F}}\in\mathcal{S}^{n}\). 
Then, Algorithm 4 outputs a \(\theta\)-precise estimate of_ \[\operatorname{tr}(A\tilde{\rho})=\frac{1}{1+n\delta}\operatorname{tr}\left(A \left[\left(\sum_{k=0}^{K}\frac{1}{\eta^{(k)}}Q^{(k)}\circ\frac{\exp\left(- \left[y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag }\left(d^{(k)}\right)\right]\right)}{\operatorname{tr}\left(\exp\left(-\left[ y_{1}^{(k)}Q^{(k)}\circ\frac{C}{\|C\|_{F}}+y_{2}^{(k)}\operatorname{diag} \left(d^{(k)}\right)\right]\right)\right)}\right)+\delta I\right]\right)\] _using at most_ \[\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{n^{2.5}s^ {2}}{\theta}\right)\] _queries to \(O_{A}\), \(O_{C}\), and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{n^{3.5}s^ {2}}{\theta}\right)\) additional gates._ Proof.: Provided classical access to \(A\), we use Lemma 4 with \(s_{r}=s_{c}=s\) to construct an \((s,\log(n)+3,\theta/n)\)-block-encoding of \(A\) with two uses of \(O_{A}\) (an oracle describing the elements of \(A\) in binary), additionally using \(\widetilde{\mathcal{O}}_{n}\left(1\right)\) one- and two-qubit gates. Likewise, with access to the oracle \(O_{C}\) describing the elements of \(C\|C\|_{F}^{-1}\), one can construct an \((s,\log(n)+3,\theta/n)\)-block-encoding of \(C\|C\|_{F}^{-1}\) with two uses of \(O_{C}\), additionally using \(\widetilde{\mathcal{O}}_{n}\left(1\right)\) one- and two-qubit gates. Note that without access to QRAM, we must compute the Hadamard products by taking the Hadamard products of block-encodings, which causes the subnormalization factor for the Hadamard product \(Q^{(k)}\circ C\|C\|_{F}^{-1}\) to be \(ns\), as \(Q^{(k)}\) may be fully dense and \(C\) is \(s\)-sparse. It follows that preparing one copy of each Gibbs state requires \[\widetilde{\mathcal{O}}_{n}\left(\sqrt{n}(ns)\right)=\widetilde{\mathcal{O}} _{n}\left(n^{1.5}s\right)\] accesses to block-encodings of \(Q^{(k)}\circ C\|C\|_{F}^{-1}\) and \(D\), which each require an additional \(\widetilde{\mathcal{O}}_{n}(n)\) gates (to construct sparse-access oracles for \(Q^{(k)}\) and \(D\)). Similarly, the subnormalization factor for a block-encoding \(U_{k}\) of \(Q^{(k)}\circ A\) will be \(ns\). Having prepared the state \(\rho^{(k)}\) and a block-encoding of \(Q^{(k)}\circ A\), Lemma 8 asserts that one can implement a trace estimator for \[\operatorname{tr}\left[\left(Q^{(k)}\circ A\right)\rho^{(k)}\right]\] with bias at most \(\frac{\zeta}{K+2}\) using \(\widetilde{\mathcal{O}}(ns)\) applications of \(U_{k}\) and \(U_{k}^{\dagger}\). We apply amplitude estimation using \(\mathcal{O}\left(\frac{K}{\theta}\right)=\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{1}{\theta}\right)\) samples from the estimator to obtain a \(\frac{\theta}{K+2}\)-precise classical estimate \(\tilde{a}^{(k)}\) of \(a^{(k)}\), as \(K=\mathcal{O}\left(\operatorname{polylog}\left(n,\|C\|_{F},\frac{1}{\epsilon} \right)\right)\). Just as in the QRAM setting, classically updating \(a\) requires \(\mathcal{O}(1)\) arithmetic operations. Therefore, without access to QRAM, each iteration of Algorithm 4 requires at most \[\widetilde{\mathcal{O}}_{n,\frac{K}{\zeta}}\left(\frac{n^{2.5}s^{2}}{\theta} \right)=\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{n^ {2.5}s^{2}}{\theta}\right)\] applications of block-encodings for \(Q^{(k)}\circ C\|C\|_{F}^{-1}\), \(D^{(k)}\) and \(Q^{(k)}\circ A\), and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{n^{3.5}s^ {2}}{\theta}\right)\) additional gates. 
This corresponds to \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{n^{2.5}s^ {2}}{\theta}\right)\) queries to \(O_{A}\) and \(O_{C}\) in each iteration, and \(\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}\left(\frac{n^{3.5}s^ {2}}{\theta}\right)\) additional gates. Summing over the \(K+2=\widetilde{\mathcal{O}}_{n,\|C\|_{F},\frac{1}{\epsilon}}(1)\) iterations yields the stated complexity.
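To validate these estimates at small scales, the quantity computed by Algorithm 4 also admits a dense classical reference, under the same conventions as the reconstruction sketch at the end of Section 5 (our own illustrative names; the masks \(Q^{(k)}\) are passed as explicit matrices rather than rebuilt from \(q^{(k)}\) via (13), and exact arithmetic replaces Gibbs sampling and amplitude estimation).

```
import numpy as np
from scipy.linalg import expm

def trace_inner_product(A, C, tuples, delta):
    """Exact classical evaluation of tr(A rho_tilde) from Algorithm 3's output.

    Mirrors the loop of Algorithm 4: each per-term quantity
    a^(k) = tr[(Q^(k) o A) rho^(k)] is computed exactly instead of being
    estimated quantumly. `tuples` = [(eta_k, y_k, Q_k, d_k)] for k = 0, ..., K,
    with each Q_k given as an explicit matrix.
    """
    n = C.shape[0]
    C_hat = C / np.linalg.norm(C, "fro")
    a = delta * np.trace(A)                         # contribution of the delta*I shift
    for eta, y, Q, d in tuples:
        H = y[0] * (Q * C_hat) + y[1] * np.diag(d)
        G = expm(-H)
        rho_k = G / np.trace(G)                     # Gibbs state rho^(k)
        a += np.trace((Q * A) @ rho_k) / eta        # tr[(Q o A) rho^(k)] / eta
    return a / (1 + n * delta)                      # account for spectrum shift
```

The identity \(\operatorname{tr}[(Q^{(k)}\circ A)\rho^{(k)}]=\operatorname{tr}[A(Q^{(k)}\circ\rho^{(k)})]\) used in the proof of Theorem 8 makes this accumulation equivalent to evaluating \(\operatorname{tr}(A\tilde{\rho})\) directly on the reconstructed solution.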
2308.08106
Efficient relaxation scheme for the SIR and related compartmental models
In this paper, we introduce a novel numerical approach for approximating the SIR model in epidemiology. Our method enhances the existing linearization procedure by incorporating a suitable relaxation term to tackle the transcendental equation of nonlinear type. Developed within the continuous framework, our relaxation method is explicit and easy to implement, relying on a sequence of linear differential equations. This approach yields accurate approximations in both discrete and analytical forms. Through rigorous analysis, we prove that, with an appropriate choice of the relaxation parameter, our numerical scheme is non-negativity-preserving and globally strongly convergent towards the true solution. These theoretical findings have not received sufficient attention in various existing SIR solvers. We also extend the applicability of our relaxation method to handle some variations of the traditional SIR model. Finally, we present numerical examples using simulated data to demonstrate the effectiveness of our proposed method.
Vo Anh Khoa, Pham Minh Quan, Ja'Niyah Allen, Kbenesh W. Blayneh
2023-08-16T02:37:00Z
http://arxiv.org/abs/2308.08106v1
# Efficient relaxation scheme for the SIR and related compartmental models ###### Abstract In this paper, we introduce a novel numerical approach for approximating the SIR model in epidemiology. Our method enhances the existing linearization procedure by incorporating a suitable relaxation term to tackle the transcendental equation of nonlinear type. Developed within the continuous framework, our relaxation method is explicit and easy to implement, relying on a sequence of linear differential equations. This approach yields accurate approximations in both discrete and analytical forms. Through rigorous analysis, we prove that, with an appropriate choice of the relaxation parameter, our numerical scheme is non-negativity-preserving and globally strongly convergent towards the true solution. These theoretical findings have not received sufficient attention in various existing SIR solvers. We also extend the applicability of our relaxation method to handle some variations of the traditional SIR model. Finally, we present numerical examples using simulated data to demonstrate the effectiveness of our proposed method. SIR models, relaxation, global convergence, infectious diseases ## I Introduction In recent years, the world has witnessed the devastating impact of infectious diseases on a global scale. From the rapid spread of COVID-19 to the resurgence of long-standing ailments like measles and influenza, understanding the dynamics of epidemics has become crucial for protecting public health. To gain a deeper understanding of these intricate phenomena, scientists have increasingly relied on mathematical modeling as a powerful tool for unraveling the complex mechanisms governing disease transmission. Among the various models, the Susceptible-Infectious-Recovered (SIR) model has emerged as a fundamental framework, providing valuable insights into epidemic dynamics; cf. e.g. [7; 11; 15] for its applications to modeling influenza, Ebola, and COVID-19. The SIR model, initially proposed in the early 20th century by Kermack and McKendrick [10], has since been refined and adapted to address contemporary challenges. This model effectively captures the fundamental dynamics of epidemics by dividing a population into three distinct compartments: susceptible individuals, infectious individuals, and removed individuals. By considering the interactions between these compartments, the SIR model takes into account various factors such as transmission rates, removal rates, and the depletion of susceptible individuals over time. The "removals" in this context encompass individuals who are isolated, deceased, or have recovered and gained immunity. Additionally, the model assumes that individuals who have gained immunity or recovered enter a new category that is not susceptible to the disease. Consider a homogeneously mixed group of individuals of total size \(N\gg 1\). Let \(t\in\left(0,T\right)\) be the time variable with \(T>0\) being the final time of observations. 
We take into account the following functions: \[S\left(t\right) =\text{number of susceptibles at time }t,\] \[I\left(t\right) =\text{number of infectives at time }t,\] \[R\left(t\right) =\text{number of removals at time }t.\] Initiated, again, by Kermack and McKendrick in 1927 [10], the evolutionary dynamics of these individuals can be modeled through the following system of ordinary differential equations (ODEs): \[\begin{cases}I^{\prime}\left(t\right)=\beta S\left(t\right)I\left(t\right)- \gamma I\left(t\right),\\ S^{\prime}\left(t\right)=-\beta S\left(t\right)I\left(t\right),\\ R^{\prime}\left(t\right)=\gamma I\left(t\right).\end{cases} \tag{I.1}\] Here, the following assumptions are considered. **(A1)** The total population size is always \(N>0\), meaning that \(S\left(t\right)+I\left(t\right)+R\left(t\right)=N\) for all \(t\). **(A2)** We know the infection rate \(\beta>0\) from the infection process, and the removal rate \(\gamma>0\) from the removal process. **(A3)** The initial conditions are \(S\left(0\right)=n>1\), \(I\left(0\right)=a\geq 1\) and \(R\left(0\right)=0\). The explicit solution of the SIR model, despite its basic structure, is widely known to be unattainable due to the exponential nonlinearity of the transcendental equation governing removals. Consequently, numerous numerical methods have been proposed to address this fundamental model. The Taylor expansion method, initially employed by Kermack and McKendrick in 1927, approximates the exponential term, leading to an approximate analytic solution. This technique was utilized to simulate the plague epidemic of 1905-1906 in Bombay, India, and has since become a tutorial resource for students at both undergraduate and graduate levels; cf. [4]. Over the years, many different methods have been studied to solve the SIR model. Piyawong et al. [20] introduced an unconditionally convergent scheme that captures the long-term behavior of the SIR model, offering improved numerical stability compared to conventional explicit finite difference methods. Mickens [18] and, recently, Conte et al. [6] proposed and analyzed stable nonstandard finite difference methods that effectively preserve the positivity of the SIR solutions. Semi-analytical methods, such as the Adomian decomposition approach [17], have also been proposed, alongside other methods cited therein, to derive approximate analytical solutions. Additionally, the solutions of the SIR model can be expressed in terms of the Lambert function, as demonstrated in [2; 21]. Furthermore, an alternative approach involving parametric analysis has been employed to obtain analytical solutions; see e.g. [9]. While most of these approaches are local approximations, a recent global semi-analytical approach utilizing the Padé approximation has been presented in [5]. While the above-mentioned approximation methods have demonstrated numerical effectiveness, their convergence theories have received limited investigation. Some publications discussing discrete methods focus solely on stability analysis, leaving the global convergence analysis unexplored. In this study, we propose a novel numerical approach that guarantees global convergence. Our approach employs a relaxation procedure, derived from the conventional linearization technique, to approximate the SIR model in a continuous setting. Unlike the classical version, our modified procedure introduces a relaxation term. 
In the existing literature on partial differential equations (e.g., [13; 14; 19; 25] and references cited therein), this relaxation term mitigates the local convergence issues encountered by conventional linearization techniques such as Newton's method. Consequently, it permits choosing an arbitrary starting point while guaranteeing global convergence. Within our specific context, the relaxation term facilitates capturing the non-negativity of solutions while preserving global convergence. In addition, by relying only on the dependence of the relaxation constant on the removal rate, our approach accurately captures the long-term behavior of the system. Besides, our explicit and easy-to-implement approximate scheme is governed by a sequence of linear differential equations. The desired approximate solution can be obtained discretely or analytically based on individual preference. Our paper is four-fold. In Section II, we begin by revisiting the transcendental equation for removals and discussing the essential properties of the SIR model. The latter part of this section focuses on introducing the proposed relaxation scheme and establishing its theoretical foundations. We prove that this scheme is globally strongly convergent and preserves non-negativity. Additionally, we derive an error estimate in \(C^{0}\). In Section III, we extend the applicability of our proposed scheme to some variants of the SIR model. Then, to validate the effectiveness of our method, numerical examples are presented in Section IV. Finally, we provide some concluding remarks in Section V, and the appendix contains the proofs of our central theorems. ## II Relaxation procedure ### Transcendental equation revisited It is well known that the SIR model can be solved from a transcendental differential equation. Here, we revisit how to get such an equation to complete our analysis of the proposed scheme below. Let \(\mu=\beta/\gamma\) be the reciprocal relative removal rate. By the second and third equations of system (I.1), we have \[\frac{dS}{dR}=-\mu S,\] which leads to \(\ln\left(S\right)=-\mu R+c\). Therefore, we arrive at \[S\left(t\right)=e^{c-\mu R\left(t\right)}.\] (II.1) To find \(c\), we set \(t=0\) in (II.1) and use (A3). Indeed, by \(n=S\left(0\right)=e^{c-\mu R\left(0\right)}=e^{c}\), we get \(c=\ln\left(n\right)\) and thus, deduce that \[S\left(t\right)=ne^{-\mu R\left(t\right)}.\] (II.2) Combining this and (A1), we derive the following nonlinear differential equation for \(R\left(t\right)\): \[R^{\prime}\left(t\right)=\gamma\left(N-ne^{-\mu R\left(t\right)}-R\left(t \right)\right).\] (II.3) _Remark_.: Throughout this paper, the notation \(C^{m}\) is used to denote the space of functions with \(m\) continuous derivatives. In particular, \(C^{0}\) is the space of continuous functions on \(\left[0,T\right]\) equipped with the standard max norm. **Theorem 1**.: _The differential equation (II.3) admits a unique \(C^{1}\) non-negative solution \(R\left(t\right)\). Moreover, the existence and uniqueness in \(C^{1}\) of positive \(S\left(t\right)\) and \(I\left(t\right)\) to the SIR system (I.1) follow._ Proof.: The positivity of \(S\left(t\right)\) and \(I\left(t\right)\) is guaranteed by the second and first equations of system (I.1), respectively, i.e. \[S\left(t\right) =n\exp\left(-\beta\int_{0}^{t}I\left(s\right)ds\right),\] \[I\left(t\right) =a\exp\left(\int_{0}^{t}\left(\beta S\left(s\right)-\gamma \right)ds\right).\] Then, by the third equation of (I.1), the non-negativity of \(R\left(t\right)\) follows. 
By [24, Theorem 3.2], since the right hand side of (II.3) is globally Lipschitzian, the equation admits a unique local \(C^{1}\) solution. Moreover, in view of the fact that \(\left|\gamma\left(N-ne^{-\mu R\left(t\right)}-R\left(t\right)\right)\right| \leq\gamma\left|R\left(t\right)\right|+\gamma\left(N+n\right)\) for any \(t\geq 0\), the obtained solution is global as a by-product of [24, Theorem 3.9]. Observe that the right hand side of (II.2) is decreasing in the argument of \(R\left(t\right)\). Thereby, the existence and uniqueness of \(S\left(t\right)\) follow. We also get the existence and uniqueness of \(I\left(t\right)\) in view of the fact that the total population is conserved; cf. (A1). Hence, we complete the proof of the theorem. Observe that if one can approximate \(R\left(t\right)\) well in (II.3), then \(S\left(t\right)\) and \(I\left(t\right)\) will be well approximated via (II.2) and (I.1), respectively. Define \(g\left(r\right)=\gamma ne^{-\mu r}+\gamma r\) for \(r\in\mathbb{R}\). We can rewrite (II.3) as \[R^{\prime}\left(t\right)=\gamma N-g\left(R\left(t\right)\right).\] _Remark_.: By the first and second equations of the SIR system (I.1), we get \[\frac{dI}{dS}=-1+\frac{1}{\mu S}.\] Therefore, using \(S\left(0\right)=n\) and \(I\left(0\right)=a\), we find that \(I\left(t\right)-a=-S\left(t\right)+n+\frac{1}{\mu}\ln\left(S\left(t\right) \right)-\frac{1}{\mu}\ln\left(n\right)\). Equivalently, we deduce that \[I\left(t\right)=\frac{1}{\mu}\ln\left(S\left(t\right)\right)-S\left(t\right)+ a+n-\frac{1}{\mu}\ln\left(n\right).\] Since the function \(f\left(S\right)=\frac{1}{\mu}\ln\left(S\right)-S\) for \(S>0\) attains its maximum at \(S=\mu^{-1}\), we can estimate the so-called amplitude, which is the maximum value of \(I\), in the following manner: \[I_{\max}=-\frac{1}{\mu}\ln\left(\mu\right)-\frac{1}{\mu}+a+n-\frac{1}{\mu}\ln \left(n\right).\] (II.4) ### Derivation and analysis of the numerical scheme Let \(\left\{R_{k}\right\}_{k=0}^{\infty}\) be a time-dependent sequence satisfying, for \(k=1,2,3,\ldots\), \[R_{k}^{\prime}\left(t\right)+MR_{k}\left(t\right)=\gamma N-g\left(R_{k-1}\left( t\right)\right)+MR_{k-1}\left(t\right).\] (II.5) The sequence \(\left\{R_{k}\right\}_{k=0}^{\infty}\) aims to approximate \(R\left(t\right)\) in (II.3) in the sense that \(R_{k}\) will be close to \(R\) as \(k\rightarrow\infty\), uniformly in time. The accompanying initial condition for equation (II.5) is \(R_{k}\left(0\right)=0\) for any \(k\geq 0\). Since our approximate model performs as an iterative scheme, we need a starting point, \(R_{0}\left(t\right)\). Here, we choose \(R_{0}\left(t\right)=0\) based on the initial condition of \(R\left(t\right)\) (cf. (A3)), which is the best available information about the sought \(R\left(t\right)\). Also, in (II.5), we introduce a \(k\)-independent constant \(M\geq\gamma>0\) for the so-called relaxation process. This relaxation term plays a very important role: it allows us to prove the non-negativity of the relaxation scheme, a feature that many numerical approaches, including the regular linearization method, either lack or cannot prove. By the regular linearization method, we mean the scheme \(\left\{R_{k}\right\}_{k=0}^{\infty}\) in (II.5) with either \(M=0\) or with only the exponential term in \(g\left(r\right)\) linearized. Formulated below is the theorem showing that the scheme \(\left\{R_{k}\right\}_{k=0}^{\infty}\) preserves the non-negativity of the removals over the relaxation process. 
**Theorem 2**.: _The sequence \(\left\{R_{k}\right\}_{k=0}^{\infty}\) is a non-negativity-preserving scheme. Moreover, it holds true that for all \(M\geq\gamma\),_ \[0\leq g\left(R_{k}\right)\leq\gamma n+MR_{k}\quad\text{for any $k\geq 0$ and $t\geq 0$.}\] Proof.: We prove this theorem by induction. The statement holds true for \(k=1\). Indeed, since \(R_{0}\left(t\right)=0\), the equation for \(R_{1}\left(t\right)\) reads as \[R_{1}^{\prime}\left(t\right)+MR_{1}\left(t\right)=\gamma N-\gamma n\geq 0.\] Therefore, we get \[R_{1}\left(t\right) =\frac{\gamma\left(N-n\right)}{M}\left(1-e^{-Mt}\right)\geq 0,\] \[g\left(R_{1}\right) =\gamma ne^{-\mu R_{1}\left(t\right)}+\gamma R_{1}\left(t\right) \leq\gamma n+MR_{1}\left(t\right).\] Next, assume that the statement holds true for \(k=k_{0}\). We prove that it also holds true for \(k=k_{0}+1\). By (II.5), we have \[R_{k_{0}+1}^{\prime}\left(t\right)+MR_{k_{0}+1}\left(t\right)=\gamma N-g\left(R_{k_{0}}\left(t\right)\right)+MR_{k_{0}}\left(t\right)\geq 0.\] Thus, we obtain \(R_{k_{0}+1}\left(t\right)\geq e^{-Mt}R_{k_{0}+1}\left(0\right)\geq 0\). As a by-product, we can estimate that \[g\left(R_{k_{0}+1}\right)=\gamma ne^{-\mu R_{k_{0}+1}\left(t\right)}+\gamma R_{k_{0}+1}\left(t\right)\leq\gamma n+MR_{k_{0}+1}\left(t\right).\] Hence, we complete the proof of the theorem. In the following, we formulate the strong convergence result for the scheme \(\left\{R_{k}\right\}_{k=0}^{\infty}\). For ease of presentation, the proof of this result is deliberately placed in the Appendix. It is worth mentioning that the proof of the strong convergence of the scheme relies heavily on a sharp estimate of \(g^{\prime}\), which can only be obtained with the aid of the non-negativity of the scheme. **Theorem 3**.: _The sequence \(\left\{R_{k}\right\}_{k=0}^{\infty}\) defined in (II.5) is strongly convergent in \(C^{0}\) toward the true solution \(R\left(t\right)\) to Equation (II.3). In particular, we can find a number \(C=C\left(T,M,\gamma,n,\mu\right)>0\) independent of \(k\) such that the following error estimate holds true:_ \[\max_{0\leq t\leq T}\left|R_{k}\left(t\right)-R\left(t\right)\right|^{2}\leq \frac{C^{k}}{k!}\max_{0\leq t\leq T}\left|R\left(t\right)\right|^{2}.\] Our theoretical finding below shows that when \(n\mu<1\), the scheme \(\left\{R_{k}\left(t\right)\right\}_{k=0}^{\infty}\) converges faster than in the case \(n\mu\geq 1\). The proof of the following corollary is also found in the Appendix. **Corollary 4**.: _Assume that \(n\mu<1\). We can find a constant \(c\in\left(0,1\right)\) independent of \(k\) such that the following error estimate holds true:_ \[\max_{0\leq t\leq T}\left|R_{k}\left(t\right)-R\left(t\right)\right|\leq c^{k }\max_{0\leq t\leq T}\left|R\left(t\right)\right|.\] (II.6) As readily expected, for every step \(k\), we obtain a non-homogeneous linear differential equation that can be effectively approximated in the discrete framework. 
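Before turning to the discrete analysis, the following minimal sketch illustrates the relaxation iteration (II.5) with an off-the-shelf integrator. The function and variable names are ours and purely illustrative; each sweep freezes the previous iterate on a uniform time grid, which adds an interpolation error on top of the scheme itself.

```
import numpy as np
from scipy.integrate import solve_ivp

def relaxation_sir(beta, gamma, n, a, T, M=None, n_iter=20, n_pts=200):
    """Approximate R(t) of (II.3) by the relaxation scheme (II.5).

    Each iterate solves the *linear* ODE
        R_k' + M R_k = gamma*N - g(R_{k-1}) + M R_{k-1},   R_k(0) = 0,
    with g(r) = gamma*n*exp(-mu*r) + gamma*r and starting point R_0 = 0.
    """
    N, mu = n + a, beta / gamma
    M = gamma if M is None else M                  # any M >= gamma is admissible
    t = np.linspace(0.0, T, n_pts)
    g = lambda r: gamma * n * np.exp(-mu * r) + gamma * r
    R_prev = np.zeros_like(t)                      # starting point R_0(t) = 0
    for _ in range(n_iter):
        src = gamma * N - g(R_prev) + M * R_prev   # frozen right-hand side
        rhs = lambda s, R: np.interp(s, t, src) - M * R
        sol = solve_ivp(rhs, (0.0, T), [0.0], t_eval=t, rtol=1e-8, atol=1e-10)
        R_prev = sol.y[0]
    R = R_prev
    S = n * np.exp(-mu * R)                        # recover S from (II.2)
    I = N - S - R                                  # recover I from (A1)
    return t, S, I, R
```

Each sweep costs only the solution of a linear ODE, and the restriction \(\gamma\leq M\leq\gamma+1/T\) used in the analysis below keeps the iterates uniformly bounded in \(k\).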
Mimicking the proof of Theorem 3 and using Theorem 2, we have \[R_{k}^{\prime}\left(t\right)+MR_{k}\left(t\right)\leq\gamma N-\gamma n+\left( M-\gamma\right)R_{k-1}\left(t\right),\] (II.7) leading to \[R_{k}\left(t\right) \leq e^{-Mt}\int_{0}^{t}e^{Ms}\left(\gamma a+\left(M-\gamma \right)R_{k-1}\left(s\right)\right)ds \leq t\gamma a+\left(M-\gamma\right)\int_{0}^{t}R_{k-1}\left(s \right)ds.\] By induction and by the choice \(R_{0}\left(t\right)=0\), we can show that \[\left|R_{k}\left(t\right)\right|\leq\gamma a\sum_{i=1}^{k}\left(M-\gamma \right)^{i-1}\frac{t^{i}}{i!}.\] Therefore, if we choose \(\gamma\leq M\leq\gamma+\frac{1}{T}\), then \(\left|R_{k}\left(t\right)\right|\leq T\gamma a\left(e-1\right)\). Note that this bound is independent of \(k\). Thus, by Theorem 1, we get \(R_{k}\in C^{1}\) for any \(k\) with, cf. (II.7), Theorem 2 and the choice \(M\leq\gamma+\frac{1}{T}\), \[\left|R_{k}^{\prime}\left(t\right)\right|\leq\gamma a+\left(M-\gamma\right)T \gamma a\left(e-1\right)\leq\gamma ae.\] Furthermore, by differentiating both sides of (II.5) with respect to time, we can demonstrate that \(R_{k}\in C^{2}\) for any \(k\). Indeed, \[R_{k}^{\prime\prime}\left(t\right)+MR_{k}^{\prime}\left(t\right)=\left(M- \gamma+\gamma n\mu e^{-\mu R_{k-1}\left(t\right)}\right)R_{k-1}^{\prime}\left( t\right)\leq\left(M-\gamma+\gamma n\mu\right)\left|R_{k-1}^{\prime}\left(t \right)\right|.\] This yields that \(\left|R_{k}^{\prime\prime}\left(t\right)\right|\leq 2M-\gamma+\gamma n\mu\), which is a \(k\)-independent upper bound. This \(k\)-independent bound guarantees, via the standard convergence theory of the Euler method, a global error that is uniform in \(k\). For completeness, we present below the discrete solution to our proposed scheme. Consider the time increment \(\Delta t=T/P\), where \(P\geq 2\) is a fixed integer. Then, we set the mesh-points in time as \(t_{p}=p\Delta t\) for \(0\leq p\leq P\). We seek \(R_{k}^{p}\approx R_{k}\left(t_{p}\right)\) as a discrete solution to equation (II.5). By the standard Euler method, \(R_{k}^{p}\) is determined by the following equation: \[R_{k}^{p}+\Delta tMR_{k}^{p}=R_{k}^{p-1}+\Delta t\left(\gamma N-g\left(R_{k-1 }^{p}\right)+MR_{k-1}^{p}\right).\] (II.8) In this way, the global error of the Euler method is attained in the sense that for every \(k\), there exists a constant \(\tilde{C}>0\) such that \[\max_{0\leq p\leq P}\left|R_{k}^{p}-R_{k}\left(t_{p}\right)\right|\leq\tilde{C }\Delta t.\] We accentuate that, by the above analysis of \(R_{k}\) (i.e. \(R_{k}\in C^{2}\) for any \(k\) with \(k\)-independent bounds), the constant \(\tilde{C}\) is independent of \(k\). Thus, by Theorem 3, we can estimate the distance between the discrete (approximate) solution \(R_{k}^{p}\) and the true solution \(R\) at each mesh-point: \[\max_{0\leq p\leq P}\left|R_{k}^{p}-R\left(t_{p}\right)\right|\leq\tilde{C} \Delta t+\sqrt{\frac{C^{k}}{k!}}\max_{0\leq t\leq T}\left|R\left(t\right) \right|.\] (II.9) _Remark_.: We have the following remarks: * After obtaining the approximator \(R_{k}^{p}\) for \(R\left(t_{p}\right)\), we can compute \(S\left(t_{p}\right)\) using (II.2). Then, the approximate solution for \(I\left(t_{p}\right)\) can be determined using (A1), specifically \(I\left(t_{p}\right)=N-S\left(t_{p}\right)-R\left(t_{p}\right)\). * Both \(\tilde{C}\) and \(C\) in (II.9) are independent of \(P\) and \(k\). As a by-product, our discrete relaxation scheme \(\left\{R_{k}^{p}\right\}_{k=0}^{\infty}\), as defined in (II.8), is globally strongly convergent in \(C^{0}\). 
Similar to the proof of Theorem 2, we can demonstrate that \(R_{k}^{p}\) is non-negativity-preserving. It is important to note that many previous approximations, such as the series expansion methods [5; 17], the parametrization methods [9; 22] and the finite difference methods [20; 23], did not adequately address the preservation of non-negativity/positivity. Furthermore, certain recent positive numerical schemes fail to provide an error bound, as observed in the publications [6; 12]. * While we have initiated our analysis with the Euler method, we can further employ higher-order numerical methods to produce a faster convergent solver for the linear differential equation for \(R_{k}\). Among these, the Runge-Kutta method stands out as the most favorable choice, offering a convergence rate of order \(q\geq 2\). Building upon the analysis of the Euler method above, we can prove that all derivatives of the right hand side of (II.5) exist up to order \(q\) and \(R_{k}\in C^{q}\) for any \(k\). Therefore, we can show that \(R_{k}^{p}\) globally converges to \(R_{k}\) with a rate of \(\mathcal{O}\left(\Delta t^{q}\right)\); cf. e.g. [8, Theorem 3.4] for the existing theory on the global convergence of the Runge-Kutta method. Note here that \(\mathcal{O}\left(x\right)\) is the conventional Landau symbol. * Similar to (II.9), the convergence of the Runge-Kutta method remains unaffected by \(k\). Nevertheless, it is important to emphasize that this convergence is heavily contingent upon the upper bounds of the involved derivatives. Considering the boundedness of \(R_{k},R_{k}^{\prime}\) and \(R_{k}^{\prime\prime}\) established above, it becomes evident that these bounds tend to increase as the order rises. Consequently, it is crucial to employ only variants of the Runge-Kutta method whose order is chosen appropriately. This perspective holds true when applying the Runge-Kutta method directly to the differential equation (II.3). ## III Extensions to other SIR models In this section, we briefly discuss the applicability of the relaxation method to other population models of SIR type. In particular, we show below how the proposed approach can be adapted to approximate the SIRD (Susceptible-Infectious-Recovered-Deceased) and SIRX models; cf. [1; 3; 16] for an overview of these models. ### SIRD model The SIRD model extends the SIR model by distinguishing between recovered and deceased individuals. In this framework, the removals in the SIR model no longer encompass the number of infected individuals who have passed away. To account for mortality, a mortality rate \(\sigma>0\) is introduced, representing the rate at which infected individuals succumb to death. Consequently, the death rate per unit of time is calculated as the product of the mortality rate and the number of infected individuals. Additionally, as the number of deceased individuals is excluded from the removals, the rate of change of infections over time is adjusted to reflect the loss caused by mortality. Mathematically, the SIRD model reads as \[\begin{cases}I^{\prime}\left(t\right)=\beta S\left(t\right)I\left(t\right)- \left(\gamma+\sigma\right)I\left(t\right),\\ S^{\prime}\left(t\right)=-\beta S\left(t\right)I\left(t\right),\\ R^{\prime}\left(t\right)=\gamma I\left(t\right),\\ D^{\prime}\left(t\right)=\sigma I\left(t\right),\end{cases}\] (III.1) where \(D\left(t\right)\) stands for the number of deceased people (after infection) at time \(t\). In (III.1), we make use of the following assumptions. 
**(B1)** The total population size is always conserved with \(N>0\), meaning that \(S\left(t\right)+I\left(t\right)+R\left(t\right)+D\left(t\right)=N\) for all \(t\). **(B2)** We know the infection rate \(\beta>0\) from the infection process, the removal rate \(\gamma>0\) from the removal process, and the death rate \(\sigma>0\) from the mortality process. **(B3)** The initial conditions are \(S\left(0\right)=n>1\), \(I\left(0\right)=a\geq 1\), \(R\left(0\right)=0\) and \(D\left(0\right)=0\). _Remark_.: By the first and second equations of the SIRD system (III.1), we see that \[\frac{dI}{dS}=-1+\frac{\gamma+\sigma}{\beta S}.\] Similar to the classic SIR model (I.1), we can thus formulate the so-called amplitude in the following fashion: \[I_{\max}=\frac{\gamma+\sigma}{\beta}\ln\left(\frac{\gamma+\sigma}{\beta}\right)-\frac{\gamma+\sigma}{\beta}+a+n-\frac{\gamma+\sigma}{\beta}\ln\left(n\right)\] (III.2) when \(S\) reaches \(\frac{\gamma+\sigma}{\beta}\). _Remark_.: The SIRD model (III.1) resembles the SIRX model without containment rate. In the SIRX model, an additional class called "\(X\)" was introduced to account for the impact of social or individual behavioral changes during quarantine. Individuals in this class, referred to as symptomatic quarantined individuals, no longer contribute to the transmission of the infection. Instead of \(\sigma\), the SIRX model without containment rate considers \(\kappa>0\), which represents the rate at which infected individuals are removed due to quarantine measures. The SIRX model with the containment rate is beyond the scope of our paper since the associated transcendental system does not take the same form as (II.5). Indeed, the transcendental system governing the full SIRX model is an integro-differential equation. Now, we detail the transcendental equation for \(R\left(t\right)\) and the application of the relaxation scheme. From the second and third equations of system (III.1), we deduce that \[S\left(t\right)=ne^{-\mu R\left(t\right)},\] (III.3) where we have recalled the reciprocal relative removal constant \(\mu=\beta/\gamma\). In the same way, the third and last equations of system (III.1) give \[D\left(t\right)=\frac{\sigma}{\gamma}R\left(t\right)\] (III.4) by virtue of \(R\left(0\right)=D\left(0\right)=0\) (cf. (B3)). Then, plugging (III.3), (III.4) and (B1) into the third equation of (III.1), we obtain the following differential equation for \(R\left(t\right)\): \[R^{\prime}\left(t\right)=\gamma\left[N-ne^{-\mu R\left(t\right)}-\left(1+\frac{\sigma}{\gamma}\right)R\left(t\right)\right].\] (III.5) Hence, our relaxation scheme in this case becomes \[R_{k}^{\prime}\left(t\right)+\overline{M}R_{k}\left(t\right)=\gamma N-\overline{g}\left(R_{k-1}\left(t\right)\right)+\overline{M}R_{k-1}\left(t\right),\] (III.6) where \(\overline{g}\left(r\right)=\gamma ne^{-\mu r}+\left(\gamma+\sigma\right)r\). Similar to the SIR model, here we rely on (B3) to choose \(R_{k}\left(0\right)=0\) for any \(k\geq 0\) as the initial condition and \(R_{0}\left(t\right)=0\) as the starting point. By choosing \(\overline{M}\geq\gamma+\sigma\), our sequence \(\left\{R_{k}\right\}_{k=0}^{\infty}\) (defined in (III.6)) is non-negativity-preserving and globally strongly convergent to the solution \(R\) of the transcendental equation (III.5). These findings are analogous to our central Theorems 2 and 3, and therefore, we omit their formulations.
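As an aside (our illustration, not part of the original analysis), the Euler-discretized version of (III.6) can be sketched by modifying the earlier snippet: only the nonlinearity \(\overline{g}\) and the admissible relaxation parameter change, and \(D\), \(S\) and \(I\) are then recovered from (III.4), (III.3) and (B1).

```python
import numpy as np

def euler_relaxation_sird(N, n, gamma, beta, sigma, T, P, K, M_bar=None):
    """Euler-discretized relaxation scheme for (III.6), with
    g_bar(r) = gamma*n*exp(-mu*r) + (gamma + sigma)*r and M_bar >= gamma + sigma."""
    mu = beta / gamma
    M_bar = gamma + sigma if M_bar is None else M_bar
    dt = T / P
    g_bar = lambda r: gamma * n * np.exp(-mu * r) + (gamma + sigma) * r
    R_prev = np.zeros(P + 1)                 # starting point R_0(t) = 0
    for _ in range(K):
        R = np.zeros(P + 1)                  # initial condition R_k(0) = 0
        for p in range(1, P + 1):
            R[p] = (R[p - 1] + dt * (gamma * N - g_bar(R_prev[p]) + M_bar * R_prev[p])) / (1.0 + dt * M_bar)
        R_prev = R
    D = (sigma / gamma) * R                  # (III.4)
    S = n * np.exp(-mu * R)                  # (III.3)
    I = N - S - R - D                        # (B1)
    return R, S, I, D
```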
Besides, Theorem 1 applies to (III.5), guaranteeing the global existence and uniqueness of the \(C^{1}\) solutions to the SIRD model (III.1). ### SIR model with background mortality The SIR model, along with its variants SIRD and SIRX, assumes a constant population size. These models, known as epidemiological SIR-type models without vital dynamics, are limited in their representation of population changes; see (A1) and (B1). The SIR model with vital dynamics addresses this limitation by incorporating birth and death rates to account for population size fluctuations. In the present work, we show that the transcendental system governing the SIR model with background mortality takes the form of (II.5). With \(\sigma\) being the death rate, the population experiences changes over time. Here, individuals from all compartments can exit through deaths, allowing for a more realistic representation of population dynamics. Mathematically, the SIR model with background mortality can be expressed as follows: \[\begin{cases}I^{\prime}\left(t\right)=\beta S\left(t\right)I\left(t\right)-\gamma I\left(t\right)-\sigma I\left(t\right),\\ S^{\prime}\left(t\right)=-\beta S\left(t\right)I\left(t\right)-\sigma S\left(t\right),\\ R^{\prime}\left(t\right)=\gamma I\left(t\right)-\sigma R\left(t\right).\end{cases}\] (III.7) In this perspective, we make use of the following assumptions. **(C1)** The total population size is dependent on \(t\), i.e. \(N=N\left(t\right)>0\). It can be computed that \(N\left(t\right)=S\left(t\right)+I\left(t\right)+R\left(t\right)=e^{-\sigma t}N_{0}\) for some fixed \(N_{0}>0\). **(C2)** We know the infection rate \(\beta>0\) from the infection process, the removal rate \(\gamma>0\) from the removal process, and the death rate \(\sigma>0\) from the mortality process. **(C3)** The initial conditions are \(S\left(0\right)=n>1\), \(I\left(0\right)=a\geq 1\) and \(R\left(0\right)=0\). This implies that \(N_{0}=n+a\). Similar to the classical SIR model, we seek the transcendental equation for \(R\) prior to the application of our proposed numerical scheme. When doing so, we define \(\overline{R}\left(t\right)=e^{\sigma t}R\left(t\right)\) and \(\overline{S}\left(t\right)=e^{\sigma t}S\left(t\right)\). By the second and third equations of system (III.7), we find that \[\overline{S}^{\prime}\left(t\right)=-\beta\overline{S}\left(t\right)I\left(t\right),\] (III.8) \[\overline{R}^{\prime}\left(t\right)=\gamma e^{\sigma t}I\left(t\right).\] (III.9) Therefore, we deduce that \[\frac{d\overline{S}}{d\overline{R}}=-\mu e^{-\sigma t}\overline{S},\] (III.10) or equivalently, \(\ln\left(\overline{S}\right)=-\mu e^{-\sigma t}\overline{R}+\tilde{c}\left(t\right)\). Here, we have recalled the reciprocal relative removal rate \(\mu=\beta/\gamma\). Hence, we have \[\overline{S}\left(t\right)=e^{\tilde{c}\left(t\right)-\mu e^{-\sigma t}\overline{R}\left(t\right)}.\] (III.11) Since \(\overline{S}\left(0\right)=n\) and \(\overline{R}\left(0\right)=R\left(0\right)=0\) by (C3), we find that \(\tilde{c}\left(0\right)=\ln\left(n\right)\).
Moreover, by taking the derivative in time of (III.11), we arrive at \[\overline{S}^{\prime}\left(t\right)=e^{\tilde{c}\left(t\right)-\mu e^{-\sigma t}\overline{R}\left(t\right)}\left[\tilde{c}^{\prime}\left(t\right)-\mu e^{-\sigma t}\left(-\sigma+\overline{R}^{\prime}\left(t\right)\right)\right].\] (III.12) Dividing both sides of (III.12) by \(\overline{R}^{\prime}\left(t\right)\), we find that \[\frac{\overline{S}^{\prime}\left(t\right)}{\overline{R}^{\prime}\left(t\right)}=\frac{e^{\tilde{c}\left(t\right)-\mu e^{-\sigma t}\overline{R}\left(t\right)}\left[\tilde{c}^{\prime}\left(t\right)-\mu e^{-\sigma t}\left(-\sigma+\overline{R}^{\prime}\left(t\right)\right)\right]}{\overline{R}^{\prime}\left(t\right)}.\] Then combining this with (III.10) and (III.11), we derive the following differential equation for \(\tilde{c}\left(t\right)\): \[e^{\tilde{c}\left(t\right)-\mu e^{-\sigma t}\overline{R}\left(t\right)}\left[\tilde{c}^{\prime}\left(t\right)-\mu e^{-\sigma t}\left(-\sigma+\overline{R}^{\prime}\left(t\right)\right)\right]=-\mu e^{-\sigma t}e^{\tilde{c}\left(t\right)-\mu e^{-\sigma t}\overline{R}\left(t\right)}\overline{R}^{\prime}\left(t\right),\] or equivalently, \(\tilde{c}^{\prime}\left(t\right)=-\mu\sigma e^{-\sigma t}\). Thus, we obtain \(\tilde{c}\left(t\right)=\ln\left(n\right)+\mu\left(e^{-\sigma t}-1\right)\) and \[\overline{S}\left(t\right)=ne^{\mu\left(e^{-\sigma t}-1\right)}e^{-\mu e^{-\sigma t}\overline{R}\left(t\right)}.\] Together with the back-substitution \(e^{-\sigma t}\overline{R}\left(t\right)=R\left(t\right)\), we thereby get \(\overline{S}\left(t\right)=ne^{\mu\left(e^{-\sigma t}-1\right)}e^{-\mu R\left(t\right)}\). Now, we note that by (C1) and (C3), \(e^{\sigma t}I\left(t\right)=N_{0}-\overline{S}\left(t\right)-\overline{R}\left(t\right)\) holds true for any \(t\). Plugging this into (III.9) and using the fact that \(\overline{S}\left(t\right)=ne^{\mu\left(e^{-\sigma t}-1\right)}e^{-\mu R\left(t\right)}=ne^{\mu\left(e^{-\sigma t}-1\right)}e^{-\mu e^{-\sigma t}\overline{R}\left(t\right)}\), we derive the transcendental equation for \(\overline{R}\) as follows: \[\overline{R}^{\prime}\left(t\right)=\gamma\left[N_{0}-ne^{\mu\left(e^{-\sigma t}-1\right)}e^{-\mu e^{-\sigma t}\overline{R}\left(t\right)}-\overline{R}\left(t\right)\right].\] (III.13) By setting \(\hat{g}\left(r\right)=\gamma ne^{\mu\left(e^{-\sigma t}-1\right)}e^{-\mu e^{-\sigma t}r}+\gamma r\), our relaxation scheme for the SIR model with background mortality is given by \[\overline{R}_{k}^{\prime}\left(t\right)+\hat{M}\overline{R}_{k}\left(t\right)=\gamma N_{0}-\hat{g}\left(\overline{R}_{k-1}\left(t\right)\right)+\hat{M}\overline{R}_{k-1}\left(t\right).\] (III.14) Similar to the above-mentioned SIR-based models, we choose \(\overline{R}_{k}\left(0\right)=0\) for any \(k\geq 0\) as the initial condition and \(\overline{R}_{0}\left(t\right)=0\) as the starting point, based on the fact that \(\overline{R}\left(0\right)=R\left(0\right)=0\). Also, here we take \(\hat{M}\geq\gamma\) to ensure the non-negativity preservation and global strong convergence of the sequence \(\left\{\overline{R}_{k}\right\}_{k=0}^{\infty}\) (defined in (III.14)) to the sought solution \(\overline{R}\) of the transcendental equation (III.13). Since these results are analogs of Theorems 2 and 3, we omit the details of their formulations for the sequence \(\left\{\overline{R}_{k}\right\}_{k=0}^{\infty}\).
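For illustration (our sketch, consistent with the ones above and not part of the original text), an Euler discretization of (III.14) only has to evaluate the now time-dependent nonlinearity \(\hat{g}\) at the current mesh-point before undoing the substitution \(R=e^{-\sigma t}\overline{R}\).

```python
import numpy as np

def euler_relaxation_sir_mortality(N0, n, gamma, beta, sigma, T, P, K, M_hat=None):
    """Euler-discretized relaxation scheme for (III.14); g_hat is
    time-dependent and the relaxation parameter satisfies M_hat >= gamma."""
    mu = beta / gamma
    M_hat = gamma if M_hat is None else M_hat
    dt = T / P
    t = dt * np.arange(P + 1)
    def g_hat(tp, r):                        # g_hat at time tp, cf. (III.13)
        damp = np.exp(-sigma * tp)
        return gamma * n * np.exp(mu * (damp - 1.0)) * np.exp(-mu * damp * r) + gamma * r
    Rb_prev = np.zeros(P + 1)                # starting point R_bar_0(t) = 0
    for _ in range(K):
        Rb = np.zeros(P + 1)                 # initial condition R_bar_k(0) = 0
        for p in range(1, P + 1):
            rhs = Rb[p - 1] + dt * (gamma * N0 - g_hat(t[p], Rb_prev[p]) + M_hat * Rb_prev[p])
            Rb[p] = rhs / (1.0 + dt * M_hat)
        Rb_prev = Rb
    return np.exp(-sigma * t) * Rb           # back-substitution R = exp(-sigma*t)*R_bar
```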
It is also worth mentioning that Theorem 1 remains true in this case, providing the global existence and uniqueness of \(C^{1}\) solutions to the SIR model with mortality (III.7). ## IV Numerical experiments In this section, we verify the numerical performance of the proposed relaxation method. Initially, we employ various approaches to solve the SIR model (I.1) for the purpose of comparison. These include our method (II.5), as well as the standard methods: approximate analytic solution, regular linearization procedure, and conventional explicit Euler method. It is important to note that since the conventional explicit Euler method is considered in this comparison, we also apply the Euler method to our relaxation scheme, as outlined in (II.8), as well as to the regular linearization procedure. Additionally, it is worth mentioning that the approximate analytic solution for \(R\left(t\right)\) (referred to as \(R_{\text{a}}\)) can be found in [3; 4; 10]. In particular, it is of the following form: \[R_{\text{a}}\left(t\right)=\frac{1}{n\mu^{2}}\left[n\mu-1+\eta\tanh\left(\frac{\gamma nt}{2}-\psi\right)\right],\] (IV.1) where \[\eta=\left[2n\mu^{2}\left(N-n\right)+\left(n\mu-1\right)^{2}\right]^{1/2},\qquad\psi=\tanh^{-1}\left[\frac{1}{\eta}\left(n\mu-1\right)\right].\] The approximate analytic solution mentioned above corresponds to the solution of the Riccati equation. However, it is applicable only when \(\mu R\) is sufficiently small. Furthermore, the conventional explicit Euler method is expressed as follows: \[R^{p}=R^{p-1}+\Delta t\left(\gamma N-g\left(R^{p-1}\right)\right),\] (IV.2) where \(R^{p}\approx R\left(t_{p}\right)\) is the discrete solution to the nonlinear differential equation (II.3), with \(t_{p}=p\Delta t\) being the mesh-point in time. In the second test, we utilize the widely used Runge-Kutta RK4 method to solve the SIR model (I.1) by applying it to our relaxation scheme (II.5). We then compare its performance with that of the Euler-relaxation method (II.8) and with the Runge-Kutta RK4 method applied directly to (II.3). For the sake of clarity, we provide the formulation of the RK4 method for solving a differential equation of a general type \(R^{\prime}\left(t\right)=F\left(t,R\left(t\right)\right)\): \[R^{p}=R^{p-1}+\frac{1}{6}K_{1}\left(t_{p-1},R^{p-1}\right)+\frac{1}{3}K_{2}\left(t_{p-1},R^{p-1}\right)+\frac{1}{3}K_{3}\left(t_{p-1},R^{p-1}\right)+\frac{1}{6}K_{4}\left(t_{p-1},R^{p-1}\right),\] (IV.3) where we have denoted the intermediate values by \[K_{1}\left(t_{p-1},R^{p-1}\right)=\Delta tF\left(t_{p-1},R^{p-1}\right),\] (IV.4) \[K_{2}\left(t_{p-1},R^{p-1}\right)=\Delta tF\left(t_{p-1}+\frac{\Delta t}{2},R^{p-1}+\frac{K_{1}}{2}\right),\] (IV.5) \[K_{3}\left(t_{p-1},R^{p-1}\right)=\Delta tF\left(t_{p-1}+\frac{\Delta t}{2},R^{p-1}+\frac{K_{2}}{2}\right),\] (IV.6) \[K_{4}\left(t_{p-1},R^{p-1}\right)=\Delta tF\left(t_{p-1}+\Delta t,R^{p-1}+K_{3}\right).\] (IV.7) When using our method, it is important to note that for each iteration \(k\), we solve the linear differential equation \(R_{k}^{\prime}\left(t\right)=F\left(t,R_{k}\left(t\right)\right)\), where \(F\left(t,R_{k}\left(t\right)\right)=-MR_{k}\left(t\right)+\gamma N-g\left(R_{k-1}\left(t\right)\right)+MR_{k-1}\left(t\right)\).
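For reference, the direct application of RK4 to (II.3) is straightforward. The following minimal sketch (ours, in Python with NumPy; names are our own choices) realizes (IV.3)-(IV.7) with the right-hand side \(F(t,R)=\gamma(N-ne^{-\mu R}-R)\) of (II.3).

```python
import numpy as np

def rk4_direct_sir(N, n, gamma, beta, T, P):
    """Classical RK4 (IV.3)-(IV.7) applied directly to the nonlinear
    equation (II.3): R'(t) = gamma*(N - n*exp(-mu*R) - R)."""
    mu = beta / gamma
    F = lambda t, R: gamma * (N - n * np.exp(-mu * R) - R)
    dt = T / P
    R = np.zeros(P + 1)
    for p in range(1, P + 1):
        t = (p - 1) * dt
        K1 = dt * F(t, R[p - 1])                      # (IV.4)
        K2 = dt * F(t + dt / 2, R[p - 1] + K1 / 2)    # (IV.5)
        K3 = dt * F(t + dt / 2, R[p - 1] + K2 / 2)    # (IV.6)
        K4 = dt * F(t + dt, R[p - 1] + K3)            # (IV.7)
        R[p] = R[p - 1] + (K1 + 2 * K2 + 2 * K3 + K4) / 6.0   # (IV.3)
    return R
```

Applying RK4 to our relaxation scheme instead replaces \(F\) by the linear right-hand side above; the only subtlety is the evaluation of \(R_{k-1}\) at the midpoint, which we address next.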
Notice that in this linear equation, the midpoint \(t_{p-1}+\frac{\Delta t}{2}\) has to be applied to \(R_{k-1}\left(t\right)\), which is obtained from the previous step and is known only at the mesh-points; this leads to the following linear approximation: \[R_{k-1}\left(t_{p-1}+\frac{\Delta t}{2}\right)=\frac{1}{2}\left[R_{k-1}\left(t_{p-1}\right)+R_{k-1}\left(t_{p-1}+\Delta t\right)\right].\] (IV.8) Denote this approximation by \(R_{k-1}^{p-0.5}\approx R_{k-1}\left(t_{p-1}+\frac{\Delta t}{2}\right)\). Thereby, we seek \(R_{k}^{p}\) satisfying (IV.3) in which the intermediate values are given by \[K_{1}=\Delta t\left(-MR_{k}^{p-1}+\gamma N-g\left(R_{k-1}^{p-1}\right)+MR_{k-1}^{p-1}\right),\] (IV.9) \[K_{2}=\Delta t\left[-M\left(R_{k}^{p-1}+\frac{K_{1}}{2}\right)+\gamma N-g\left(R_{k-1}^{p-0.5}\right)+MR_{k-1}^{p-0.5}\right],\] (IV.10) \[K_{3}=\Delta t\left[-M\left(R_{k}^{p-1}+\frac{K_{2}}{2}\right)+\gamma N-g\left(R_{k-1}^{p-0.5}\right)+MR_{k-1}^{p-0.5}\right],\] (IV.11) \[K_{4}=\Delta t\left[-M\left(R_{k}^{p-1}+K_{3}\right)+\gamma N-g\left(R_{k-1}^{p}\right)+MR_{k-1}^{p}\right].\] (IV.12) On the other hand, when directly applying the Runge-Kutta RK4 method to the nonlinear differential equation (II.3), we have \(F\left(t,R\left(t\right)\right)=\gamma\left(N-ne^{-\mu R\left(t\right)}-R\left(t\right)\right)\). In the third test, we present the numerical performance of the proposed method in solving the SIR-based models discussed in Section III. In particular, our focus in this test is on 1. the scheme \(\left\{R_{k}\right\}_{k=0}^{\infty}\), defined in (III.6), for the SIRD model (III.1). In this model, the relaxation parameter satisfies \(\overline{M}\geq\gamma+\sigma\). 2. the scheme \(\left\{R_{k}\right\}_{k=0}^{\infty}\) computed from \(\left\{\overline{R}_{k}\right\}_{k=0}^{\infty}\) (defined in (III.14)) for the SIR model with background mortality (III.7). In this case, we require that \(\hat{M}\geq\gamma\). To evaluate the accuracy of the relaxation scheme, we assess the proximity of the approximation when approaching the maximum value of \(I\). It is important to recall that explicit expressions for \(I_{\max}\) have been derived for each specific case. The reader is referred to (II.4) for the classical SIR model, and (III.2) for the SIRD model. For the SIR model with background mortality, since the maximum value of \(I\) cannot be found explicitly, we run the simulation with several values of \(P\) and \(K\) to verify the numerical stability. When increasing these parameters, we also identify the numerical amplitude and peak day to assess the performance of our relaxation method in the Euler and RK4 frameworks. ### Test 1 In this test, we compare our Euler-relaxation approach with the approximate analytic solution (IV.1), the regular linearization procedure (which arises when the relaxation parameter vanishes), and the direct explicit Euler method (IV.2). Alongside assessing numerical stability, we evaluate the performance of these methods based on the amplitude \(I_{\rm max}\) presented in (II.4) and the peak day. Here, we consider a population sample of \(N=1000\) for the SIR model (I.1) over the course of one year (\(T=365\)). Initially, we assume that there are \(a=2\) infected people in this sample, leaving \(n=998\) individuals susceptible to infection. Furthermore, we choose a removal rate of \(\gamma=0.02\) and an infection rate of \(\beta=0.0004\). With these choices, we obtain a reciprocal relative removal rate of \(\mu=\beta/\gamma=0.02\), indicating that \(n\mu=19.96>1\).
Additionally, for our relaxation process, we set \(M=0.02\). Our numerical results for Test 1 are presented in Table I. Based on the maximum amplitude (\(I_{\rm max}\)), our proposed method within the Euler context (method #1) outperforms the approximations obtained from methods #2-4. The first two columns of Table I demonstrate the numerical stability of our proposed method, particularly when dealing with relatively small values of \(P\) and \(K\). Remarkably, when \(P=100\) and \(K=5\), our method yields an \(I_{\rm max}\) value of 797, which is very close to the true value of 800 as shown in (II.4). In contrast, the \(I_{\rm max}\) obtained from the approximate analytic solution (method #3) shows a significant deviation. We also observe that the amplitude \(I_{\rm max}\) obtained from method #3 remains unaffected regardless of the choice of \(P\). A comparison between methods #1 and #2 reveals that while the regular linearization technique can provide a satisfactory estimate of \(I_{\rm max}\) (800 when considering \(P=100\) and \(K=2\)), method #2 suffers from severe numerical instability as illustrated in the second row of Figure IV.1, particularly when increasing \(K\) to obtain a more accurate graphical representation. Furthermore, when \(P\) and \(K\) are relatively small, our proposed method shows a slight improvement over the conventional Euler method (method #4) within the same Euler context. At a coarse grid level, method #4 yields a relative error of 0.875%, while our method achieves a lower relative error of 0.375%. Upon increasing \(P\) to 1000, both methods #1 (with an increased \(K=50\)) and #4 demonstrate comparable accuracy in terms of amplitude and graphical representation, as depicted in the first and last rows of Figure IV.1. Our numerical investigation reveals that the true peak value (\(I_{\rm max,true}\)) is attained on the 24th day by employing sufficiently large values of \(P\) (over 3000) in both reliable methods #1 and #4. Comparing the peak days, it becomes evident from the last row of Table I that our relaxation method outperforms methods #2 and #3. While our method and method #4 achieve similar accuracy in terms of graphical simulation and amplitude, our proposed method detects the peak day earlier and with greater reliability. Specifically, considering small \(P\) and \(K\), our relaxation method identifies the peak outbreak on the 25th day, which closely aligns with the true peak (24th), in contrast to day 32 obtained from method #4. For larger \(P\), our method predicts an earlier peak occurrence (day 23), which proves advantageous in practical scenarios compared to day 25 obtained from method #4. The ability to predict the peak event of a disease earlier is of practical significance for decision-makers, enabling them to implement and sustain timely public health measures and interventions aimed at mitigating the disease risk. Figure IV.1: Graphical illustrations of Test 1. Row 1: Euler-relaxation method. Row 2: regular linearization method. Row 3: approximate analytic method. Row 4: direct Euler method. ### Test 2 Our second test focuses on the numerical comparison between two approaches: applying the well-known Runge-Kutta RK4 method (referred to as method #5) to our relaxation scheme (II.5) and applying it directly to the nonlinear differential equation (II.3) (denoted as method #6). Additionally, we compare the convergence speed of method #5 with method #1, referred to above as the Euler-relaxation method (II.8). As RK4 is a fourth-order method, we deliberately
choose a large population size of \(N=97.47\times 10^{6}\) and a transmission rate of \(\beta=3\times 10^{-9}\). Assuming the initial infected population is \(a=11\), and the removal rate remains constant at \(\gamma=0.05\) throughout the entire six-month period (\(T=180\)), we can calculate that the simulated disease reaches its peak at \(I_{\rm max,true}=51367769\); cf. (II.4). Moreover, based on numerical observations with a sufficiently large value of \(P\) (\(>\)2000), we find that this peak is reached on the 73rd day. Our numerical results are tabulated in Table 2, accompanied by corresponding graphical illustrations in Figure IV.2. We see that within the same RK4 framework, our proposed relaxation method (method #5) outperforms the direct approach. When the number of time steps is small (\(P=50\)), method #5 with \(K=20\) yields an amplitude \(I_{\rm max}\) of 51295165 with a relative accuracy of 0.14%, while method #6 achieves 0.81%. Both methods capture the peak day (72) well compared to the true value of 73. Note in this case that we choose \(K=20\), a larger value than in Test 1, due to the larger population under consideration. By Theorem 3, the choice of \(K\) does affect our error estimate, which depends heavily on the total size of the removal population. We also see that when increasing \(P\) to 2000, our proposed method #5 with an increased \(K=50\) precisely achieves the true amplitude, \(I_{\rm max,true}=51367769\), while the direct RK4 method produces a very close approximation of 51367765. Both methods also identify the peak day as the 73rd day. Furthermore, we compare our relaxation method across the Euler and Runge-Kutta frameworks. In terms of amplitude, although method #1 initially provides a better value of 51341234 with an accuracy of 0.05%, it fails to accurately detect the peak day, significantly deviating from the true value of 73 (predicting 54 instead). Increasing \(P\) to 2000 improves the amplitude to 51367573, but it still performs worse than the direct RK4 method's amplitude of 51367765. Here, method #1 achieves an improved peak day of 72. Additionally, based on the simulation of method #1, we observe that to reach the true amplitude (\(I_{\rm max,true}=51367769\)) and the true peak day of 73, at least \(P=19000\) and \(K=100\) are required. Hence, as readily expected, our relaxation method performs better in the RK4 framework than in the Euler framework. ### Test 3 As previously mentioned, in our last experiment, we aim to broaden the scope of the proposed relaxation method by applying it to various SIR-type models: specifically, the SIRD model and the SIR model with background mortality. These models share the same input parameters as those used in Test 1, where we set \(N=1000\), \(n=998\), \(T=365\), \(\gamma=0.02\), \(\beta=0.0004\). In the SIRD model (III.1), we choose a death rate of \(\sigma=0.01\), which implies a choice of the relaxation parameter \(\overline{M}\geq\gamma+\sigma=0.03\). In the SIR model with background mortality (III.7), we use a background death rate of \(\sigma=0.001\) and select \(\hat{M}\geq\gamma=0.02\). Our numerical findings for the SIRD model are detailed in Figure IV.3. We specifically investigate the scenario where \(\overline{M}=0.015\), thereby contravening the relaxation condition (\(\overline{M}\geq 0.03\)).
Consistent with our theorem concerning non-negativity preservation, we observe that the relaxed solution with \(\overline{M}=0.015\), obtained from both the Euler and RK4 frameworks, fails to maintain non-negativity over time. This is evident in the first column of Figure IV.3. When \(\overline{M}=0.03\) (a case that adheres to the condition), we also note that the RK4-relaxation outperforms the Euler-relaxation approach. Given that \(I_{\rm max,true}=730\) (as formulated in (III.2)), with a coarse mesh of \(P=200\) and \(K=10\) we observe that the RK4-relaxation produces an amplitude identical to the true value. In contrast, the Euler-relaxation yields a value of 729. It is also worth mentioning that the accurate amplitude is attained when applying the Euler-relaxation with \(P=800\) and \(K=10\). Our numerical results pertaining to the SIR model with background mortality are presented in Figure IV.4. It is evident that both the Euler and RK4 relaxation methods show numerical stability and non-negativity preservation as we increase the values of \((P,K)\) from \((100,5)\) to \((1000,50)\). Consistent with our prior tests, the RK4-relaxation method continues to outperform the Euler-relaxation method. Leveraging this numerical stability, we run the RK4-relaxation method using large values of \(P\) and \(K\) to determine the numerical amplitude and peak day. Our findings reveal a numerical amplitude of 777, peaking on the 24th day. Within the RK4 framework, achieving this numerical amplitude and peak day requires approximately \(P=300\) and \(K=20\). In contrast, the Euler framework demands a minimum of \(P=1700\) and \(K=20\) for similar outcomes. ## V Concluding remarks This work presents a novel numerical approach for solving the SIR model in population dynamics. While various approximation methods have been proposed for this classical model, the analysis of their convergence has been limited and challenging. Our approach introduces the relaxation procedure to approximate the continuous model. By carefully selecting the relaxation parameter, we achieve global strong convergence of the scheme and effectively preserve non-negativity. The proposed scheme is explicit and straightforward to implement, enabling us to obtain the approximate solution at either the discrete or analytical level. Additionally, we showcase the applicability of our scheme to numerous variants of the SIR model. In our future work, we aim to extend our method to a globally strongly convergent higher-order scheme. Furthermore, we plan to apply the method to more complex SIR-based models involving multiple compartments. ## CRediT authorship contribution statement **V. A. Khoa:** Supervision, Conceptualization, Formal analysis, Writing-review & editing. **P. M. Quan:** Software, Visualization. **K. W. Blayneh:** Formal analysis, Writing-review & editing. **J. Allen:** Formal analysis, Visualization. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Data availability Simulated data will be made available on request. ## Acknowledgments This research has received funding from the National Science Foundation (NSF). Specifically, V. A. K., P. M. Q., and J. A. extend their gratitude for the invaluable support provided by NSF. V. A. K. and J. A. also hold deep appreciation for the Florida A&M University Rattler Research program, its esteemed committee, and Dr. Tiffany W.
Ardley. Their unwavering dedication has facilitated an exceptional academic journey for the mentee (J. A.) and the mentor (V. A. K.). Furthermore, V. A. K. wishes to express many thanks to Dr. Charles Weatherford (Florida A&M University) and Dr. Ziad Musslimani (Florida State University). Their support has been instrumental in shaping V. A. K.'s early research career. Lastly, this work was completed on a momentous personal milestone - the wedding day of V. A. K. and the bride, Huynh Thi Kim Ngan. ## Appendix ### Proof of Theorem 3 \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Method \# & 5 & 5 & 6 & 6 & 1 & 1 \\ \hline \# of time steps \(P\) & 50 & 2000 & 50 & 2000 & 50 & 2000 \\ \hline \# of iterations \(K\) & 20 & 50 & None & None & 20 & 50 \\ \hline Amplitude \(I_{\rm max}\) & 51295165 & 51367769 & 50948480 & 51367765 & 51341234 & 51367573 \\ \hline Peak day & 72 & 73 & 72 & 73 & 54 & 72 \\ \hline \end{tabular} \end{table} Table 2: Values of the computed amplitude \(I_{\rm max}\) obtained from different methods and the corresponding peak days. Method #5: our RK4-relaxation method (IV.3) applied with (IV.8)–(IV.12). Method #6: the conventional RK4 method (IV.3)–(IV.7) applied directly to the nonlinear differential equation (II.3). Method #1: our Euler-relaxation method (II.8). By (II.4), the true amplitude \(I_{\rm max,true}\) is 51367769 in this scenario. Figure IV.2: Graphical illustrations of Test 2. Row 1: RK4-relaxation method. Row 2: direct RK4 method. Row 3: Euler-relaxation method. Step 1: Define \(\mathcal{E}_{k}\left(t\right)=R_{k}\left(t\right)-R\left(t\right)\) for \(k=1,2,3,\ldots\). It follows from (II.5) and (II.3) that \(\mathcal{E}_{k}\) satisfies the following differential equation: \[\mathcal{E}_{k}^{\prime}\left(t\right)+M\mathcal{E}_{k}\left(t\right)=-g\left(R_{k-1}\left(t\right)\right)+g\left(R\left(t\right)\right)+M\mathcal{E}_{k-1}\left(t\right)=p\left(R\left(t\right)\right)-p\left(R_{k-1}\left(t\right)\right),\] (V.1) where we have denoted \(p\left(r\right)=g\left(r\right)-Mr\) for \(r\geq 0\). Here, by Theorems 1 and 2, we are allowed to consider \(r\geq 0\). We can compute that \(p^{\prime}\left(r\right)=-\gamma n\mu e^{-\mu r}+\gamma-M\). Then, for \(M\geq\gamma\) and since \(r\geq 0\), we
estimate that \[\gamma-\gamma n\mu-M\leq p^{\prime}\left(r\right)=-\gamma n\mu e^{-\mu r}+\gamma-M<0,\] which shows \[\left|p^{\prime}\left(r\right)\right|\leq M+\gamma n\mu-\gamma.\] (V.2) Therefore, the left-hand side of (V.1) can be bounded from above by \[\mathcal{E}_{k}^{\prime}\left(t\right)+M\mathcal{E}_{k}\left(t\right)\leq\left(M+\gamma n\mu-\gamma\right)\left|\mathcal{E}_{k-1}\left(t\right)\right|.\] Using the Hölder inequality, we find that \[e^{2Mt}\left|\mathcal{E}_{k}\left(t\right)\right|^{2}\leq\left(M+\gamma n\mu-\gamma\right)^{2}\left(\int_{0}^{t}e^{Ms}\left|\mathcal{E}_{k-1}\left(s\right)\right|ds\right)^{2}\leq\left(M+\gamma n\mu-\gamma\right)^{2}\int_{0}^{t}e^{2Ms}ds\int_{0}^{t}\left|\mathcal{E}_{k-1}\left(s\right)\right|^{2}ds.\] Thus, we deduce that \[\left|\mathcal{E}_{k}\left(t\right)\right|^{2}\leq\frac{1}{2M}\left(M+\gamma n\mu-\gamma\right)^{2}\left(1-e^{-2Mt}\right)\int_{0}^{t}\left|\mathcal{E}_{k-1}\left(s\right)\right|^{2}ds.\] By the elementary inequality \(e^{-x}+x\geq 1\), we obtain the following estimate \[\left|\mathcal{E}_{k}\left(t\right)\right|^{2}\leq\left(M+\gamma n\mu-\gamma\right)^{2}t\int_{0}^{t}\left|\mathcal{E}_{k-1}\left(s\right)\right|^{2}ds.\] (V.3) Step 2: By induction, we can show that for any \(2\leq k\in\mathbb{N}\): \[\left|\mathcal{E}_{k}\left(t\right)\right|^{2}\leq\left(M+\gamma n\mu-\gamma\right)^{2k}t\int_{0}^{t}s_{1}\int_{0}^{s_{1}}\ldots s_{k-1}\int_{0}^{s_{k-1}}\left|\mathcal{E}_{0}\left(s_{k}\right)\right|^{2}ds_{k}ds_{k-1}\ldots ds_{1}.\] (V.4) It follows from (V.3) that (V.4) holds true for \(k=2\). Indeed, \[\left|\mathcal{E}_{2}\left(t\right)\right|^{2}\leq\left(M+\gamma n\mu-\gamma\right)^{2}t\int_{0}^{t}\left|\mathcal{E}_{1}\left(s_{1}\right)\right|^{2}ds_{1}\leq\left(M+\gamma n\mu-\gamma\right)^{4}t\int_{0}^{t}s_{1}\int_{0}^{s_{1}}\left|\mathcal{E}_{0}\left(s_{2}\right)\right|^{2}ds_{2}ds_{1}.\] Assume that (V.4) holds true for \(k=k_{0}\). We show that it also holds true for \(k=k_{0}+1\). By (V.3), we have \[\left|\mathcal{E}_{k_{0}+1}\left(t\right)\right|^{2}\leq\left(M+\gamma n\mu-\gamma\right)^{2}t\int_{0}^{t}\left|\mathcal{E}_{k_{0}}\left(s\right)\right|^{2}ds\leq\left(M+\gamma n\mu-\gamma\right)^{2}t\int_{0}^{t}\left(M+\gamma n\mu-\gamma\right)^{2k_{0}}s\int_{0}^{s}s_{1}\int_{0}^{s_{1}}\ldots s_{k_{0}-1}\int_{0}^{s_{k_{0}-1}}\left|\mathcal{E}_{0}\left(s_{k_{0}}\right)\right|^{2}ds_{k_{0}}ds_{k_{0}-1}\ldots ds_{1}ds\leq\left(M+\gamma n\mu-\gamma\right)^{2\left(k_{0}+1\right)}t\int_{0}^{t}s_{1}\int_{0}^{s_{1}}\ldots s_{k_{0}}\int_{0}^{s_{k_{0}}}\left|\mathcal{E}_{0}\left(s_{k_{0}+1}\right)\right|^{2}ds_{k_{0}+1}ds_{k_{0}}\ldots ds_{1}.\] Hence, we complete Step 2. Figure IV.4: Graphical illustrations of Test 3 – SIR model with background death. Row 1: Euler-relaxation method with different values of \(P\) and \(K\). Row 2: RK4-relaxation method with diverse \(P\) and \(K\) values. In these illustrations, numerical stability and non-negativity preservation are observed. Step 3: By (V.4), observe that \(0\leq s_{k}\leq s_{k-1}\leq\ldots\leq s_{1}\leq t\).
Combining this, (V.4) and the fact that \(R\in C^{1}\) gives \[\left|\mathcal{E}_{k}\left(t\right)\right|^{2}\leq\left(M+\gamma n\mu-\gamma\right)^{2k}t^{k+1}\max_{0\leq t\leq T}\left|\mathcal{E}_{0}\left(t\right)\right|^{2}\int_{0}^{t}s_{1}\int_{0}^{s_{1}}\ldots s_{k-1}\int_{0}^{s_{k-1}}ds_{k}ds_{k-1}\ldots ds_{1}\leq\left(M+\gamma n\mu-\gamma\right)^{2k}\frac{t^{k+1}}{k!}\max_{0\leq t\leq T}\left|\mathcal{E}_{0}\left(t\right)\right|^{2}.\] Note that \(M+\gamma n\mu-\gamma\) is independent of \(k\) and of time, and that \(t\leq T\). Moreover, we know that \(\mathcal{E}_{0}\left(t\right)=R_{0}\left(t\right)-R\left(t\right)=-R\left(t\right)\) by the choice \(R_{0}\left(t\right)=0\). Therefore, in view of the fact that \(\lim_{k\rightarrow\infty}\frac{Q^{k}}{k!}=0\) for any constant \(Q>0\), we can always find \(\overline{k}>0\) such that for any \(k\geq\overline{k}\), \[\left(M+\gamma n\mu-\gamma\right)^{2k}\frac{T^{k+1}}{k!}<1.\] Hence, we obtain the strong convergence of the sequence \(\left\{R_{k}\right\}_{k=0}^{\infty}\) toward the true solution \(R\). ### Proof of Corollary 4 We define \(\mathcal{E}_{k}\left(t\right)=R_{k}\left(t\right)-R\left(t\right)\) and \(p\left(r\right)=g\left(r\right)-Mr\) as above. Multiplying (V.1) by \(\mathcal{E}_{k}\left(t\right)\) and using (V.2) yields \[\frac{1}{2}\frac{d}{dt}\mathcal{E}_{k}^{2}\left(t\right)+M\mathcal{E}_{k}^{2}\left(t\right)=\left[p\left(R\left(t\right)\right)-p\left(R_{k-1}\left(t\right)\right)\right]\mathcal{E}_{k}\left(t\right)\leq\frac{M+\gamma n\mu-\gamma}{2}\mathcal{E}_{k-1}^{2}\left(t\right)+\frac{M+\gamma n\mu-\gamma}{2}\mathcal{E}_{k}^{2}\left(t\right).\] Equivalently, we obtain \[\frac{d}{dt}\mathcal{E}_{k}^{2}\left(t\right)+\left(M-\gamma n\mu+\gamma\right)\mathcal{E}_{k}^{2}\left(t\right)\leq\left(M+\gamma n\mu-\gamma\right)\mathcal{E}_{k-1}^{2}\left(t\right).\] Notice that by the choice \(M\geq\gamma\), it holds true that \(M>\gamma n\mu-\gamma\) when \(n\mu<1\). Using the integrating factor \(e^{\left(M-\gamma n\mu+\gamma\right)t}\) and integrating with respect to \(t\), we get \[\mathcal{E}_{k}^{2}\left(t\right)\leq e^{-\left(M-\gamma n\mu+\gamma\right)t}\left(M+\gamma n\mu-\gamma\right)\int_{0}^{t}e^{\left(M-\gamma n\mu+\gamma\right)s}\mathcal{E}_{k-1}^{2}\left(s\right)ds\leq e^{-\left(M-\gamma n\mu+\gamma\right)t}\left[e^{\left(M-\gamma n\mu+\gamma\right)t}-1\right]\frac{M+\gamma n\mu-\gamma}{M-\gamma n\mu+\gamma}\max_{0\leq t\leq T}\mathcal{E}_{k-1}^{2}\left(t\right).\] Hence, we obtain \[\max_{0\leq t\leq T}\left|\mathcal{E}_{k}\left(t\right)\right|\leq\left(\frac{M+\gamma n\mu-\gamma}{M-\gamma n\mu+\gamma}\right)^{1/2}\max_{0\leq t\leq T}\left|\mathcal{E}_{k-1}\left(t\right)\right|.\] (V.5) By induction and the fact that \(R_{0}\left(t\right)=0\), we deduce \[\max_{0\leq t\leq T}\left|\mathcal{E}_{k}\left(t\right)\right|\leq\left(\frac{M+\gamma n\mu-\gamma}{M-\gamma n\mu+\gamma}\right)^{k/2}\max_{0\leq t\leq T}\left|\mathcal{E}_{0}\left(t\right)\right|=\left(\frac{M+\gamma n\mu-\gamma}{M-\gamma n\mu+\gamma}\right)^{k/2}\max_{0\leq t\leq T}\left|R\left(t\right)\right|.\] Since \(M+\gamma n\mu-\gamma<M-\gamma n\mu+\gamma\) when \(n\mu<1\), we obtain the target estimate (II.6).
2310.12314
Upper bound for the grand canonical free energy of the Bose gas in the Gross-Pitaevskii limit for general interaction potentials
We consider a homogeneous Bose gas in the Gross--Pitaevskii limit at temperatures that are comparable to the critical temperature for Bose--Einstein condensation. Recently, an upper bound for the grand canonical free energy was proved in arXiv:2305.19173 [math-ph] capturing two novel contributions. First, the free energy of the interacting condensate is given in terms of an effective theory describing the probability distribution of the number of condensed particles. Second, the free energy of the thermally excited particles equals that of a temperature-dependent Bogoliubov Hamiltonian. We extend this result to a more general class of interaction potentials, including interactions with a hard core. Our proof follows a different approach than the one in arXiv:2305.19173 [math-ph]: we model microscopic correlations between the particles by a Jastrow factor, and exploit a cancellation in the computation of the energy that emerges due to the different length scales in the system.
Marco Caporaletti, Andreas Deuchert
2023-10-18T20:31:08Z
http://arxiv.org/abs/2310.12314v1
Upper bound for the grand canonical free energy of the Bose gas in the Gross-Pitaevskii limit for general interaction potentials ###### Abstract We consider a homogeneous Bose gas in the Gross-Pitaevskii limit at temperatures that are comparable to the critical temperature for Bose-Einstein condensation. Recently, an upper bound for the grand canonical free energy was proved in [13] capturing two novel contributions. First, the free energy of the interacting condensate is given in terms of an effective theory describing the probability distribution of the number of condensed particles. Second, the free energy of the thermally excited particles equals that of a temperature-dependent Bogoliubov Hamiltonian. We extend this result to a more general class of interaction potentials, including interactions with a hard core. Our proof follows a different approach than the one in [13]: we model microscopic correlations between the particles by a Jastrow factor, and exploit a cancellation in the computation of the energy that emerges due to the different length scales in the system. ###### Contents * 1 Introduction and main result * 1.1 Background and summary * 1.2 Notation * 1.3 The grand canonical free energy and the Gibbs variational principle * 1.4 The ideal Bose gas on the torus * 1.5 Main results * 1.6 Outline of the article * 2 The trial state * 2.1 Second quantization * 2.2 Definition of the trial state * 2.3 Properties of the trial state * 2.3.1 The Bogoliubov and Weyl transformations * 2.3.2 The Bogoliubov Gibbs state in the diagonal representation * 2.3.3 The Bogoliubov Gibbs state in the original representation * 2.3.4 The uncorrelated trial state * 3 Estimate for the energy * 3.1 Analysis of the kinetic energy * 3.2 Analysis of the renormalized interaction * 3.3 Final upper bound * 4 Estimate for the entropy * 5 Proof of Theorem 1.1 * A Properties of the solution to the scattering equation * B Properties of the effective functional for the condensate * C Estimate of the expected number of particles in \(\Gamma\) ## 1 Introduction and main result ### 1.1 Background and summary Since the first experimental realizations of Bose-Einstein condensation (BEC) in cold alkali gases in 1995 [4, 25], the dilute Bose gas has become a prominent topic of experimental and theoretical research. The most relevant parameter regime to describe experiments with trapped quantum gases theoretically is the Gross-Pitaevskii (GP) limit. Here the scattering length of the interaction potential is scaled with the number of particles \(N\) in such a way that the interaction energy per particle is of the same order of magnitude as the spectral gap in the trap, as \(N\to\infty\). Many rigorous mathematical results about the GP limit of interacting Bose gases have been proved over the past twenty years. In the foundational works [39, 41, 43] it was shown that the ground state energy per particle can be approximated by the minimum of the GP energy functional, and that approximate ground states display BEC and superfluidity. These results have later been extended in [38, 47, 51] to the case of rotating Bose gases. Condensation with an optimal rate was, in the GP limit, first proven in [11] for approximate ground states of a Bose gas captured in a three-dimensional flat torus.
In [10] the same authors show that the second-order correction to the ground state energy, the low-lying eigenvalues of the many-body Hamiltonian and the corresponding eigenfunctions are well approximated by related quantities of a quadratic Hamiltonian, called Bogoliubov Hamiltonian. This confirms predictions of Bogoliubov from 1947 [15]. Similar results have later been obtained for the trapped Bose gas in the GP limit [18, 19], for the homogeneous gas in a Thomas-Fermi limit [2, 16], and for the homogeneous gas in the GP limit in two space dimensions [21, 22]. More recently, a second-order upper bound for the ground state energy of a hard sphere Bose gas has been proven in [5]. The homogeneous gas in a box with Neumann boundary conditions has been studied in [14]. While low-energy eigenstates of the Hamiltonian accurately describe the dilute Bose gas at (or near) zero temperature, understanding the system at positive temperature is crucial to describe modern experiments. In this setting the natural analogues of the ground state energy and its corresponding eigenfunction are the free energy and the Gibbs state associated to the many-body Hamiltonian. In the article [28] the trapped Bose gas is studied in a combination of a thermodynamic limit in the trap and a GP limit. It is proven that the free energy of the system minus that of the ideal gas is well approximated by the minimum of a GP energy functional. Moreover, the one-particle density matrix of any approximate minimizer of the free energy is, to leading order, given by the one of the ideal gas, where the condensate wavefunction has been replaced by the minimizer of the GP energy functional. This, in particular, establishes the existence of a BEC phase transition in the system. Comparable results for the homogeneous Bose gas have been obtained in [27]. The GP limit is appropriate to describe experiments with atomic clouds containing \(10^{2}-10^{6}\) particles. To describe truly macroscopic samples with particle numbers of the order of the Avogadro constant \(N_{\rm A}\approx 6.022\times 10^{23}\), one needs to consider a thermodynamic limit followed by a dilute limit. The leading order asymptotics of the ground state energy per particle in this regime has been established in the influential works [29, 44] (three space dimensions) and [45] (two space dimensions). The one-dimensional case has been studied in [3]. Recently, also the second-order correction predicted by Lee, Huang and Yang (LHY) in 1957 [36] could be justified, see [7, 54] for upper bounds and [32, 33] for matching lower bounds. It is interesting to note that, to this date, there is no upper bound available that captures the LHY correction for a gas of hard spheres (the lower bound in [33] applies in this case). A comparable second order expansion for the two-dimensional Bose gas has been obtained in [31]. For the dilute Bose gas at positive temperature, asymptotic expansions capturing the leading order correction to the free energy caused by the interaction between the particles have been proved in [50, 55] (three space dimensions) and [26, 46] (two space dimensions). In [34] a LHY-type lower bound for the three-dimensional gas at suitably low temperatures is established. In the recent work [13] the authors consider a grand canonical homogeneous Bose gas in the GP limit at temperatures that are comparable to the critical temperature for BEC in the ideal gas1.
Under the assumption that the interaction potential is of class \(L^{3}\), they establish an upper bound for the grand canonical free energy that contains two novel contributions: the free energy of the interacting condensate is given in terms of an effective theory describing its particle number fluctuations. Moreover, the free energy of the thermally excited particles equals that of a temperature-dependent Bogoliubov Hamiltonian. In the present article, we extend this result to systems with interactions in a more general class, including the hard-core potential. Our proof is based on the use of a trial state that is similar in spirit to the one in [13]. However, due to the lack of regularity of the interaction potential, we are forced to implement microscopic correlations between the particles by a full Jastrow factor. Because of this, the computation of the energy requires different arguments. A crucial step in our proof is a cancellation in the computation of the energy that emerges due to the different length scales in the problem. Footnote 1: The critical temperature in the interacting gas is expected to be the same, to leading order in \(N\), but this has so far been proven only in the canonical setting, see [27]. ### 1.2 Notation Given two functions \(a,b\) of the particle number \(N\) and other parameters of the system, we write \(a\lesssim b\) if there exists a constant \(C\), independent of \(N\), such that \(a\leq Cb\). If we want to highlight the dependency of the constant on some (\(N\)-independent) parameter \(k\), we use the notation \(a\lesssim_{k}b\). We write \(a\sim b\) if \(a\lesssim b\) and \(b\lesssim a\), and \(a\simeq b\) means that \(a/b\to 1\) in the limit considered. The letters \(C,c\) denote generic positive constants, whose values may change from line to line. The Fourier coefficients of a function \(f:\Lambda=[-1/2,1/2]^{3}\to\mathbb{C}\) are denoted by \(\hat{f}(p)=\int_{\Lambda}e^{-{\rm i}p\cdot x}f(x)\,{\rm d}x\) and, given a sequence \(g:\Lambda^{*}=2\pi\mathbb{Z}^{3}\to\mathbb{C}\), the inverse Fourier transformation reads \(\check{g}(x)=\sum_{p\in\Lambda^{*}}g(p)e^{{\rm i}p\cdot x}\). Standard \(L^{p}(\Lambda)\) and \(\ell^{p}(\Lambda^{*})\) norms are denoted by \(\|\cdot\|_{p}\). If \(H\) is a (separable, complex) Hilbert space we denote by \(\langle\cdot,\cdot\rangle\) its inner product, and by \(\mathcal{L}^{1}(H)\) the space of trace-class operators on \(H\). If \(A\) is an operator on \(H\) and \(\psi\in H\) belongs to the form domain of \(A\), we use the notation \(\langle A\rangle_{\psi}=\langle\psi,A\psi\rangle\). ### 1.3 The grand canonical free energy and the Gibbs variational principle We consider a Bose gas captured in the three-dimensional box \(\Lambda=[-1/2,1/2]^{3}\) with periodic boundary conditions. Since we are interested in a system with a fluctuating particle number, its Hilbert space is given by the bosonic Fock space \[\mathfrak{F}=\bigoplus_{n\geq 0}L_{\rm s}^{2}(\Lambda^{n}). \tag{1.1}\] Here \(L_{\rm s}^{2}(\Lambda^{n})\) denotes the space of permutation symmetric functions in \(L^{2}(\Lambda^{n})\). That is, the closed linear subspace of \(L^{2}(\Lambda^{n})\), whose elements \(\Psi(x_{1},...,x_{n})\) are invariant under any permutation of the \(n\) particle coordinates \(x_{1},...,x_{n}\). The Hamiltonian of the system in the GP scaling reads \[\mathcal{H}_{N}=\bigoplus_{n\geq 0}H_{N}^{(n)},\quad\text{ with }\quad H_{N}^{(n)}=-\sum_{i=1}^{n}\Delta_{i}+N^{2}\sum_{1\leq i<j\leq n}V(N(x_{i}-x_{j})).
\tag{1.2}\] Here, \(\Delta_{i}\) denotes the Laplacian with periodic boundary conditions acting on the \(i\)-th coordinate, and \(V(N(x_{i}-x_{j}))\) is a multiplication operator. We assume that the interaction potential \(V:\mathbb{R}^{3}\to[0,\infty]\) is measurable, radial and compactly supported. The parameter \(N\) will be chosen such that it coincides with the expected number of particles in the system. Our assumptions on \(V\) guarantee that its scattering length \(\mathfrak{a}\) is well-defined: this is a combined measure of the range and strength of the interaction potential \(V\). For a precise definition of the scattering length, we refer the reader to Appendix A. By scaling, the scattering length \(\mathfrak{a}_{N}\) of \(V_{N}=N^{2}V(N\cdot)\) satisfies \(\mathfrak{a}_{N}=\mathfrak{a}/N\). The fact that \(V\geq 0\) allows us to define the Hamiltonian \(\mathcal{H}_{N}\) in (1.2) as a self-adjoint operator via Friedrichs extension. For states \(\Gamma\in\Sigma\coloneqq\{\Gamma\in\mathcal{L}^{1}(\mathfrak{F})\mid\Gamma\geq 0,\ \operatorname{Tr}\Gamma=1\}\) we define the Gibbs free energy functional \(\mathcal{F}(\cdot)\) by2 Footnote 2: Here and in the following, we interpret \(\operatorname{Tr}[AB]\) for positive operators \(A\) and \(B\) as \(\operatorname{Tr}[A^{1/2}BA^{1/2}]\). By positivity, this expression is always well defined and takes values in \([0,\infty]\). In particular, finiteness of \(\operatorname{Tr}[AB]\) under this convention requires only that the operator \(A^{1/2}BA^{1/2}\) is trace-class, and not necessarily \(AB\). \[\mathcal{F}(\Gamma)=\operatorname{Tr}[\mathcal{H}_{N}\Gamma]-\beta^{-1}S(\Gamma), \tag{1.3}\] where \(S(\Gamma)=-\operatorname{Tr}[\Gamma\ln(\Gamma)]\) denotes the von Neumann entropy and \(\beta>0\) is the inverse temperature of the system. The grand canonical free energy is defined as the minimum of the Gibbs free energy functional among states with expected number of particles equal to \(N\): \[F(\beta,N)=\min\left\{\mathcal{F}(\Gamma)\mid\Gamma\in\Sigma,\ \operatorname{Tr}[\mathcal{N}\Gamma]=N\right\}=-\beta^{-1}\log\operatorname{Tr}[\exp(-\beta(\mathcal{H}_{N}-\mu\mathcal{N}))]+\mu N. \tag{1.4}\] Here \(\mathcal{N}=\bigoplus_{n\geq 0}n\) denotes the number of particles operator on \(\mathfrak{F}\). The minimum in (1.4) is attained uniquely at the Gibbs state \[G=\frac{\exp(-\beta(\mathcal{H}_{N}-\mu\mathcal{N}))}{\operatorname{Tr}[\exp(-\beta(\mathcal{H}_{N}-\mu\mathcal{N}))]}, \tag{1.5}\] where the chemical potential \(\mu=\mu(N,\beta)\in\mathbb{R}\) is defined implicitly by the equation \(\operatorname{Tr}[\mathcal{N}G]=N\). ### 1.4 The ideal Bose gas on the torus Before we state our main result, we recall some well-known facts about the ideal Bose gas on the torus, that is, the system described by the Hamiltonian in (1.2) with \(V=0\). In this case the chemical potential \(\mu_{0}=\mu_{0}(\beta,N)<0\) can be defined by the equation \[N=\sum_{p\in\Lambda^{*}}\frac{1}{\exp(\beta(|p|^{2}-\mu_{0}(\beta,N)))-1}. \tag{1.6}\] The expected number of particles with momentum \(p=0\) is given by \[N_{0}(\beta,N)=\frac{1}{\exp(-\beta\mu_{0})-1} \tag{1.7}\] and satisfies \[\frac{N_{0}(\beta,N)}{N}\simeq\left[1-\left(\frac{\beta_{\mathrm{c}}}{\beta}\right)^{3/2}\right]_{+},\quad\text{ with }\quad\beta_{\mathrm{c}}=\frac{1}{4\pi}\left(\frac{N}{\zeta(3/2)}\right)^{-2/3} \tag{1.8}\] in the limit \(N\to\infty\). Here \(\zeta\) denotes the Riemann zeta function and \([\cdot]_{+}=\max\{\cdot,0\}\).
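As a quick illustration (ours, not part of the original text; it assumes NumPy and SciPy, and a hypothetical lattice cutoff \(K\) that must be large enough that modes with \(\beta|p|^{2}\gg 1\) beyond it contribute negligibly), equation (1.6) can be solved numerically for \(\mu_{0}\) by root finding, after which (1.7) yields the condensate fraction.

```python
import numpy as np
from scipy.optimize import brentq

def condensate_fraction(N, kappa, K=60):
    """Solve (1.6) for mu_0 < 0 on the truncated momentum lattice
    Lambda* = 2*pi*Z^3 (indices |k_i| <= K) and return N_0/N via (1.7)."""
    zeta_3_2 = 2.6123753486854883                    # zeta(3/2)
    beta = kappa * (N / zeta_3_2) ** (-2.0 / 3.0) / (4.0 * np.pi)   # beta = kappa*beta_c, cf. (1.8)
    k = np.arange(-K, K + 1)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    p2 = (2.0 * np.pi) ** 2 * (kx ** 2 + ky ** 2 + kz ** 2)
    def excess(mu0):                                 # Bose sum in (1.6) minus N
        return np.sum(1.0 / np.expm1(beta * (p2 - mu0))) - N
    mu0 = brentq(excess, -100.0 * N ** (2.0 / 3.0), -1e-12)   # mu_0 < 0
    return (1.0 / np.expm1(-beta * mu0)) / N         # N_0/N from (1.7)
```

For instance, for \(N=10^{4}\) and \(\kappa=2\) the result should be close to \(1-2^{-3/2}\approx 0.65\), in agreement with (1.8) up to finite-\(N\) corrections.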
Equation (1.8) implies that the ideal gas displays a BEC phase transition with critical inverse temperature \(\beta_{\mathrm{c}}\). More precisely, if \(\beta=\kappa\beta_{\mathrm{c}}\) with \(\kappa\in(1,\infty)\), we have \(N_{0}\simeq N(1-\kappa^{-3/2})\) and \(|\mu_{0}|\sim N^{-1/3}\). If, in contrast, \(\kappa\in(0,1)\) then \(N_{0}\sim 1\) and \(|\mu_{0}|\sim N^{2/3}\). The free energy of the ideal Bose gas reads \(F_{0}(\beta,N)=F_{0}^{\mathrm{BEC}}+F_{0}^{+}\), where \[F_{0}^{\mathrm{BEC}}=\frac{1}{\beta}\log(1-\exp(\beta\mu_{0}))+\mu_{0}N_{0} \tag{1.9}\] denotes the free energy of the condensate and \[F_{0}^{+}=\frac{1}{\beta}\sum_{p\in\Lambda_{+}^{*}}\log(1-\exp(-\beta(|p|^{2}-\mu_{0})))+\mu_{0}(N-N_{0}) \tag{1.10}\] that of the thermally excited particles. ### 1.5 Main results The following theorem is the main result of this article. **Theorem 1.1**.: _Let \(V:\mathbb{R}^{3}\to[0,\infty]\) be measurable, spherically symmetric and compactly supported. In the limit \(N\to\infty\), with \(\beta=\kappa\beta_{\mathrm{c}}\), \(\kappa\in(0,\infty)\) and \(\beta_{\mathrm{c}}\) in (1.8), the free energy in (1.4) satisfies_ \[\begin{split} F(\beta,N)\leq& F_{0}^{+}(\beta,N)+8\pi\mathfrak{a}_{N}N^{2}+\min\{F^{\mathrm{BEC}}-8\pi\mathfrak{a}_{N}N_{0}^{2},F_{0}^{\mathrm{BEC}}\}\\ &-\frac{1}{2\beta}\sum_{p\in\Lambda_{+}^{*}}\left[\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}-\log\left(1+\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}\right)\right]+\mathcal{O}(N^{11/18}),\end{split} \tag{1.11}\] _with \(N_{0}\), \(F_{0}^{\mathrm{BEC}}\) and \(F_{0}^{+}\) defined respectively in (1.7), (1.9) and (1.10), and_ \[F^{\mathrm{BEC}}=F^{\mathrm{BEC}}(\beta,N_{0},\mathfrak{a}_{N})=-\frac{1}{\beta}\log\left(\int_{\mathbb{C}}\exp\left(-\beta\left(4\pi\mathfrak{a}_{N}|z|^{4}-\mu|z|^{2}\right)\right)\,\mathrm{d}z\right)+\mu N_{0}(\beta,N). \tag{1.12}\] _Here, \(\mathrm{d}z=\pi^{-1}\mathrm{d}x\,\mathrm{d}y\), where \(\mathrm{d}x\,\mathrm{d}y\) denotes the Lebesgue measure on \(\mathbb{C}\), and \(\mu\) is chosen as the unique solution of the equation_ \[\int_{\mathbb{C}}|z|^{2}g(z)\,\mathrm{d}z=N_{0}(\beta,N), \tag{1.13}\] _with the probability density_ \[g(z)=\frac{\exp\left(-\beta\left(4\pi\mathfrak{a}_{N}|z|^{4}-\mu|z|^{2}\right)\right)}{\int_{\mathbb{C}}\exp\left(-\beta\left(4\pi\mathfrak{a}_{N}|z|^{4}-\mu|z|^{2}\right)\right)\,\mathrm{d}z}. \tag{1.14}\] The terms on the right-hand side of (1.11) appear in descending order according to their order of magnitude in the limit \(N\to\infty\). The free energy \(F_{0}^{+}(\beta,N)\) of the thermal cloud of the ideal gas is proportional to \(N^{5/3}\). The second term is a density-density interaction, which is of order \(N\). The third term represents the free energy of the interacting condensate. If \(\kappa>1\), it contributes two terms, one of order \(N\) and one of order \(N^{2/3}\log N\). For \(\kappa<1\) it is proportional to \(N^{2/3}\). Finally, the term on the second line of (1.11) is a correction to the free energy of the thermally excited particles coming from Bogoliubov theory. It is of order \(N^{2/3}\) in the presence of a macroscopic condensate occupation (\(\kappa>1\)), and of order \(N^{-4/3}\) if \(\kappa<1\). More details concerning the last two terms can be found in Remark 1.4 below. The following proposition, which is proved in [13, Proposition 1.2], allows us to simplify the right-hand side of (1.11) in the parameter regimes strictly above and strictly below the critical point.
**Proposition 1.2**.: _We consider the limit \(N\to\infty\), with \(\beta=\kappa\beta_{\mathrm{c}}\), \(\kappa\in(0,\infty)\) and \(\beta_{\mathrm{c}}\) in (1.8). The following statements hold for given \(\varepsilon>0\):_ 1. _Assume that_ \(N_{0}\gtrsim N^{5/6+\varepsilon}\) _and that_ \(\mathfrak{a}_{N}>0\)_. There exists a constant_ \(c>0\) _such that_ \[F^{\mathrm{BEC}}=4\pi\mathfrak{a}_{N}N_{0}^{2}+\frac{1}{2\beta}\log(4\beta \mathfrak{a}_{N})+\mathcal{O}(\exp(-cN^{\varepsilon})).\] (1.15) 2. _If_ \(N_{0}\lesssim N^{5/6-\varepsilon}\)_, then_ \[F^{\mathrm{BEC}}=-\frac{1}{\beta}\log(N_{0})-\frac{1}{\beta}+\mathcal{O}(N^{2/3 -2\varepsilon}).\] (1.16) Proposition 1.2 describes a transition in the behavior of the effective theory of the interacting condensate. If \(N_{0}\gtrsim N^{5/6+\varepsilon}\), the free energy in (1.12) is given, up to a small remainder, by the usual density-density interaction plus a contribution of order \(N^{2/3}\log(N)\), which is related to the free energy of the fluctuations of the number of condensed particles. We refer to part 4 of Remark 1.4 for further details. Both contributions are caused by the self-interaction of the condensate, as is evident from their dependence on the scattering length \(\mathfrak{a}_{N}\). If instead \(1\ll N_{0}\lesssim N^{5/6-\varepsilon}\), the free energy of the condensate (1.16) equals the one of its non-interacting counterpart, up to \(o(N^{2/3})\). The threshold arises from the fact that, when \(N_{0}\sim N^{5/6}\), the interaction energy \(4\pi\mathfrak{a}_{N}N_{0}^{2}\sim N^{2/3}\) of the condensate becomes much smaller than \(\beta^{-1}\) times the classical entropy \(S^{\mathrm{cl}}\) of \(g(z)\) (see (1.21)), which is always of order \(N^{2/3}\log N\) if \(N^{\varepsilon}\lesssim N_{0}\lesssim N^{5/6}\). In the transition regime \(N^{5/6-\varepsilon}\lesssim N_{0}\lesssim N^{5/6+\varepsilon}\), the free energy of the condensate does not have a simple form as in (1.15) or (1.16). With Proposition 1.2 at hand, one readily checks that the minimum on the right-hand side of (1.11) is attained by the first term if \(\kappa\in(1,\infty)\) (condensed phase) and by the second if \(\kappa\in(0,1)\) (non-condensed phase). This leads to the following reformulation of Theorem 1.1, which is better suited for a comparison with the existing literature. **Corollary 1.3**.: _Let \(V:\mathbb{R}^{3}\to[0,\infty]\) be a measurable, spherically symmetric and compactly supported function that is strictly positive on a set of positive measure. We consider the limit \(N\to\infty\), with \(\beta=\kappa\beta_{\mathrm{c}}\), \(\kappa\in(0,\infty)\) and \(\beta_{\mathrm{c}}\) in (1.8). If \(\kappa\in(1,\infty)\), the free energy (1.4) satisfies_ \[\begin{split} F(\beta,N)\leq& F_{0}^{+}(\beta,N)+4 \pi\mathfrak{a}_{N}(2N^{2}-N_{0}^{2})+\frac{1}{2\beta}\log(4\beta\mathfrak{a}_ {N})\\ &-\frac{1}{2\beta}\sum_{p\in\Lambda_{+}^{*}}\left[\frac{16\pi \mathfrak{a}_{N}N_{0}}{|p|^{2}}-\log\left(1+\frac{16\pi\mathfrak{a}_{N}N_{0}}{ |p|^{2}}\right)\right]+\mathcal{O}(N^{11/18})\end{split} \tag{1.17}\] _with \(N_{0}\) and \(F_{0}^{+}\) defined in (1.7) and (1.10), respectively. If \(\kappa\in(0,1)\) we have_ \[F(\beta,N)\leq F_{0}(\beta,N)+8\pi\mathfrak{a}_{N}N^{2}+\mathcal{O}(N^{11/18}), \tag{1.18}\] _with the free energy of the ideal gas \(F_{0}(\beta,N)\) above (1.9)._ At the critical point, corresponding to \(\kappa=1\), Proposition 1.2 does not apply, and the minimum in (1.11) is needed. We have the following remarks concerning the above results. 
We have the following remarks concerning the above results.

**Remark 1.4**.: 1. The first two terms in (1.17) were first identified for the dilute Bose gas in the thermodynamic limit in [55] (upper bound) and [50] (lower bound). An asymptotic expansion for the canonical free energy of the Bose gas in the GP limit was given, up to remainders of order \(o(N)\), in [27]. The same expansion is, however, expected to hold in the grand canonical setting, and it coincides with the first two terms on the right-hand sides of (1.17) and (1.18). The first upper bound capturing the third and fourth term on the right-hand side of (1.17) was proved in [13] for Bose gases interacting through sufficiently regular interaction potentials (of class \(L^{3}\)). In contrast to [13], we make no such regularity assumption, and our result applies, in particular, to the case of the hard-core interaction \[V(x)=\begin{cases}+\infty&\text{if }|x|\leq\mathfrak{a},\\ 0&\text{otherwise}.\end{cases} \tag{1.19}\] This generalization is the main contribution of the present article.
2. Our proof of Theorem 1.1 is based on a trial state that is similar to the one used in [13]. In particular, we use (up to technicalities) the same uncorrelated trial state. However, in the absence of integrability assumptions on the interaction potential, we need to describe the correlations between particles by a full Jastrow factor. As a consequence our proof of the upper bound in (1.11) is not an adaptation of the one in [13] and requires different arguments. One key step in our proof is a cancellation in the computation of the interaction energy that is similar in spirit to a cancellation observed in [6] in the computation of the Lee-Huang-Yang correction to the ground state energy of the hard-sphere gas. To see this cancellation, we exploit the fact that the interaction between the particles lives on a much smaller length scale than the thermal wavelength \(\beta^{1/2}\). Moreover, precise pointwise bounds for the reduced densities of our trial state without correlations and of its eigenfunctions are needed. In contrast, in [13] it was possible to implement correlations between the particles with a truncated quartic (in creation and annihilation operators) transformation in Fock space. In combination with the use of suitable momentum cutoffs in the trial state, this allowed the authors of [13] to obtain an upper bound for the free energy in a more direct way.
3. The third term on the right-hand side of (1.17) is the free energy of the fluctuations of the number of particles in the condensate. To explain this, we describe the condensate with a trial state of the form \[G_{0}=\int_{\mathbb{C}}|z\rangle\langle z|\varrho(z)\,\mathrm{d}z, \tag{1.20}\] where \(|z\rangle=\exp(za_{0}^{*}-\overline{z}a_{0})\Omega\) is the usual coherent state on the \(p=0\) mode and \(\varrho(z)\) is a probability density with respect to the measure \(\mathrm{d}z\) introduced below (1.12). This is motivated by the fact that a \(c\)-number substitution for one momentum mode is known to introduce only small corrections to the free energy, see for instance [42].
If we take \(4\pi\mathfrak{a}_{N}a_{0}^{*}a_{0}^{*}a_{0}a_{0}\) as the effective interaction Hamiltonian of the condensate (i.e., we replace the potential by a renormalized one proportional to the scattering length), we can write the free energy of \(G_{0}\) as \[\mathcal{F}^{\rm BEC}(G_{0})=4\pi\mathfrak{a}_{N}\int_{\mathbb{C}}\varrho(z)|z|^{4}\,{\rm d}z-\frac{1}{\beta}S(G_{0})\leq 4\pi\mathfrak{a}_{N}\int_{\mathbb{C}}\varrho(z)|z|^{4}\,{\rm d}z-\frac{1}{\beta}S^{\rm cl}(\varrho), \tag{1.21}\] where \(S^{\rm cl}(\varrho)=-\int_{\mathbb{C}}\varrho(z)\log(\varrho(z))\,{\rm d}z\) denotes the classical entropy of \(\varrho\). The inequality in (1.21) is a consequence of the Berezin-Lieb inequality, see [8, 9, 37]. If we minimize the right-hand side of (1.21) under the constraint \(\int_{\mathbb{C}}|z|^{2}\varrho(z)\,{\rm d}z=N_{0}\), we find \(F^{\rm BEC}\) in (1.12), with the unique minimizer \(g(z)\) in (1.14). Using Proposition 1.2, we thus see that \[\frac{1}{2\beta}\log(4\beta\mathfrak{a}_{N})=4\pi\mathfrak{a}_{N}\Big{(}\int_{\mathbb{C}}|z|^{4}g(z)\,{\rm d}z-\Big{(}\int_{\mathbb{C}}|z|^{2}g(z)\,{\rm d}z\Big{)}^{2}\Big{)}-\frac{1}{\beta}S^{\rm cl}(g)+\mathcal{O}(\exp(-cN^{\varepsilon})) \tag{1.22}\] if \(N_{0}\gtrsim N^{5/6+\varepsilon}\) for some \(\varepsilon>0\). That is, according to this effective theory, the third term on the right-hand side of (1.17) indeed equals the free energy of the fluctuations of the number of particles in the condensate.
4. While the variance of the number of particles in the condensate of the ideal gas is of order \(N^{2}\), for the Gibbs distribution \(g\) we have \[\text{Var}_{g}(|z|^{2})=\int_{\mathbb{C}}|z|^{4}g(z)\,{\rm d}z-\Big{(}\int_{\mathbb{C}}|z|^{2}g(z)\,{\rm d}z\Big{)}^{2}\sim N^{5/3}, \tag{1.23}\] provided \(\kappa>1\). This decrease of the fluctuations of the number of condensed particles caused by the repulsive interaction between them is a well-known effect, see e.g. [20, 24]. Motivated by the recent experimental realization [49] of a system with grand canonical number statistics, a discrete version of \(g\) has been used in [53] to compute the size of the fluctuations of the number of condensed particles for a trapped gas. The computations in [53] could be rigorously justified by showing that \(g(z)\) approximates \(\text{Tr}[|z\rangle\langle z|G]\), where \(|z\rangle\) is the coherent state defined below (1.20) and \(G\) is the interacting Gibbs state in (1.5). This interesting mathematical problem is, however, beyond the scope of our present investigation.
5. The term on the second line of (1.11) is a correction to the free energy of the thermally excited particles, which is related to Bogoliubov theory. This can be seen with the following heuristic computation. In the first step we assume, for the sake of simplicity, that \(V\in L^{1}(\Lambda)\).
We start by writing the Hamiltonian in (1.2) in terms of the usual creation and annihilation operators \(a_{p}^{*},a_{p}\) of a particle with momentum \(p\in\Lambda^{*}\) as \[\mathcal{H}_{N}=\sum_{p\in\Lambda^{*}_{+}}|p|^{2}a_{p}^{*}a_{p}+\sum_{p,q,r\in\Lambda^{*}}\widehat{V}_{N}(r)a_{p+r}^{*}a_{q}^{*}a_{p}a_{q+r}.\] Replacing \(a_{0}^{*},a_{0}\) with \(\sqrt{N_{0}}\), the potential \(\widehat{V}_{N}(r)\) with its renormalized version \(4\pi\mathfrak{a}_{N}\), and neglecting cubic and quartic terms in \(a_{p}^{*},a_{p}\), we arrive at the Bogoliubov Hamiltonian \[\mathcal{H}^{\rm Bog}=\sum_{p\in\Lambda^{*}_{+}}|p|^{2}a_{p}^{*}a_{p}+4\pi\mathfrak{a}_{N}N_{0}\sum_{p\in\Lambda^{*}_{+}}\Big{(}2a_{p}^{*}a_{p}+a_{p}^{*}a_{-p}^{*}+a_{p}a_{-p}\Big{)}.\] A careful analysis shows that the grand potential \(\Phi^{\rm Bog}(\beta,\mu_{0})\) associated to \(\mathcal{H}^{\rm Bog}-\mu_{0}\mathcal{N}\) with the chemical potential \(\mu_{0}\) in (1.6) satisfies (compare to Lemma 5.1) \[\begin{split}\Phi^{\mathrm{Bog}}(\beta,\mu_{0})=&\frac{1}{\beta}\sum_{p\in\Lambda_{+}^{*}}\log\left(1-e^{-\beta\sqrt{|p|^{2}-\mu_{0}}\sqrt{|p|^{2}-\mu_{0}+16\pi\mathfrak{a}_{N}N_{0}}}\right)\\ =&\frac{1}{\beta}\sum_{p\in\Lambda_{+}^{*}}\log\left(1-e^{-\beta(|p|^{2}-\mu_{0})}\right)+8\pi\mathfrak{a}_{N}N_{0}(N-N_{0})\\ &-\frac{1}{2\beta}\sum_{p\in\Lambda_{+}^{*}}\left[\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}-\log\left(1+\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}\right)\right]+o(N^{2/3}).\end{split}\] The first term on the right-hand side contributes to \(F_{0}^{+}\), the second term to the density-density interaction \(4\pi\mathfrak{a}_{N}(2N^{2}-N_{0}^{2})\), and the third term appears on the second line of (1.11).
6. Let us denote by \(H_{N}^{(N)}\) the restriction of \(\mathcal{H}_{N}\) to the \(N\)-particle sector of Fock space (see (1.2)), and by \(E_{0}\) the ground state energy of \(H_{N}^{(N)}\). It has been shown in [12] that the eigenvalues \(E\) of \(H_{N}^{(N)}-E_{0}\) that satisfy \(E\ll N^{1/8}\) are well approximated by those of a Bogoliubov Hamiltonian. If we compare this threshold to the energy scale \(\beta^{-1}\sim N^{2/3}\), which represents the energy per particle in our system, we see that the results in [12] are far from being sufficient to draw conclusions on the free energy in our setting.
7. If we replace the torus \(\Lambda=[-1/2,1/2]^{3}\) by \(\Lambda_{L}=[-L/2,L/2]^{3}\) with fixed \(L>0\), Theorem 1.1 and a scaling argument imply a similar upper bound for the grand canonical free energy in this setting. In this case, the term on the second line of (1.11) reads \[-\frac{1}{2\beta}\sum_{p\in\frac{2\pi}{L}\mathbb{Z}^{3}\backslash\{0\}}\left[\frac{16\pi\mathfrak{a}_{N}\varrho_{0}(\beta,N,L)}{|p|^{2}}-\log\left(1+\frac{16\pi\mathfrak{a}_{N}\varrho_{0}(\beta,N,L)}{|p|^{2}}\right)\right].\] If we replace \(\mathfrak{a}_{N}\) by \(\mathfrak{a}\), divide the above expression by \(L^{3}\) and take a formal thermodynamic limit (i.e. letting \(N,L\to\infty\) with \(\varrho=N/L^{3}\) fixed), we obtain \[-\frac{1}{2\beta(2\pi)^{3}}\int_{\mathbb{R}^{3}}\left[\frac{16\pi\mathfrak{a}\varrho_{0}}{|p|^{2}}-\log\left(1+\frac{16\pi\mathfrak{a}\varrho_{0}}{|p|^{2}}\right)\right]\,\mathrm{d}p=-\frac{16\sqrt{\pi}}{3\beta}(\mathfrak{a}\varrho_{0})^{3/2}.\] The right-hand side has been conjectured to appear in the asymptotic expansion of the specific free energy in the dilute limit, see [48, Theorem 11].
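The closed form of this integral follows from a short computation, which we record here for the reader's convenience (it is standard and not taken from the paper). Substituting \(p=(16\pi\mathfrak{a}\varrho_{0})^{1/2}q\) gives \[\int_{\mathbb{R}^{3}}\left[\frac{16\pi\mathfrak{a}\varrho_{0}}{|p|^{2}}-\log\left(1+\frac{16\pi\mathfrak{a}\varrho_{0}}{|p|^{2}}\right)\right]\mathrm{d}p=(16\pi\mathfrak{a}\varrho_{0})^{3/2}\,4\pi\int_{0}^{\infty}\left[1-q^{2}\log\left(1+q^{-2}\right)\right]\mathrm{d}q,\] and the one-dimensional integral equals \(\pi/3\), as one checks from the antiderivative \(\int_{0}^{R}q^{2}\log(1+q^{-2})\,\mathrm{d}q=\frac{R^{3}}{3}\log(1+R^{-2})+\frac{2}{3}(R-\arctan R)\) in the limit \(R\to\infty\). Multiplying by \(-\frac{1}{2\beta(2\pi)^{3}}\) then yields \(-\frac{16\sqrt{\pi}}{3\beta}(\mathfrak{a}\varrho_{0})^{3/2}\), as stated above.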
8. The minimum on the right-hand side of (1.11) is needed, because \(F^{\mathrm{BEC}}\) does not accurately describe the free energy of the condensate if \(N_{0}\sim 1\), see (5.13). This is related to the fact that we approximate the discrete random variable associated with the operator \(a_{0}^{*}a_{0}\) with a continuous one.
9. We state and prove Theorem 1.1 with \(\kappa\in(0,\infty)\) fixed. However, a straightforward adaptation of our proof applies to the case in which \(\kappa\) depends on \(N\), as long as \(\kappa\gtrsim 1\). In particular, it is possible to take a zero-temperature limit (corresponding to \(\kappa\to\infty\)).
10. We expect the upper bound in Theorem 1.1 to be sharp. That is, we expect it to be possible to prove a matching lower bound, up to remainders of order \(o(N^{2/3})\).
11. A similar expansion as in (1.17) is expected to hold for the interacting canonical free energy if \(F_{0}^{+}\) is replaced by the canonical free energy of the ideal gas and \(F^{\mathrm{BEC}}\) by \(4\pi\mathfrak{a}_{N}N_{0}^{2}\). The reason for the latter replacement lies in the fact that, in the canonical ensemble, the variance of the number of particles in the condensate is expected to be of the order \(N^{4/3}\) if \(\beta=\kappa\beta_{\mathrm{c}}\) with \(\kappa>1\) (this is the order of magnitude of these fluctuations in the ideal gas). When we compare this to (1.22) and (1.23), we see that these fluctuations are too small (when compared to \(\beta^{-1}\) times the entropy) to generate a contribution to the free energy of the order \(N^{2/3}\) or \(N^{2/3}\log N\). We refer the reader to [23] for a detailed analysis of the condensate fluctuations in the canonical ideal gas.

### Outline of the article

To prove Theorem 1.1, we apply a trial state argument with two distinct trial states corresponding to the regimes of high and low occupation of the condensate, respectively. The analysis of the former parameter regime is considerably more difficult, and we therefore focus on it. The adaptation of the proof to the (simpler) case of low condensate occupation is discussed at the end of Section 5.

In Section 2 we define the trial state and we prove some of its properties that are needed for the computation of the free energy. In particular, we prove pointwise bounds for the two- and four-body reduced densities of our trial state and of its eigenfunctions. These estimates determine their leading order behavior as \(N\to\infty\). In Section 3 we provide an upper bound for the energy. One main step in our proof is the use of the pointwise bounds for the reduced densities from Section 2 to establish in Section 3.2 a cancellation between the numerator and the denominator of the effective interaction energy. An upper bound for the entropy is provided in Section 4. In Section 5 we collect the results from Sections 3 and 4 and give a proof of Theorem 1.1. To not disrupt the main line of the argument, we defer some technical lemmas to the Appendix. In Appendix A we recall some properties of the solution to the scattering equation. Appendix B contains useful estimates for the effective chemical potential in the BEC. Finally, the expected number of particles in our trial state is computed in Appendix C.
## 2 The trial state

In this section we define our trial state, which consists of the following parts: (a) the Gibbs state of a temperature-dependent Bogoliubov Hamiltonian that describes the thermally excited particles, (b) a suitable convex combination of coherent states describing the BEC, and (c) a correlation structure given by a Jastrow factor. In Section 2.3 we state and prove several lemmas that are needed in the computation of the free energy of our trial state. Before constructing the trial state, we recall some definitions concerning the formalism of second quantization, which also allows us to set some notation.

### Second quantization

An important class of operators on \(\mathfrak{F}\) is given by the creation and annihilation operators \(a^{*}(f)\), \(a(f)\) of a one-particle wave function \(f\). They satisfy the canonical commutation relations (CCR) \[[a(f),a(g)^{*}]=\langle f,g\rangle,\qquad[a(f),a(g)]=0=[a^{*}(f),a^{*}(g)]\] for every \(f,g\in L^{2}(\Lambda)\). In the special case \(f(x)=\varphi_{p}(x)=e^{\mathrm{i}p\cdot x}\) with \(p\in 2\pi\mathbb{Z}^{3}\) we write \(a_{p}=a(\varphi_{p})\). We also introduce the operator-valued distributions \(a^{*}_{x},a_{x}\) creating and annihilating a particle at a point \(x\in\mathbb{R}^{3}\), respectively, which satisfy the CCR \([a_{x},a^{*}_{y}]=\delta(x-y)\), \([a_{x},a_{y}]=0=[a^{*}_{x},a^{*}_{y}]\) for \(x,y\in\Lambda\). Here \(\delta(x)\) denotes Dirac's delta distribution with unit mass at the origin.

To be able to distinguish between the condensate and the thermally excited particles, we introduce the Fock spaces \[\mathfrak{F}_{0}=\bigoplus_{n\geq 0}\mathrm{Span}\{\varphi_{0}\}^{\otimes_{\mathrm{s}}^{n}},\qquad\mathfrak{F}_{+}=\bigoplus_{n\geq 0}L_{\perp}^{2}(\Lambda)^{\otimes_{\mathrm{s}}^{n}},\] where \(\varphi_{0}\) is the (normalized) constant function on \(\Lambda\), and \(L_{\perp}^{2}(\Lambda)\) denotes the orthogonal complement of \(\mathrm{Span}\{\varphi_{0}\}\) in \(L^{2}(\Lambda)\). We denote by \(\Omega_{0}\) and \(\Omega_{+}\) the vacuum vectors in \(\mathfrak{F}_{0}\) and \(\mathfrak{F}_{+}\), respectively. The full Fock space can be identified with the tensor product \(\mathfrak{F}_{0}\otimes\mathfrak{F}_{+}\) thanks to the unitary equivalence \(\mathfrak{F}=\mathcal{U}(\mathfrak{F}_{0}\otimes\mathfrak{F}_{+})\) defined by \(\Omega=\mathcal{U}(\Omega_{0}\otimes\Omega_{+})\), where \(\Omega\) denotes the vacuum vector in \(\mathfrak{F}\), and \[\begin{split}\mathcal{U}^{*}a(\varphi_{0}\oplus 0)\mathcal{U}=&a(\varphi_{0})\otimes\mathds{1},\\ \mathcal{U}^{*}a(0\oplus f)\mathcal{U}=&\mathds{1}\otimes a(f),\end{split} \tag{2.1}\] for every \(f\in L^{2}(\Lambda)\).

### Definition of the trial state

We are now prepared to give the definition of our trial state. On the excitation Fock space \(\mathfrak{F}_{+}\), we define the Bogoliubov Hamiltonian \[\mathcal{H}_{\mathrm{B}}(z)=\sum_{p\in\Lambda_{+}^{*}}(|p|^{2}-\mu_{0})a_{p}^{*}a_{p}+4\pi\mathfrak{a}_{N}N_{0}\sum_{p\in P_{\mathrm{B}}}\left[2a_{p}^{*}a_{p}+\frac{z^{2}}{|z|^{2}}a_{p}^{*}a_{-p}^{*}+\frac{\overline{z}^{2}}{|z|^{2}}a_{p}a_{-p}\right] \tag{2.2}\] with \(z\in\mathbb{C}\), \(\mu_{0}(\beta,N)\) in (1.6) and \(N_{0}(\beta,N)\) in (1.7). It is important to note that \(\mathcal{H}_{\mathrm{B}}(z)\) depends on \(\beta\) via the latter two quantities.
The momentum set \(P_{\mathrm{B}}\) in the second sum is defined by \[P_{\mathrm{B}}=\{p\in\Lambda_{+}^{*}\mid|p|\leq N^{\delta_{\mathrm{Bog}}}\}\] with some \(\delta_{\mathrm{Bog}}>0\) that will be chosen later (independently of \(N\)). The Hamiltonian in (2.2) can be diagonalized with a (unitary) Bogoliubov transformation \(T_{z}\), that is, \[T_{z}^{*}\mathcal{H}_{\mathrm{B}}(z)T_{z}=\mathcal{H}^{\mathrm{diag}}=E_{0}+\sum_{p\in\Lambda_{+}^{*}}\varepsilon(p)a_{p}^{*}a_{p}, \tag{2.3}\] with \(E_{0},\varepsilon(p)\in\mathbb{R}\) (precise definitions will be given later in Section 2.3.1). The thermally excited particles will be described by the following Gibbs states related to \(\mathcal{H}_{\mathrm{B}}(z)\): \[G^{\mathrm{diag}}:=\frac{\exp(-\beta\mathcal{H}^{\mathrm{diag}})}{\mathrm{Tr}_{\mathfrak{F}_{+}}\left[\exp(-\beta\mathcal{H}^{\mathrm{diag}})\right]},\qquad\widetilde{G}^{\mathrm{diag}}:=\frac{\mathds{P}_{\widetilde{c}}\exp(-\beta\mathcal{H}^{\mathrm{diag}})}{\mathrm{Tr}_{\mathfrak{F}_{+}}\left[\mathds{P}_{\widetilde{c}}\exp(-\beta\mathcal{H}^{\mathrm{diag}})\right]}. \tag{2.4}\] Here the spectral projection \(\mathds{P}_{\widetilde{c}}\) is defined as \[\mathds{P}_{\widetilde{c}}\coloneqq\mathds{1}_{\{\mathcal{N}^{<}\leq\widetilde{c}\beta^{-1}N^{\delta_{\mathrm{Bog}}}\}}\mathds{1}_{\{\mathcal{N}^{>}\leq\widetilde{c}N\}}, \tag{2.5}\] with some \(\widetilde{c}>1\) to be specified later and with the number operators \[\mathcal{N}^{<}\coloneqq\sum_{p\in P_{\mathrm{B}}}a_{p}^{*}a_{p}\quad\text{ and }\quad\mathcal{N}^{>}\coloneqq\sum_{p\in(\Lambda_{+}^{*}\setminus P_{\mathrm{B}})}a_{p}^{*}a_{p}. \tag{2.6}\] Here and in the following we introduce all states once with and once without a particle number cutoff. This is motivated by the fact that both objects appear frequently in the computation of the free energy of our final trial state. To make the identification of quantities with cutoff easier, we always denote them with a tilde.

The states in (2.4), when transformed by \(T_{z}\), are denoted by \[G(z)\coloneqq\frac{\exp(-\beta\mathcal{H}_{\mathrm{B}}(z))}{\operatorname{Tr}_{\mathfrak{F}_{+}}\left[\exp(-\beta\mathcal{H}_{\mathrm{B}}(z))\right]}=T_{z}G^{\mathrm{diag}}T_{z}^{*}\quad\text{ and }\quad\widetilde{G}(z)\coloneqq T_{z}\widetilde{G}^{\mathrm{diag}}T_{z}^{*}. \tag{2.7}\] Finally, the uncorrelated trial states read \[\Gamma_{0}=\mathcal{U}\left(\int_{\mathbb{C}}\zeta(z)|z\rangle\langle z|\otimes G(z)\,\mathrm{d}z\right)\mathcal{U}^{*},\qquad\widetilde{\Gamma}_{0}=\mathcal{U}\left(\int_{\mathbb{C}}\zeta(z)|z\rangle\langle z|\otimes\widetilde{G}(z)\,\mathrm{d}z\right)\mathcal{U}^{*}, \tag{2.8}\] with \(\mathcal{U}\) in (2.1) and the coherent state \(|z\rangle=W_{z}\Omega_{0}=\exp(za^{*}(\varphi_{0})-\overline{z}a(\varphi_{0}))\Omega_{0}\in\mathfrak{F}_{0}\). The probability density \(\zeta(z)\) on \(\mathbb{C}\) with respect to the measure \(\,\mathrm{d}z=\,\mathrm{d}x\,\mathrm{d}y/\pi\) with \(z=x+\mathrm{i}y\) is given by \[\zeta(z)=\frac{\mathds{1}_{\{|z|^{2}\leq\widetilde{c}N\}}\exp(-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2}))}{\int_{\{|z|^{2}\leq\widetilde{c}N\}}\exp(-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2}))\,\mathrm{d}z}. \tag{2.9}\] The chemical potential \(\widetilde{\mu}=\widetilde{\mu}(N)\in\mathbb{R}\) will be chosen later such that our final trial state has the correct particle number. All particle number cutoffs we have introduced so far are needed for technical reasons.
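Since \(\mathcal{H}^{\mathrm{diag}}\) in (2.3) is a sum of commuting single-mode operators, the Gibbs state \(G^{\mathrm{diag}}\) has a fully explicit structure; we record it here for orientation (a standard fact, not spelled out in this form in the text). The partition function factorizes as \[\mathrm{Tr}_{\mathfrak{F}_{+}}\big{[}\exp(-\beta\mathcal{H}^{\mathrm{diag}})\big{]}=e^{-\beta E_{0}}\prod_{p\in\Lambda_{+}^{*}}\frac{1}{1-e^{-\beta\varepsilon(p)}},\] and the eigenfunctions of \(G^{\mathrm{diag}}\) can be chosen as symmetrized products of plane waves, labeled by occupation numbers \(\{n_{p}\}_{p\in\Lambda_{+}^{*}}\), with eigenvalues \[\lambda_{\{n_{p}\}}=\prod_{p\in\Lambda_{+}^{*}}\big{(}1-e^{-\beta\varepsilon(p)}\big{)}e^{-\beta\varepsilon(p)n_{p}},\] in which the constant \(E_{0}\) cancels. This is precisely the basis \(\{\Psi_{\alpha}\}_{\alpha}\) that is used below to implement the correlation structure.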
The restriction of the momenta in the sum in the interaction terms in (2.2) to the set \(P_{\mathrm{B}}\) is very convenient from a mathematical point of view and still allows us to obtain the term in the second line in (1.11). This is possible because \(\varepsilon(p)\simeq p^{2}-\mu_{0}\) for \(|p|\gg 1\). In [13] the state \(\Gamma_{0}\) in (2.8) has been used as uncorrelated trial state, while we are forced to work with \(\widetilde{\Gamma}_{0}\) instead. This is related to the fact that we implement correlations between the particles differently than in [13].

It remains to add the microscopic correlations induced by the interaction \(V_{N}\) to our trial state. To that end, we first apply the spectral theorem and write \(G^{\mathrm{diag}}=\sum_{\alpha\in\mathcal{A}}\lambda_{\alpha}|\Psi_{\alpha}\rangle\langle\Psi_{\alpha}|\). We assume that each \(\Psi_{\alpha}\) is a symmetrized product of plane waves with a definite particle number. This choice is possible because of (2.3), and it is important for our analysis. We highlight that \(\{\Psi_{\alpha}\}_{\alpha}\) is a basis that jointly diagonalizes \(\mathcal{H}^{\mathrm{diag}}\), \(\mathcal{N}\), \(\mathcal{N}^{<}\) and \(\mathcal{N}^{>}\), i.e. \[\mathcal{H}^{\mathrm{diag}}\Psi_{\alpha}=E_{\alpha}\Psi_{\alpha},\qquad\mathcal{N}\Psi_{\alpha}=N_{\alpha}\Psi_{\alpha},\qquad\mathcal{N}^{<}\Psi_{\alpha}=N_{\alpha}^{<}\Psi_{\alpha},\qquad\mathcal{N}^{>}\Psi_{\alpha}=N_{\alpha}^{>}\Psi_{\alpha}, \tag{2.10}\] where \(N_{\alpha}=N_{\alpha}^{<}+N_{\alpha}^{>}\). In this representation the particle number cutoff in the definition of \(\widetilde{G}^{\mathrm{diag}}\) amounts to restricting the sum over \(\alpha\) to the set \[\widetilde{\mathcal{A}}=\{\alpha\in\mathcal{A}\mid N_{\alpha}^{<}\leq\widetilde{c}\beta^{-1}N^{\delta_{\mathrm{Bog}}}\text{ and }N_{\alpha}^{>}\leq\widetilde{c}N\} \tag{2.11}\] and to normalizing by the factor \(\kappa_{0}\coloneqq\sum_{\alpha\in\widetilde{\mathcal{A}}}\lambda_{\alpha}\). The eigenvalues of \(\widetilde{G}^{\mathrm{diag}}\) therefore read \(\widetilde{\lambda}_{\alpha}=\kappa_{0}^{-1}\lambda_{\alpha}\).

We define the correlation structure in terms of the solution \(f(|x|)\) to the zero energy scattering equation \(\Delta f(|x|)=V_{N}(x)f(|x|)/2\) in \(\mathbb{R}^{3}\) with the boundary condition \(\lim_{|x|\to\infty}f(|x|)=1\). Let \[f_{\ell}(x)=\begin{cases}f(|x|)/f(\ell)&\text{ for }|x|<\ell,\\ 1&\text{ for }|x|\geq\ell,\end{cases} \tag{2.12}\] where the parameter \(\ell>0\) is required to be strictly larger than the radius of the support of \(V_{N}\). In the following, we assume that \(\ell\) is at least twice as large as that radius. This, in particular, implies \(\ell\geq 2\mathfrak{a}_{N}\). We also define the operator \(F\) on \(\mathfrak{F}\) by \[(F\Psi)^{(n)}(x_{1},...,x_{n})=F_{n}(x_{1},...,x_{n})\Psi^{(n)}(x_{1},...,x_{n})\quad\text{with}\quad F_{n}(x_{1},...,x_{n})=\prod_{1\leq i<j\leq n}f_{\ell}(x_{i}-x_{j}). \tag{2.13}\] That is, \(F\) multiplies each \(n\)-particle component of a Fock space vector \(\Psi\) by a Jastrow factor. This should be compared with [27, 29, 30, 35, 46]. Finally, our trial state with correlations is given by \[\Gamma=\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\mathcal{U}\phi_{z,\alpha}\|^{2}}|F\mathcal{U}\phi_{z,\alpha}\rangle\langle F\mathcal{U}\phi_{z,\alpha}|\,\mathrm{d}z \tag{2.14}\] with \(\mathcal{U}\) in (2.1) and \(\phi_{z,\alpha}=|z\rangle\otimes T_{z}\Psi_{\alpha}\).
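In the hard-core case (1.19), everything entering (2.12)-(2.13) is explicit, since the zero-energy scattering solution is the textbook profile \(f(r)=(1-\mathfrak{a}/r)_{+}\). The short Python sketch below (with illustrative parameter values; it is not part of the construction) implements \(f_{\ell}\) and evaluates the Jastrow factor \(F_{n}\) for a random configuration of points on the unit torus:

```python
import numpy as np

# Illustration for the hard-core potential (1.19): the zero-energy scattering
# solution is f(r) = (1 - a/r)_+, so the truncated profile f_ell in (2.12) and
# the Jastrow factor F_n in (2.13) can be written down explicitly. The values
# of a and ell below are illustrative (the construction requires ell >= 2a).

def f_ell(r, a, ell):
    """Truncated scattering profile (2.12) for the hard-core potential."""
    f = np.clip(1.0 - a / np.maximum(r, 1e-300), 0.0, None)  # (1 - a/r)_+
    return np.where(r < ell, f / (1.0 - a / ell), 1.0)

def jastrow(points, a, ell):
    """Jastrow factor F_n(x_1, ..., x_n) of (2.13) on the unit torus."""
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    diff -= np.round(diff)                # nearest-image difference on the torus
    r = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(n, k=1)          # each pair i < j exactly once
    return np.prod(f_ell(r[iu], a, ell))

rng = np.random.default_rng(0)
x = rng.random((50, 3))                   # 50 particles in the unit box
print(jastrow(x, a=1e-4, ell=5e-4))
# The factor equals 1 unless some pair is closer than ell, and vanishes as soon
# as two particles come within distance a of each other.
```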
For \(\Gamma\) to be an admissible trial state in the Gibbs variational principle, we require that \(\mathrm{Tr}[\mathcal{N}\Gamma]=N\) holds. This is not a trivial matter because (a) the chemical potential in the definition of \(\widetilde{G}(z)\) is fixed and (b) the correlation structure changes the particle number of \(\Gamma\) with respect to that of \(\widetilde{\Gamma}_{0}\) because \(\widetilde{\Gamma}_{0}\) and \(\mathcal{N}\) do not commute. Under suitable assumptions on the parameters, the following lemma guarantees the existence of \(\widetilde{\mu}\in\mathbb{R}\) such that \(\Gamma\) is an admissible trial state. **Lemma 2.1**.: _We consider the combined limit \(N\to\infty\), \(\beta\gtrsim\beta_{\mathrm{c}}\) with \(\beta_{\mathrm{c}}\) in (1.8) and assume \(N_{0}\geq N^{2/3}\), \(\delta_{\mathrm{Bog}}<1/12\) as well as that \(\widetilde{c}\) is sufficiently large. Then there are constants \(c,M>0\) such that if \(2\mathfrak{a}_{N}\leq\ell\leq cN^{-7/12}\) and \(N\geq M\) the following holds: There exists \(\widetilde{\mu}\in\mathbb{R}\) such that the state \(\Gamma\) in (2.14) satisfies \(\mathrm{Tr}[\mathcal{N}\Gamma]=N\) and we have the bound_ \[\big{|}\operatorname{Tr}[\mathcal{N}\Gamma]-\mathrm{Tr}[\mathcal{N}\widetilde{ \Gamma}_{0}]\big{|}\lesssim N^{3}\ell^{4}+N^{1+\delta_{\mathrm{Bog}}}\ell^{2} (\beta^{-1}+1) \tag{2.15}\] _with \(\widetilde{\Gamma}_{0}\) in (2.8)._ In the proof of the above lemma we use a simpler version of a cancellation that we observe in the computation of the energy of \(\Gamma\) in the proof of Proposition 3.6. To not dilute the main line of the argument, we therefore defer it to Appendix C. **Remark 2.2**.: As is apparent from the assumption in the above lemma, the trial state \(\Gamma\) is only well defined for inverse temperatures such that \(N_{0}(\beta,N)\geq N^{2/3}\) holds. To obtain a proof of Theorem 1.1 for all inverse temperatures satisfying \(\beta\gtrsim\beta_{\mathrm{c}}\), we use a second and much simpler trial state in the parameter regime defined by \(N_{0}(\beta,N)\leq N^{2/3}\). More details can be found in Section 5. **Remark 2.3**.: The way correlations are implemented in [13] differs from our approach in two ways. The first difference is that, instead of a Jastrow factor, the authors of [13] use a certain quartic (in creation and annihilation operators) transformation. Up to technicalities, their transformation amounts to multiplying an uncorrelated \(n\)-particle wave function by a factor \(1-\sum_{1\leq i<j\leq n}w(x_{i}-x_{j})\), where \(w(x)=1-f_{\ell}(x)\). This approach is very convenient from a computational perspective, but it suffers from the disadvantage that the resulting trial state is not an element of the form domain of \(\mathcal{H}_{N}\), if \(V\) is chosen as in (1.19). The trial state \(\Gamma\) in (2.14) does not have this problem. The second main difference between our approaches lies in the level at which correlations are introduced. In [13] correlations are added to the eigenfunctions of \(\Gamma_{0}\). With this choice, it is easier (when compared to our case) to estimate the effect of the correlation structure on the entropy of the trial state. However, the eigenfunctions of \(\Gamma_{0}\) have a somewhat complicated structure, which makes it difficult to access useful properties of the eigenfunctions of the Gibbs state \(G(z)\) in computing the energy and number of particles of the trial state. 
This is not a problem in [13], thanks to the special form of the correlation structure chosen there (however, it causes additional difficulties in proving the existence of a chemical potential \(\widetilde{\mu}\) such that the trial state has the correct particle number, see the discussion below Lemma 2.1 in [13]). In contrast, we add correlations to the eigenfunctions of the state \(|z\rangle\langle z|\otimes G(z)\). This allows us to harness properties of a (suitably chosen) basis of eigenfunctions of \(G(z)\), which turns out to be crucial in computing the energy of our trial state. We therefore had to find a different way to estimate the influence of the correlations on the entropy. More details can be found in Section 4.

To simplify the notation we will, by a slight abuse of notation, drop the isomorphism \(\mathcal{U}\) from all formulas, and identify vectors in \(\mathfrak{F}\) and \(\mathfrak{F}_{0}\otimes\mathfrak{F}_{+}\). In the remaining part of the article we prove an upper bound for the free energy of \(\Gamma\) that implies Theorem 1.1.

### Properties of the trial state

To compute the free energy of our trial state, precise information about its properties is needed. In this section we prove the relevant statements to not interrupt the main line of the argument later. The following lemma, which allows us to estimate momentum sums in terms of integrals, will be used frequently in our analysis. A proof can be found in [27, Lemma 3.3].

**Lemma 2.4**.: _Let \(f:[0,\infty)\to\mathbb{R}\) be nonnegative and monotone decreasing, and let \(\lambda\geq 0\). Then we have_ \[\sum_{p\in\Lambda_{+}^{*}}f(|p|)\mathds{1}_{[\lambda,\infty)}(|p|)\leq(2\pi)^{-3}\int_{|p|\geq[\lambda-2\pi\sqrt{3}]_{+}}f(|p|)\left(1+\frac{2\pi}{|p|}+\frac{6\pi}{|p|^{2}}\right)\,\mathrm{d}p.\]

In the following subsection we recall some properties of the operators \(T_{z}\) and \(W_{z}\).

#### 2.3.1 The Bogoliubov and Weyl transformations

For \(p\in\Lambda_{+}^{*}\) we define \[\tau_{p}\coloneqq-\frac{1}{4}\log\left[1+\frac{16\pi\mathfrak{a}_{N}N_{0}\mathds{1}_{P_{\mathrm{B}}}(p)}{|p|^{2}-\mu_{0}}\right] \tag{2.16}\] as well as the Bogoliubov transformation \[T_{z}\coloneqq\exp\Big{(}\sum_{p\in\Lambda_{+}^{*}}\tau_{p}\Big{(}\frac{z^{2}}{|z|^{2}}a_{p}^{*}a_{-p}^{*}-\mathrm{h.c.}\Big{)}\Big{)}. \tag{2.17}\] Its action on the creation and annihilation operators is given by \[T_{z}^{*}a_{p}^{*}T_{z}=u_{p}a_{p}^{*}+v_{p}\frac{\overline{z}^{2}}{|z|^{2}}a_{-p},\qquad T_{z}^{*}a_{p}T_{z}=u_{p}a_{p}+v_{p}\frac{z^{2}}{|z|^{2}}a_{-p}^{*}, \tag{2.18}\] with \(u_{p}\coloneqq\cosh(\tau_{p})\) and \(v_{p}\coloneqq\sinh(\tau_{p})\), that is, \[\begin{split} u_{p}=&\frac{1}{2}\left(\frac{|p|^{2}-\mu_{0}}{|p|^{2}-\mu_{0}+16\pi\mathfrak{a}_{N}N_{0}\mathds{1}_{P_{\mathrm{B}}}(p)}\right)^{1/4}+\frac{1}{2}\left(\frac{|p|^{2}-\mu_{0}}{|p|^{2}-\mu_{0}+16\pi\mathfrak{a}_{N}N_{0}\mathds{1}_{P_{\mathrm{B}}}(p)}\right)^{-1/4},\\ v_{p}=&\frac{1}{2}\left(\frac{|p|^{2}-\mu_{0}}{|p|^{2}-\mu_{0}+16\pi\mathfrak{a}_{N}N_{0}\mathds{1}_{P_{\mathrm{B}}}(p)}\right)^{1/4}-\frac{1}{2}\left(\frac{|p|^{2}-\mu_{0}}{|p|^{2}-\mu_{0}+16\pi\mathfrak{a}_{N}N_{0}\mathds{1}_{P_{\mathrm{B}}}(p)}\right)^{-1/4}.\end{split} \tag{2.19}\] The coefficients \(u_{p},v_{p}\) satisfy the following bounds.
**Lemma 2.5**.: _The coefficient \(v_{p}\) in (2.19) satisfies the bounds_ \[v_{p}\lesssim\frac{N_{0}}{N|p|^{2}}\lesssim\frac{N_{0}}{N},\qquad\|v\|_{2}\lesssim\frac{N_{0}}{N}\quad\text{ and }\quad\sum_{p\in\Lambda_{+}^{*}}|p|^{k}|v_{p}|\lesssim\frac{N_{0}}{N}N^{(k+1)\delta_{\mathrm{Bog}}} \tag{2.20}\] _for \(k\in\{0,1,2\}\). For \(u_{p}\) we have_ \[\|u-1\|_{\infty}\lesssim\frac{N_{0}^{2}}{N^{2}}. \tag{2.21}\]

Proof.: We first observe that (2.19) implies \[v_{p}^{2}=\frac{1}{4}\left(\frac{|p|^{2}-\mu_{0}}{|p|^{2}-\mu_{0}+16\pi\mathfrak{a}_{N}N_{0}\mathds{1}_{P_{\mathrm{B}}}(p)}\right)^{1/2}+\frac{1}{4}\left(\frac{|p|^{2}-\mu_{0}}{|p|^{2}-\mu_{0}+16\pi\mathfrak{a}_{N}N_{0}\mathds{1}_{P_{\mathrm{B}}}(p)}\right)^{-1/2}-\frac{1}{2}.\] Using \(\mu_{0}<0\), \(\mathfrak{a}_{N}=\mathfrak{a}/N\) and the inequality \(0\leq(1+x)^{1/2}+(1+x)^{-1/2}-2\leq x^{2}/4\) for \(x\geq 0\), we find \[v_{p}^{2}\lesssim\frac{N_{0}^{2}}{N^{2}|p|^{4}}. \tag{2.22}\] The remaining estimates in (2.20) and (2.21) follow from (2.22), \(|p|\geq 2\pi\) for \(p\in\Lambda_{+}^{*}\), the identity \(u_{p}^{2}=1+v_{p}^{2}\) and the inequality \(\sqrt{1+x}-1\leq x\) for \(x\geq 0\).

As we mentioned earlier, the unitary \(T_{z}\) diagonalizes the Bogoliubov Hamiltonian. The precise statement is the following.

**Lemma 2.6**.: _Let \(z\in\mathbb{C}\), \(\mathcal{H}_{\mathrm{B}}(z)\) in (2.2) and \(\tau_{p}\) in (2.16). We have_ \[T_{z}^{*}\mathcal{H}_{\mathrm{B}}(z)T_{z}=\mathcal{H}^{\mathrm{diag}}\coloneqq E_{0}+\sum_{p\in\Lambda_{+}^{*}}\varepsilon(p)a_{p}^{*}a_{p}, \tag{2.23}\] _with the ground state energy_ \[E_{0}\coloneqq-\frac{1}{2}\sum_{p\in P_{\mathrm{B}}}\left[|p|^{2}-\mu_{0}+8\pi\mathfrak{a}_{N}N_{0}-\varepsilon(p)\right] \tag{2.24}\] _and the Bogoliubov dispersion relation_ \[\varepsilon(p)\coloneqq\begin{cases}\sqrt{|p|^{2}-\mu_{0}}\sqrt{|p|^{2}-\mu_{0}+16\pi\mathfrak{a}_{N}N_{0}},&p\in P_{\mathrm{B}},\\ |p|^{2}-\mu_{0},&p\in\Lambda_{+}^{*}\setminus P_{\mathrm{B}}.\end{cases} \tag{2.25}\]

The proof is a standard computation based on (2.18) and (2.19), which can be found for instance in [12, Lemma 5.2]. Next, we recall the definition of the Weyl operator \(W_{z}\coloneqq\exp(za^{*}(\varphi_{0})-\overline{z}a(\varphi_{0}))\) and the well-known formulas \[W_{z}^{*}a_{x}W_{z}=a_{x}+z,\qquad W_{z}^{*}a_{x}^{*}W_{z}=a_{x}^{*}+\overline{z}, \tag{2.26}\] \[W_{z}^{*}a_{p}W_{z}=a_{p}+z\delta_{p,0},\qquad W_{z}^{*}a_{p}^{*}W_{z}=a_{p}^{*}+\overline{z}\delta_{p,0}, \tag{2.27}\] for every \(x\in\Lambda\) and \(p\in\Lambda^{*}\). The Bogoliubov and the Weyl transformations cannot change the particle number too much. The precise statement is captured in the following lemma.

**Lemma 2.7**.: _The operator inequalities_ \[T_{z}^{*}(\mathcal{N}^{<}+1)^{k}T_{z}\lesssim_{k}(\mathcal{N}^{<}+1)^{k} \tag{2.28}\] _and_ \[W_{z}^{*}(\mathcal{N}+1)^{k}W_{z}\lesssim_{k}(\mathcal{N}+|z|^{2}+1)^{k} \tag{2.29}\] _hold for all \(k\in\mathbb{N}\)._

We omit the proof, which is a standard application of Gronwall's inequality, compare for instance with [17, Lemma 3.1]. We are now prepared to prove several important properties of the Bogoliubov Gibbs state in the diagonal and in the non-diagonal representation.

#### 2.3.2 The Bogoliubov Gibbs state in the diagonal representation

We start by computing the \(1\)-pdm of \(G^{\rm diag}\) in (2.4).
**Lemma 2.8**.: _For every \(p,q\in\Lambda_{+}^{*}\), we have_ \[\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{p}^{*}a_{q}G^{\rm diag}]=\gamma_{p}^{\rm diag}\delta_{p,q},\qquad\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{p}a_{q}G^{\rm diag}]=0, \tag{2.30}\] _with_ \[\gamma_{p}^{\rm diag}:=\frac{1}{\exp(\beta\varepsilon(p))-1} \tag{2.31}\] _and \(\varepsilon(p)\) in (2.25). The sequence of eigenvalues \(\gamma_{p}^{\rm diag}\) satisfies_ \[\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}^{\rm diag}\lesssim\beta^{-3/2}+\beta^{-1},\qquad\sum_{p\in\Lambda_{+}^{*}}p^{2}\gamma_{p}^{\rm diag}\lesssim\beta^{-5/2}+\beta^{-3/2},\qquad\sum_{p\in\Lambda_{+}^{*}}|\gamma_{p}^{\rm diag}|^{2}\lesssim\beta^{-2}. \tag{2.32}\] _Moreover,_ \[\int_{\Lambda}|\check{\gamma}^{\rm diag}(x)|\,{\rm d}x\lesssim\beta^{-1}. \tag{2.33}\]

Proof.: The formulas (2.30), (2.31) follow directly from the definition of \(G^{\rm diag}\). We apply Lemma 2.4 with \(\lambda\in(2\pi\sqrt{3},4\pi)\), use \(\varepsilon(p)\geq|p|^{2}\) and \((\exp(x)-1)^{-1}\leq x^{-1}\) to see that \[\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}^{\rm diag}\leq\sum_{p\in\Lambda_{+}^{*}}\frac{\mathds{1}_{[\lambda,\infty)}(|p|)}{\exp(\beta|p|^{2})-1}+C\beta^{-1}\lesssim\int_{|p|\geq\lambda-2\pi\sqrt{3}}\frac{1+|p|^{-2}}{\exp(\beta|p|^{2})-1}\,{\rm d}p+\beta^{-1}\lesssim\beta^{-3/2}+\beta^{-1}\] holds. With the same argument we prove the second bound in (2.32). We also have \[\sum_{p\in\Lambda_{+}^{*}}|\gamma_{p}^{\rm diag}|^{2}\leq\frac{1}{\beta^{2}}\sum_{p\in\Lambda_{+}^{*}}\frac{1}{|p|^{4}}\lesssim\beta^{-2},\] which completes the proof of (2.32). Finally, to obtain (2.33), we apply Cauchy-Schwarz and (2.32): \[\int_{\Lambda}|\check{\gamma}^{\rm diag}(x)|\,{\rm d}x\leq\left(\int_{\Lambda}|\check{\gamma}^{\rm diag}(x)|^{2}\,{\rm d}x\right)^{1/2}\lesssim\beta^{-1}.\]

The following lemma allows us to control the expectation of powers of \(\mathcal{N}_{+}\) (the restriction of \(\mathcal{N}\) to \(\mathfrak{F}_{+}\)) in the state \(G^{\rm diag}\).

**Lemma 2.9**.: _For every \(k\geq 2\), we have_ \[\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}^{k}G^{\rm diag}]-\big{(}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\rm diag}]\big{)}^{k}\lesssim_{k}\beta^{-\frac{3(k-2)}{2}}\beta^{-2}+\beta^{-(k-1)}. \tag{2.34}\]

Proof.: Let us introduce the notation \({\rm d}X^{k}={\rm d}x_{1}\cdots{\rm d}x_{k}\). Using the CCR we see that \[\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}^{k}G^{\rm diag}]=\int_{\Lambda^{k}}\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{x_{1}}^{*}...a_{x_{k}}^{*}a_{x_{1}}...a_{x_{k}}G^{\rm diag}]\,{\rm d}X^{k}+\operatorname{Tr}_{\mathfrak{F}_{+}}[\big{(}\mathcal{N}^{k}-\mathcal{N}\cdots(\mathcal{N}-k+1)\big{)}G^{\rm diag}]. \tag{2.35}\] An application of Wick's theorem and the identity \(\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{x}^{*}a_{x}G^{\operatorname{diag}}]=\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\operatorname{diag}}]\) show that the first term on the right-hand side equals \[\big{(}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\operatorname{diag}}]\big{)}^{k}+\sum_{\pi\in S_{k}\setminus\{\operatorname{Id}_{k}\}}\int_{\Lambda^{k}}\prod_{i=1}^{k}\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{x_{i}}^{*}a_{x_{\pi(i)}}G^{\operatorname{diag}}]\,{\rm d}X^{k},\] where \(S_{k}\) denotes the set of permutations of \(\{1,...,k\}\) and \(\operatorname{Id}_{k}\in S_{k}\) is the identity.
To bound the sum on the right-hand side, we first observe that an application of Cauchy-Schwarz and Lemma 2.8 imply \[|\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{x}^{*}a_{y}G^{\operatorname{diag}}]|\leq\sqrt{\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{x}^{*}a_{x}G^{\operatorname{diag}}]\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{y}^{*}a_{y}G^{\operatorname{diag}}]}\lesssim\beta^{-3/2}+\beta^{-1}.\] Moreover, if \(\pi\in S_{k}\setminus\{\operatorname{Id}_{k}\}\), there exist distinct \(i,j\in\{1,...,k\}\) such that \(\pi(i)\neq i\), \(\pi(j)\neq j\). We can thus estimate \[\begin{split}\sum_{\pi\in S_{k}\setminus\{\operatorname{Id}_{k}\}}\int_{\Lambda^{k}}\prod_{i=1}^{k}\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{x_{i}}^{*}a_{x_{\pi(i)}}G^{\operatorname{diag}}]\,{\rm d}X^{k}\lesssim& k!(\beta^{-3/2}+\beta^{-1})^{k-2}\Big{(}\int_{\Lambda^{4}}|\check{\gamma}^{\operatorname{diag}}(x_{1}-x_{3})\check{\gamma}^{\operatorname{diag}}(x_{2}-x_{4})|\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}\\ &+\int_{\Lambda^{3}}|\check{\gamma}^{\operatorname{diag}}(x_{1}-x_{3})\check{\gamma}^{\operatorname{diag}}(x_{2}-x_{3})|\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}\Big{)}\\ \lesssim_{k}&(\beta^{-3/2}+\beta^{-1})^{k-2}\beta^{-2},\end{split}\] where we used (2.33) in the last step. To obtain a bound for the second term on the right-hand side of (2.35), we argue as above and find \[\operatorname{Tr}_{\mathfrak{F}_{+}}[\big{(}\mathcal{N}^{k}-\mathcal{N}\cdots(\mathcal{N}-k+1)\big{)}G^{\operatorname{diag}}]\lesssim_{k}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}^{k-1}G^{\operatorname{diag}}]\lesssim_{k}(\beta^{-3/2}+\beta^{-1})^{k-1}.\] In combination, these considerations prove (2.34).

To compare the energy of our trial state to the expression in (1.11), we need to remove the cutoff on the number of particles in several terms. The next lemma allows us to control the corresponding errors.

**Lemma 2.10**.: _Let \(\beta\gtrsim\beta_{\mathrm{c}}\). There exist constants \(\widetilde{c},c>0\) such that, for every \(m\in\mathbb{N}\), we have_ \[\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathds{1}-\mathds{P}_{\widetilde{c}})\mathcal{N}_{+}^{m}G^{\operatorname{diag}}]=\sum_{\alpha\in\mathcal{A}\setminus\widetilde{\mathcal{A}}}\lambda_{\alpha}N_{\alpha}^{m}\lesssim_{m}\exp(-cN^{\delta_{\operatorname{Bog}}}) \tag{2.36}\] _with \(\mathds{P}_{\widetilde{c}}\) in (2.5). In particular, \(1\geq\kappa_{0}\geq 1-C\exp(-cN^{\delta_{\operatorname{Bog}}})\). Moreover,_ \[\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathds{1}-\mathds{P}_{\widetilde{c}})\mathcal{H}^{\operatorname{diag}}G^{\operatorname{diag}}]=\sum_{\alpha\in\mathcal{A}\setminus\widetilde{\mathcal{A}}}\lambda_{\alpha}E_{\alpha}\lesssim\exp(-cN^{\delta_{\operatorname{Bog}}}). \tag{2.37}\]

Proof.: We start by proving the first statement in the case \(m=0\). For notational convenience, let us introduce \(\widetilde{N}^{<}=\widetilde{c}\beta^{-1}N^{\delta_{\operatorname{Bog}}}\) and \(\widetilde{N}=\widetilde{c}N\).
Using the fact that \[\mathds{1}-\mathds{P}_{\widetilde{c}}\leq\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}+\mathds{1}_{\{\mathcal{N}^{>}>\widetilde{N}\}}\leq\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}+\mathds{1}_{\{\mathcal{N}_{+}>\widetilde{N}\}},\] we can bound \[\mathrm{Tr}_{\mathfrak{F}_{+}}[(\mathds{1}-\mathds{P}_{\widetilde{c}})G^{\mathrm{diag}}]\leq\mathrm{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}G^{\mathrm{diag}}]+\mathrm{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}_{+}>\widetilde{N}\}}G^{\mathrm{diag}}]. \tag{2.38}\] With the inequality \(\mathds{1}_{\{x>0\}}\leq e^{\eta x}\) valid for any \(x\in\mathbb{R}\) and \(\eta>0\), we estimate the second term on the right-hand side by \[\mathrm{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}_{+}>\widetilde{N}\}}G^{\mathrm{diag}}]\leq\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{\eta(\mathcal{N}_{+}-\widetilde{N})}G^{\mathrm{diag}}]=e^{-\eta\widetilde{N}}\frac{\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag}}+\eta\mathcal{N}_{+}}]}{\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag}}}]}. \tag{2.39}\] For the choice \(\eta=2\pi^{2}\beta\), which ensures \(\eta/\beta<4\pi^{2}\), we have \[\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag}}+\eta\mathcal{N}_{+}}]=e^{-\beta E_{0}}\exp\Bigg{(}-\sum_{p\in\Lambda_{+}^{*}}\log\Big{(}1-e^{-\beta\varepsilon(p)+\eta}\Big{)}\Bigg{)}.\] We expand the logarithm to second order and use the fact that the function \(x\mapsto(\cosh(x)-1)^{-1}\) is decreasing and satisfies \((\cosh(x)-1)^{-1}\leq 2x^{-2}\) for \(x>0\), to see that \[\begin{split}\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag}}+\eta\mathcal{N}_{+}}]\leq&\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag}}}]\exp\Bigg{(}\eta\sum_{p\in\Lambda_{+}^{*}}\frac{1}{e^{\beta\varepsilon(p)}-1}\Bigg{)}\exp\Bigg{(}\frac{1}{4}\eta^{2}\sum_{p\in\Lambda_{+}^{*}}\frac{1}{\cosh(\frac{1}{2}\beta\varepsilon(p))-1}\Bigg{)}\\ \leq&\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag}}}]\exp\Bigg{(}\eta\sum_{p\in\Lambda_{+}^{*}}\frac{1}{e^{\beta\varepsilon(p)}-1}\Bigg{)}\exp\Bigg{(}2\eta^{2}\sum_{p\in\Lambda_{+}^{*}}\frac{1}{\beta^{2}|p|^{4}}\Bigg{)}\\ \lesssim&\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag}}}]\exp\Bigg{(}\eta\sum_{p\in\Lambda_{+}^{*}}\frac{1}{e^{\beta\varepsilon(p)}-1}\Bigg{)}.\end{split} \tag{2.40}\] It follows from (1.6) that \[\sum_{p\in\Lambda_{+}^{*}}\frac{1}{e^{\beta\varepsilon(p)}-1}\leq\sum_{p\in\Lambda_{+}^{*}}\frac{1}{e^{\beta(|p|^{2}-\mu_{0})}-1}\leq N.\] In combination with (2.39) and (2.40), this bound implies \[\mathrm{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}_{+}>\widetilde{N}\}}G^{\mathrm{diag}}]\lesssim e^{-\eta(\widetilde{N}-N)}. \tag{2.41}\] Next, we prove a similar estimate for the first term on the right-hand side of (2.38). We have \[\mathrm{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}G^{\mathrm{diag}}]\leq e^{-\eta\widetilde{N}^{<}}\frac{\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag},<}+\eta\mathcal{N}^{<}}]}{\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag},<}}]}\] with \(\mathcal{H}^{\mathrm{diag},<}=E_{0}+\sum_{p\in P_{\mathrm{B}}}\varepsilon(p)a_{p}^{*}a_{p}\).
Arguing as in (2.40) with the same choice of \(\eta\), we also see that \[\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag},<}+\eta\mathcal{N}^{<}}]\lesssim\mathrm{Tr}_{\mathfrak{F}_{+}}[e^{-\beta\mathcal{H}^{\mathrm{diag},<}}]\exp\Bigg{(}\eta\sum_{p\in P_{\mathrm{B}}}\frac{1}{e^{\beta\varepsilon(p)}-1}\Bigg{)}.\] The sum in the above exponential is bounded by \[\beta^{-1}\sum_{p\in P_{\mathrm{B}}}\frac{1}{|p|^{2}}\lesssim\beta^{-1}N^{\delta_{\mathrm{Bog}}},\] and hence \[\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}G^{\operatorname{diag}}]\lesssim e^{-\eta(\widetilde{N}^{<}-c_{1}\beta^{-1}N^{\delta_{\operatorname{Bog}}})}, \tag{2.42}\] for some \(c_{1}>0\) independent of \(N\). In combination, (2.38), (2.41) and (2.42) imply (2.36) for \(m=0\) and \(\widetilde{c}>\max\{1,c_{1}\}\).

If \(m\geq 1\), we write \[\begin{split}\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathds{1}-\mathds{P}_{\widetilde{c}})\mathcal{N}_{+}^{m}G^{\operatorname{diag}}]\leq&\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}^{m}\mathds{1}_{\{\mathcal{N}_{+}>\widetilde{N}\}}G^{\operatorname{diag}}]+\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}^{m}\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}G^{\operatorname{diag}}]\\ \lesssim_{m}&\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{N}_{+}-\widetilde{N})^{m}\mathds{1}_{\{\mathcal{N}_{+}>\widetilde{N}\}}G^{\operatorname{diag}}]+\widetilde{N}^{m}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}_{+}>\widetilde{N}\}}G^{\operatorname{diag}}]\\ &+\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{N}^{<}-\widetilde{N}^{<})^{m}\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}G^{\operatorname{diag}}]+(\widetilde{N}^{<})^{m}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}G^{\operatorname{diag}}]\\ &+\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{N}^{>})^{m}\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}G^{\operatorname{diag}}].\end{split}\] To estimate the first and the third term on the right-hand side, we apply the inequality \(x^{m}\mathds{1}_{\{x\geq 0\}}\leq m!e^{\eta x}/\eta^{m}\). The rest of the argument is the same as in the case \(m=0\). With (2.41), (2.42) and \(\widetilde{N},\widetilde{N}^{<}\lesssim N\) we see that the second and the fourth term are bounded by a constant times \(\exp(-cN^{\delta_{\operatorname{Bog}}})\). To obtain a bound for the last term, we observe that it equals \[\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathds{1}_{\{\mathcal{N}^{<}>\widetilde{N}^{<}\}}G^{\operatorname{diag}}]\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{N}^{>})^{m}G^{\operatorname{diag}}]\lesssim e^{-cN^{\delta_{\operatorname{Bog}}}}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}^{m}G^{\operatorname{diag}}]\lesssim_{m}N^{m}e^{-cN^{\delta_{\operatorname{Bog}}}},\] where we used \([\mathcal{N}^{<},\mathcal{N}^{>}]=0\), (2.41) and (2.42). It remains to prove (2.37).
The Cauchy-Schwarz inequality and (2.36) for \(m=0\) imply \[\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathds{1}-\mathds{P}_{ \widetilde{c}})\mathcal{H}^{\operatorname{diag}}G^{\operatorname{diag}}]\leq \big{(}\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathds{1}-\mathds{ P}_{\widetilde{c}})G^{\operatorname{diag}}]\big{)}^{1/2}\big{(} \operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{H}^{\operatorname{diag}})^{2}G ^{\operatorname{diag}}]\big{)}^{1/2} \tag{2.43}\] \[\lesssim e^{-cN^{\delta_{\operatorname{Bog}}}}\big{(}\operatorname{Tr}_{ \mathfrak{F}_{+}}[(\mathcal{H}^{\operatorname{diag}})^{2}G^{\operatorname{ diag}}]\big{)}^{1/2}.\] To estimate the second factor on the right-hand side we apply Wick's theorem: \[\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{H}^{\operatorname{diag}})^{2} G^{\operatorname{diag}}]=\Big{(}\sum_{p\in\Lambda_{+}^{*}}\varepsilon(p)\gamma_{p}^{ \operatorname{diag}}\Big{)}^{2}+\sum_{p\in\Lambda_{+}^{*}}\varepsilon(p)^{2} \gamma_{p}^{\operatorname{diag}}(1+\gamma_{p}^{\operatorname{diag}})\lesssim \beta^{-5}+\beta^{-5/2}.\] In the last step we used the monotonicity of \(x\mapsto(e^{x}-1)^{-1}\), \(\varepsilon(p)\geq|p|^{2}\) and Lemma 2.4. Inserting this bound into (2.43) we get (2.37). #### 2.3.3 The Bogoliubov Gibbs state in the original representation We now compute and estimate the correlation functions of \(G(z)\). **Lemma 2.11**.: _For every \(z\in\mathbb{C}\), we have_ \[\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{q}^{*}a_{p}G(z)]=\gamma_{p}\delta_{p,q},\qquad\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{q}a_{p}G(z)]=(z/|z|)^{2}\alpha_{p }\delta_{p,-q}, \tag{2.44}\] _with_ \[\gamma_{p}=(1+2v_{p}^{2})\gamma_{p}^{\operatorname{diag}}+v_{p}^{2},\qquad \alpha_{p}=u_{p}v_{p}\Big{(}2\gamma_{p}^{\operatorname{diag}}+1\Big{)} \tag{2.45}\] _and \(u_{p},v_{p}\) in (2.19). Moreover, we have the bounds_ \[\gamma_{p}\lesssim\frac{N_{0}^{2}\mathds{1}_{P_{\operatorname{B}}}(p)}{N^{2}|p |^{4}}+\frac{1}{\exp(\beta|p|^{2})-1},\qquad\alpha_{p}\lesssim\frac{N_{0} \mathds{1}_{P_{\operatorname{B}}}(p)}{N|p|^{2}}\left(1+\frac{1}{\beta|p|^{2}} \right), \tag{2.46}\] _as well as_ \[\sum_{p\in P_{\rm B}}\gamma_{p}\lesssim \beta^{-1}N^{\delta_{\rm Bog}}+\frac{N_{0}^{2}}{N^{2}}, \sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\lesssim \beta^{-3/2}+\beta^{-1/2}+\frac{N_{0}^{2}}{N^{2}},\] \[\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}^{2}\lesssim \beta^{-2}+\frac{N_{0}^{4}}{N^{4}}, \sum_{p\in\Lambda_{+}^{*}}|p|\gamma_{p}\lesssim \beta^{-2}+\beta^{-1}+\frac{N_{0}^{2}}{N^{2}}\log N, \tag{2.47}\] \[\sum_{p\in\Lambda_{+}^{*}}|\alpha_{p}|\lesssim \frac{N_{0}}{N}\left(\beta^{-1}+N^{\delta_{\rm Bog}}\right), \sum_{p\in\Lambda_{+}^{*}}|\alpha_{p}|^{2}\lesssim \frac{N_{0}^{2}}{N^{2}}\left(\beta^{-2}+1\right),\] \[\sum_{p\in\Lambda_{+}^{*}}|p||\alpha_{p}|\lesssim \frac{N_{0}}{N}\left(\beta^{-1}\log N+N^{2\delta_{\rm Bog}}\right). \tag{2.48}\] Proof.: Using (2.18), Lemma 2.8, the fact that \(\gamma^{\rm diag}\) and \(v\) are even functions of \(p\), and the identity \(u_{p}^{2}=1+v_{p}^{2}\), we find \[\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{q}^{*}a_{p}G(z)]= \operatorname{Tr}_{\mathfrak{F}_{+}}\Big{[}\Big{(}u_{q}a_{q}^{*} +v_{q}\frac{\overline{z}^{2}}{|z|^{2}}a_{-q}\Big{)}\Big{(}u_{p}a_{p}+v_{p}\frac {z^{2}}{|z|^{2}}a_{-p}^{*}\Big{)}G^{\rm diag}\Big{]} \tag{2.49}\] \[= u_{p}^{2}\gamma_{p}^{\rm diag}\delta_{p,q}+v_{p}^{2}(1+\gamma_{p }^{\rm diag})\delta_{p,q}=(1+2v_{p}^{2})\gamma_{p}^{\rm diag}\delta_{p,q}+v_{p} ^{2}\delta_{p,q},\] which is the first identity in (2.45). 
We find the second identity after an analogous computation for \(\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{q}^{*}a_{p}^{*}G(z)]\). In combination, (2.45) and Lemma 2.5 show (2.46). The bounds in (2.47) and (2.48) are a consequence of (2.46) and Lemma 2.4.

Lemma 2.11 allows us to compare the number of particles in \(G(z)\) with the one in the Gibbs state \[G^{\rm id}=\frac{\exp(-\beta\sum_{p\in\Lambda_{+}^{*}}(p^{2}-\mu_{0})a_{p}^{*}a_{p})}{\operatorname{Tr}_{\mathfrak{F}_{+}}[\exp(-\beta\sum_{p\in\Lambda_{+}^{*}}(p^{2}-\mu_{0})a_{p}^{*}a_{p})]} \tag{2.50}\] describing the thermally excited particles in the ideal Bose gas.

**Lemma 2.12**.: _We have_ \[\big{|}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G(z)]-\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\rm id}]\big{|}\lesssim\frac{N_{0}}{\beta N}+\frac{N_{0}^{2}}{N^{2}}. \tag{2.51}\]

Proof.: With Lemma 2.11 we compute \[\begin{split}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G(z)]-\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\rm id}]=&\sum_{p\in P_{\rm B}}\left(\gamma_{p}-\frac{1}{e^{\beta(|p|^{2}-\mu_{0})}-1}\right)\\ =&\sum_{p\in P_{\rm B}}\left(\frac{1}{e^{\beta\varepsilon(p)}-1}-\frac{1}{e^{\beta(|p|^{2}-\mu_{0})}-1}\right)+\sum_{p\in P_{\rm B}}\left(\frac{2v_{p}^{2}}{e^{\beta\varepsilon(p)}-1}+v_{p}^{2}\right).\end{split} \tag{2.52}\] Using \(e^{x}-1\geq x\), \(\varepsilon(p)\geq|p|^{2}\), and Lemma 2.5, it is straightforward to see that \[\sum_{p\in P_{\rm B}}\left(\frac{2v_{p}^{2}}{e^{\beta\varepsilon(p)}-1}+v_{p}^{2}\right)\lesssim\frac{N_{0}^{2}}{N^{2}}(1+\beta^{-1}). \tag{2.53}\] To bound the first term on the right-hand side of (2.52), we write \[\frac{1}{e^{\beta(|p|^{2}-\mu_{0})}-1}-\frac{1}{e^{\beta\varepsilon(p)}-1}=\int_{0}^{1}\frac{\beta\left(\varepsilon(p)-(|p|^{2}-\mu_{0})\right)}{4\sinh^{2}\left(\frac{1}{2}(t\beta\varepsilon(p)+(1-t)\beta(|p|^{2}-\mu_{0}))\right)}\,\mathrm{d}t.\] With \[\big{|}\varepsilon(p)-(|p|^{2}-\mu_{0})\big{|}=(|p|^{2}-\mu_{0})\left|\sqrt{1+\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}-\mu_{0}}}-1\right|\lesssim\frac{N_{0}}{N},\] we obtain the bound \[\left|\sum_{p\in P_{\mathrm{B}}}\left(\frac{1}{e^{\beta\varepsilon(p)}-1}-\frac{1}{e^{\beta(|p|^{2}-\mu_{0})}-1}\right)\right|\lesssim\frac{\beta N_{0}}{N}\sum_{p\in P_{\mathrm{B}}}\frac{1}{\sinh^{2}(\beta|p|^{2}/2)}\lesssim\frac{N_{0}}{\beta N}. \tag{2.54}\] In the last step we used \(\sinh(x)\geq x\) for \(x\geq 0\). In combination, (2.52), (2.53) and (2.54) prove (2.51).

We now consider the eigenfunctions \(\phi_{z,\alpha}=|z\rangle\otimes T_{z}\Psi_{\alpha}\). For \(k\in\mathbb{N}\), we introduce the notation \[\varrho_{z,\alpha}^{(k)}(x_{1},...,x_{k})=\langle a_{x_{1}}^{*}...a_{x_{k}}^{*}a_{x_{1}}...a_{x_{k}}\rangle_{\phi_{z,\alpha}}. \tag{2.55}\] In the cases \(k=2,4\) we have the following bounds.
**Lemma 2.13**.: _We have_ \[\sup_{x_{1},x_{2}\in\Lambda}\varrho_{z,\alpha}^{(2)}(x_{1},x_{2})\leq |z|^{4}+4|z|^{2}N_{\alpha}+2N_{\alpha}(N_{\alpha}-1)\] \[+C(|z|^{2}+N_{\alpha})(N_{\alpha}^{<}+N^{\delta_{\mathrm{Bog}}})+ CN^{3\delta_{\mathrm{Bog}}}(N_{\alpha}^{<}+1)^{2}, \tag{2.56}\] \[\sup_{x_{1},x_{2}\in\Lambda}\Big{|}\nabla_{2}\varrho_{z,\alpha}^ {(2)}(x_{1},x_{2})\Big{|}\lesssim N_{\alpha}^{3/2}\|\mathcal{K}^{1/2}\Psi_{\alpha}\|+N^{4\delta_{ \mathrm{Bog}}}(N_{\alpha}^{<}+1)^{2}\] \[+(N_{\alpha}^{<}+N^{\delta_{\mathrm{Bog}}})\Big{(}N_{\alpha}^{1/ 2}\|\mathcal{K}^{1/2}\Psi_{\alpha}\|+N_{\alpha}N^{\delta_{\mathrm{Bog}}}\Big{)}\] \[+|z|^{2}\Big{(}N_{\alpha}^{1/2}\|\mathcal{K}^{1/2}\Psi_{\alpha}\|+ N^{\delta_{\mathrm{Bog}}}(N_{\alpha}^{<}+N^{\delta_{\mathrm{Bog}}})\Big{)},\] (2.57) \[\sup_{x_{1},x_{2}\in\Lambda}\int_{\Lambda^{2}}\varrho_{z,\alpha}^ {(4)}(x_{1},x_{2},x_{3},x_{4})\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}\lesssim (|z|^{2}+N_{\alpha}+1)^{2}\left((N_{\alpha}+|z|^{2})^{2}+N^{3 \delta_{\mathrm{Bog}}}(N_{\alpha}^{<}+1)^{2}\right). \tag{2.58}\] Proof.: The function \(\varrho_{z,\alpha}^{(2)}(x_{1},x_{2})\) is bounded by \[\varrho_{z,\alpha}^{(2)}(x_{1},x_{2}) =\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}}e^{\mathrm{i}(p_{1}- q_{1})x_{1}+\mathrm{i}(p_{2}-q_{2})x_{2}}\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{ q_{2}}\rangle_{\phi_{z,\alpha}} \tag{2.59}\] \[\leq\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}}\langle a_{p_{1}} ^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{\phi_{z,\alpha}}=|z|^{4}+|z|^{2} \sum_{p,q\in\Lambda_{+}^{*}}\left(\frac{z^{2}}{|z|^{2}}\langle a_{p}^{*}a_{q} ^{*}\rangle_{T_{z}\Psi_{\alpha}}+\mathrm{h.c.}\right)\] \[\quad+4|z|^{2}\sum_{p,q\in\Lambda_{+}^{*}}\langle a_{p}^{*}a_{q} \rangle_{T_{z}\Psi_{\alpha}}+\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda_{+}^{*}} \langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{ \alpha}}.\] In the last step we used (2.27) and \(\langle a_{p}^{*}a_{q}^{*}a_{r}\rangle_{T_{z}\Psi_{\alpha}}=\langle a_{p}^{*} \rangle_{T_{z}\Psi_{\alpha}}=0\) for every \(p,q,r\in\Lambda_{+}^{*}\), which follows from (2.18) and the fact that \(\Psi_{\alpha}\) is an eigenfunction of \(\mathcal{N}_{+}\). We first estimate the quadratic terms on the right-hand side of (2.59). Since \(\Psi_{\alpha}\) is a symmetrized product of plane waves, we have, for every \(k\geq 1\), \(\langle a^{*}_{p_{1}}...a^{*}_{p_{k}}a_{q_{1}}...a_{q_{k}}\rangle_{\Psi_{\alpha}}=0\) unless \(\{p_{1},...,p_{k}\}=\{q_{1},...q_{k}\}\). 
This, in particular, implies \[\begin{split}\langle a^{*}_{p}a_{q}\rangle_{T_{z}\Psi_{\alpha}}=&\,\delta_{p,q}\Big{[}u^{2}_{p}\langle a^{*}_{p}a_{p}\rangle_{\Psi_{\alpha}}+v^{2}_{p}(\langle a^{*}_{-p}a_{-p}\rangle_{\Psi_{\alpha}}+1)\Big{]},\\ \langle a^{*}_{p}a^{*}_{q}\rangle_{T_{z}\Psi_{\alpha}}=&\,\frac{\overline{z}^{2}}{|z|^{2}}u_{p}v_{p}\delta_{p,-q}\left(\langle a^{*}_{p}a_{p}\rangle_{\Psi_{\alpha}}+\langle a^{*}_{-p}a_{-p}\rangle_{\Psi_{\alpha}}+1\right).\end{split} \tag{2.60}\] Using Lemma 2.5 and \(N_{0}\leq N\) we estimate \[\sum_{p,q\in\Lambda^{*}_{+}}\left(\langle a^{*}_{p}a^{*}_{q}\rangle_{T_{z}\Psi_{\alpha}}+\text{h.c.}\right)=2\frac{\operatorname{Re}z^{2}}{|z|^{2}}\sum_{p\in\Lambda^{*}_{+}}u_{p}v_{p}\left(2\langle a^{*}_{p}a_{p}\rangle_{\Psi_{\alpha}}+1\right)\lesssim N^{<}_{\alpha}+N^{\delta_{\text{Bog}}} \tag{2.61}\] and \[\sum_{p,q\in\Lambda^{*}_{+}}\langle a^{*}_{p}a_{q}\rangle_{T_{z}\Psi_{\alpha}}=\sum_{p\in\Lambda^{*}_{+}}\left[(1+2v^{2}_{p})\langle a^{*}_{p}a_{p}\rangle_{\Psi_{\alpha}}+v^{2}_{p}\right]\leq N_{\alpha}+C(N^{<}_{\alpha}+1). \tag{2.62}\] We now estimate the last term on the right-hand side of (2.59). To do so, we observe that, since \(T_{z}\) only acts on low momenta, the expectation in the sum vanishes if \(|\{p_{1},p_{2}\}\cap P_{\text{B}}|\neq|\{q_{1},q_{2}\}\cap P_{\text{B}}|\), and hence \[\begin{split}\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}_{+}}&\,\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}=\sum_{p_{1},p_{2},q_{1},q_{2}\in(\Lambda^{*}_{+}\setminus P_{\text{B}})}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}\\ &+\sum_{p_{1},p_{2},q_{1},q_{2}\in P_{\text{B}}}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}+4\sum_{\begin{subarray}{c}p_{1},q_{1}\in(\Lambda^{*}_{+}\setminus P_{\text{B}})\\ p_{2},q_{2}\in P_{\text{B}}\end{subarray}}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}.\end{split} \tag{2.63}\] The first term on the right-hand side equals \[\sum_{p_{1},p_{2},q_{1},q_{2}\in(\Lambda^{*}_{+}\setminus P_{\text{B}})}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{q_{1}}a_{q_{2}}\rangle_{\Psi_{\alpha}}=2N^{>}_{\alpha}(N^{>}_{\alpha}-1)\leq 2N_{\alpha}(N_{\alpha}-1).\] As for the second term, using the translation invariance of \(T_{z}\Psi_{\alpha}\), the Cauchy-Schwarz inequality and (2.28), we find \[\begin{split}\sum_{p_{1},p_{2},q_{1},q_{2}\in P_{\text{B}}}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}=&\sum_{\begin{subarray}{c}p_{1},p_{2},q_{1}\in P_{\text{B}}\\ p_{1}+p_{2}-q_{1}\in P_{\text{B}}\end{subarray}}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{q_{1}}a_{p_{1}+p_{2}-q_{1}}\rangle_{T_{z}\Psi_{\alpha}}\\ \leq&2|P_{\text{B}}|\sum_{p_{1},p_{2}\in P_{\text{B}}}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{p_{1}}a_{p_{2}}\rangle_{T_{z}\Psi_{\alpha}}\lesssim N^{3\delta_{\text{Bog}}}(N^{<}_{\alpha}+1)^{2}.\end{split}\] The last term on the right-hand side of (2.63) equals \[N^{>}_{\alpha}\sum_{p,q\in P_{\text{B}}}\langle a^{*}_{p}a_{q}\rangle_{T_{z}\Psi_{\alpha}}\lesssim N_{\alpha}(N^{<}_{\alpha}+N^{\delta_{\text{Bog}}}),\] where we used (2.62) in the last step.
Putting these considerations together, we find \[\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}_{+}}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}\leq 2N_{\alpha}(N_{\alpha}-1)+CN_{\alpha}(N^{<}_{\alpha}+N^{\delta_{\text{Bog}}})+CN^{3\delta_{\text{Bog}}}(N^{<}_{\alpha}+1)^{2},\] which combined with (2.59), (2.61) and (2.62) implies (2.56). We now show (2.57). We start by taking the gradient of (2.59): \[-\mathrm{i}\nabla_{2}\varrho_{z,\alpha}^{(2)}(x,y) =\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}}(p_{2}-q_{2})e^{\mathrm{i}(p_{1}-q_{1})x+\mathrm{i}(p_{2}-q_{2})y}\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{\phi_{z,\alpha}} \tag{2.64}\] \[=\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}_{+}}(p_{2}-q_{2})e^{\mathrm{i}(p_{1}-q_{1})x+\mathrm{i}(p_{2}-q_{2})y}\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}\] \[\quad+|z|^{2}\sum_{p,q\in\Lambda^{*}_{+}}\Bigg{(}qe^{\mathrm{i}p\cdot x+\mathrm{i}q\cdot y}\langle a_{p}^{*}a_{q}^{*}\rangle_{T_{z}\Psi_{\alpha}}\] \[\qquad\qquad-qe^{\mathrm{i}p\cdot x-\mathrm{i}q\cdot y}\langle a_{p}^{*}a_{q}\rangle_{T_{z}\Psi_{\alpha}}+pe^{-\mathrm{i}q\cdot x+\mathrm{i}p\cdot y}\langle a_{p}^{*}a_{q}\rangle_{T_{z}\Psi_{\alpha}}\] \[\qquad\qquad+(p-q)e^{\mathrm{i}(p-q)\cdot y}\langle a_{p}^{*}a_{q}\rangle_{T_{z}\Psi_{\alpha}}-qe^{-\mathrm{i}p\cdot x-\mathrm{i}q\cdot y}\langle a_{p}a_{q}\rangle_{T_{z}\Psi_{\alpha}}\Bigg{)}.\] Arguing as in (2.63), we see that the absolute value of the first term on the right-hand side is bounded by \[\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}_{+}}|p_{1}|\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}\leq\sum_{p_{1},p_{2},q_{1},q_{2}\in(\Lambda^{*}_{+}\backslash P_{\mathrm{B}})}|p_{1}|\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}} \tag{2.65}\] \[+N^{\delta_{\mathrm{Bog}}}\sum_{p_{1},p_{2},q_{1},q_{2}\in P_{\mathrm{B}}}\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}+2\sum_{p_{1},q_{1}\in(\Lambda^{*}_{+}\backslash P_{\mathrm{B}})\atop p_{2},q_{2}\in P_{\mathrm{B}}}(|p_{1}|+N^{\delta_{\mathrm{Bog}}})\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}.\] With the Cauchy-Schwarz inequality we find \[\sum_{p_{1},p_{2},q_{1},q_{2}\in(\Lambda^{*}_{+}\backslash P_{\mathrm{B}})} |p_{1}|\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}\] \[\leq \Big{(}\sum_{p,q\in\Lambda^{*}_{+}}|p|^{2}\langle a_{p}^{*}a_{q}^{*}a_{p}a_{q}\rangle_{\Psi_{\alpha}}\Big{)}^{1/2}\Big{(}\sum_{p,q\in\Lambda^{*}_{+}}\langle a_{p}^{*}a_{q}^{*}a_{p}a_{q}\rangle_{\Psi_{\alpha}}\Big{)}^{1/2}\lesssim N_{\alpha}^{3/2}\|\mathcal{K}^{1/2}\Psi_{\alpha}\|.\] The remaining terms on the right-hand side of (2.65) are bounded similarly, and we get \[\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}_{+}}|p_{1}|\langle a_{p_{1}}^{*}a_{p_{2}}^{*}a_{q_{1}}a_{q_{2}}\rangle_{T_{z}\Psi_{\alpha}}\] \[\lesssim N_{\alpha}^{3/2}\|\mathcal{K}^{1/2}\Psi_{\alpha}\|+N^{4\delta_{\mathrm{Bog}}}(N_{\alpha}^{<}+1)^{2}+(N_{\alpha}^{<}+N^{\delta_{\mathrm{Bog}}})\Big{(}N_{\alpha}^{1/2}\|\mathcal{K}^{1/2}\Psi_{\alpha}\|+N_{\alpha}N^{\delta_{\mathrm{Bog}}}\Big{)}.\] The quadratic terms on the right-hand side of (2.64) can be estimated easily using the explicit formulas in (2.60).
Doing so, we see that they are bounded by \[C|z|^{2}\Big{(}N_{\alpha}^{1/2}\|\mathcal{K}^{1/2}\Psi_{\alpha}\|+N^{\delta_{ \mathrm{Bog}}}(N_{\alpha}^{<}+N^{\delta_{\mathrm{Bog}}})\Big{)}.\] Combining the previous two bounds proves (2.57). Finally, we prove (2.58). By (2.27) we have \[\begin{split}\int_{\Lambda^{2}}\varrho^{(4)}_{z,\alpha}(x_{1},x_{2},x_{3},x_{4})\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}=\sum_{p_{1},p_{2},q_{1},q_{2} \in\Lambda^{*}}e^{\mathrm{i}(p_{1}-q_{1})x_{1}+\mathrm{i}(p_{2}-q_{2})x_{2}} \langle a^{*}_{p_{1}}a^{*}_{p_{2}}\mathcal{N}(\mathcal{N}-1)a_{q_{1}}a_{q_{2} }\rangle_{\phi_{z,\alpha}}\\ \leq&|z|^{4}\langle\mathcal{N}(\mathcal{N}-1)\rangle _{\phi_{z,\alpha}}+|z|^{2}\Bigg{[}4\sum_{p_{1},q_{1}\in\Lambda^{*}_{+}} \langle a^{*}_{p_{1}}\mathcal{N}(\mathcal{N}-1)a_{q_{1}}\rangle_{\phi_{z, \alpha}}\\ &+\Big{(}\sum_{p_{1},p_{2}\in\Lambda^{*}_{+}}\langle a^{*}_{p_{1} }a^{*}_{p_{2}}\mathcal{N}(\mathcal{N}-1)\rangle_{\phi_{z,\alpha}}+\mathrm{h.c.}\Big{)}\Bigg{]}+\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}_{+}}\langle a^{* }_{p_{1}}a^{*}_{p_{2}}\mathcal{N}(\mathcal{N}-1)a_{q_{1}}a_{q_{2}}\rangle_{ \phi_{z,\alpha}}.\end{split} \tag{2.66}\] The first term on the right-hand side of (2.66) can be bounded by a constant times \(|z|^{4}(N_{\alpha}+|z|^{2}+1)^{2}\) using Lemma 2.7. By the translation invariance of the state \(|\phi_{z,\alpha}\rangle\langle\phi_{z,\alpha}|\) and Lemma 2.7, we see that the second term equals \[4|z|^{2}\sum_{p\in\Lambda^{*}_{+}}\langle a^{*}_{p}\mathcal{N}(\mathcal{N}-1) a_{p}\rangle_{\phi_{z,\alpha}}\lesssim|z|^{2}(N_{\alpha}+|z|^{2}+1)^{3},\] and the same bound holds for the third term. To estimate the last term on the right-hand side of (2.66) we split the sum in momentum sets as in (2.63): \[\begin{split}\sum_{p_{1},p_{2},q_{1},q_{2}\in\Lambda^{*}_{+}} \langle a^{*}_{p_{1}}a^{*}_{p_{2}}\mathcal{N}(\mathcal{N}-1)a_{q_{1}}a_{q_{2} }\rangle_{\phi_{z,\alpha}}=\sum_{p_{1},p_{2},q_{1},q_{2}\in(\Lambda^{*}_{+} \backslash P_{\mathrm{B}})}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}\mathcal{N}( \mathcal{N}-1)a_{q_{1}}a_{q_{2}}\rangle_{\phi_{z,\alpha}}\\ +\sum_{p_{1},p_{2},q_{1},q_{2}\in P_{\mathrm{B}}}\langle a^{*}_{p _{1}}a^{*}_{p_{2}}\mathcal{N}(\mathcal{N}-1)a_{q_{1}}a_{q_{2}}\rangle_{\phi_{z,\alpha}}+4\sum_{p_{1},q_{1}\in(\Lambda^{*}_{+}\backslash P_{\mathrm{B}})} \langle a^{*}_{p_{1}}a^{*}_{p_{2}}\mathcal{N}(\mathcal{N}-1)a_{q_{1}}a_{q_{2} }\rangle_{\phi_{z,\alpha}}.\end{split}\] The first and the third term are easily seen to be bounded by a constant times \((N_{\alpha}+|z|^{2}+1)^{4}\), by Lemma 2.7. The second term can be estimated using translation invariance, Cauchy-Schwarz, and Lemma 2.7: \[\begin{split}\sum_{p_{1},p_{2},q_{1},q_{2}\in P_{\mathrm{B}}}& \langle a^{*}_{p_{1}}a^{*}_{p_{2}}\mathcal{N}(\mathcal{N}-1)a_{q_{1}}a_{q_{2} }\rangle_{\phi_{z,\alpha}}=\sum_{p_{1},p_{2},q_{1}\in P_{\mathrm{B}}}\langle a ^{*}_{p_{1}}a^{*}_{p_{2}}\mathcal{N}(\mathcal{N}-1)a_{q_{1}}a_{p_{1}+p_{2}-q_{1 }}\rangle_{\phi_{z,\alpha}}\\ \lesssim& N^{3\delta_{\mathrm{Bog}}}\sum_{p_{1},p_{ 2}\in P_{\mathrm{B}}}\langle a^{*}_{p_{1}}a^{*}_{p_{2}}\mathcal{N}^{2}a_{p_{1}}a _{p_{2}}\rangle_{\phi_{z,\alpha}}\leq N^{3\delta_{\mathrm{Bog}}}\|(\mathcal{N }^{<})^{2}\phi_{z,\alpha}\|\cdot\|\mathcal{N}^{2}\phi_{z,\alpha}\|\\ \lesssim& N^{3\delta_{\mathrm{Bog}}}(N^{<}_{\alpha}+1 )^{2}(N_{\alpha}+|z|^{2}+1)^{2}.\end{split}\] Collecting the above bounds, we find (2.58). The following statement quantifies how the Jastrow factor changes the norm of an eigenfunction of \(|z\rangle\langle z|\otimes G(z)\). 
**Corollary 2.14**.: _Let \(\beta\gtrsim\beta_{\mathrm{c}}\) and \(\delta_{\mathrm{Bog}}\leq 2/15\). There exists \(C>0\) such that_ \[\|F\phi_{z,\alpha}\|^{2}\geq 1-CN\ell^{2}, \tag{2.67}\] _for every \(|z|^{2}\leq\widetilde{c}N\), \(\alpha\in\widetilde{\mathcal{A}}\) and \(N\in\mathbb{N}\)._ Proof.: We use the inequality \[\prod_{i<j}\left(1-u_{\ell}(x_{i}-x_{j})\right)\geq 1-\sum_{i<j}u_{\ell}(x_{i}-x_{j}), \tag{2.68}\] for \(u_{\ell}=1-f_{\ell}^{2}\geq 0\), to estimate \[\|F\phi_{z,\alpha}\|^{2}=\sum_{n}\int_{\Lambda^{n}}\left|\phi_{z,\alpha}^{(n)}\right|^{2}\prod_{i<j}\left(1-u_{\ell}(x_{i}-x_{j})\right)\mathrm{d}x_{1}...\,\mathrm{d}x_{n}\geq 1-\frac{1}{2}\int_{\Lambda^{2}}\varrho_{z,\alpha}^{(2)}(x,y)u_{\ell}(x-y)\mathrm{d}x\mathrm{d}y. \tag{2.69}\] By (2.56) and the assumptions \(\beta\gtrsim\beta_{\mathrm{c}}\), \(\delta_{\mathrm{Bog}}\leq 2/15\), \(\alpha\in\widetilde{\mathcal{A}}\) and \(|z|^{2}\leq\widetilde{c}N\), we have \[\varrho_{z,\alpha}^{(2)}(x_{1},x_{2})\lesssim N^{2}+N^{5\delta_{\mathrm{Bog}}}\beta^{-2}\lesssim N^{2}.\] We insert this into (2.69), use (A.3) and \(\mathfrak{a}_{N}N\lesssim 1\), and find (2.67). #### 2.3.4 The uncorrelated trial state Finally, we need some estimates on the densities of the uncorrelated trial state \(\Gamma_{0}\). For \(k\in\mathbb{N}\), we define \[\varrho_{\Gamma_{0}}^{(k)}(x_{1},...,x_{k})= \operatorname{Tr}[a_{x_{1}}^{*}...a_{x_{k}}^{*}a_{x_{1}}...a_{x_{k}}\Gamma_{0}]. \tag{2.70}\] We have the following. **Lemma 2.15**.: _Let \(\beta\gtrsim\beta_{\mathrm{c}}\), \(k\geq 2\) and \(i\in\{1,...,k\}\). Then the bounds_ \[\sup_{x_{1},...,x_{k}\in\Lambda}|\varrho_{\Gamma_{0}}^{(k)}(x_{1},...,x_{k})|\lesssim_{k} N^{k}, \tag{2.71}\] \[\sup_{x_{1},...,x_{k}\in\Lambda}|\nabla_{i}\varrho_{\Gamma_{0}}^{(k)}(x_{1},...,x_{k})|\lesssim_{k}\beta^{-1/2}N^{k},\] _hold._ Proof.: For \(x\in\Lambda\) and \(z\in\mathbb{C}\) we introduce the operators \(a_{x,z}^{*}=\frac{z}{|z|}a_{x}^{*}\), \(a_{x,z}=\frac{\overline{z}}{|z|}a_{x}\), which satisfy \[W_{z}^{*}a_{x,z}^{*}W_{z}=a_{x,z}^{*}+|z|,\qquad W_{z}^{*}a_{x,z}W_{z}=a_{x,z}+|z| \tag{2.72}\] by (2.26). With the notation \(y_{i}=y_{i+k}=x_{i}\), \(\sharp_{i}=*\), \(\sharp_{i+k}=\cdot\) for \(i=1,...,k\), we can write the \(k\)-body density of \(\Gamma_{0}\) as \[\varrho_{\Gamma_{0}}^{(k)}(x_{1},...,x_{k})=\int_{\mathbb{C}}\zeta(z)\operatorname{Tr}_{\mathfrak{F}}\big{[}a_{y_{1},z}^{\sharp_{1}}...a_{y_{2k},z}^{\sharp_{2k}}|z\rangle\langle z|\otimes G(z)\big{]}\,\mathrm{d}z. \tag{2.73}\] Using (2.72) and Wick's theorem for the quasi-free state \(G(z)\), we find \[\operatorname{Tr}_{\mathfrak{F}}\big{[}a_{y_{1},z}^{\sharp_{1}}...a_{y_{2k},z}^{\sharp_{2k}}|z\rangle\langle z|\otimes G(z)\big{]}\] \[= |z|^{2k}+\sum_{h=1}^{k}|z|^{2(k-h)}\sum_{1\leq i_{1}<...<i_{2h}\leq 2k}\sum_{\sigma\in P_{2h}}\prod_{j=1}^{h}\operatorname{Tr}_{\mathfrak{F}_{+}}\Big{[}a_{y_{i_{\sigma(2j-1)}},z}^{\sharp_{i_{\sigma(2j-1)}}}a_{y_{i_{\sigma(2j)}},z}^{\sharp_{i_{\sigma(2j)}}}G(z)\Big{]},\] where \[P_{2h}=\{\sigma\in S_{2h}\mid\sigma(2j-1)<\sigma(2j),\,\sigma(2j-1)<\sigma(2j+1)\}\] denotes the set of pairings of \(\{1,...,2h\}\). Lemma 2.11 implies that for every \(\sharp,\flat\in\{*,\cdot\}\) there exists \(f_{z}^{(\sharp,\flat)}:\Lambda\to\mathbb{C}\) such that \(\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{x,z}^{\sharp}a_{y,z}^{\flat}G(z)]=f_{z}^{(\sharp,\flat)}(x-y)\).
Moreover, the functions \(f_{z}^{(\sharp,\flat)}\) satisfy the bounds \[\|f_{z}^{(\sharp,\flat)}\|_{\infty}\lesssim N\quad\text{ and }\quad\|\nabla f_{z}^{(\sharp,\flat)}\|_{\infty}\lesssim\beta^{-1/2}N,\] uniformly in \(z\in\mathbb{C}\). Taking into account the particle number cutoff in \(\zeta(z)\), we thus find \[|\varrho_{\Gamma_{0}}^{(k)}(x_{1},...,x_{k})|\lesssim \int_{\mathbb{C}}\zeta(z)\Big{(}|z|^{2k}+\sum_{h=1}^{k}|z|^{2(k-h)}\binom{2k}{2h}|P_{2h}|N^{h}\Big{)}\,\mathrm{d}z\lesssim_{k}N^{k},\] \[|\nabla_{i}\varrho_{\Gamma_{0}}^{(k)}(x_{1},...,x_{k})|\lesssim \sum_{h=1}^{k}\int_{\mathbb{C}}\zeta(z)|z|^{2(k-h)}\binom{2k}{2h}|P_{2h}|\beta^{-1/2}N^{h}\,\mathrm{d}z\lesssim_{k}\beta^{-1/2}N^{k},\] which is the claim of the lemma. Let us introduce the notation \[\widetilde{N}_{0}=\operatorname{Tr}[a_{0}^{*}a_{0}\widetilde{\Gamma}_{0}]=\int_{\mathbb{C}}|z|^{2}\zeta(z)\,\mathrm{d}z \tag{2.74}\] for the expected number of particles in the condensate of \(\widetilde{\Gamma}_{0}\) in (2.8), with \(\widetilde{\mu}\) chosen according to Lemma 2.1. As a corollary to Lemmas 2.1, 2.10 and 2.12 we prove the following estimate for the difference between \(\widetilde{N}_{0}\) and the expected number of particles in the condensate of the ideal gas. **Corollary 2.16**.: _Let the assumptions of Lemma 2.1 be satisfied. Then_ \[|N_{0}-\widetilde{N}_{0}|\lesssim\frac{N_{0}}{\beta N}+\frac{N_{0}^{2}}{N^{2}}+N^{3}\ell^{4}+N^{1+\delta_{\mathrm{Bog}}}\ell^{2}(\beta^{-1}+1) \tag{2.75}\] _holds with \(N_{0}\) in (1.7)._ Proof.: We have \[N_{0}-\widetilde{N}_{0}= \operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G(z)]-\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\mathrm{id}}]+\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}\widetilde{G}(z)]-\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G(z)]\] \[+\operatorname{Tr}_{\mathfrak{F}}[\mathcal{N}\Gamma]-\operatorname{Tr}_{\mathfrak{F}}[\mathcal{N}\widetilde{\Gamma}_{0}]\] with \(G,\widetilde{G}\) in (2.7), \(\widetilde{\Gamma}_{0}\) in (2.8), \(\Gamma\) in (2.14) and \(G^{\mathrm{id}}\) in (2.50). In combination with Lemmas 2.1, 2.10 and 2.12, this identity proves the claim. ## 3 Estimate for the energy Throughout Sections 3, 4 and 5, we adopt the assumptions of Lemma 2.1, and take \(N\) sufficiently large so that the trial state \(\Gamma\) is well defined. In addition, we suppose that \(\beta\sim\beta_{\mathrm{c}}\), with \(\beta_{\mathrm{c}}\) in (1.8), and we fix a sufficiently large cutoff parameter \(\widetilde{c}>2\) in (2.11) such that the statements of Lemma 2.10 hold. Finally, we assume that \(N\ell^{2}\) is sufficiently small. In this section we will prove the following statement. **Proposition 3.1**.: _The energy of \(\Gamma\) is given by_ \[\begin{split}\operatorname{Tr}[\mathcal{H}_{N}\Gamma]=&\operatorname{Tr}[\mathcal{H}^{\operatorname{diag}}G^{\operatorname{diag}}]+\mu_{0}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}+4\pi\mathfrak{a}_{N}\int_{\mathbb{C}}\zeta(z)|z|^{4}\,\mathrm{d}z\\ &+8\pi\mathfrak{a}_{N}\Bigg{[}\Big{(}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Big{)}^{2}+\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}\setminus P_{\mathrm{B}}}\gamma_{p}+\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Bigg{]}+\mathcal{E}_{\mathcal{H}},\end{split} \tag{3.1}\] _with_ \[\mathcal{E}_{\mathcal{H}}\lesssim N^{1/3+\delta_{\mathrm{Bog}}}+N^{2/3+3\delta_{\mathrm{Bog}}}\ell^{1/2}+N^{5/3+3\delta_{\mathrm{Bog}}}\ell^{2}+N^{3}\ell^{4}+\ell^{-1}.
\tag{3.2}\] To prove the above proposition, we decompose \[\operatorname{Tr}[\mathcal{H}_{N}\Gamma]=\operatorname{Tr}[\mathcal{K}\Gamma]+\operatorname{Tr}[\mathcal{V}_{N}\Gamma],\] where \[\mathcal{K}=\int_{\Lambda}a_{x}^{*}(-\Delta)a_{x}\,\mathrm{d}x\quad\text{ and }\quad\mathcal{V}_{N}=\frac{1}{2}\int_{\Lambda^{2}}v_{N}(x-y)a_{x}^{*}a_{y}^{*}a_{x}a_{y}\,\mathrm{d}x\,\mathrm{d}y\] denote the kinetic energy operator and the interaction potential, respectively. We first compute the contribution of the kinetic term. ### Analysis of the kinetic energy The goal of this section is to prove the following proposition. **Proposition 3.2**.: _We have_ \[\operatorname{Tr}[\mathcal{K}\Gamma]=\operatorname{Tr}[\mathcal{K}\widetilde{\Gamma}_{0}]+\operatorname{Tr}[\chi\Gamma]+\mathcal{E}_{\mathcal{K}}, \tag{3.3}\] _where_ \[\chi:=\int_{\Lambda^{2}}\frac{|\nabla f_{\ell}(x-y)|^{2}}{f_{\ell}(x-y)^{2}}a_{x}^{*}a_{y}^{*}a_{x}a_{y}\,\mathrm{d}x\,\mathrm{d}y \tag{3.4}\] _with \(f_{\ell}\) in (2.12). The error term satisfies_ \[\mathcal{E}_{\mathcal{K}}\lesssim N^{2/3+3\delta_{\mathrm{Bog}}}\sqrt{\ell(1+N^{2}\ell^{3})}. \tag{3.5}\] **Remark 3.3**.: The operator \(\chi\) contains the leading-order contribution of the correlation structure to the kinetic energy of the trial state. We will combine it with contributions from the potential \(\mathcal{V}_{N}\) to obtain the full interaction energy. To prove Proposition 3.2, we start by expanding the trace on the left-hand side of (3.3) in particle number sectors. With the notation \(\mathrm{d}X^{n}=\mathrm{d}x_{1}\,...\,\mathrm{d}x_{n}\), we write \[\operatorname{Tr}[\mathcal{K}\Gamma]= \int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\sum_{n=1}^{\infty}\sum_{i=1}^{n}\int_{\Lambda^{n}}\left|\nabla_{i}\big{(}F_{n}\phi_{z,\alpha}^{n}\big{)}\right|^{2}\,\mathrm{d}X^{n}\,\mathrm{d}z, \tag{3.6}\] where \(\phi_{z,\alpha}^{n}\) denotes the projection of \(\phi_{z,\alpha}\) onto the \(n\)-particle sector of the Fock space.
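Here and in the following we use that, as in the proof of Corollary 2.14, the restriction \(F_{n}\) of the Jastrow factor to the \(n\)-particle sector acts as multiplication by \[F_{n}(x_{1},...,x_{n})=\prod_{1\leq i<j\leq n}f_{\ell}(x_{i}-x_{j}),\] so that \(0\leq F_{n}\leq 1\) and \(F_{n}^{2}=\prod_{1\leq i<j\leq n}\big{(}1-u_{\ell}(x_{i}-x_{j})\big{)}\).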
Integrating by parts, we find \[\begin{split}\int_{\Lambda^{n}}\left|\nabla_{i}\big{(}& F_{n}\phi_{z,\alpha}^{n}\big{)}\right|^{2}\mathrm{d}X^{n}=-\int_{\Lambda^{n}}F_{n}\overline{\phi_{z,\alpha}^{n}}\Delta_{i}\big{(}F_{n}\phi_{z,\alpha}^{n}\big{)}\,\mathrm{d}X^{n}\\ =&\int_{\Lambda^{n}}\left[(-\Delta_{i}F_{n})F_{n}\big{|}\phi_{z,\alpha}^{n}\big{|}^{2}-2\nabla_{i}F_{n}\cdot\nabla_{i}\phi_{z,\alpha}^{n}F_{n}\overline{\phi_{z,\alpha}^{n}}+F_{n}^{2}\overline{\phi_{z,\alpha}^{n}}(-\Delta_{i}\phi_{z,\alpha}^{n})\right]\mathrm{d}X^{n}.\end{split} \tag{3.7}\] A further integration by parts shows \[\int_{\Lambda^{n}}(-\Delta_{i}F_{n})F_{n}\big{|}\phi^{n}_{z,\alpha}\big{|}^{2}\,\mathrm{d}X^{n}= \int_{\Lambda^{n}}|\nabla_{i}F_{n}|^{2}\big{|}\phi^{n}_{z,\alpha}\big{|}^{2}\,\mathrm{d}X^{n}\] \[+2\,\mathfrak{Re}\int_{\Lambda^{n}}F_{n}\left(\nabla_{i}F_{n}\cdot\overline{\nabla_{i}\phi^{n}_{z,\alpha}}\right)\phi^{n}_{z,\alpha}\,\mathrm{d}X^{n}.\] When we plug this into (3.7) and take the real part on both sides, we get \[\int_{\Lambda^{n}}\big{|}\nabla_{i}\big{(}F_{n}\phi^{n}_{z,\alpha}\big{)}\big{|}^{2}\,\mathrm{d}X^{n}=\int_{\Lambda^{n}}|\nabla_{i}F_{n}|^{2}\,\big{|}\phi^{n}_{z,\alpha}\big{|}^{2}\,\mathrm{d}X^{n}+\mathfrak{Re}\int_{\Lambda^{n}}F_{n}^{2}\overline{\phi^{n}_{z,\alpha}}\big{(}-\Delta_{i}\phi^{n}_{z,\alpha}\big{)}\,\mathrm{d}X^{n}.\] Inserted into (3.6), this yields \(\mathrm{Tr}[\mathcal{K}\Gamma]=K_{1}+K_{2}\) with \[K_{1}\coloneqq \int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\sum_{n=1}^{\infty}\sum_{i=1}^{n}\int_{\Lambda^{n}}|\nabla_{i}F_{n}|^{2}\,\big{|}\phi^{n}_{z,\alpha}\big{|}^{2}\ \,\mathrm{d}X^{n}\,\mathrm{d}z,\] \[K_{2}\coloneqq \mathfrak{Re}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\langle F^{2}\phi_{z,\alpha},\mathcal{K}\phi_{z,\alpha}\rangle\,\mathrm{d}z.\] In the next two lemmas we provide estimates for \(K_{1}\) and \(K_{2}\). **Lemma 3.4**.: _We have_ \[K_{1}=\mathrm{Tr}[\chi\Gamma]+\mathcal{E}^{(1)}_{\mathcal{K}}, \tag{3.8}\] _with_ \[\mathcal{E}^{(1)}_{\mathcal{K}}\lesssim N\ell^{2}. \tag{3.9}\] Proof.: Let us introduce the notation \[f_{ij}=f_{\ell}(x_{i}-x_{j}),\qquad u_{ij}=u_{\ell}(x_{i}-x_{j}),\qquad\nabla f_{ij}=\nabla f_{\ell}(x_{i}-x_{j}), \tag{3.10}\] where, we recall, \(u_{\ell}=1-f_{\ell}^{2}\).
We have \(\nabla_{i}F_{n}=F_{n}\sum_{j\neq i}\nabla f_{ij}/f_{ij}\), which implies \[|\nabla_{i}F_{n}|^{2}=F_{n}^{2}\Bigg{\{}\sum_{\begin{subarray}{c}1\leq j\leq n\\ j\neq i\end{subarray}}\frac{|\nabla f_{ij}|^{2}}{f_{ij}^{2}}+\sum_{\begin{subarray}{c}1\leq j,k\leq n\\ j\neq i,k\neq i,j\end{subarray}}\frac{\nabla f_{ij}\cdot\nabla f_{ik}}{f_{ij}f_{ik}}\Bigg{\}}.\] The first term, inserted into the definition of \(K_{1}\), gives \(\mathrm{Tr}[\chi\Gamma]\), thus \[\mathcal{E}^{(1)}_{\mathcal{K}}=\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\sum_{n=1}^{\infty}\int_{\Lambda^{n}}F_{n}^{2}\big{|}\phi^{n}_{z,\alpha}\big{|}^{2}\sum_{\begin{subarray}{c}1\leq i,j,k\leq n\\ i\neq j,k\neq i,j\end{subarray}}\frac{\nabla f_{ij}\cdot\nabla f_{ik}}{f_{ij}f_{ik}}\,\mathrm{d}X^{n}\,\mathrm{d}z.\] In combination, (2.67), the inequality \((1-x)^{-1}\leq 1+x+2x^{2}\), valid for \(0\leq x\leq 1/2\), and \(F_{n}^{2}f_{ij}^{-1}f_{ik}^{-1}\leq 1\), imply \[\mathcal{E}^{(1)}_{\mathcal{K}}\leq (1+CN\ell^{2})\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\sum_{n=1}^{\infty}\int_{\Lambda^{n}}\sum_{\begin{subarray}{c}1\leq i,j,k\leq n\\ i\neq j,k\neq i,j\end{subarray}}|\nabla f_{ij}||\nabla f_{ik}||\phi^{n}_{z,\alpha}\big{|}^{2}\,\mathrm{d}X^{n}\,\mathrm{d}z \tag{3.11}\] \[\lesssim \kappa_{0}^{-1}\int_{\Lambda^{3}}\varrho^{(3)}_{\Gamma_{0}}(x_{1},x_{2},x_{3})|\nabla f_{\ell}(x_{1}-x_{2})||\nabla f_{\ell}(x_{1}-x_{3})|\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}\leq\kappa_{0}^{-1}\|\varrho^{(3)}_{\Gamma_{0}}\|_{\infty}\|\nabla f_{\ell}\|_{1}^{2},\] as long as \(N\ell^{2}\) is sufficiently small. The estimate in (3.9) follows using (2.71) with \(k=3\), (A.3), and \(\kappa_{0}^{-1}\lesssim 1+C\exp(-c\beta N)\lesssim 1\), which follows from Lemma 2.10. **Lemma 3.5**.: _We have_ \[K_{2}=\operatorname{Tr}[\mathcal{K}\widetilde{\Gamma}_{0}]+\mathcal{E}^{(2)}_{\mathcal{K}}, \tag{3.12}\] _where the error term satisfies the bound_ \[\mathcal{E}^{(2)}_{\mathcal{K}}\lesssim N^{2/3+3\delta_{\mathrm{Bog}}}\sqrt{\ell(1+N^{2}\ell^{3})}. \tag{3.13}\] Proof.: Let us decompose the kinetic energy in momentum space as \(\mathcal{K}=\mathcal{K}^{<}+\mathcal{K}^{>}\) with \[\mathcal{K}^{<}=\sum_{p\in P_{\mathrm{B}}}p^{2}a_{p}^{*}a_{p},\qquad\mathcal{K}^{>}=\sum_{p\in\Lambda^{*}_{+}\backslash P_{\mathrm{B}}}p^{2}a_{p}^{*}a_{p}.\] As a symmetrized product of plane waves, \(\Psi_{\alpha}\) is an eigenfunction of the localized kinetic energies \(\mathcal{K}^{<}\) and \(\mathcal{K}^{>}\), that is \(\mathcal{K}^{\lessgtr}\Psi_{\alpha}=E^{\lessgtr}_{\alpha}\Psi_{\alpha}\). Both \(T_{z}\) and \(W_{z}\) act trivially on creation/annihilation operators indexed by high momenta \(p\in\Lambda^{*}_{+}\setminus P_{\mathrm{B}}\). Hence, \[\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\langle F^{2}\phi_{z,\alpha},\mathcal{K}^{>}\phi_{z,\alpha}\rangle\,\mathrm{d}z=\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}E^{>}_{\alpha}=\operatorname{Tr}[\mathcal{K}^{>}\widetilde{\Gamma}_{0}].\] To extract the largest contributions from the low-momentum part of the kinetic energy, we write \[\langle F^{2}\phi_{z,\alpha},\mathcal{K}^{<}\phi_{z,\alpha}\rangle= \langle\phi_{z,\alpha},\mathcal{K}^{<}\phi_{z,\alpha}\rangle+ \langle(F^{2}-1)\phi_{z,\alpha},\mathcal{K}^{<}\phi_{z,\alpha}\rangle.
\tag{3.14}\] Combining the first term on the right-hand side of (3.14) with the high momentum part of the kinetic energy gives \(\operatorname{Tr}[\mathcal{K}\widetilde{\Gamma}_{0}]\). It remains to estimate the remainder. Using Cauchy-Schwarz, (2.36), (2.67) and the inequality \[\big{|}F_{n}(x_{1},...,x_{n})^{2}-1\big{|}\leq\sum_{i<j}^{n}u_{\ell}(x_{i}-x_{j}),\] which follows from (2.68) and \(F_{n}\leq 1\), we can estimate the error term as \[\mathcal{E}^{(2)}_{\mathcal{K}}= \mathfrak{Re}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\langle(F^{2}-1)\phi_{z,\alpha},\mathcal{K}^{<}\phi_{z,\alpha}\rangle \tag{3.15}\] \[\lesssim \Big{(}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\|(F^{2}-1)\phi_{z,\alpha}\|^{2}\,\mathrm{d}z\Big{)}^{1/2}\Big{(}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\|\mathcal{K}^{<}\phi_{z,\alpha}\|^{2}\,\mathrm{d}z\Big{)}^{1/2}\] \[\lesssim \Big{(}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\sum_{n}\int_{\Lambda^{n}}\Big{|}\sum_{1\leq i<j\leq n}u_{ij}\Big{|}^{2}|\phi_{z,\alpha}^{n}|^{2}\,\mathrm{d}X^{n}\,\mathrm{d}z\Big{)}^{1/2}\big{(}\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{K}^{<})^{2}\Gamma_{0}]\big{)}^{1/2}\] with \(u_{ij}\) in (3.10). We have \[\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{K}^{<})^{2}\Gamma_{0}]\leq N^{4\delta_{\mathrm{Bog}}}\int_{\mathbb{C}}\zeta(z)\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{N}^{<})^{2}G(z)]\,\mathrm{d}z.\] An application of Wick's theorem and Lemma 2.11 shows \[\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{N}^{<})^{2}G(z)] =\sum_{p,q\in P_{\mathrm{B}}}\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{p}^{*}a_{p}a_{q}^{*}a_{q}G(z)]\] \[=\sum_{p,q\in P_{\mathrm{B}}}\gamma_{p}\gamma_{q}+\sum_{p\in P_{\mathrm{B}}}\Big{(}\gamma_{p}(\gamma_{p}+1)+|\alpha_{p}|^{2}\Big{)}\lesssim(1+\beta^{-1}N^{\delta_{\mathrm{Bog}}})^{2},\] and hence \[\big{(}\operatorname{Tr}_{\mathfrak{F}_{+}}[(\mathcal{K}^{<})^{2}\Gamma_{0}]\big{)}^{1/2}\lesssim N^{2\delta_{\mathrm{Bog}}}(1+\beta^{-1}N^{\delta_{\mathrm{Bog}}}).
\tag{3.16}\] We use \(\kappa_{0}^{-1}\lesssim 1\), which follows from Lemma 2.10, and expand the square in the integral to see that \[\begin{split}&\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\sum_{n}\int_{\Lambda^{n}}\Big{|}\sum_{1\leq i<j\leq n}u_{ij}\Big{|}^{2}\big{|}\phi_{z,\alpha}^{n}\big{|}^{2}\,\mathrm{d}X^{n}\,\mathrm{d}z\\ \lesssim&\frac{1}{4}\int_{\Lambda^{4}}\varrho_{\Gamma_{0}}^{(4)}(x_{1},x_{2},x_{3},x_{4})u_{\ell}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}\mathrm{d}x_{4}\\ &\qquad+\int_{\Lambda^{3}}\varrho_{\Gamma_{0}}^{(3)}(x_{1},x_{2},x_{3})u_{\ell}(x_{1}-x_{2})u_{\ell}(x_{1}-x_{3})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}\\ &\qquad+\frac{1}{2}\int_{\Lambda^{2}}\varrho_{\Gamma_{0}}^{(2)}(x_{1},x_{2})u_{\ell}(x_{1}-x_{2})^{2}\mathrm{d}x_{1}\mathrm{d}x_{2}.\end{split} \tag{3.17}\] Moreover, applications of (A.3) and Lemma 2.15 show \[\begin{split}\int_{\Lambda^{4}}\varrho_{\Gamma_{0}}^{(4)}(x_{1},x_{2},x_{3},x_{4})u_{\ell}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}\mathrm{d}x_{4}\lesssim& N^{4}\mathfrak{a}_{N}^{2}\ell^{4}\lesssim N^{2}\ell^{4},\\ \int_{\Lambda^{3}}\varrho_{\Gamma_{0}}^{(3)}(x_{1},x_{2},x_{3})u_{\ell}(x_{1}-x_{2})u_{\ell}(x_{1}-x_{3})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}\lesssim& N^{3}\mathfrak{a}_{N}^{2}\ell^{4}\lesssim N\ell^{4},\\ \int_{\Lambda^{2}}\varrho_{\Gamma_{0}}^{(2)}(x_{1},x_{2})u_{\ell}(x_{1}-x_{2})^{2}\mathrm{d}x_{1}\mathrm{d}x_{2}\lesssim& N^{2}\mathfrak{a}_{N}^{2}\ell\lesssim\ell.\end{split} \tag{3.18}\] With (3.17) and (3.18), we see that the first factor on the right-hand side of (3.15) is bounded by \(\sqrt{\ell(1+N^{2}\ell^{3})}\). Combined with (3.16), this implies (3.13). Putting together Lemma 3.4 and Lemma 3.5, we find \[\operatorname{Tr}[\mathcal{K}\Gamma]=\operatorname{Tr}[\chi\Gamma]+\operatorname{Tr}[\mathcal{K}\widetilde{\Gamma}_{0}]+\mathcal{E}_{\mathcal{K}},\] with \[\mathcal{E}_{\mathcal{K}}=\mathcal{E}_{\mathcal{K}}^{(1)}+\mathcal{E}_{\mathcal{K}}^{(2)}.\] The claim of Proposition 3.2 follows from (3.9) and (3.13). ### Analysis of the renormalized interaction As shown in Proposition 3.2, the expectation of the kinetic energy in our trial states yields two contributions, up to negligible errors. The first contribution is the kinetic energy of the undressed trial state \(\widetilde{\Gamma}_{0}\), which will be combined with the entropy to obtain the free energy of the ideal gas. The second is given by the expectation of the two-body operator \(\chi\) in the state \(\Gamma\). This term needs to be combined with the interaction potential \(\mathcal{V}_{N}\) to replace the integral of \(v_{N}\) by \(8\pi\mathfrak{a}_{N}L^{-3}\) in the relevant contributions to the free energy. The precise statement is captured by the following proposition.
**Proposition 3.6**.: _We have_ \[\begin{split}\operatorname{Tr}[(\chi+\mathcal{V}_{N})\Gamma]=&\int_{\mathbb{C}}\zeta(z)\operatorname{Tr}[\mathcal{Q}_{\mathrm{B}}\widetilde{G}(z)]\,\mathrm{d}z+4\pi\mathfrak{a}_{N}\int_{\mathbb{C}}\zeta(z)|z|^{4}\,\mathrm{d}z\\ &+8\pi\mathfrak{a}_{N}\Bigg{[}\Big{(}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Big{)}^{2}+\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}\setminus P_{\mathrm{B}}}\gamma_{p}+\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Bigg{]}+\mathcal{E}_{\mathcal{V}},\end{split} \tag{3.19}\] _where \(\widetilde{N}_{0}\) is defined in (2.74) and_ \[\mathcal{Q}_{\mathrm{B}}\coloneqq 4\pi\mathfrak{a}_{N}N_{0}\sum_{p\in P_{\mathrm{B}}}\left[2a_{p}^{*}a_{p}+\left(\frac{z^{2}}{|z|^{2}}a_{p}^{*}a_{-p}^{*}+\frac{\overline{z}^{2}}{|z|^{2}}a_{p}a_{-p}\right)\right]. \tag{3.20}\] _The error satisfies the bound_ \[\mathcal{E}_{\mathcal{V}}\lesssim N^{1/3+\delta_{\mathrm{Bog}}}+N^{2/3+2\delta_{\mathrm{Bog}}}\ell+N^{5/3}\ell^{2}+N^{3}\ell^{4}+\ell^{-1}. \tag{3.21}\] To prove the above proposition, we start by writing the expectation on the left-hand side of (3.19) in a more convenient way: \[\mathrm{Tr}[(\chi+\mathcal{V}_{N})\Gamma]= \int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\] \[\times\sum_{n=2}^{\infty}\sum_{1\leq i<j\leq n}\int_{\Lambda^{n}}\Big{(}2\frac{|\nabla f_{\ell}(x_{i}-x_{j})|^{2}}{f_{\ell}(x_{i}-x_{j})^{2}}+v_{N}(x_{i}-x_{j})\Big{)}F_{n}^{2}\left|\phi_{z,\alpha}^{n}\right|^{2}\,\mathrm{d}X^{n}\,\mathrm{d}z\] \[\leq \int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\] \[\times\sum_{n=2}^{\infty}\sum_{1\leq i<j\leq n}\int_{\Lambda^{n}}2\xi_{N}(x_{i}-x_{j})\prod_{\begin{subarray}{c}1\leq k<h\leq n\\ (k,h)\neq(i,j)\end{subarray}}f_{\ell}^{2}(x_{k}-x_{h})\left|\phi_{z,\alpha}^{n}\right|^{2}\,\mathrm{d}X^{n}\,\mathrm{d}z,\] where we defined the effective potential \[\xi_{N}(x)=|\nabla f_{\ell}(x)|^{2}+\frac{1}{2}v_{N}(x)f_{\ell}^{2}(x).\] Its \(L^{1}\)-norm satisfies \[\int_{\Lambda}\xi_{N}(x)\mathrm{d}x=\mathcal{E}_{\ell}(f_{\ell})=\frac{4\pi\mathfrak{a}_{N}}{1-\frac{\mathfrak{a}_{N}}{\ell}}, \tag{3.22}\] see (A.1). For \(\ell\geq 2\mathfrak{a}_{N}\), the denominator is bounded from below by \(1/2\), and thus \(\|\xi_{N}\|_{1}\lesssim\mathfrak{a}_{N}\).
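For later reference, we record the norms that enter the error estimates below. Although we do not restate (A.3) here, the way it is applied in (3.18) and (3.25) is through the bounds \[\|\xi_{N}\|_{1}\lesssim\mathfrak{a}_{N},\qquad\|u_{\ell}\|_{1}\lesssim\mathfrak{a}_{N}\ell^{2},\qquad 0\leq u_{\ell}\leq 1,\] together with \(\mathfrak{a}_{N}N\lesssim 1\) and the fact that \(u_{\ell}\) and \(\xi_{N}\) are supported in the ball \(B_{\ell}\).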
Using the pointwise bound \[\prod_{\begin{subarray}{c}k<h\\ (k,h)\neq(i,j)\end{subarray}}f_{\ell}^{2}(x_{k}-x_{h})\leq 1-\sum_{\begin{subarray}{c}k<h\\ (k,h)\neq(i,j)\end{subarray}}u_{\ell}(x_{k}-x_{h})+\frac{1}{2}\sum_{\begin{subarray}{c}k<h,r<s\\ (i,j)\neq(k,h)\neq(r,s)\end{subarray}}u_{\ell}(x_{k}-x_{h})u_{\ell}(x_{r}-x_{s}), \tag{3.23}\] a second-order inclusion-exclusion bound valid since \(0\leq u_{\ell}\leq 1\), we find \[\mathrm{Tr}[(\chi+\mathcal{V}_{N})\Gamma]\leq\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\int_{\Lambda^{2}}\varrho_{z,\alpha}^{(2)}\xi_{N}(x_{1}-x_{2})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}z \tag{3.24}\] \[\qquad-\frac{1}{2}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\int_{\Lambda^{4}}\varrho_{z,\alpha}^{(4)}\xi_{N}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}\,\mathrm{d}z\] \[\qquad+\frac{1}{8}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\int_{\Lambda^{6}}\varrho_{z,\alpha}^{(6)}\xi_{N}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})u_{\ell}(x_{5}-x_{6})\,\mathrm{d}x_{1}...\,\mathrm{d}x_{6}\,\mathrm{d}z\] \[= \,\mathrm{Tr}[\Xi_{N}\widetilde{\Gamma}_{0}]+V_{1}+V_{2}+V_{3}.\] Here, we defined \[\Xi_{N}\coloneqq\int_{\Lambda^{2}}\xi_{N}(x-y)a_{x}^{*}a_{y}^{*}a_{x}a_{y}\,\mathrm{d}x\,\mathrm{d}y,\] \(V_{1}\) is the first term on the right-hand side of (3.24), with \(\|F\phi_{z,\alpha}\|^{-2}\) replaced by \(\|F\phi_{z,\alpha}\|^{-2}-1\), and the densities \(\varrho_{z,\alpha}^{(k)}\) are defined in (2.55). Using (2.71), (3.22) and (A.3), it is easy to see that \[V_{3}\lesssim\|\varrho_{\Gamma_{0}}^{(6)}\|_{\infty}\|\xi_{N}\|_{1}\|u_{\ell}\|_{1}^{2}\lesssim N^{6}\mathfrak{a}_{N}^{3}\ell^{4}\lesssim N^{3}\ell^{4}. \tag{3.25}\] In contrast, naive bounds for \(V_{1}\) and \(V_{2}\) are not sufficient to achieve the level of precision necessary to prove Theorem 1.1. Indeed, when considered individually, they give contributions to the energy proportional to \(N^{2}\ell^{2}\). Together with the errors arising from the localization of the minimization problem in (A.1), which are of order \(\ell^{-1}\), they add up to a contribution proportional to \(N^{2/3}\). This is the level of accuracy of the upper bound for the free energy given in [27], and it is not sufficient to resolve the free energy of the interacting BEC and the Bogoliubov corrections we are interested in. It turns out that \(V_{1}\) and \(V_{2}\) cancel out to leading order. This important cancellation is the content of the next lemma. **Lemma 3.7**.: _The following bound holds:_ \[V_{1}+V_{2}\lesssim N^{5/3}\ell^{2}+N^{3}\ell^{4}. \tag{3.26}\] Proof.: We compute the leading order contributions of \(V_{1}\) and \(V_{2}\) separately, and we check that they indeed cancel. We first consider \(V_{1}\).
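For completeness we note that the elementary bound invoked below (and already in the proof of Lemma 3.4) is immediate: \[(1-x)^{-1}=1+x+\frac{x^{2}}{1-x}\leq 1+x+2x^{2}\qquad\text{for }0\leq x\leq\frac{1}{2},\] since \(1-x\geq 1/2\) on this interval.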
Using (2.69) and the bound \((1-x)^{-1}\leq 1+x+2x^{2}\) for \(0\leq x\leq 1/2\), we find \[V_{1}\leq \frac{1}{2}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{ \mathcal{A}}}\widetilde{\lambda}_{\alpha}\Big{(}\int_{\Lambda^{2}}\varrho_{ \alpha,z}^{(2)}(x_{1},x_{2})u_{\ell}(x_{1}-x_{2})\,\mathrm{d}x_{1}\,\mathrm{d }x_{2}\Big{)}\] \[\qquad\times\Big{(}\int_{\Lambda^{2}}\varrho_{\alpha,z}^{(2)}(x_ {1},x_{2})\xi_{N}(x_{1}-x_{2})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\Big{)}\, \mathrm{d}z\] \[+\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}} \widetilde{\lambda}_{\alpha}\Big{(}\int_{\Lambda^{2}}\varrho_{\alpha,z}^{(2) }(x_{1},x_{2})u_{\ell}(x_{1}-x_{2})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\Big{)}^ {2}\] \[\qquad\times\Big{(}\int_{\Lambda^{2}}\varrho_{\alpha,z}^{(2)}(x_ {1},x_{2})\xi_{N}(x_{1}-x_{2})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\Big{)}\, \mathrm{d}z\eqqcolon V_{1,1}+V_{1,2}.\] Equations (2.56), (3.22), (A.3), and the particle number cutoffs imposed on \(\widetilde{\lambda}_{\alpha}\) and \(\zeta(z)\), allow us to show that \[V_{1,2}\lesssim N^{6}\,\Big{(}\int u_{\ell}\Big{)}^{2}\Big{(}\int\xi_{N} \Big{)}\lesssim N^{6}\mathfrak{a}_{N}^{3}\ell^{4}\lesssim N^{3}\ell^{4}. \tag{3.27}\] We recall the definition of \(N_{\alpha}\) in (2.10). With the same bounds we also obtain \[V_{1,1}\leq\frac{1}{2}\Big{(}\int\xi_{N}\Big{)}\Big{(}\int u_{\ell}\Big{)} \int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{ \lambda}_{\alpha}\big{(}|z|^{4}+4|z|^{2}N_{\alpha}+2N_{\alpha}^{2}\big{)}^{2} \,\mathrm{d}z+\mathcal{E}_{\mathcal{V}}^{(1)}, \tag{3.28}\] where the error term satisfies (recall \(\delta_{\mathrm{Bog}}<1/12\)) \[\mathcal{E}_{\mathcal{V}}^{(1)}\lesssim N^{8/3+\delta_{\mathrm{Bog}}}\Big{(}\int\xi_{N} \Big{)}\Big{(}\int u_{\ell}\Big{)}\lesssim N^{2/3+\delta_{\mathrm{Bog}}}\ell^ {2}.\] It thus follows from Lemma 2.9 that \[\begin{split} V_{1,1}\leq&\frac{1}{2\kappa_{0}}\Big{(} \int\xi_{N}\Big{)}\Big{(}\int u_{\ell}\Big{)}\int_{\mathbb{C}}\zeta(z)\Big{\{}| z|^{8}+8|z|^{6}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\text{diag}}]\\ &+20|z|^{4}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^ {\text{diag}}]^{2}+16|z|^{2}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_ {+}G^{\text{diag}}]^{3}+4\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+} G^{\text{diag}}]^{4}\Big{\}}+CN^{4/3}\ell^{2}.\end{split} \tag{3.29}\] We now turn to estimating \(V_{2}\). 
The simple bound \(\|F(z\otimes T_{z}\Psi_{\alpha})\|\leq 1\) implies \[V_{2}\leq -\frac{1}{2}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\int_{\Lambda^{4}}\varrho_{z,\alpha}^{(4)}(x_{1},x_{2},x_{3},x_{4})\xi_{N}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}\mathrm{d}x_{4}\,\mathrm{d}z\] \[= -\frac{1}{2\kappa_{0}}\int_{\Lambda^{4}}\varrho_{\Gamma_{0}}^{(4)}(x_{1},x_{2},x_{3},x_{4})\xi_{N}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}+\mathcal{E}_{\mathcal{V}}^{(2)},\] with \[\mathcal{E}_{\mathcal{V}}^{(2)}=\frac{1}{2}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\mathcal{A}\backslash\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\int_{\Lambda^{4}}\varrho_{z,\alpha}^{(4)}(x_{1},x_{2},x_{3},x_{4})\xi_{N}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}\,\mathrm{d}z.\] Using \(0\leq u_{\ell}\leq 1\), (2.36), (2.58) and (3.22), we find \[\mathcal{E}_{\mathcal{V}}^{(2)}\lesssim\mathfrak{a}_{N}\sum_{\alpha\in\mathcal{A}\backslash\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}(N^{4}+N^{3\delta_{\text{Bog}}}N_{\alpha}^{2}N^{2}+N^{3\delta_{\text{Bog}}}N_{\alpha}^{4})\lesssim\exp(-cN^{\delta_{\text{Bog}}}). \tag{3.30}\] To extract the leading order contribution from \(V_{2}\), we expand \[\begin{split}-\frac{1}{2}\int_{\Lambda^{4}}&\varrho_{\Gamma_{0}}^{(4)}(x_{1},x_{2},x_{3},x_{4})\xi_{N}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}\\ =&-\frac{1}{2}\Big{(}\int u_{\ell}\Big{)}\Big{(}\int\xi_{N}\Big{)}\int_{\Lambda^{2}}\varrho_{\Gamma_{0}}^{(4)}(x_{1},x_{1},x_{2},x_{2})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}+\mathcal{E}_{\mathcal{V}}^{(3)},\end{split} \tag{3.31}\] where \[\begin{split}\mathcal{E}_{\mathcal{V}}^{(3)}=&-\frac{1}{2}\int_{\Lambda^{4}}\int_{0}^{1}[(x_{2}-x_{1})\cdot\nabla_{2}+(x_{4}-x_{3})\cdot\nabla_{4}]\varrho_{\Gamma_{0}}^{(4)}(x_{1},x_{1}+t(x_{2}-x_{1}),x_{3},x_{3}+t(x_{4}-x_{3}))\\ &\qquad\qquad\times\xi_{N}(x_{1}-x_{2})u_{\ell}(x_{3}-x_{4})\,\mathrm{d}t\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}\,\mathrm{d}x_{4}.\end{split} \tag{3.32}\] With (3.22), (A.3), Lemma 2.15, and the fact that the supports of \(\xi_{N}\) and \(u_{\ell}\) are contained in \(B_{\ell}\), we find \[|\mathcal{E}_{\mathcal{V}}^{(3)}|\lesssim\ell\Big{(}\int u_{\ell}\Big{)}\Big{(}\int\xi_{N}\Big{)}(\|\nabla_{2}\varrho_{\Gamma_{0}}^{(4)}\|_{\infty}+\|\nabla_{4}\varrho_{\Gamma_{0}}^{(4)}\|_{\infty})\lesssim\mathfrak{a}_{N}^{2}\ell^{3}N^{4}\beta^{-1/2}\lesssim N^{7/3}\ell^{3}.\] Let us now compare (3.31) with (3.29). We write \[\varrho_{\Gamma_{0}}^{(4)}(x_{1},x_{1},x_{2},x_{2})= \int_{\mathbb{C}}\zeta(z)\operatorname{Tr}_{\mathfrak{F}}[a_{x_{1}}^{*}a_{x_{1}}^{*}a_{x_{2}}^{*}a_{x_{2}}^{*}a_{x_{1}}a_{x_{1}}a_{x_{2}}a_{x_{2}}|z\rangle\langle z|\otimes G(z)]\,\mathrm{d}z, \tag{3.33}\] and use (2.26) and Wick's theorem to compute the right-hand side. A naive approach generates many terms. To simplify the computation, we observe that, by Lemma 2.11, \(\sup_{x\in\Lambda}|\check{\alpha}(x)|\lesssim N^{2/3}\).
In particular, when we apply (2.26) to (3.33), we see that all terms that do not contain the same number of creation and annihilation operators are subleading, and we get \[\begin{split}-\frac{1}{2}&\Big{(}\int u_{\ell}\Big{)}\Big{(}\int\xi_{N}\Big{)}\int_{\Lambda^{2}}\varrho^{(4)}_{\Gamma_{0}}(x_{1},x_{1},x_{2},x_{2})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\\ =&-\frac{1}{2}\Big{(}\int\xi_{N}\Big{)}\Big{(}\int u_{\ell}\Big{)}\int_{\Lambda^{2}}\int_{\mathbb{C}}\zeta(z)\Big{\{}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a^{*}_{x_{1}}a^{*}_{x_{2}}a^{*}_{x_{2}}a_{x_{1}}a_{x_{1}}a_{x_{2}}a_{x_{2}}G(z)]\\ &+8|z|^{2}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a^{*}_{x_{2}}a^{*}_{x_{2}}a_{x_{1}}a_{x_{2}}a_{x_{2}}G(z)]+8|z|^{2}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a^{*}_{x_{2}}a^{*}_{x_{2}}a_{x_{1}}a_{x_{1}}a_{x_{2}}G(z)]\\ &+2|z|^{4}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a^{*}_{x_{1}}a_{x_{1}}a_{x_{1}}G(z)]+16|z|^{4}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a^{*}_{x_{2}}a_{x_{1}}a_{x_{2}}G(z)]\\ &+8|z|^{4}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a^{*}_{x_{1}}a_{x_{1}}a_{x_{2}}G(z)]+8|z|^{4}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a^{*}_{x_{2}}a_{x_{1}}a_{x_{1}}G(z)]\\ &+2|z|^{4}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a^{*}_{x_{1}}a_{x_{2}}a_{x_{2}}G(z)]\\ &+8|z|^{6}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a_{x_{1}}G(z)]+8|z|^{6}\operatorname{Tr}_{\mathfrak{F}_{+}}[a^{*}_{x_{1}}a_{x_{2}}G(z)]+|z|^{8}\Big{\}}\,\mathrm{d}z\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}+CN^{5/3}\ell^{2}.\end{split} \tag{3.34}\] An application of Lemma 2.11 shows \[\int_{\Lambda}\check{\gamma}(x)\,\mathrm{d}x=0,\qquad\int_{\Lambda}|\check{\gamma}(x)|^{2}\,\mathrm{d}x\lesssim N^{4/3}.\] When we apply Wick's theorem to compute the right-hand side of (3.34), this allows us to see that only the constant terms give leading-order contributions. More precisely, the right-hand side of (3.34) is bounded from above by \[\begin{split}-\frac{1}{2}\Big{(}\int\xi_{N}\Big{)}\Big{(}\int u_{\ell}\Big{)}\int_{\mathbb{C}}\zeta(z)&\Big{\{}4\check{\gamma}(0)^{4}+16|z|^{2}\check{\gamma}(0)^{3}\\ &+20|z|^{4}\check{\gamma}(0)^{2}+8|z|^{6}\check{\gamma}(0)+|z|^{8}\Big{\}}\,\mathrm{d}z+CN^{5/3}\ell^{2}.\end{split}\] It follows from (2.22) and (2.45) that \(\big{|}\check{\gamma}(0)-\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\text{diag}}]\big{|}\lesssim N^{2/3}\). The above considerations imply \[\begin{split}V_{2}\leq&-\frac{1}{2\kappa_{0}}\Big{(}\int\xi_{N}\Big{)}\Big{(}\int u_{\ell}\Big{)}\int_{\mathbb{C}}\zeta(z)\Big{\{}4\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\text{diag}}]^{4}+16|z|^{2}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\text{diag}}]^{3}\\ &+20|z|^{4}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\text{diag}}]^{2}+8|z|^{6}\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\text{diag}}]+|z|^{8}\Big{\}}\,\mathrm{d}z+C(N^{7/3}\ell^{3}+N^{5/3}\ell^{2}).\end{split} \tag{3.35}\] Combining (3.27), (3.29) and (3.35) we find (3.26). In the next lemma, we extract the largest contributions to the energy from the expectation of the effective interaction \(\Xi_{N}\) with respect to the undressed trial state.
**Lemma 3.8**.: _We have_ \[\begin{split}\operatorname{Tr}[\Xi_{N}\widetilde{\Gamma}_{0}]=&\int_{\mathbb{C}}\zeta(z)\operatorname{Tr}[\mathcal{Q}_{\mathrm{B}}\widetilde{G}(z)]\,\mathrm{d}z+4\pi\mathfrak{a}_{N}\int_{\mathbb{C}}\zeta(z)|z|^{4}\,\mathrm{d}z\\ &+8\pi\mathfrak{a}_{N}\Bigg{[}\Big{(}\sum_{p\in\Lambda^{*}_{+}}\gamma_{p}\Big{)}^{2}+\widetilde{N}_{0}\sum_{p\in\Lambda^{*}_{+}\setminus P_{\mathrm{B}}}\gamma_{p}+\widetilde{N}_{0}\sum_{p\in\Lambda^{*}_{+}}\gamma_{p}\Bigg{]}+\mathcal{E}_{\Xi},\end{split} \tag{3.36}\] _with \(\mathcal{Q}_{\mathrm{B}}\) in (3.20) and_ \[\mathcal{E}_{\Xi}\lesssim N^{2/3+\delta_{\mathrm{Bog}}}(N^{-1/3}+N^{\delta_{\mathrm{Bog}}}\ell+N^{2/3+\delta_{\mathrm{Bog}}}\ell^{2}+N^{2}\ell^{4})+\ell^{-1}. \tag{3.37}\] Proof.: In the momentum space representation, \(\Xi_{N}\) reads \[\Xi_{N}=\sum_{p,q,r\in\Lambda^{*}}\widehat{\xi}_{N}(r)a_{p+r}^{*}a_{q}^{*}a_{p}a_{q+r},\] and hence \[\langle W_{z}^{*}\Xi_{N}W_{z}\rangle_{\Omega_{0}\otimes T_{z}\Psi_{\alpha}}= \langle\Xi_{N}^{+}+\mathcal{C}_{N}+\mathcal{Q}_{<}+\mathcal{Q}_{>}\rangle_{\Omega_{0}\otimes T_{z}\Psi_{\alpha}}+2\widehat{\xi}_{N}(0)|z|^{2}\langle\mathcal{N}_{+}\rangle_{T_{z}\Psi_{\alpha}}+\widehat{\xi}_{N}(0)|z|^{4}, \tag{3.38}\] with \[\Xi_{N}^{+}= \sum_{\begin{subarray}{c}p,q,r\in\Lambda^{*}\\ p,q,p+r,q+r\neq 0\end{subarray}}\widehat{\xi}_{N}(r)a_{p+r}^{*}a_{q}^{*}a_{p}a_{q+r},\] \[\mathcal{C}_{N}= 2\sum_{\begin{subarray}{c}p,q\in\Lambda_{+}^{*}\\ p+q\neq 0\end{subarray}}\widehat{\xi}_{N}(p)(za_{p+q}^{*}a_{-p}^{*}a_{q}+\overline{z}a_{q}^{*}a_{p+q}a_{-p}),\] \[\mathcal{Q}_{<}= |z|^{2}\sum_{p\in P_{\rm B}}\widehat{\xi}_{N}(p)\left[2a_{p}^{*}a_{p}+\left(\frac{z^{2}}{|z|^{2}}a_{p}^{*}a_{-p}^{*}+\frac{\overline{z}^{2}}{|z|^{2}}a_{p}a_{-p}\right)\right],\] \[\mathcal{Q}_{>}= |z|^{2}\sum_{p\in\Lambda_{+}^{*}\setminus P_{\rm B}}\widehat{\xi}_{N}(p)\left[2a_{p}^{*}a_{p}+\left(\frac{z^{2}}{|z|^{2}}a_{p}^{*}a_{-p}^{*}+\frac{\overline{z}^{2}}{|z|^{2}}a_{p}a_{-p}\right)\right].\] Since all \(\Psi_{\alpha}\) are eigenfunctions of \(\mathcal{N}\), the expectation of the cubic term on the basis functions of \(\widetilde{G}(z)\) vanishes, that is, \[\langle\Omega_{0}\otimes T_{z}\Psi_{\alpha},\mathcal{C}_{N}\Omega_{0}\otimes T_{z}\Psi_{\alpha}\rangle=0.\] To bound the expectation of the quartic term, we use \(\Xi_{N}^{+}\geq 0\), \(\widehat{\xi}_{N}(p)\leq\widehat{\xi}_{N}(0)\), which follow from \(\xi_{N}(x)\geq 0\) and \(\xi_{N}(x)=\xi_{N}(-x)\), Wick's theorem and Lemma 2.11: \[\operatorname{Tr}[\Xi_{N}^{+}\widetilde{\Gamma}_{0}]= \int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\langle T_{z}\Psi_{\alpha},\Xi_{N}^{+}T_{z}\Psi_{\alpha}\rangle\,\mathrm{d}z\] \[\leq \kappa_{0}^{-1}\int_{\mathbb{C}}\zeta(z)\sum_{\begin{subarray}{c}p,q,r\in\Lambda^{*}\\ p,q,p+r,q+r\neq 0\end{subarray}}\widehat{\xi}_{N}(r)\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{p+r}^{*}a_{q}^{*}a_{p}a_{q+r}G_{\rm B}(z)]\,\mathrm{d}z\] \[\leq \kappa_{0}^{-1}\widehat{\xi}_{N}(0)\Big{(}\sum_{\begin{subarray}{c}p,q\in\Lambda^{*}\\ p,q\neq 0\end{subarray}}\gamma_{p}\gamma_{q}+\sum_{\begin{subarray}{c}p,q\in\Lambda^{*}\\ p,p+q\neq 0\end{subarray}}\gamma_{p}\gamma_{p+q}\Big{)}+\kappa_{0}^{-1}\sum_{\begin{subarray}{c}p,q\in\Lambda^{*}\\ p,p+q\neq 0\end{subarray}}\widehat{\xi}_{N}(q)\alpha_{p}\overline{\alpha_{p+q}}.\] Using (2.48) and (3.22), we see that \[\widehat{\xi}_{N}(0)\Big{(}\sum_{\begin{subarray}{c}p,q\in\Lambda^{*}\\ p,q\neq 0\end{subarray}}\gamma_{p}\gamma_{q}+\sum_{\begin{subarray}{c}p,q\in
\Lambda^{*}\\ p,p+q\neq 0\end{subarray}}\gamma_{p}\gamma_{p+q}\Big{)}= \frac{8\pi\mathfrak{a}_{N}}{1-\frac{\mathfrak{a}_{N}}{\ell}}\Big{(}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Big{)}^{2},\] \[\Big{|}\sum_{\begin{subarray}{c}p,q\in\Lambda^{*}\\ p,p+q\neq 0\end{subarray}}\widehat{\xi}_{N}(q)\alpha_{p}\overline{\alpha_{p+q}}\Big{|}\lesssim \mathfrak{a}_{N}(N^{\delta_{\rm Bog}}+\beta^{-1})^{2}\lesssim N^{1/3},\] and thus \[\operatorname{Tr}[\Xi_{N}^{+}\widetilde{\Gamma}_{0}]\leq 8\pi\mathfrak{a}_{N}\Big{(}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Big{)}^{2}+C\big{(}N^{1/3}+\ell^{-1}\big{)}. \tag{3.39}\] In the last step, we used Lemma 2.10 to conclude \(\kappa_{0}^{-1}\leq 1+C\exp(-cN^{\delta_{\mathrm{Bog}}})\), and \((1-\mathfrak{a}_{N}/\ell)^{-1}\leq 1+2\mathfrak{a}_{N}/\ell\), which follows from \(\ell\geq 2\mathfrak{a}_{N}\). We now analyze the quadratic terms, starting with \(\mathcal{Q}_{<}\). We write \[\int_{\mathbb{C}}\zeta(z)\operatorname{Tr}[\mathcal{Q}_{<}\widetilde{G}(z)]\,\mathrm{d}z=\int_{\mathbb{C}}\zeta(z)\operatorname{Tr}[\mathcal{Q}_{\mathrm{B}}\widetilde{G}(z)]\,\mathrm{d}z+\mathcal{E}_{\mathcal{Q}} \tag{3.40}\] with \(\mathcal{Q}_{\mathrm{B}}\) in (3.20) and \[\mathcal{E}_{\mathcal{Q}}= \int_{\mathbb{C}}\zeta(z)\operatorname{Tr}_{\mathfrak{F}_{+}}\bigg{[}\sum_{p\in P_{\mathrm{B}}}\big{(}|z|^{2}\widehat{\xi}_{N}(p)-4\pi\mathfrak{a}_{N}N_{0}\big{)}\Big{(}2a_{p}^{*}a_{p}+\frac{z^{2}}{|z|^{2}}a_{p}^{*}a_{-p}^{*}+\frac{\overline{z}^{2}}{|z|^{2}}a_{p}a_{-p}\Big{)}\widetilde{G}(z)\bigg{]}\,\mathrm{d}z.\] It follows from Lemma 2.10 and (2.47) that \[\sum_{p\in P_{\mathrm{B}}}\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{p}^{*}a_{p}\widetilde{G}_{\mathrm{B}}(z)]\leq\kappa_{0}^{-1}\sum_{p\in P_{\mathrm{B}}}\gamma_{p}\lesssim N^{2/3+\delta_{\mathrm{Bog}}}, \tag{3.41}\] and \[\Big{|}\sum_{p\in P_{\mathrm{B}}}\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{p}^{*}a_{-p}^{*}\widetilde{G}_{\mathrm{B}}(z)]\Big{|}\leq \sum_{p\in P_{\mathrm{B}}}|\alpha_{p}|+C\exp(-cN^{\delta_{\mathrm{Bog}}})\lesssim N^{2/3}. \tag{3.42}\] We also have \[\sup_{p\in P_{\mathrm{B}}}|\widehat{\xi}_{N}(p)-4\pi\mathfrak{a}_{N}|\lesssim\mathfrak{a}_{N}N^{\delta_{\mathrm{Bog}}}\ell, \tag{3.43}\] which follows from combining \[\big{|}\widehat{\xi}_{N}(p)-\widehat{\xi}_{N}(0)\big{|}\leq|p|\sup_{q}|(\widehat{\xi}_{N})^{\prime}(q)|\leq|p|\int_{\Lambda}|x|\xi_{N}(x)\,\mathrm{d}x\lesssim|p|\ell\mathfrak{a}_{N}\] with (3.22). The last inequality in the above equation follows from (3.22) and the fact that the support of \(\xi_{N}\) is contained in the ball of radius \(\ell\).
Using (2.75), (3.41), (3.42), (3.43), and observing that the quantity \[\operatorname{Tr}_{\mathfrak{F}_{+}}\Big{[}\Big{(}2a_{p}^{*}a_{p}+\frac{z^{2}}{|z|^{2}}a_{p}^{*}a_{-p}^{*}+\frac{\overline{z}^{2}}{|z|^{2}}a_{p}a_{-p}\Big{)}\widetilde{G}(z)\Big{]}\] does not depend on \(z\in\mathbb{C}\), we find \[\begin{split}\mathcal{E}_{\mathcal{Q}}\leq&\Big{(}\sup_{p\in P_{\mathrm{B}}}\big{|}\widehat{\xi}_{N}(p)-4\pi\mathfrak{a}_{N}\big{|}\widetilde{N}_{0}+4\pi\mathfrak{a}_{N}\big{|}\widetilde{N}_{0}-N_{0}\big{|}\Big{)}\\ &\qquad\times 2\Big{(}\sum_{p\in P_{\mathrm{B}}}\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{p}^{*}a_{p}\widetilde{G}_{\mathrm{B}}(z)]+\Big{|}\sum_{p\in P_{\mathrm{B}}}\operatorname{Tr}_{\mathfrak{F}_{+}}[a_{p}^{*}a_{-p}^{*}\widetilde{G}_{\mathrm{B}}(z)]\Big{|}\Big{)}\\ \lesssim&(N^{\delta_{\mathrm{Bog}}}\ell+N^{-1/3}+N^{2}\ell^{4}+N^{2/3+\delta_{\mathrm{Bog}}}\ell^{2})N^{2/3+\delta_{\mathrm{Bog}}}.\end{split} \tag{3.44}\] The expectation of the high-momentum term \(\mathcal{Q}_{>}\) in \(\widetilde{\Gamma}_{0}\) can be bounded by \[\begin{split}\operatorname{Tr}[\mathcal{Q}_{>}\widetilde{\Gamma}_{0}]=& 2\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}\backslash P_{\mathrm{B}}}\widehat{\xi}_{N}(p)\operatorname{Tr}_{\mathfrak{F}}[a_{p}^{*}a_{p}\widetilde{\Gamma}_{0}]\\ \leq&\frac{8\pi\mathfrak{a}_{N}}{1-\frac{\mathfrak{a}_{N}}{\ell}}\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}\backslash P_{\mathrm{B}}}\operatorname{Tr}_{\mathfrak{F}}[a_{p}^{*}a_{p}\widetilde{\Gamma}_{0}]\leq 8\pi\mathfrak{a}_{N}\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}\backslash P_{\mathrm{B}}}\gamma_{p}+C[\ell^{-1}+\exp(-cN^{\delta_{\mathrm{Bog}}})],\end{split} \tag{3.45}\] where we used \(\widehat{\xi}_{N}(p)\leq\widehat{\xi}_{N}(0)\), \(\ell\geq 2\mathfrak{a}_{N}\), \(\sum_{p\in\Lambda^{*}_{+}\setminus P_{\mathrm{B}}}\gamma_{p}\lesssim N\) and (2.36). Similarly, we estimate the term related to the second-to-last term on the right-hand side of (3.38) by \[2\widehat{\xi}_{N}(0)\widetilde{N}_{0}\operatorname{Tr}_{\mathfrak{F}}[\mathcal{N}_{+}\widetilde{\Gamma}_{0}]\leq 8\pi\mathfrak{a}_{N}\widetilde{N}_{0}\sum_{p\in\Lambda^{*}_{+}}\gamma_{p}+C\ell^{-1}. \tag{3.46}\] Finally, the last term on the right-hand side of (3.38) satisfies \[\widehat{\xi}_{N}(0)\int_{\mathbb{C}}\zeta(z)|z|^{4}\,\mathrm{d}z\leq 4\pi\mathfrak{a}_{N}\int_{\mathbb{C}}\zeta(z)|z|^{4}\,\mathrm{d}z+C\ell^{-1}. \tag{3.47}\] Combining (3.39), (3.40), (3.44), (3.45), (3.46) and (3.47), we find the claim. We can now conclude the proof of Proposition 3.6. Inserting (3.36) into (3.24), we find (3.19), with \(\mathcal{E}_{\mathcal{V}}=V_{1}+V_{2}+V_{3}+\mathcal{E}_{\Xi}\). The result now follows from (3.25), (3.26) and (3.37). ### Final upper bound We are now prepared to prove Proposition 3.1.
Observing that \(\mathcal{K}+\mathcal{Q}_{\mathrm{B}}=\mathcal{H}_{\mathrm{B}}(z)+\mu_{0} \mathcal{N}_{+}\) and using (2.23), we find \[\operatorname{Tr}_{\mathfrak{F}}[\mathcal{K}\widetilde{\Gamma}_{0 }]+\int_{\mathbb{C}}\zeta(z)\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{Q}_ {\mathrm{B}}\widetilde{G}(z)]\,\mathrm{d}z= \operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{H}^{\mathrm{diag}} \widetilde{G}^{\mathrm{diag}}]+\mu_{0}\operatorname{Tr}_{\mathfrak{F}_{+}}[ \mathcal{N}_{+}\widetilde{G}(z)] \tag{3.48}\] \[\leq \operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{H}^{\mathrm{diag}} G^{\mathrm{diag}}]+\mu_{0}\sum_{p\in\Lambda^{*}_{+}}\gamma_{p}+\mathcal{E}_{\mathrm{ Bog}},\] with \[\mathcal{E}_{\mathrm{Bog}}=(\kappa_{0}^{-1}-1)\operatorname{Tr}[(\mathcal{H}^ {\mathrm{diag}}-E_{0})G^{\mathrm{diag}}]+\mu_{0}\big{(}\operatorname{Tr}_{ \mathfrak{F}_{+}}[\mathcal{N}_{+}\widetilde{G}(z)]-\operatorname{Tr}_{ \mathfrak{F}_{+}}[\mathcal{N}_{+}G(z)]\big{)}.\] The lowest eigenvalue \(E_{0}<0\) of \(\mathcal{H}_{\mathrm{B}}(z)\) has been defined in (2.23). The bounds \(|p|^{2}-\mu_{0}\leq\varepsilon(p)\lesssim|p|^{2}-\mu_{0}\) and an application of Lemma 2.4 show \[\operatorname{Tr}[(\mathcal{H}^{\mathrm{diag}}-E_{0})G^{\mathrm{diag}}]=\sum_ {p\in\Lambda^{*}_{+}}\frac{\varepsilon(p)}{\exp(\beta\varepsilon(p))-1} \lesssim\sum_{p\in\Lambda^{*}_{+}}\frac{(|p|^{2}-\mu_{0})}{\exp(\beta(|p|^{2}- \mu_{0}))-1}\lesssim\beta^{-5/2}.\] In combination with Lemma 2.7, Lemma 2.10, and \(\mu_{0}=-\beta^{-1}\log(1+N_{0}^{-1})\), this implies \[\mathcal{E}_{\mathrm{Bog}}\lesssim\exp(-cN^{\delta_{\mathrm{Bog}}}), \tag{3.49}\] for some \(c>0\). The claim thus follows by combining (3.48), (3.49), and Propositions 3.2 and 3.6. ## 4 Estimate for the entropy The goal of this section is to prove the lower bound in Proposition 4.1 for the entropy of the trial state \(\Gamma\) defined in (2.14). It should be compared to [13, Proposition 4.1] and to [52, Lemma 2]. We also refer to the discussion in Remark 2.3 above. The main improvement is that we can estimate the influence of correlations added to the eigenfunctions of the state \(|z\rangle\langle z|\otimes\widetilde{G}(z)\), while this has been possible in [13] only for correlations added to those of \(\widetilde{\Gamma}_{0}\). The additional freedom we obtain by doing this is a crucial ingredient for our analysis in Section 3. We recall that the assumptions stated at the beginning of Section 3 also apply in this section. **Proposition 4.1**.: _The entropy of the state \(\Gamma\) in (2.14) satisfies_ \[S(\Gamma)=S(\widetilde{G}^{\rm diag})+S^{\rm cl}(\zeta)+\mathcal{E}_{\rm S}, \tag{4.1}\] _with \(\widetilde{G}^{\rm diag}\) in (2.4), \(\zeta\) in (2.9), \(S^{\rm cl}\) defined below (1.21) and_ \[\mathcal{E}_{\rm S}\gtrsim-N\ell^{2}. \tag{4.2}\] Proof.: We define the function \(\varphi(x)=-x\log(x)\) for \(x\geq 0\) and we assume that \(\{\xi_{k}\}_{k}\) is an orthonormal basis of eigenfunctions of \(\Gamma\). 
This allows us to write \[S(\Gamma)=\sum_{k}\varphi\big{(}\langle\xi_{k},\Gamma\xi_{k}\rangle\big{)}=\sum_{k}\varphi\Big{(}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\,|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}\;{\rm d}z\Big{)}.\] Next, we define \[Z_{k}=\int_{\mathbb{C}}\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}}{\|F\phi_{z,\alpha}\|^{2}}\,{\rm d}z\] and observe that (2.67) implies the bound \[\begin{split} Z_{k}\leq&(1+CN\ell^{2})\int_{\mathbb{C}}\sum_{\alpha\in\mathcal{A}}|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}\;{\rm d}z\\ =&(1+CN\ell^{2})\int_{\mathbb{C}}\sum_{\alpha\in\mathcal{A}}\langle z\otimes T_{z}\Psi_{\alpha},F\xi_{k}\rangle\langle F\xi_{k},z\otimes T_{z}\Psi_{\alpha}\rangle\,{\rm d}z\\ =&(1+CN\ell^{2})\int_{\mathbb{C}}\big{\langle}z,\operatorname{Tr}_{\mathfrak{F}_{+}}\big{[}F|\xi_{k}\rangle\langle\xi_{k}|F\big{]}\,z\big{\rangle}\,{\rm d}z=(1+CN\ell^{2})\operatorname{Tr}[F|\xi_{k}\rangle\langle\xi_{k}|F]\leq(1+CN\ell^{2}),\end{split} \tag{4.3}\] as long as \(N\ell^{2}\) is sufficiently small. To come to the last line, we used the fact that the set \(\{T_{z}\Psi_{\alpha}\}_{\alpha}\) is an orthonormal basis of \(\mathfrak{F}_{+}\) for fixed \(z\in\mathbb{C}\), the completeness relation \(\int_{\mathbb{C}}|z\rangle\langle z|\,{\rm d}z=\mathds{1}_{\mathfrak{F}_{0}}\), and the bound \(F\leq 1\) for the multiplication operator \(F\) on \(\mathfrak{F}\). Using the identity \(\varphi(xy)=x\varphi(y)+y\varphi(x)\) for \(x,y\geq 0\), we can thus write \[S(\Gamma)=\sum_{k}Z_{k}\varphi\Big{(}\int_{\mathbb{C}}\sum_{\alpha\in\widetilde{\mathcal{A}}}\zeta(z)\widetilde{\lambda}_{\alpha}\frac{|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}}{Z_{k}\|F\phi_{z,\alpha}\|^{2}}\,{\rm d}z\Big{)}+\sum_{k}\varphi\,(Z_{k})\int_{\mathbb{C}}\sum_{\alpha\in\widetilde{\mathcal{A}}}\zeta(z)\widetilde{\lambda}_{\alpha}\frac{|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}}{Z_{k}\|F\phi_{z,\alpha}\|^{2}}\,{\rm d}z. \tag{4.4}\] With (4.3) we see that the second term on the right-hand side can be bounded by \[\begin{split}-\sum_{k}\log(Z_{k})\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}&\widetilde{\lambda}_{\alpha}\frac{|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}}{\|F\phi_{z,\alpha}\|^{2}}\\ &\gtrsim-N\ell^{2}\sum_{k}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\frac{|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}}{\|F\phi_{z,\alpha}\|^{2}}=-N\ell^{2},\end{split} \tag{4.5}\] where the equality in the second line follows from the fact that \(\{\xi_{k}\}_{k}\) is an orthonormal basis.
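For completeness, the product identity for \(\varphi\) used above is immediate: for \(x,y>0\), \[\varphi(xy)=-xy\log(xy)=-xy\log x-xy\log y=y\varphi(x)+x\varphi(y),\] and both sides vanish if \(x=0\) or \(y=0\).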
Moreover, an application of Jensen's inequality to the strictly concave function \(\varphi\) shows \[\begin{split}\sum_{k}Z_{k}\varphi\Big{(}\int_{\mathbb{C}}\sum_{\alpha\in\widetilde{\mathcal{A}}}\zeta(z)\widetilde{\lambda}_{\alpha}\frac{|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}}{Z_{k}\|F\phi_{z,\alpha}\|^{2}}\,{\rm d}z\Big{)}\geq\sum_{k}Z_{k}\int_{\mathbb{C}}\sum_{\alpha\in\widetilde{\mathcal{A}}}\varphi(\zeta(z)\widetilde{\lambda}_{\alpha})\frac{|\langle\xi_{k},F\phi_{z,\alpha}\rangle|^{2}}{Z_{k}\|F\phi_{z,\alpha}\|^{2}}\,{\rm d}z\\ =&\int_{\mathbb{C}}\sum_{\alpha\in\widetilde{\mathcal{A}}}\varphi(\widetilde{\lambda}_{\alpha}\zeta(z))\,{\rm d}z=\sum_{\alpha\in\widetilde{\mathcal{A}}}\varphi(\widetilde{\lambda}_{\alpha})+\int_{\mathbb{C}}\varphi(\zeta(z))\,{\rm d}z=S(\widetilde{G}^{\rm diag})+S^{\rm cl}(\zeta).\end{split} \tag{4.6}\] We insert (4.5) and (4.6) into (4.4) to conclude the proof. ## 5 Proof of Theorem 1.1 Let us fix some \(0<\varepsilon_{0}<1/12\). To prove (1.11) we show two distinct upper bounds, corresponding to the two terms in the minimum appearing on the right-hand side. The first upper bound is obtained with the trial state \(\Gamma\) in (2.14) and yields the term \(F^{\rm BEC}-8\pi\mathfrak{a}_{N}N_{0}^{2}\) in the minimum. It is only valid in the parameter regime in which \(N_{0}\gtrsim N^{2/3+\varepsilon_{0}}\) holds. A second bound that is valid for all \(N_{0}\leq N\) (but useful only if \(N_{0}\ll N^{5/6}\)) and contributes the term \(F_{0}^{\rm BEC}\) in the minimum will be obtained at the end of this section with a much simpler trial state. Before we discuss these issues in more detail, we provide the final upper bound for the free energy of \(\Gamma\). We recall that for this part of the analysis the assumptions stated at the beginning of Section 3 hold. In combination, Proposition 3.1 and 4.1 imply the upper bound \[\begin{split}\mathcal{F}(\Gamma)=&\,{\rm Tr}[\mathcal{H}_{N}\Gamma]-\beta^{-1}S(\Gamma)\\ \leq&\,{\rm Tr}[\mathcal{H}^{\rm diag}G^{\rm diag}]-\beta^{-1}S(\widetilde{G}^{\rm diag})+4\pi\mathfrak{a}_{N}\int_{\mathbb{C}}\zeta(z)|z|^{4}\,{\rm d}z-\beta^{-1}S^{\rm cl}(\zeta)+\mu_{0}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\\ &+8\pi\mathfrak{a}_{N}\Bigg{[}\Big{(}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Big{)}^{2}+\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}\setminus P_{\rm B}}\gamma_{p}+\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Bigg{]}+\mathcal{E}_{\mathcal{H}}-\beta^{-1}\mathcal{E}_{\rm S},\end{split} \tag{5.1}\] where \(\mathcal{E}_{\mathcal{H}}\) and \(\mathcal{E}_{\rm S}\) satisfy (3.2) and (4.2), respectively. We first remove the particle number cutoff from the entropy of the Bogoliubov Gibbs state. As in Section 4 we denote \(\varphi(x)=-x\log(x)\) for \(x\geq 0\). Using \(\kappa_{0}=\sum_{\alpha\in\widetilde{\mathcal{A}}}\lambda_{\alpha}\leq 1\), we find \[S(\widetilde{G}^{\rm diag})=\sum_{\alpha\in\widetilde{\mathcal{A}}}\varphi(\widetilde{\lambda}_{\alpha})\geq\sum_{\alpha\in\widetilde{\mathcal{A}}}\varphi(\lambda_{\alpha})+\varphi(\kappa_{0}^{-1})\sum_{\alpha\in\widetilde{\mathcal{A}}}\lambda_{\alpha}=S(G^{\rm diag})+\log(\kappa_{0})-\sum_{\alpha\in\mathcal{A}\setminus\widetilde{\mathcal{A}}}\varphi(\lambda_{\alpha}). \tag{5.2}\] An application of (2.36) shows \[\log(\kappa_{0})\gtrsim-\exp(-cN^{\delta_{\rm Bog}}).
\tag{5.3}\] Using the definition of \(\lambda_{\alpha}\) above (2.10) and (2.37), we find \[\begin{split}\sum_{\alpha\in\mathcal{A}\setminus\widetilde{ \mathcal{A}}}\varphi(\lambda_{\alpha})&=\sum_{\alpha\in\mathcal{A} \setminus\widetilde{\mathcal{A}}}\lambda_{\alpha}\log\Big{(}\sum_{\alpha^{ \prime}\in\mathcal{A}}e^{-\beta E_{\alpha^{\prime}}}\Big{)}+\beta\sum_{\alpha \in\mathcal{A}\setminus\widetilde{\mathcal{A}}}\lambda_{\alpha}E_{\alpha}\\ &\leq\sum_{\alpha\in\mathcal{A}\setminus\widetilde{\mathcal{A}}} \lambda_{\alpha}\log\Big{(}\sum_{\alpha^{\prime}\in\mathcal{A}}e^{-\beta E_{ \alpha^{\prime}}}\Big{)}+C\exp(-cN^{\delta_{\rm Bog}}),\end{split}\] with \(E_{\alpha}\) in (2.10). A standard computation shows that \[\begin{split}\log\Big{(}\sum_{\alpha\in\mathcal{A}}e^{-\beta E _{\alpha}}\Big{)}=&-\sum_{p\in\Lambda_{+}^{*}}\log\left(1-\exp(- \beta\varepsilon(p))\right)\leq-\sum_{p\in\Lambda_{+}^{*}}\log\left(1-\exp(- \beta|p|^{2})\right)\\ \lesssim&-\beta^{-3/2}\int_{0}^{\infty}(\beta+x^{2} )\log(1-\exp(-x^{2}))\,{\rm d}x\lesssim N.\end{split}\] The inequalities above follow from \(\varepsilon(p)\geq|p|^{2}\), the fact that \(x\mapsto-\log(1-\exp(-x))\) is monotone decreasing for \(x\geq 0\), and Lemma 2.4 with \(\lambda=0\). In combination with (2.36), this implies \[\sum_{\alpha\in\mathcal{A}\setminus\widetilde{\mathcal{A}}}\lambda_{\alpha} \log\Big{(}\sum_{\alpha\in\mathcal{A}}e^{-\beta E_{\alpha}}\Big{)}\lesssim\exp (-cN^{\delta_{\rm Bog}}). \tag{5.4}\] Inserting (5.3) and (5.4) into (5.2) yields \[S(\widetilde{G}^{\text{diag}})\geq S(G^{\text{diag}})-C\exp(-cN^{\delta_{\text{ Bog}}}). \tag{5.5}\] We also have \[\text{Tr}[\mathcal{H}^{\text{diag}}G^{\text{diag}}]-\beta^{-1}S(G^{\text{diag}} )=\beta^{-1}\sum_{p\in\Lambda_{+}^{*}}\log\big{(}1-e^{-\beta\varepsilon(p)} \big{)}. \tag{5.6}\] The following Lemma, which is proved in [13, Appendix B], provides us with an asymptotic expansion of the term on the right-hand side of (5.6). **Lemma 5.1**.: _Let \(\beta=\kappa\beta_{\text{c}}\), with \(\beta_{\text{c}}\) defined in (1.8) and \(\kappa\in(0,\infty)\). There exists a constant \(C>0\) such that, for every \(N\),_ \[\frac{1}{\beta}\sum_{p\in P_{\text{B}}}\log\big{(}1-e^{-\beta \varepsilon(p)}\big{)}\] \[\qquad\leq \frac{1}{\beta}\sum_{p\in P_{\text{B}}}\log\big{(}1-e^{-\beta(|p |^{2}-\mu_{0})}\big{)}+8\pi\mathfrak{a}_{N}N_{0}\sum_{p\in P_{\text{B}}}\frac{ 1}{e^{\beta(|p|^{2}-\mu_{0})}-1}\] \[\quad-\frac{1}{2\beta}\sum_{p\in\Lambda_{+}^{*}}\Bigg{[}\frac{16 \pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}-\log\Big{(}1+\frac{16\pi\mathfrak{a}_{N}N_ {0}}{|p|^{2}}\Big{)}\Bigg{]}+\frac{CN_{0}^{2}}{N^{2}}\Big{(}N^{\delta_{\text{ Bog}}}+\frac{1}{\beta N^{\delta_{\text{ Bog}}}}+\frac{1}{\beta^{2}N_{0}}\Big{)}.\] An application of Lemma 5.1 shows \[\frac{1}{\beta}\sum_{p\in\Lambda_{+}^{*}}\log\big{(}1-e^{-\beta \varepsilon(p)}\big{)}\leq\frac{1}{\beta}\sum_{p\in\Lambda_{+}^{*}}\log\big{(} 1-e^{-\beta(|p|^{2}-\mu_{0})}\big{)}+8\pi\mathfrak{a}_{N}N_{0}\sum_{p\in P_{ \text{B}}}\gamma_{p}^{\text{id}} \tag{5.7}\] \[\qquad\qquad-\frac{1}{2\beta}\sum_{p\in\Lambda_{+}^{*}}\Bigg{[} \frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}-\log\Big{(}1+\frac{16\pi\mathfrak{a }_{N}N_{0}}{|p|^{2}}\Big{)}\Bigg{]}+CN^{2/3-\delta_{\text{Bog}}},\] where we introduced the notation \(\gamma_{p}^{\text{id}}=\big{(}\exp(\beta(|p|^{2}-\mu_{0}))-1\big{)}^{-1}\) for \(p\in\Lambda_{+}^{*}\). 
The sum of the third and fourth terms on the right-hand side of (5.1) equals \[4\pi\mathfrak{a}_{N}\int_{\mathbb{C}}\zeta(z)|z|^{4}\,\mathrm{d}z-\beta^{-1}S^{\text{cl}}(\zeta)=-\beta^{-1}\log\Big{(}\int_{|z|^{2}\leq\widetilde{N}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2})}\,\mathrm{d}z\Big{)}+\widetilde{\mu}\widetilde{N}_{0}, \tag{5.8}\] with \(\widetilde{N}_{0}\) defined in (2.74), and where we used the shorthand notation \(\widetilde{N}=\widetilde{c}N\). Extending the integral inside the logarithm to the whole complex plane, we generate the error term \[-\beta^{-1}\log\Big{(}1-\frac{\int_{|z|^{2}>\widetilde{N}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2})}\,\mathrm{d}z}{\int_{\mathbb{C}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2})}\,\mathrm{d}z}\Big{)}.\] Setting \(\eta=\widetilde{\mu}\sqrt{\beta/(4h)}\) and \(A=\sqrt{\beta h}\,\widetilde{c}N\), with \(h=4\pi\mathfrak{a}_{N}\) as in Appendix B, and computing the integrals in polar coordinates with the change of variables \(t=\sqrt{\beta h}|z|^{2}-\eta\), we find \[\frac{\int_{|z|^{2}>\widetilde{N}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2})}\,\mathrm{d}z}{\int_{\mathbb{C}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2})}\,\mathrm{d}z}=\frac{\int_{A-\eta}^{\infty}e^{-t^{2}}\,\mathrm{d}t}{\int_{-\eta}^{\infty}e^{-t^{2}}\,\mathrm{d}t}.\] We know from the proof of Lemma B.1 that \(\eta<A/2\). If \(0\leq\eta<A/2\), we estimate the denominator from below by a constant and apply (B.5) to obtain a bound for the numerator. This shows \[\frac{\int_{A-\eta}^{\infty}e^{-t^{2}}\,\mathrm{d}t}{\int_{-\eta}^{\infty}e^{-t^{2}}\,\mathrm{d}t}\lesssim\int_{A/2}^{\infty}e^{-t^{2}}\,\mathrm{d}t\lesssim\exp(-cN^{1/3}).\] In contrast, if \(\eta<0\) we apply (B.5) in both the numerator and the denominator, and find \[\frac{\int_{A-\eta}^{\infty}e^{-t^{2}}\,\mathrm{d}t}{\int_{-\eta}^{\infty}e^{-t^{2}}\,\mathrm{d}t}\lesssim e^{-A^{2}+2A\eta}\lesssim\exp(-cN^{1/3}).\] Hence, \[\begin{split}-\beta^{-1}\log\Big{(}\int_{|z|^{2}\leq\widetilde{N}}&e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2})}\,\mathrm{d}z\Big{)}\\ \leq&-\beta^{-1}\log\Big{(}\int_{\mathbb{C}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2})}\,\mathrm{d}z\Big{)}-N_{0}(\widetilde{\mu}-\mu)+Ce^{-cN^{1/3}},\end{split} \tag{5.9}\] with \(\mu\) in (1.13). In the last step, we used the fact that the first term on the second line of (5.9) is a concave function of \(\widetilde{\mu}\) and that \[\frac{1}{\beta}\frac{\partial}{\partial\widetilde{\mu}}\log\Big{(}\int_{\mathbb{C}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\widetilde{\mu}|z|^{2})}\,\mathrm{d}z\Big{)}\Big{|}_{\widetilde{\mu}=\mu}=N_{0}\] holds.
Inserting (5.5)-(5.9) into (5.1), we get \[\begin{split}\mathcal{F}(\Gamma)\leq&\;\frac{1}{\beta}\sum_{p\in\Lambda_{+}^{*}}\log(1-e^{-\beta(|p|^{2}-\mu_{0})})+\mu_{0}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}-\beta^{-1}\log\Big{(}\int_{\mathbb{C}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\mu|z|^{2})}\,\mathrm{d}z\Big{)}\\ &+8\pi\mathfrak{a}_{N}\Bigg{[}N_{0}\sum_{p\in P_{\mathrm{B}}}\gamma_{p}^{\mathrm{id}}+\Big{(}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Big{)}^{2}+\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}\setminus P_{\mathrm{B}}}\gamma_{p}+\widetilde{N}_{0}\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}\Bigg{]}\\ &+\mu N_{0}+\widetilde{\mu}(\widetilde{N}_{0}-N_{0})-\frac{1}{2\beta}\sum_{p\in\Lambda_{+}^{*}}\Bigg{[}\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}-\log\Big{(}1+\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}\Big{)}\Bigg{]}\\ &+CN^{2/3-\delta_{\mathrm{Bog}}}+\mathcal{E}_{\mathcal{H}}-\beta^{-1}\mathcal{E}_{\mathrm{S}}.\end{split} \tag{5.10}\] With Lemma 2.1, Lemma 2.7 and Lemma 2.10, we infer that \[\sum_{p\in\Lambda_{+}^{*}}\gamma_{p}=N-\widetilde{N}_{0}+E,\] with \(E\lesssim N^{5/3+\delta_{\mathrm{Bog}}}\ell^{2}+N^{3}\ell^{4}\lesssim N\). Using this, we see that the terms inside the bracket on the second line of (5.10) are bounded from above by \[\begin{split}& 2\widetilde{N}_{0}(N-\widetilde{N}_{0})+(N-\widetilde{N}_{0})^{2}+N_{0}\sum_{p\in P_{\mathrm{B}}}\gamma_{p}^{\mathrm{id}}-\widetilde{N}_{0}\sum_{p\in P_{\mathrm{B}}}\gamma_{p}+CNE\\ =&\;2\widetilde{N}_{0}(N-\widetilde{N}_{0})+(N-\widetilde{N}_{0})^{2}+\widetilde{N}_{0}(\widetilde{N}_{0}-N_{0})+(N_{0}-\widetilde{N}_{0})\sum_{p\in P_{\mathrm{B}}}\gamma_{p}^{\mathrm{id}}+CNE\\ =&\;N^{2}-N_{0}^{2}+\widetilde{N}_{0}(N_{0}-\widetilde{N}_{0})+(N_{0}-\widetilde{N}_{0})^{2}+(N_{0}-\widetilde{N}_{0})\sum_{p\in P_{\mathrm{B}}}\gamma_{p}^{\mathrm{id}}+CNE.\end{split}\] Moreover, an application of Lemma 2.16 shows \[8\pi\mathfrak{a}_{N}\Big{[}(N_{0}-\widetilde{N}_{0})^{2}+(N_{0}-\widetilde{N}_{0})\sum_{p\in P_{\rm B}}\gamma_{p}^{\rm id}+CNE\Big{]}\lesssim N^{1/3+\delta_{\rm Bog}}+N^{5/3+\delta_{\rm Bog}}\ell^{2}+N^{3}\ell^{4}.\] The above considerations show that the second line of (5.10) is bounded from above by \[8\pi\mathfrak{a}_{N}(N^{2}-N_{0}^{2})+8\pi\mathfrak{a}_{N}\widetilde{N}_{0}(N_{0}-\widetilde{N}_{0})+C[N^{1/3+\delta_{\rm Bog}}+N^{5/3+\delta_{\rm Bog}}\ell^{2}+N^{3}\ell^{4}].\] Next, we consider the second term in the above equation and the second term on the third line of (5.10). Let \(\varepsilon\in(0,\varepsilon_{0})\).
If \(N_{0}\gtrsim N^{5/6+\varepsilon}\), we infer from (B.1) in Appendix B that \[(8\pi\mathfrak{a}_{N}\widetilde{N}_{0}-\widetilde{\mu})(N_{0}-\widetilde{N}_{0})\lesssim e^{-cN^{\varepsilon}}.\] On the other hand, if \(N_{0}\lesssim N^{5/6+\varepsilon}\), Lemma 2.16, Lemma B.1 and the lower bound \(N_{0}\gtrsim N^{2/3+\varepsilon_{0}}\) imply \[|8\pi\mathfrak{a}_{N}\widetilde{N}_{0}(N_{0}-\widetilde{N}_{0})|\lesssim N^{1/3+2\varepsilon}+N^{3/2+\delta_{\rm Bog}+\varepsilon}\ell^{2}+N^{17/6+\varepsilon}\ell^{4}\] and \[\begin{split}|\widetilde{\mu}(\widetilde{N}_{0}-N_{0})|\lesssim&\;\Big{(}\frac{1}{\widetilde{N}_{0}\beta}+\frac{1}{\sqrt{\beta N}}+\frac{\widetilde{N}_{0}}{N}\Big{)}\Big{(}\frac{N_{0}}{N\beta}+\frac{N_{0}^{2}}{N^{2}}+N^{3}\ell^{4}+N^{5/3+\delta_{\rm Bog}}\ell^{2}\Big{)}\\ \lesssim&\;N^{1/3+2\varepsilon}+N^{3}\ell^{4}+N^{5/3+\delta_{\rm Bog}}\ell^{2}.\end{split}\] With the bound \(|\mu_{0}|=\beta^{-1}\log(1+N_{0}^{-1})\leq 1/(N_{0}\beta)\), we see that \[\begin{split}\mu_{0}\sum_{p\in\Lambda_{+}^{\ast}}\gamma_{p}=&\;\mu_{0}(N-N_{0})+\mu_{0}(N_{0}-\widetilde{N}_{0}+E)\\ \leq&\;\mu_{0}(N-N_{0})+C(N^{1/3}+N^{5/3+\delta_{\rm Bog}}\ell^{2}+N^{3}\ell^{4}).\end{split}\] We collect the above estimates, insert the bounds for \(\mathcal{E}_{\mathcal{H}}\) and \(\mathcal{E}_{\rm S}\) in (3.2) and (4.2), respectively, choose \(\varepsilon\) sufficiently small, and find \[\begin{split}\mathcal{F}(\Gamma)\leq&\;\frac{1}{\beta}\sum_{p\in\Lambda_{+}^{\ast}}\log(1-e^{-\beta(|p|^{2}-\mu_{0})})+\mu_{0}(N-N_{0})-\frac{1}{\beta}\log\Big{(}\int_{\mathbb{C}}e^{-\beta(4\pi\mathfrak{a}_{N}|z|^{4}-\mu|z|^{2})}\,\mathrm{d}z\Big{)}+\mu N_{0}\\ &+8\pi\mathfrak{a}_{N}(N^{2}-N_{0}^{2})-\frac{1}{2\beta}\sum_{p\in\Lambda_{+}^{\ast}}\Bigg{[}\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}-\log\Big{(}1+\frac{16\pi\mathfrak{a}_{N}N_{0}}{|p|^{2}}\Big{)}\Bigg{]}\\ &+C\Big{(}N^{2/3-\delta_{\rm Bog}}+N^{2/3+3\delta_{\rm Bog}}\ell^{1/2}+N^{5/3+3\delta_{\rm Bog}}\ell^{2}+N^{3}\ell^{4}+\ell^{-1}\Big{)}.\end{split} \tag{5.11}\] The error size is minimized by choosing \(\delta_{\rm Bog}=1/18\) and \(\ell=N^{-11/18}\), which yields the first upper bound. To prove an upper bound that is valid in the non-condensed phase, we need a different trial state. In this regime, the Bogoliubov corrections become negligible compared to the size of the error term in Theorem 1.1. As undressed trial state, we can thus take the Gibbs state of the ideal gas with an appropriate cutoff for the number of particles: \[\widetilde{G}^{\rm id}=\frac{\mathds{1}_{\{\mathcal{N}\leq\widetilde{c}N\}}\exp(-\beta(\mathrm{d}\Gamma(-\Delta-\widetilde{\mu}_{0})))}{\mathrm{Tr}_{\mathfrak{F}}[\mathds{1}_{\{\mathcal{N}\leq\widetilde{c}N\}}\exp(-\beta(\mathrm{d}\Gamma(-\Delta-\widetilde{\mu}_{0})))]},\] where \(\widetilde{c}>1\) is chosen independent of \(N\). The chemical potential \(\widetilde{\mu}_{0}\) is uniquely determined by the condition \(\mathrm{Tr}_{\mathfrak{F}}[\mathcal{N}\widetilde{G}^{\mathrm{id}}]=N\). The final trial state \(\widetilde{\Gamma}\) is obtained by dressing \(\widetilde{G}^{\mathrm{id}}\) with the correlation structure \(F\) in the same way as we did in (2.14). Since the eigenfunctions of \(\widetilde{G}^{\mathrm{id}}\) can be chosen to be eigenfunctions of \(\mathcal{N}\), it is easy to check that, in contrast to the previous regime, the correlation structure does not alter the expected number of particles in the trial state. Computing the free energy of \(\widetilde{\Gamma}\) is a straightforward replication of Sections 3 and 4 in a simplified setting.
We therefore leave the details to the reader and only state the final result: \[\mathcal{F}(\widetilde{\Gamma})\leq F_{0}(\beta,N)+8\pi\mathfrak{a}_{N}N^{2}+CN^{11/18}. \tag{5.12}\] Theorem 1.1 is a direct consequence of (5.11) and (5.12). To see this, let us make the following remarks. Using Proposition 1.2 one easily checks that the minimum in (1.11) is attained by the first term if \(N_{0}\gtrsim N^{5/6+\varepsilon}\) with \(\varepsilon>0\). Since the term on the second line of (1.11) is negative, it can be pulled out of the minimum in this parameter regime. If \(N_{0}\lesssim N^{5/6+\varepsilon}\) this term can also be pulled out of the minimum, because it is bounded in absolute value by a constant times \(N^{1/3+2\varepsilon}\), which is smaller than our remainder term if \(\varepsilon\) is chosen sufficiently small. Moreover, a short computation that uses Proposition 1.2 and the identity \(\mu_{0}=-\ln(1+N_{0}^{-1})/\beta\) shows that the condition \(N_{0}\lesssim N^{2/3+\varepsilon_{0}}\) with \(\varepsilon_{0}<1/12\) implies \[F^{\mathrm{BEC}}-8\pi\mathfrak{a}_{N}N_{0}^{2}\geq F_{0}^{\mathrm{BEC}}-\frac{\mu_{0}}{2}+\mathcal{O}(N^{1/2}). \tag{5.13}\] Using additionally \(\mu_{0}<0\), we thus find \[\min\{F^{\mathrm{BEC}}-8\pi\mathfrak{a}_{N}N_{0}^{2},F_{0}^{\mathrm{BEC}}\}=F_{0}^{\mathrm{BEC}}+\mathcal{O}(N^{1/2}).\] This explains why it is sufficient to prove the bound for the trial state \(\Gamma\) only in the parameter regime \(N_{0}\gtrsim N^{2/3+\varepsilon_{0}}\), concluding our proof of Theorem 1.1.

**Acknowledgments.** A. D. gratefully acknowledges funding from the Swiss National Science Foundation (SNSF) through the Ambizione grant PZ00P2 185851. M. C. gratefully acknowledges support from the European Research Council through the ERC-AdG CLaQS. It is our pleasure to thank Chiara Boccato, Phan Thanh Nam, Marcin Napiorkowski, Alessandro Olgiati and Benjamin Schlein for inspiring discussions.

## Appendix A Properties of the solution to the scattering equation

In this appendix we recall some well-known properties of the solution \(f(|x|)\) to the zero energy scattering equation and of \(f_{\ell}(x)\), defined above and in (2.12), respectively. In the whole section we assume that \(V\) is a nonnegative, measurable, and radial function that vanishes outside the ball with radius \(R>0\). All results that we state without proof can be found in [40, Appendix C]3. Footnote 3: For general interaction potentials, the scattering equation in [40, Theorem C.1] has to be understood in the sense of quadratic forms and not in the sense of distributions as claimed. That is, functions used to test the equation should be elements of the form domain of the energy functional \(\mathcal{E}_{R}\) (notation from the reference). All proofs in the reference apply, with minor adjustments. Let us introduce the energy functional \[\mathcal{E}_{\ell}[\phi]=\int_{B_{\ell}}\left(|\nabla\phi|^{2}+\frac{1}{2}V_{N}(x)|\phi(x)|^{2}\right)\mathrm{d}x\] with \(\ell>R\).
The function \(f_{\ell}\) is the unique minimizer of \(\mathcal{E}_{\ell}\) among all \(H^{1}\) functions \(\phi\) satisfying the boundary condition \(\phi(x)=1\) for \(|x|=\ell\), and its energy is given by \[\mathcal{E}_{\ell}[f_{\ell}]=\min_{\begin{subarray}{c}\phi\in H^{1}(B_{\ell}) \\ \phi=1\ \mathrm{on}\ \partial B_{\ell}\end{subarray}}\mathcal{E}_{\ell}[\phi]= \frac{4\pi\mathfrak{a}_{N}}{1-\mathfrak{a}_{N}/\ell}.\] (A.1) Here \(\mathfrak{a}_{N}<\ell\) is a positive number called the scattering length of the potential \(V_{N}\). It is easy to see that, by scaling, \(\mathfrak{a}_{N}=N^{-1}\mathfrak{a}\), where \(\mathfrak{a}\) is the scattering length of the unscaled potential \(V\). The function \(f(|x|)\) is monotonically non-decreasing in \(|x|\), and it is bounded from above and from below by \[1\geq f(r)\geq 1-\frac{\mathfrak{a}_{N}}{r},\] with equality in the lower bound for \(r\geq R\). This, in particular, implies \[1\geq f_{\ell}(x)\geq\frac{(1-\mathfrak{a}_{N}/|x|)_{+}}{1-\mathfrak{a}_{N}/ \ell}\geq(1-\mathfrak{a}_{N}/|x|)_{+}.\] (A.2) We also need the following integral bounds on \(f_{\ell}\). **Lemma A.1**.: _Let \(u_{\ell}=1-f_{\ell}\). We have_ \[\int_{\mathbb{R}^{3}}u_{\ell}(x)\mathrm{d}x\lesssim\mathfrak{a}_{N}\ell^{2}, \qquad\int_{\mathbb{R}^{3}}u_{\ell}(x)^{2}\mathrm{d}x\lesssim\mathfrak{a}_{N} ^{2}\ell,\quad\text{ and }\quad\int_{\mathbb{R}^{3}}|\nabla f_{\ell}(x)|\, \mathrm{d}x\lesssim\mathfrak{a}_{N}\ell.\] (A.3) Proof.: With (A.2), we see that \[0\leq u_{\ell}(x)\leq\begin{cases}0&|x|\geq\ell,\\ 2\mathfrak{a}_{N}/|x|&\mathfrak{a}_{N}\leq|x|<\ell,\\ 1&|x|<\mathfrak{a}_{N},\end{cases}\] which implies the first two bounds in (A.3). The last bound follows from \(f^{\prime}\geq 0\) and one integration by parts: \[\int|\nabla f_{\ell}|= \frac{4\pi}{f(\ell)}\int_{0}^{\ell}f^{\prime}(r)r^{2}\mathrm{d}r =\frac{4\pi}{f(\ell)}\left[f(\ell)\ell^{2}-2\int_{0}^{\ell}f(r)r\mathrm{d}r\right]\] \[\leq 4\pi\ell^{2}-8\pi\int_{0}^{\ell}(1-\mathfrak{a}_{N}/r)_{+}r \mathrm{d}r=8\pi\mathfrak{a}_{N}\ell-4\pi\mathfrak{a}_{N}^{2}\leq 8\pi \mathfrak{a}_{N}\ell.\] To come to the second line, we additionally used (A.2). ## Appendix B Properties of the effective functional for the condensate We recall the definition of \(\zeta\) in (2.9) with the cutoff parameter \(\widetilde{c}\). The following is an adaptation of [13, Lemma C.1]. **Lemma B.1**.: _We assume that \(\widetilde{c}>2\) and consider the combined limit \(N\to\infty\), \(\beta=\kappa\beta_{\mathrm{c}}\) with \(\kappa\in(0,\infty)\) and \(\beta_{\mathrm{c}}\) in (1.8). Let \(1<\widetilde{b}<\widetilde{c}/2\) and choose a sequence \(M=M(N)\in\mathbb{R}\) such that \(0<M<\widetilde{b}N\) for every \(N\in\mathbb{N}\). Then there exists a unique \(\widetilde{\mu}\in\mathbb{R}\) such that \(\int_{\mathbb{C}}|z|^{2}\zeta(z)\,\mathrm{d}z=M\), with \(\zeta\) in (2.9). Moreover, for every \(\varepsilon>0\), there exists \(c>0\) such that:_ 1. _If_ \(M\gtrsim N^{5/6+\varepsilon}\)_, then_ \[|\widetilde{\mu}-8\pi\mathfrak{a}_{N}M|\lesssim e^{-cN^{\varepsilon}}.\] (B.1) 2. _If_ \(M\lesssim N^{5/6-\varepsilon}\)_, then_ \[\left|\widetilde{\mu}+\frac{1}{\beta M}\right|\lesssim\frac{N^{-2\varepsilon} }{\beta M}.\] (B.2) 3. 
_For any_ \(M=M(N)\)_, we have_ \[|\widetilde{\mu}|\lesssim\left(\frac{1}{M\beta}+\frac{1}{\sqrt{\beta N}}+ \frac{M}{N}\right).\] (B.3) Proof.: Proceeding as in the proof of [13, Lemma C.1], we compute \[\sqrt{\beta h}\int_{\mathbb{C}}\zeta(z)|z|^{2}\,\mathrm{d}z=\frac{1-\exp(-A^{2 }+2A\eta)+\sqrt{\pi}\eta\exp(\eta^{2})\,\mathrm{erf}[-\eta,A-\eta]}{\sqrt{\pi }\exp(\eta^{2})\,\mathrm{erf}[-\eta,A-\eta]}=:\Upsilon(\eta),\] (B.4) where \(h=4\pi\mathfrak{a}_{N}\), \(\eta=\widetilde{\mu}\sqrt{\beta/(4h)}\), \(A=\widetilde{c}\sqrt{\beta h}N\) and \[\mathrm{erf}[a,b]=\frac{2}{\sqrt{\pi}}\int_{a}^{b}e^{-t^{2}}\,\mathrm{d}t\] with \(a,b\in\mathbb{R}\). From [1, Eq. 7.1.21] we know that \[2\exp(x^{2})\int_{x}^{\infty}e^{-t^{2}}\,\mathrm{d}t=\frac{1}{x}-\frac{1}{2x^ {3}}+Q(x),\qquad|Q(x)|\leq\frac{3}{4x^{5}},\] (B.5) for \(x>0\). Using (B.5) we find \[\Upsilon(\eta)=\frac{\frac{1}{2\eta^{2}}+\eta Q(-\eta)-\eta\exp(-A^{2}+2A\eta) \big{(}\frac{1}{\eta}+\frac{1}{A-\eta}-\frac{1}{2(A-\eta)^{3}}+Q(A-\eta)\big{)} }{-\frac{1}{\eta}+\frac{1}{2\eta^{3}}+Q(-\eta)-\exp(-A^{2}+2A\eta)\big{(} \frac{1}{A-\eta}-\frac{1}{2(A-\eta)^{3}}+Q(A-\eta)\big{)}}\] (B.6) for \(\eta<0\). This, in particular, implies \(\Upsilon(\eta)\to 0\) in the limit \(\eta\to-\infty\), for fixed \(A\). Moreover, a direct computation shows \(\Upsilon(A/2)=A/2\). With \[\frac{\partial}{\partial\widetilde{\mu}}\int_{\mathbb{C}}\zeta(z)|z|^{2}\, \mathrm{d}z=\beta\int_{\mathbb{C}}\zeta(z)\left(|z|^{2}-\left(\int_{\mathbb{C }}\zeta(w)|w|^{2}\,\mathrm{d}w\right)\right)^{2}\,\mathrm{d}z>0,\] we conclude that for \(0<M<\widetilde{b}N\) there exists a unique solution \(\eta\) to the equation \(\sqrt{\beta h}M=\Upsilon(\eta)\). In the following we derive the asymptotic behavior of this solution for large \(N\). Let us first consider the case \(M\lesssim N^{5/6-\varepsilon}\), where we have \(\Upsilon(\eta)\lesssim N^{-\varepsilon}\). By comparing this to \[\Upsilon(0)=\frac{1-\exp(-A^{2})}{2\int_{0}^{A}e^{-t^{2}}\,\mathrm{d}t}\to \pi^{-1/2}\quad\text{ as }A\to\infty,\] (B.7) and using the monotonicity of \(\Upsilon(\eta)\), we see that \(\eta<0\), for sufficiently large \(N\). Thus, it follows from (B.6) that \(\eta\to-\infty\) as \(N\to\infty\), and moreover \[\sqrt{\beta h}M=-\frac{1}{2\eta}+\mathcal{O}(|\eta|^{-3}),\] which immediately implies (B.2). Let us now assume that \(M\gtrsim N^{5/6+\varepsilon}\), in which case we have \[\Upsilon(\eta)=\sqrt{\beta h}M\gtrsim N^{\varepsilon}.\] (B.8) From the assumption \(\widetilde{b}<\widetilde{c}/2\) we know that \[\Upsilon(\eta)=\sqrt{\beta h}M<A/2=\Upsilon(A/2).\] In combination with (B.7) and the monotonicity of \(\Upsilon(\cdot)\), this allows us to conclude that \(0<\eta<A/2\), for large \(N\). An application of (B.5) shows \[\Upsilon(\eta)=\eta+\frac{1-e^{-A(A-2\eta)}}{2e^{\eta^{2}}\sqrt{\pi}-\frac{1} {\eta}+\frac{1}{2\eta^{3}}-Q(\eta)-e^{-A(A-2\eta)}\big{(}\frac{1}{A-\eta}- \frac{1}{2(A-\eta)^{3}}+Q(A-\eta)\big{)}},\] (B.9) which together with (B.8) implies \(\eta\to+\infty\) as \(N\to\infty\), provided \(M\gtrsim N^{5/6+\varepsilon}\). Using again \(\eta<A/2\) and (B.9), we see that \[|\eta-\sqrt{\beta h}M|\lesssim\exp(-N^{2\varepsilon}),\] which is (B.1). Finally, the bound (B.3) follows from the asymptotics of \(\Upsilon\) for \(\eta\to\pm\infty\) and the fact that if \(\eta\) is bounded, then \(\widetilde{\mu}\lesssim(\beta N)^{-1/2}\). ## Appendix C Estimate of the expected number of particles in \(\Gamma\) This section is devoted to the proof of Lemma 2.1. 
We start by showing (2.15) uniformly in \(\widetilde{\mu}\in\mathbb{R}\). We will often need to remove cutoffs from the expectation of observables on the Gibbs state \(\widetilde{G}^{\rm diag}\), producing errors that are exponentially small in \(N\). We will omit such errors because they can be absorbed in the remaining error bounds. Since the proof is analogous to that of Lemma 3.7 we only sketch it. Using (2.69) and (3.23) without the restriction \((k,h)\neq(i,j)\) and expanding the relevant numerator and denominator, we find \[\begin{split}\operatorname{Tr}[\mathcal{N}\Gamma]=& \operatorname{Tr}[\mathcal{N}\widetilde{\Gamma}_{0}]+\frac{1}{2} \int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{ \lambda}_{\alpha}\int_{\Lambda}\varrho^{(1)}_{z,\alpha}(x)\,\mathrm{d}x\int_{ \Lambda^{2}}u_{12}\varrho^{(2)}_{z,\alpha}(x_{1},x_{2})\,\mathrm{d}x_{1}\, \mathrm{d}x_{2}\,\mathrm{d}z\\ &-\frac{1}{2}\int_{\Lambda^{3}}u_{12}\varrho^{(3)}_{\Gamma_{0}} (x_{1},x_{2},x_{3})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}+ \mathcal{O}(N\ell^{2}+N^{3}\ell^{4}).\end{split}\] (C.1) From (2.27) and (2.60) we know that \[\int_{\Lambda}\varrho^{(1)}_{z,\alpha}(x)\,\mathrm{d}x= \langle\phi_{z,\alpha},\mathcal{N}\phi_{z,\alpha}\rangle=|z|^{2}+N_{ \alpha}+\sum_{p\in\mathcal{P}_{\rm B}}v_{p}^{2}\Big{(}1+\langle\Psi_{\alpha}, a_{p}^{*}a_{p}\Psi_{\alpha}\rangle\Big{)}.\] (C.2) Using (2.57), we see that the second term on the right-hand side of (C.1) equals \[\frac{1}{2}\Big{(}\int u_{\ell}\Big{)}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha \in\widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\int_{\Lambda}\varrho ^{(1)}_{z,\alpha}(x)\,\mathrm{d}x\int_{\Lambda}\varrho^{(2)}_{z,\alpha}(x,x) \,\mathrm{d}x\,\mathrm{d}z+\mathcal{E}^{(1)}_{\rm C},\] (C.3) with \[\begin{split}\mathcal{E}_{\mathbb{C}}^{(1)}=&\frac{1}{2} \int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\widetilde{ \lambda}_{\alpha}\int_{\Lambda}\varrho_{z,\alpha}^{(1)}(x)\,\mathrm{d}x\int_{ \Lambda^{2}}u_{\ell}(x_{2}-x_{1})\\ &\times\int_{0}^{1}(x_{2}-x_{1})\cdot\nabla_{2}\varrho_{z,\alpha }^{(2)}(x_{1},x_{1}+t(x_{2}-x_{1}))\,\mathrm{d}t\,\mathrm{d}x_{1}\,\mathrm{d} x_{2}\,\mathrm{d}z.\end{split}\] (C.4) It follows from (2.57), (A.3), (C.2) and the cutoff in \(\widetilde{\lambda}_{\alpha}\) that \[\begin{split}|\mathcal{E}_{\mathbb{C}}^{(1)}|\lesssim& \mathfrak{a}_{N}N\ell^{3}\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in \widetilde{\mathcal{A}}}\widetilde{\lambda}_{\alpha}\sup_{x_{1},x_{2}\in \Lambda}\big{|}\nabla_{2}\varrho_{z,\alpha}^{(2)}(x_{1},x_{2})\big{|}\mathrm{ d}z\\ \lesssim&\ell^{3}\Big{(}N^{3/2}\big{(}\operatorname{ Tr}_{\mathfrak{F}_{+}}[K\widetilde{G}^{\mathrm{diag}}]\big{)}^{1/2}+N^{4\delta_{ \mathrm{Bog}}}(\beta^{-2}N^{2\delta_{\mathrm{Bog}}}+1)+N^{1+2\delta_{\mathrm{ Bog}}}(1+\beta^{-1})\Big{)}\\ \lesssim&\ell^{3}N^{2}(\beta^{-1/2}+1).\end{split}\] (C.5) To come to the last line we also used the second bound in (2.32) and the assumption \(\delta_{\mathrm{Bog}}<1/12\). The same arguments used to prove (2.56) show \[\varrho_{z,\alpha}^{(2)}(x,x)=|z|^{4}+4|z|^{2}N_{\alpha}+2N_{\alpha}(N_{ \alpha}-1)+\mathcal{O}\Big{(}(|z|^{2}+N_{\alpha})(N_{\alpha}^{<}+N^{\delta_{ \mathrm{Bog}}})+N^{3\delta_{\mathrm{Bog}}}(N_{\alpha}^{<}+1)^{2}\Big{)},\] (C.6) uniformly in \(x\in\Lambda\). This follows from the observation that, thanks to the translation invariance of the eigenstates \(\Psi_{\alpha}\), the phases in the expansion (2.59) drop out when \(x_{1}=x_{2}\). 
Inserting (C.6) into (C.3) and using (A.3), (C.2), (C.4), (C.5) and Corollary 2.9, we see that the second term on the right-hand side of (C.1) equals \[\begin{split}\frac{1}{2}\Big{(}\int u_{\ell}\Big{)}& \int_{\mathbb{C}}\zeta(z)\Big{\{}|z|^{6}+5|z|^{4}\operatorname{ Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\mathrm{diag}}]+6|z|^{2}( \operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{\mathrm{diag}}])^{2}\\ &+2(\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{ \mathrm{diag}}])^{3}\Big{\}}\,\mathrm{d}z+\mathcal{O}\Big{(}\ell^{2}N^{1+ \delta_{\mathrm{Bog}}}(\beta^{-1}+1)+\ell^{3}N^{2}(\beta^{-1/2}+1)+\ell^{4}N^ {3}\Big{)}.\end{split}\] (C.7) As for the term on the second line of (C.1), it is easy to see, using Lemma 2.15, that \[\int_{\Lambda^{3}}u_{12}\varrho_{\Gamma_{0}}^{(3)}(x_{1},x_{2},x_{3})\, \mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}=\Big{(}\int u_{\ell}\Big{)} \int_{\Lambda^{2}}\varrho_{\Gamma_{0}}^{(3)}(x_{1},x_{1},x_{2})\,\mathrm{d}x_{1 }\,\mathrm{d}x_{2}+\mathcal{O}(N^{2}\ell^{3}\beta^{-1/2}).\] From here, using Wick's theorem as in the proof of (3.35) we arrive at \[\begin{split}-\frac{1}{2}\int_{\Lambda^{3}}& u_{12} \varrho_{\Gamma_{0}}^{(3)}(x_{1},x_{2},x_{3})\,\mathrm{d}x_{1}\,\mathrm{d}x_{2 }\,\mathrm{d}x_{3}\\ =&-\frac{1}{2}\Big{(}\int u_{\ell}\Big{)}\int_{ \mathbb{C}}\zeta(z)\Big{\{}|z|^{6}+5|z|^{4}\operatorname{Tr}_{\mathfrak{F}_{+}}[ \mathcal{N}_{+}G^{\mathrm{diag}}]\\ &+6|z|^{2}(\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{ \mathrm{diag}}])^{2}+2(\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}G^{ \mathrm{diag}}])^{3}\Big{\}}\,\mathrm{d}z+\mathcal{O}\Big{(}N^{2}\ell^{3} \beta^{-1/2}+N\ell^{2}\beta^{-1}\Big{)}.\end{split}\] (C.8) Inserting (C.7) and (C.8) into (C.1), we get (2.15). It remains to prove the existence statement of the lemma. We denote \[M(\widetilde{\mu})=\int_{\mathbb{C}}|z|^{2}\zeta(z)\,\mathrm{d}z\] and observe that \(\operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}\widetilde{G}(z)]\) is independent of \(z\in\mathbb{C}\), and that \[\operatorname{Tr}[\mathcal{N}\widetilde{\Gamma}_{0}]=M(\widetilde{\mu})+ \operatorname{Tr}_{\mathfrak{F}_{+}}[\mathcal{N}_{+}\widetilde{G}(z)]\] holds. Using Lemmas 2.7, 2.10, 2.12 and Equation (2.54), it is easy to see that \[\Big{|}\mathrm{Tr}_{\widetilde{\mathfrak{F}}+}[\mathcal{N}_{+}\widetilde{G}(z)] -N+N_{0}\Big{|}\lesssim\frac{N_{0}}{N\beta}+\frac{N_{0}^{2}}{N^{2}}.\] We can thus write \[\mathrm{Tr}[\mathcal{N}\Gamma]-N=M(\widetilde{\mu})-N_{0}+\Big{(}\mathrm{Tr}[ \mathcal{N}\widetilde{\Gamma}_{0}]-\mathrm{Tr}[\mathcal{N}\Gamma]\Big{)}+ \mathcal{O}\Big{(}\frac{N_{0}}{N\beta}+\frac{N_{0}^{2}}{N^{2}}\Big{)},\] which, for \(\beta\gtrsim\beta_{\mathrm{c}}\), \(\delta_{\mathrm{Bog}}<1/12\) and \(\ell\leq cN^{-7/12}\), for some sufficiently small \(c>0\), yields \[M(\widetilde{\mu})-N_{0}-\Big{(}\frac{1}{2}+C\frac{N_{0}}{N}\Big{)}N^{2/3} \leq\mathrm{Tr}[\mathcal{N}\Gamma]-N\leq M(\widetilde{\mu})-N_{0}+\Big{(} \frac{1}{2}+C\frac{N_{0}}{N}\Big{)}N^{2/3},\] (C.9) for some constant \(C>0\) independent of \(\widetilde{\mu}\) and \(N\). From the proof of Lemma B.1 we know that \[\lim_{\widetilde{\mu}\to-\infty}M(\widetilde{\mu})=0,\qquad\lim_{\widetilde{ \mu}\to+\infty}M(\widetilde{\mu})=\widetilde{c}N,\] (C.10) for fixed \(N\in\mathbb{N}\). Equations (C.9), (C.10) and the assumption \(N_{0}\geq N^{2/3}\) imply that \(\mathrm{Tr}[\mathcal{N}\Gamma]-N\) takes positive and negative values as a function of \(\widetilde{\mu}\), for fixed \(N\) large enough. 
The claim now follows from the continuity of the map \[\widetilde{\mu}\mapsto\mathrm{Tr}[\mathcal{N}\Gamma]=\int_{\mathbb{C}}\zeta(z)\sum_{\alpha\in\widetilde{\mathcal{A}}}\frac{\widetilde{\lambda}_{\alpha}}{\|F\phi_{z,\alpha}\|^{2}}\sum_{n=1}^{\infty}n\int_{\Lambda^{n}}\big{|}F_{n}\phi_{z,\alpha}^{n}\big{|}^{2}\,\mathrm{d}X^{n}\,\mathrm{d}z,\] which is a consequence of the dominated convergence theorem.
2302.05479
A variational encoder-decoder approach to precise spectroscopic age estimation for large Galactic surveys
Constraints on the formation and evolution of the Milky Way Galaxy require multi-dimensional measurements of kinematics, abundances, and ages for a large population of stars. Ages for luminous giants, which can be seen to large distances, are an essential component of studies of the Milky Way, but they are traditionally very difficult to estimate precisely for a large dataset and often require careful analysis on a star-by-star basis in asteroseismology. Because spectra are easier to obtain for large samples, being able to determine precise ages from spectra allows for large age samples to be constructed, but spectroscopic ages are often imprecise and contaminated by abundance correlations. Here we present an application of a variational encoder-decoder on cross-domain astronomical data to solve these issues. The model is trained on pairs of observations from APOGEE and Kepler of the same star in order to reduce the dimensionality of the APOGEE spectra in a latent space while removing abundance information. The low dimensional latent representation of these spectra can then be trained to predict age with just $\sim$ 1,000 precise seismic ages. We demonstrate that this model produces more precise spectroscopic ages ($\sim$ 22% overall, $\sim$ 11% for red-clump stars) than previous data-driven spectroscopic ages while being less contaminated by abundance information (in particular, our ages do not depend on [$\alpha$/M]). We create a public age catalog for the APOGEE DR17 data set and use it to map the age distribution and the age-[Fe/H]-[$\alpha$/M] distribution across the radial range of the Galactic disk.
Henry W. Leung, Jo Bovy, J. Ted Mackereth, Andrea Miglio
2023-02-10T19:26:57Z
http://arxiv.org/abs/2302.05479v2
A variational encoder-decoder approach to precise spectroscopic age estimation for large Galactic surveys ###### Abstract Constraints on the formation and evolution of the Milky Way Galaxy require multi-dimensional measurements of kinematics, abundances, and ages for a large population of stars. Ages for luminous giants, which can be seen to large distances, are an essential component of studies of the Milky Way, but they are traditionally very difficult to estimate precisely for a large dataset and often require careful analysis on a star-by-star basis in asteroseismology. Because spectra are easier to obtain for large samples, being able to determine precise ages from spectra allows for large age samples to be constructed, but spectroscopic ages are often imprecise and contaminated by abundance correlations. Here we present an application of a variational encoder-decoder on cross-domain astronomical data to solve these issues. The model is trained on pairs of observations from APOGEE and _Kepler_ of the same star in order to reduce the dimensionality of the APOGEE spectra in a latent space while removing abundance information. The low dimensional latent representation of these spectra can then be trained to predict age with just \(\sim 1,000\) precise seismic ages. We demonstrate that this model produces more precise spectroscopic ages (\(\sim 22\%\) overall, \(\sim 11\%\) for red-clump stars) than previous data-driven spectroscopic ages while being less contaminated by abundance information (in particular, our ages do not depend on \([\alpha/\mathrm{M}]\)). We create a public age catalog for the APOGEE DR17 data set and use it to map the age distribution and the age-\([\mathrm{Fe}/\mathrm{H}]\)-\([\alpha/\mathrm{M}]\) distribution across the radial range of the Galactic disk. keywords: methods: data analysis -- stars: fundamental parameters-- techniques: spectroscopic ## 1 Introduction Stars provide an important window into our Galaxy's evolutionary history as all major events that occurred in the past leave imprints in the chemical abundances and kinematics of stars (Freeman and Bland-Hawthorn, 2002). To improve our understanding of the formation history of the Milky Way and explore the evolution and chemodynamical structure of the Galaxy as a whole, we need to measure abundances and kinematics of stars as functions of age for stellar samples covering a large volume of our Galaxy from the bulge and the disk to the stellar halo (Rix and Bovy, 2013; Bland-Hawthorn and Gerhard, 2016). Age, chemical abundances and kinematics are interconnected in complex ways (e.g., Edvardsson et al., 1993; Haywood et al., 2013; Mackereth et al., 2019; Ness et al., 2019; Bovy et al., 2019) and information on their distributions far away from the solar neighbourhood remains scant. To observe a large volume of stars for galactic archaeology purposes, low-mass giants are of particular interest, because they are common throughout the Galaxy, live relatively long and stable lives, and they are intrinsically luminous allowing them to be observed to large distances even in regions with high extinction such as the Galactic bulge. Modern spectroscopic surveys such as SDSS-IV's APOGEE (Majewski et al., 2017; Blanton et al., 2017), GALAH (De Silva et al., 2015), the ongoing SDSS-V's Milky Way Mapper (MWM; Kollmeier et al., 2017), and _Gaia_(Gaia Collaboration et al., 2016) provide accurate measurements of basic stellar parameters like \(T_{\mathrm{eff}}\), elemental abundances, and kinematics. 
These surveys, however, do not directly provide accurate stellar age measurements for low-mass giants, because stellar ages are not a directly observable quantity. Unlike subgiants, for which age can be measured fairly accurately with basic stellar parameters and using isochrones (Haywood et al., 2013; Xiang et al., 2017; Xiang and Rix, 2022), stellar ages for giants are intrinsically difficult to measure, because giant evolutionary tracks are crowded together compared to sub-giant isochrones, age and metallicity are to some extent degenerate observationally, and stellar evolutionary models have large uncertainties. While age correlates with kinematics and abundances, individual stellar ages cannot simply be inferred accurately from stellar kinematics (e.g., Beane et al., 2018), abundance, or kinematics-abundance alone. Even if ages could be inferred in this way, to investigate the relation between age, abundance, and kinematics, we cannot rely on pre-determined relations in this space. Ages for giant stars can be obtained from determinations of their mass, because the age of a giant is almost directly given by its mass-dependent main-sequence (MS) lifetime. Because of the steep dependence of the MS lifetime on mass (\(\tau\propto M^{-3.5}\)), small uncertainties in mass determinations get amplified strongly and giant ages typically have large uncertainties. Spectroscopically, masses for giants can be determined using proxies such as the [C/N] ratio (e.g., Masseron & Gilmore, 2015; Martig et al., 2016), which is partially set by the mass-dependent dredge-up process. Alternatively, accurate masses for giants can be obtained with careful analysis on a star-by-star basis (Appourchaux, 2020) using asteroseismic observations, which can determine masses from the properties of stochastically-driven oscillations that can be observed by space telescopes such as _CoRoT_(Auvergne et al., 2009), _Kepler_(Borucki et al., 2010) and _TESS_(Ricker et al., 2015). Using stellar masses in combination with spectroscopically determined parameters like \(T_{\rm eff}\) and [Fe/H], we can derive stellar age using stellar models (e.g., Rodrigues et al., 2014). The APOKASC project (Pinsonneault et al., 2014, 2018) is an example of this, combining _Kepler_ seismic observation with APOGEE spectroscopic parameters to derive stellar ages with uncertainties of \(\approx 30\%\). APOKASC has opened up the possibility of doing galactic archaeology with ages for thousands of giants, but it is limited to the relatively small _Kepler_ field. While TESS' sky coverage is much bigger, its shorter observation span means that it is difficult to use TESS to determine asteroseismic ages for high-luminosity giants with their long oscillation time scales. Spectroscopic surveys such as APOGEE or the ongoing SDSS-V Milky Way Mapper (MWM) have all-sky coverage and obtain spectra for \(\approx 1\) million (for APOGEE) to 5 million (MWM) stars. Recent advances in machine learning methodology and algorithms allow us to do transfer learning, using which we can transfer knowledge obtained in one domain to another by training on pairs of data from the two domains. In the context of ages, this means that we can transfer age knowledge from the asteroseismic realm to the spectroscopic realm by training on stars with observations in both realms, without requiring any prior knowledge on how to map stellar spectra to ages. 
The APOKASC catalog provides such a training set and it has been used to determine spectroscopic ages using APOGEE (e.g., Ness et al., 2016; Mackereth et al., 2019; Ciucel et al., 2021), LAMOST (e.g., Xiang et al., 2019), and other surveys. This has allowed for millions of data-driven spectroscopic ages to be determined. As Xiang et al. (2019) demonstrates, these spectroscopic ages provide great scientific value even if their precision falls short of what is ideally required. Spectroscopic ages obtained through transfer-learning from asteroseismic data currently suffer from a series of limitations. Firstly, the overlap between spectroscopic surveys and asteroseismic surveys is small, with \(O(10^{4})\) stars. This is a relatively small amount of data to train modern machine-learning methods (e.g., neural networks; Mackereth et al., 2019; Ciucel et al., 2021). However, many more pairs of, e.g., APOGEE/Kepler observations exist than we have asteroseismic ages for, and these could in principle be used to improve the information transfer between the domains. Secondly, spectra contain information on abundances that are highly correlated with age (e.g., the alpha enhancement [\(\alpha\)/Fe]) and current methods provide no guarantee that the spectroscopically-determined age is not solely or largely coming from age-abundance correlations present in the training sample rather than true spectral age information. This makes any inference of the age-abundance-kinematics correlations using current spectroscopic ages suspect. In this paper, we present a novel transfer-learning method for determining spectroscopic ages for giants that solves these issues and furthermore allows for spectroscopic ages to be determined in the future using other small, high-quality samples of stellar ages. We do this by splitting the spectroscopic-age determination task into two parts: (i) extracting the age information from stellar spectra while discarding abundance information and (ii) mapping the extracted age information to age using a small sample of accurate stellar ages. We achieve (i) by using a variational encoder-decoder neural network to map high-resolution spectra to asteroseismic power spectra using a small latent-space bottleneck connecting the two. Because asteroseismic power spectra contain information on mass and radius but not abundance, this effectively extracts age information from the stellar spectra while discarding abundance information. In step (ii) we then train a simpler machine-learning method with fewer free parameters to map the latent space to age using the small sample of high-quality ages. Predictions for new spectroscopic ages are obtained by encoding the spectrum in the latent space and then mapping its location in latent space to age. We then apply this method to the APOGEE data and map the age distribution and age-abundance correlations across the Galactic disk. This paper is organized as follows. We give an overview of the relevant deep-learning methodology in Section 2. Section 3 describes the actual machine-learning methods that we use: the encoder-decoder used in step (i) of the algorithm in Section 3.1 and a modified version of the random forest method used in step (ii) in Section 3.2. We then discuss the data from APOGEE, _Kepler_, and APOKASC that we use in Section 4.
We give details on the training and validation steps of our algorithm in Section 5 and then describe the results in Section 6: in Section 6.1 we show how well we can reconstruct asteroseismic power spectra from high-resolution spectra using the encoder-decoder network, in Section 6.2 we discuss the latent-space representation of the spectra, and in Section 6.3 we discuss the derived seismic parameters, ages, and evolutionary state classifications. The details of applying our method to generate the age catalog for APOGEE DR17 are given in Section 7.1, and spatial age-abundance trends in the Galactic disk are shown in Section 7.2. We discuss the implications of our results and possible future applications in Section 8 and then conclude in Section 9.

## 2 Deep Learning Methodology

In this section, we provide an overview of the deep-learning methodology that we use in this work: (variational) auto-encoders in Section 2.1, encoder-decoder networks in Section 2.2, and we end with a discussion of the rationale behind why we choose to use an encoder-decoder in this work in Section 2.3.

### Variational auto-encoders

Deep learning using artificial neural networks is a versatile and flexible machine-learning method and it has been applied to problems in both supervised and unsupervised learning, with data ranging from two-dimensional images, voice recordings, and video to sequences of words (Lecun et al., 2015). Auto-encoders are a specific type of neural network primarily developed to learn efficient, low-dimensional representations of input data that allow the input to be faithfully reproduced. As such, auto-encoders are essentially a non-linear version of Principal Component Analysis (PCA): an auto-encoder network without any non-linearity (e.g., one that does not use non-linear activation functions such as rectified linear units; Nair & Hinton, 2010) trained using a squared-error loss function is equivalent to traditional PCA (Kramer, 1991). The basic idea of an auto-encoder is to have a network that takes an input image, compresses it to a low-dimensional middle layer (the _latent space_) in a sequence of layers (the _encoder_), and finally decompresses this low-dimensional representation to reconstruct the input image (the _decoder_). The low-dimensional latent space acts as a bottleneck that restricts the flow of information through the network and, thus, when trained well forms an efficient representation of the input data. Auto-encoders have been used in astronomy in the past for applications such as denoising (training on pairs of noisy, near-noiseless, or augmented data; e.g., Frontera-Pons et al., 2017; Sedaghat and Mahabal, 2018; Gheller and Vazza, 2022), dimensionality reduction (e.g., Portillo et al., 2020), and representation learning (e.g., Cheng et al. (2020) for identifying strong lenses and de Mijolla et al. (2021) for chemical tagging). There have, to our knowledge, not been any astronomical applications of the type of encoder-decoder network with cross-domain data that we discuss in Section 2.2 below. Variational auto-encoders are a type of auto-encoder that incorporates variational inference into the model (Kingma and Welling, 2013). Variational inference is a maximum-likelihood estimation (MLE) method for situations in which the probability density is very complex, which in our case is the distribution of the latent variables that generate the output data.
The addition of variational inference in practice acts as a regularization that forces the latent-space distribution to follow a given statistical distribution. This is often a Gaussian distribution, as any distribution can be described as a set of normally-distributed variables mapped via a complex function like a neural network. Moreover, forcing the latent-space distribution to follow a known statistical distribution also makes generating new samples easier, because we can draw random samples from the latent-space distribution and decode them to construct outputs. A detailed review of variational auto-encoders can be found in Doersch (2016). Overall, a variational auto-encoder has the following major components:

The _encoder_: The encoder is a discriminative model that takes input data and compresses it to a latent space of much lower dimensionality. In variational auto-encoders, the encoder predicts the means and variances (in practice, the encoder predicts the logarithmic variance instead, for numerical stability) of each latent-space parameter and then samples from a normal distribution with these parameters to obtain the final efficient representation.

The _latent space_: The latent space is the layer that is populated by the latent variables. The latent space is fully unconstrained prior to training and the entire latent-variable representation is learnt during training. However, we do need to set the dimension of the latent space, analogous to setting the number of principal components to use in PCA.

The _decoder_: The decoder is a generative model that takes the latent variables and generates the (generally much higher dimensional) output. This is done by starting from the low-dimensional representation and building it out to the high-dimensional output through a sequence of steps (layers). In many ways, the decoder does the opposite of the encoder.

Variational auto-encoders are trained by minimizing an objective function that is composed of the sum of two loss terms. The first of these is the reconstruction loss, i.e., a measure of how well the model predicts the output. The second is a regularization term that shapes the distribution in the latent space, i.e., it forces the latent space to follow a certain distribution. We have adopted the mean squared error (MSE) as the reconstruction loss and the Kullback-Leibler (KL) divergence as the latent-space regularization loss to force the latent space to follow a Gaussian distribution. The MSE reconstruction loss \(J_{\text{MSE}}\) is given by \[J_{\text{MSE}}(\mathbf{y},\mathbf{\hat{y}})=\frac{1}{N}\sum_{i=1}^{N}\mathbf{w}_{i}(\mathbf{\hat{y}}_{i}-\mathbf{y}_{i})^{2}\,, \tag{1}\] where \(\mathbf{\hat{y}}_{i}\) is the predicted output, \(\mathbf{y}_{i}\) is the true output, and \(\mathbf{w}_{i}\) is a weight for each pixel \(i\) in the output. The weight \(\mathbf{w}\) will be an array of ones if no pixel-level weighting is applied. The KL-divergence regularization loss \(J_{\text{KL}}\) is \[J_{\text{KL}}(\mu,\log\sigma^{2})=\frac{1}{2}\left[-\sum_{i}\left(\log\sigma_{i}^{2}+1\right)+\sum_{i}\exp\left(\log\sigma_{i}^{2}\right)+\sum_{i}\mu_{i}^{2}\right]\,, \tag{2}\] where \((\mu_{i},\log\sigma_{i}^{2})\) are the mean and logarithmic variance that the encoder predicts for latent-space dimension \(i\).
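To make the training objective concrete, the following is a minimal sketch of equations (1) and (2) in TensorFlow; the function names and the relative weighting factor `beta` are illustrative choices of this sketch and not part of the implementation used in this work:

```python
import tensorflow as tf

def weighted_mse_loss(y_true, y_pred, w):
    # Equation (1): pixel-weighted mean squared reconstruction error.
    # w is an array of ones when no pixel-level weighting is applied.
    return tf.reduce_mean(w * tf.square(y_pred - y_true))

def kl_loss(mu, log_var):
    # Equation (2): KL divergence between N(mu, sigma^2) and N(0, 1),
    # summed over the latent dimensions for each data point.
    return 0.5 * tf.reduce_sum(
        -(log_var + 1.0) + tf.exp(log_var) + tf.square(mu), axis=-1
    )

def total_loss(y_true, y_pred, w, mu, log_var, beta=1.0):
    # Objective: reconstruction loss plus latent-space regularization,
    # averaged over the batch; beta is a hypothetical weighting factor.
    return weighted_mse_loss(y_true, y_pred, w) + beta * tf.reduce_mean(kl_loss(mu, log_var))
```

During training, the reconstruction term compares the decoded output to the true output, while the KL term is evaluated on the means and logarithmic variances predicted by the encoder.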
### Encoder-Decoder networks

An encoder-decoder network is very similar to an auto-encoder in terms of architecture, and the same applies to variational encoder-decoders with respect to variational auto-encoders. They both consist of three major components: an encoder, a decoder, and a latent space between the encoder and the decoder, as we have discussed in the previous sub-section. The major difference between the two is in the input and output data used by the model. An auto-encoder uses data in the same domain (e.g., pairs of images of the same objects) for the input and output nodes, while an encoder-decoder network employs data in different domains (e.g., image input and text output). Encoder-decoders are commonly used in neural machine translation (e.g., Cho et al., 2014; Vaswani et al., 2017), because different languages (hence different domains) are just different ways to express the same abstract ideas. In our application of an encoder-decoder in this paper, we have two domains of data: APOGEE high-resolution spectra as inputs on the one hand, and power spectral densities (PSDs) derived from _Kepler_ light curves as outputs on the other hand. The goal of using an encoder-decoder is to extract the information about the _Kepler_ PSDs that is contained in the APOGEE spectra, that is, the "asteroseismic" information that is present in the spectroscopic data. We do not necessarily extract the true asteroseismic information from APOGEE spectra, but simply any spectral information that is correlated with the asteroseismic information. However, because _Kepler_ PSDs do not contain abundance information, we emphatically do not extract abundance information at the same time (as we will explicitly demonstrate below). To be able to successfully construct _Kepler_ PSDs, we expect that the latent space must contain information on the asteroseismic parameters \(\nu_{\text{max}}\) and \(\Delta\nu\), from which the mass can be derived using scaling relations (Kjeldsen & Bedding, 1995; Brown et al., 1991) as \[\frac{M}{M_{\odot}}\simeq\Big{(}\frac{\nu_{\text{max}}}{\nu_{\text{max},\odot}}\Big{)}^{3}\Big{(}\frac{\Delta\nu}{\Delta\nu_{\odot}}\Big{)}^{-4}\Big{(}\frac{T_{\text{eff}}}{T_{\text{eff},\odot}}\Big{)}^{3/2}\,, \tag{3}\] where \(\nu_{\text{max},\odot}\) and \(\Delta\nu_{\odot}\) are the asteroseismic parameters of the Sun. Thus, because spectra also contain information about \(T_{\text{eff}}\), we expect the latent space to contain information on the stellar mass, from which the age can be derived. Note that we do not use the scaling relations anywhere in our methodology; we simply use them here to argue that the encoder-decoder should be able to extract mass information from the high-resolution spectra if it can successfully reconstruct _Kepler_ PSDs from stellar spectra.

Figure 1: Schematics of the methods and models that we use to obtain spectroscopic ages from a latent-space representation of APOGEE spectra. The top panels show the training and inference phases. In step 1 of the training phase, the encoder and decoder act as a single model that takes APOGEE spectra as inputs with the objective to reconstruct _Kepler_ PSDs as outputs. In step 2, we train a random forest regressor to determine age from the latent space of the encoder-decoder along with \(T_{\rm eff}\) and \(\left[\text{Fe}/\text{H}\right]\). During inference, we discard the decoder, use the encoder to compress APOGEE spectra to the latent space, and then pass the latent representation through the trained random forest regressor to get the age prediction. The bottom panel specifies the detailed architectures of the encoder and the decoder. Both the encoder and the decoder are essentially convolutional neural networks.
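As a concrete illustration of equation (3), the scaling-relation mass can be evaluated in a few lines of Python. The solar reference values below are common literature choices (e.g., Huber et al., 2011) inserted here purely for illustration; as noted above, the scaling relations are not used anywhere in our methodology:

```python
import numpy as np

# Assumed solar reference values (illustrative, not from our pipeline).
NU_MAX_SUN = 3090.0   # muHz
DELTA_NU_SUN = 135.1  # muHz
TEFF_SUN = 5777.0     # K

def scaling_mass(nu_max, delta_nu, teff):
    """Stellar mass in solar units from the seismic scaling relation (3)."""
    return ((np.asarray(nu_max) / NU_MAX_SUN) ** 3
            * (np.asarray(delta_nu) / DELTA_NU_SUN) ** -4
            * (np.asarray(teff) / TEFF_SUN) ** 1.5)

# Example: a typical red giant with nu_max ~ 30 muHz, delta_nu ~ 3.6 muHz.
print(scaling_mass(30.0, 3.6, 4800.0))  # ~1.4 solar masses
```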
### Rationale of using an encoder-decoder network

In this work, we use an encoder-decoder to map high-resolution infrared spectra from APOGEE to PSDs derived from _Kepler_ light curves. This has the following advantages:

1. _Ability to train on all APOGEE-Kepler pairs_: When training the encoder-decoder, we do not need labels (i.e., mass or age). Thus, all that we require are stars that have both APOGEE spectra and _Kepler_ light curves, and we can use _all_ overlapping observations between the APOGEE spectra and _Kepler_ (or, in the future, _TESS_) light curves. There are many more APOGEE-_Kepler_ pairs than there are reliable asteroseismic measurements of mass and age for those pairs, so this allows for a large expansion of the available training data.

2. _Information extraction_: Current data-driven spectroscopic ages likely rely on proxies like the [C/N] ratio (Ness et al., 2016; Mackereth et al., 2019) and they may even use information such as that contained in \([\alpha/\mathrm{Fe}]\). However, the [C/N] ratio is not only affected by mass-dependent mixing processes, but at least in part depends on galactic chemical evolution, and this adversely affects age predictions. Moreover, we have millions of spectra from surveys such as APOGEE and GALAH, so being able to determine ages without relying on abundance information besides [Fe/H] is important. By using an encoder-decoder, we force the method to extract only the information necessary to predict the _Kepler_ PSD without extracting unnecessary abundance information. Information such as [C/N] is not directly available in the PSD, but its component that is due to stellar mixing may still emerge in the latent space through its correlation with mass.

3. _Application to large spectroscopic datasets_: Once trained, we can discard the decoder and apply the encoder plus the latent space to all available spectroscopic data, which cover the entire sky.

4. _Simplicity_: The dimensionality of the latent space is orders of magnitude smaller than that of the spectra and the PSD. Thus, we can train simpler regression models to predict the age from the latent space. Simple models can be trained using only a handful of very precise ages.

## 3 Models

In this section, we provide details on the encoder-decoder network that we use to map APOGEE spectra to _Kepler_ PSDs and on the regressor that we employ to map the latent space to age. A schematic overview of our methodology is shown in Figure 1. The encoder-decoder part is implemented as ApokascEncoderDecoder() using tensorflow (Abadi et al., 2015) in the astroNN package (Leung and Bovy, 2019)1. Footnote 1: [https://github.com/henrysky/astroNN](https://github.com/henrysky/astroNN)

### Encoder-decoder model

The encoder in our model consists of a convolutional neural network with two convolutional layers, a max-pooling layer, and a dense layer that outputs the mean and variance for each dimension of the latent space, starting from the APOGEE stellar spectra. We use the ReLU activation function throughout the encoder, except in the final layer, where we employ the \(\tanh\) activation function to prevent the prediction of extreme values in the latent space and to improve training stability. The latent space is simply a normal-distribution sampler that samples latent variables from the output of the encoder. The decoder is another convolutional neural network, but with transposed convolutional layers to reconstruct the _Kepler_ PSD, as opposed to the regular convolutional layers used in the encoder.
We again use the ReLU activation function throughout the decoder, except in the final layer, where no activation is applied to the outputs in order not to limit the range of the reconstruction output. The conventional convolutional layer (LeCun et al., 1989) used in convolutional neural networks excels in pattern recognition. It works by learning convolutional filters that recognize certain patterns; the output values are calculated as the dot product between the filter and the input to the layer as the filter kernel is moved across the input, which is the usual convolution operation. For the transposed convolutional layer (also known as deconvolution), the input value to the layer determines the filter values that will be written to the output. In other words, the input determines the weights of the filters, as opposed to learning the weights as in the usual convolutional layers. Hyper-parameters such as the number of neurons, the dimension of the latent space, and the optimizer's learning rate are optimized by a hyper-parameter search. For parameters like the latent-space dimension, we have an educated guess of what value we want to use: we know that at least three parameters are important to the reconstruction of the PSD, namely \(\nu_{\mathrm{max}}\), \(\Delta\nu\), as well as the evolutionary state. From this minimum number of dimensions, the latent-space dimension is increased until there is no increase in the performance for the validation set. This gives a five-dimensional latent space.

### Probabilistic random forest model

To map the latent space to stellar age, we employ a version of the random forest implementation from scikit-learn (Pedregosa et al., 2012). In scikit-learn, random forests are built from individual trees, each of which is trained on bootstrapped samples from the training set; the prediction is simply the mean of the values returned by the trees in the forest. We have modified the implementation such that, during training, we sample the input and output data with their corresponding uncertainties on top of the bootstrap sampling, where the input uncertainties are taken directly from the encoder (i.e., the variance predicted by the encoder) while the output uncertainties are those given in the training set. Each star is then weighted by its training-set age uncertainty as \(\sqrt{1/\sigma_{\mathrm{age}}^{2}}\). At testing time, the predicted age is the mean over all of the trees, and we interpret the standard deviation of the trees as the age uncertainty.
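The following is a minimal sketch of this sampling scheme built on scikit-learn decision trees; it is an illustrative re-implementation with hypothetical names, not the modified scikit-learn code used in this work:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class ProbabilisticRandomForest:
    """Forest in which each tree is fit on a bootstrap resample whose inputs
    and outputs are additionally perturbed by their uncertainties, with each
    star weighted by sqrt(1 / sigma_age^2)."""

    def __init__(self, n_trees=200, seed=0, **tree_kwargs):
        self.n_trees, self.tree_kwargs = n_trees, tree_kwargs
        self.rng = np.random.default_rng(seed)
        self.trees = []

    def fit(self, X, X_err, y, y_err):
        n = len(y)
        for _ in range(self.n_trees):
            idx = self.rng.integers(0, n, n)                # bootstrap resample
            Xb = X[idx] + self.rng.normal(0.0, X_err[idx])  # latent-space scatter from the encoder
            yb = y[idx] + self.rng.normal(0.0, y_err[idx])  # scatter of the training ages
            tree = DecisionTreeRegressor(**self.tree_kwargs)
            tree.fit(Xb, yb, sample_weight=np.sqrt(1.0 / y_err[idx] ** 2))
            self.trees.append(tree)
        return self

    def predict(self, X):
        pred = np.stack([tree.predict(X) for tree in self.trees])
        # The mean over trees is the predicted age; the scatter is its uncertainty.
        return pred.mean(axis=0), pred.std(axis=0)
```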
## 4 Datasets and data reduction

We use spectroscopic data from APOGEE, light curves and derived PSDs from _Kepler_, and ages for stars in the APOKASC catalog derived by Miglio et al. (2021). We discuss these different data sets in this section and describe the relevant data reduction processes for each dataset.

### APOGEE

We use high-resolution spectra from the APO Galactic Evolution Experiment (APOGEE; Majewski et al., 2017; Blanton et al., 2017), specifically from its seventeenth data release (DR17; Abdurro'uf et al., 2022). APOGEE DR17 is a high signal-to-noise (typically \(>100\) per pixel), high-resolution (\(R\sim 22,000\)) panoptic spectroscopic survey in the near-infrared H-band wavelength region of \(1.5-1.7\,\mu\mathrm{m}\). APOGEE spectra are the input of our neural network model and we employ the same continuum normalization procedure as in Leung & Bovy (2019).

In addition to APOGEE spectra, we use stellar parameters and elemental abundances derived by the APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP; Garcia Perez et al., 2016). These are not used by the encoder-decoder part of our model, but we add some of them to the input data given to the random-forest regressor when mapping the latent space to age. Specifically, we add the effective temperature \(T_{\rm eff}\) and the metallicity \(\rm[Fe/H]\), because stellar models require these to determine stellar ages from asteroseismic observations. We use the ASPCAP parameters rather than external data-driven stellar parameters and elemental abundances, such as those from Leung & Bovy (2019) or Ting et al. (2019) that are proven to be robust at lower signal-to-noise ratios, in order to be consistent with the stellar parameters used to derive the training ages.

### Kepler

We use light curves for giant stars obtained by the _Kepler_ telescope (Borucki et al., 2010) in the original _Kepler_ field near the Cygnus and Lyra constellations to compute the power spectral densities (PSDs) that are the output of our encoder-decoder model. To download, manage, and manipulate _Kepler_ data, we make use of the lightkurve Python package (Lightkurve Collaboration et al., 2018). We compute the PSD from the observed light curve using the Lomb-Scargle periodogram (Lomb, 1976; Scargle, 1982) as implemented in _astropy_ (Astropy Collaboration et al., 2022). We adopt a minimum frequency of \(2\,\mu\)Hz and a maximum frequency of \(270\,\mu\)Hz (roughly corresponding to half of the inverse of _Kepler_'s 30-minute cadence) with a frequency spacing of \(0.009\,\mu\)Hz (roughly corresponding to the inverse of _Kepler_'s 3.5-year baseline for our sample). These values are similar to those used in the Kepler Light Curves Optimized For Asteroseismology (KEPSEISMIC; Garcia et al., 2011) work. We divide the PSD by a low-pass background filter largely corresponding to the noise background coming from stellar activity, granulation, and photon noise; the low-pass filter consists of a moving median filter with a width of 0.01 in logarithmic \(\mu\)Hz frequency space. A sketch of this construction is given at the end of this subsection.

In this work, we employ the PSD instead of the auto-correlation function (ACF) that is used for asteroseismic data analysis in works like McQuillan et al. (2013) and Angus et al. (2018), and by Ness et al. (2018) for determining data-driven stellar parameters. The reason why the latter work prefers the ACF over the PSD is that each pixel of the ACF has a smooth gradient with respect to parameters such as age, which is essential for models that require smooth gradients. Although our decoder is a generative model for the PSD, we do not use true stellar labels to generate the PSD but instead rely on latent variables. Moreover, Blancato et al. (2020) demonstrated that using the PSD rather than the ACF in a discriminative model improves performance. In this paper, using the PSD improves the interpretability of our model compared to the ACF, as we can directly check whether the decoder reconstructs the expected asteroseismic modes in the PSD, as shown and discussed in Section 6.1.
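As a concrete illustration, here is a minimal sketch of the PSD construction described above: a Lomb-Scargle periodogram of a stitched _Kepler_ light curve, divided by a moving-median background. The KIC identifier is an arbitrary example, and the fixed-size median filter only approximates a constant width of 0.01 in logarithmic frequency.

```python
import numpy as np
import lightkurve as lk
from astropy.timeseries import LombScargle
from scipy.ndimage import median_filter

# download and stitch all Kepler quarters for an (arbitrary) example giant
lc = lk.search_lightcurve("KIC 11611518", author="Kepler").download_all().stitch()
lc = lc.remove_nans()

t = lc.time.value * 86400.0                         # days -> seconds
flux = lc.flux.value / np.nanmedian(lc.flux.value) - 1.0

freq = np.arange(2.0, 270.0, 0.009) * 1e-6          # 2-270 micro-Hz grid, in Hz
power = LombScargle(t, flux).power(freq)

# divide out the low-pass background: a moving median of approximately
# constant width 0.01 in log10(frequency)
logf = np.log10(freq)
width = max(int(0.01 / np.median(np.diff(logf))), 3)
psd = power / median_filter(power, size=width)
```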
The PSDs constructed using the method above have a large number of pixels, and reconstructing them with a neural network would require a large number of parameters that would likely be overfit given the limited number of APOGEE-_Kepler_ pairs available for training the encoder-decoder. Moreover, it is unlikely that the stellar spectra contain such extremely precise asteroseismic information that they can predict the PSD at \(0.009\,\mu\)Hz resolution. The resolution of interest for our application is proportional to the large frequency separation \(\Delta\nu\) -- the average frequency spacing between modes of adjacent radial order of a given angular degree \(l\). In particular, we want to resolve mass-dependent variations in \(\Delta\nu\) at a given \(\nu_{\rm max}\). These are a few percent of the \(\Delta\nu\) expected for a typical star at a given \(\nu_{\rm max}\) (see Equation 7 below). Thus, we rebin the PSD using a logarithmic spacing chosen such that \(\Delta\nu\) is resolved by about 50 pixels at any given \(\nu_{\rm max}\) (i.e., on average there are 50 pixels between two oscillation modes of consecutive overtones of the same angular degree for stars with any \(\nu_{\rm max}\)). This gives 2,092 pixels in each power spectrum.

Figure 2: Age measurements from Miglio et al. (2021), which are used for training in this paper. The left panel shows the distribution of stars in \(T_{\rm eff}\)-\(\log g\) space colored by their seismic classification as red-giant branch (RGB) or red-clump (RC) stars, with the marker size corresponding to their mass. The \(\log g\) range is limited, with the majority of the stars lying within \(2.5\leq\log g\leq 3.3\) dex. The right panel shows the distribution of stars in \(\rm[Fe/H]\)-\([\alpha/\mathrm{Fe}]\) space colored by age. There is an artifact at \([\alpha/\mathrm{Fe}]=0.1\) dex separating the \(\alpha\)-rich and \(\alpha\)-poor populations, because Miglio et al. (2021) split their analysis into \(\alpha\)-rich and \(\alpha\)-poor stellar samples.

The frequency values \(f_{i}\) of our final PSDs (that is, the frequency \(f_{i}\) in \(\mu\)Hz represented at every pixel \(i\)) are approximately given by the expression

\[f_{i}=77.35\Big(\frac{i}{2092}\Big)^{4}+102.55\Big(\frac{i}{2092}\Big)^{3}+68.96\Big(\frac{i}{2092}\Big)^{2}+18.47\Big(\frac{i}{2092}\Big)+2.00\,. \tag{4}\]

The exact solution that we use is computed using the following recurrence relation

\[f_{i+1}=f_{i}+\frac{0.263\,\mu\mathrm{Hz}\,(f_{i}/\mu\mathrm{Hz})^{0.772}}{50}\,, \tag{5}\]

where \(f_{0}=2\,\mu\)Hz. The maximum deviation between the approximation and the full solution is \(0.008\,\mu\)Hz, while the median absolute deviation is \(0.002\,\mu\)Hz.
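A minimal sketch of generating this frequency grid from the recurrence relation in Equation (5) follows; the pixel count it produces is close to, though not exactly, the 2,092 quoted above, which also depends on the exact endpoint handling.

```python
import numpy as np

def psd_frequency_grid(f0=2.0, f_max=270.0):
    """Frequencies in micro-Hz from f_{i+1} = f_i + 0.263 * f_i**0.772 / 50
    (Equation 5), i.e., about 50 pixels per expected Delta-nu at any nu_max."""
    freqs = [f0]
    while freqs[-1] < f_max:
        freqs.append(freqs[-1] + 0.263 * freqs[-1] ** 0.772 / 50.0)
    return np.array(freqs)

grid = psd_frequency_grid()
print(grid.size)  # roughly 2,000 pixels, comparable to the 2,092 quoted above
```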
Figure 3: Reconstruction of the power spectral density (PSD) from APOGEE spectra with our encoder-decoder model (orange line) compared to the PSD generated directly from the _Kepler_ light curve (blue line) for stars not included in the training set, such that the model has never seen those spectrum-PSD pairs before (except for the bottom right panel example, which is from the training set). Our encoder-decoder model successfully reconstructs the area of interest where the p-mode envelope is. The top-left panel demonstrates that even in the case of low \(\nu_{\rm max}\), where the amplitude of the p-mode envelope relative to the background is low, the model still successfully reconstructs the envelope at approximately the correct frequency location. The bottom two panels show two RC stars with peculiar PSD reconstructions. The bottom left panel shows an RC star that is not included in the training set, where the p-mode envelope is reconstructed at the correct \(\nu_{\rm max}\) but the overall shape does not resemble the ground truth PSD. The bottom right panel shows an RC star _included_ in the training set, where the model fails to reconstruct the p-mode envelope at the expected \(\nu_{\rm max}\). In general, the neural network correctly identifies and reconstructs the p-mode envelope and it does not reconstruct the noise, which is expected as the noise is unpredictable from the APOGEE spectra.

### APOGEE-_Kepler_ asteroseismic data

We use data from the APOGEE-_Kepler_ Asteroseismology Science Consortium (APOKASC; Pinsonneault et al., 2014) catalog, the Yu et al. (2018) _Kepler_ red-giant seismology catalog, and ages from Miglio et al. (2021). The APOKASC catalog consists of stars observed by both the APOGEE survey and the _Kepler_ telescope. We have adopted the second data release of the catalog (APOKASC-2; Pinsonneault et al., 2018), which is based on APOGEE DR14 (Abolfathi et al., 2018). The APOKASC-2 catalog consists of 6,676 giant stars, 85% of which have _Kepler_ light curves with a time baseline of \(>3.5\) years. APOKASC first estimates the global seismic parameters \(\nu_{\rm max}\) and \(\Delta\nu\) from power spectra generated from the _Kepler_ light curves using multiple pipelines that suffer from different systematics and apply different constraints on the model parameters. To expand the sample of APOGEE-_Kepler_ pairs, we cross-match the Yu et al. (2018) catalog with APOGEE DR17 to obtain 10,526 stars.

We use the union of the APOKASC-2 and Yu et al. (2018) catalogs to obtain \(10,672\) stars with solar-like oscillations. The global parameters \(\nu_{\rm max}\) and \(\Delta\nu\) are highly correlated and we adopt the following relation to describe this correlation when relevant:

\[\Delta\nu=0.263\,\mu\mathrm{Hz}\times(\nu_{\rm max}/\mu\mathrm{Hz})^{0.772}\,, \tag{6}\]

which is a relation that was empirically determined using results from the SYD asteroseismic pipeline (Huber et al., 2009, 2010). We parameterize deviations from this relation for individual stars using what we call the "excess \(\Delta\nu\)", defined as

\[\mathrm{Excess}\ \Delta\nu=\Delta\nu\big/\big(0.263\,\mu\mathrm{Hz}\,[\nu_{\rm max}/\mu\mathrm{Hz}]^{0.772}\big)\,, \tag{7}\]

where \(\nu_{\rm max}\) and \(\Delta\nu\) are the measured values for a given star. The excess \(\Delta\nu\) can be shown to be sensitive to stellar mass using the asteroseismic scaling relations (a review of asteroseismology and the seismic scaling relations can be found in Garcia & Ballot 2019).
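Equations (6) and (7) translate directly into two small helper functions; a sketch, with all frequencies in \(\mu\)Hz:

```python
def expected_delta_nu(nu_max):
    """Empirical Delta-nu--nu_max relation of Equation (6), in micro-Hz."""
    return 0.263 * nu_max ** 0.772

def excess_delta_nu(delta_nu, nu_max):
    """Mass-sensitive deviation from Equation (6), as defined in Equation (7)."""
    return delta_nu / expected_delta_nu(nu_max)
```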
To train the latent-space-to-age part of our model, we adopt training ages from Miglio et al. (2021). Specifically, we use a version of the method from that paper that is updated to use the DR17 ASPCAP stellar parameters and abundances, as opposed to those from DR16 in the original work. The Miglio et al. (2021) catalog consists of \(3,078\) evolved stars with measured ages. The distribution of the sample in the \(\log g\)-\(T_{\rm eff}\) plane and in the \([\alpha/{\rm M}]\)-\([{\rm Fe}/{\rm H}]\) plane is given in Figure 2. The Miglio et al. (2021) method uses a similar approach as the APOKASC catalog to determine masses, radii, and ages using the observed light curves, spectroscopic parameters from APOGEE, and stellar models, but it has small implementation differences. Miglio et al. (2021) uses PARAM (da Silva et al., 2006; Rodrigues et al., 2017) to infer masses and ages, while APOKASC-2 uses BeSPP (Serenelli et al., 2017). Another difference is that Miglio et al. (2021) determines the global seismic parameters using peak-bagging (Davies et al., 2016), in which individual radial-mode (\(\ell=0\)) frequencies are fit for a subset of stars to obtain \(\nu_{\rm max}\) and \(\Delta\nu\), allowing a better measurement of those parameters. This in turn leads to improved stellar ages. The ages from Miglio et al. (2021) that we use differ from the ages in the APOKASC-2 catalog with a standard deviation of \(\sim 10\%\). In general, the Miglio et al. (2021) age uncertainties are \(\sim 5\%\) larger for RGB stars, but \(\sim 15\%\) smaller for RC stars, when compared to APOKASC-2.

Figure 4: Reconstruction of power spectral densities from a few locations in the latent space. The big panel (top-left) shows the latent space colored by \(\nu_{\rm max}\) for the encoder-decoder training sample (i.e., the top-right panel of Figure 5). The reconstructed PSDs at the locations of the triangles are shown in the six smaller panels, where the colors of the lines correspond to the colors of the triangle markers in the big panel. The four top right panels show reconstructions for PSDs with \(\nu_{\rm max}\) around \(190\,\mu{\rm Hz}\), \(110\,\mu{\rm Hz}\), \(40\,\mu{\rm Hz}\), and \(11\,\mu{\rm Hz}\). The bottom left panel displays a PSD reconstruction from a region associated with the RC. The bottom right panel shows a reconstruction from a region not populated by any real-world APOGEE spectra; this PSD is much noisier than the others. All PSD panels have arbitrary units on the y-axis and logarithmic frequency on the x-axis.

## 5 Training and testing

Before training, we further standardize the APOGEE spectra by subtracting the pixel-level mean and dividing by the pixel-level standard deviation, after the data reduction steps discussed in Section 4.1. We restrict the output PSDs to those with \(4<\nu_{\rm max}<250\,\mu\)Hz, to ensure that the p-mode power envelope can be seen in its entirety in the PSD, and with \(\sigma_{\nu_{\rm max}}\) less than 10%. We restrict the sample of APOGEE-_Kepler_ pairs to those with PSDs with evolutionary state determinations from APOKASC-2 or Yu et al. (2018), as this provides a good indication of the quality of the PSD (some PSDs without evolutionary state determinations have \(\nu_{\rm max}\) values that visually appear far off). We predict the logarithmic amplitude of the PSD.

The first step of the training process is to train the encoder-decoder (as shown in Figure 1) as a whole with \(9,869\) pairs of APOGEE spectra and _Kepler_ PSDs randomly selected from all APOKASC pairs. We optimize the model using the Adam optimizer (Kingma & Ba, 2014). To accelerate training as well as to make the training process more stable, we weight pixels near the observed \(\nu_{\rm max}\) more highly when calculating the objective function in Equation 1. We generate a Gaussian normalized to a height of one for each PSD, centered at \(\nu_{\rm max}\) and with a width of twice the \(\Delta\nu\) expectation from Equation 6, and we add this Gaussian to the existing per-pixel sample weights, which were arrays of ones. It is not necessary to know \(\nu_{\rm max}\) in advance and apply this pixel-level weighting to obtain convergence to a good model, but the addition of this weighting makes the model converge much faster. The introduction of this weighting scheme does not prevent our model from learning features outside of the p-mode power envelope, as shown in the bottom right panel of Figure 3; a sketch of the weighting is given below.
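A minimal sketch of this pixel-level weighting, assuming a PSD frequency grid in \(\mu\)Hz and a measured \(\nu_{\rm max}\) for each star:

```python
import numpy as np

def pixel_weights(freqs, nu_max):
    """Per-pixel sample weights: ones plus a unit-height Gaussian centered on
    the observed nu_max, with a width of twice the expected Delta-nu (Eq. 6)."""
    width = 2.0 * 0.263 * nu_max ** 0.772
    return 1.0 + np.exp(-0.5 * ((freqs - nu_max) / width) ** 2)
```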
After training the encoder-decoder, the APOGEE spectra in the APOKASC sample are run through the trained encoder (but not the decoder) to get their latent representations. We then train random forest models to go from these latent representations to other labels, primarily age, but we also predict other quantities from the latent space to aid in understanding it; this training step is shown in the middle panel of Figure 1. When predicting chemical abundances and global seismic parameters from the latent space, \(2,000\) stars are randomly selected to be the training set. When predicting stellar ages, about \(1,200\) stars are randomly selected among the \(2,000\) stars with ages to be the training set. Our fiducial model predicts age from the latent space augmented with \(T_{\rm eff}\) and \(\rm[Fe/H]\); we refer to these as "latent space ages". In practice, we predict the logarithm of the age and we do not apply any bounds on this logarithm; a sketch of this fiducial model is given below.
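A minimal sketch of the fiducial latent-space age model, with a plain scikit-learn random forest standing in for the probabilistic version of Section 3.2; all data arrays are illustrative placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# augment the 5 latent dimensions with T_eff and [Fe/H]
X_train = np.column_stack([latent_train, teff_train, feh_train])
y_train = np.log(age_train)                 # predict the (unbounded) log age

rf = RandomForestRegressor(n_estimators=200)
rf.fit(X_train, y_train, sample_weight=1.0 / age_err_train)

X_test = np.column_stack([latent_test, teff_test, feh_test])
per_tree = np.stack([t.predict(X_test) for t in rf.estimators_])
age_pred = np.exp(per_tree.mean(axis=0))    # mean over the trees
age_unc = age_pred * per_tree.std(axis=0)   # scatter over trees as uncertainty
```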
## 6 Results

The ultimate goal of our method is to determine ages from the high-resolution APOGEE data using the encoder, the latent space, and the random-forest regressor from the latent space to age. Because our methodology consists of multiple important components that build on each other to provide the final ages, we discuss the results for these components in order in this section. We commence with a discussion of the performance of the encoder-decoder in reconstructing the PSD from spectra in Section 6.1, then take a detailed look at the latent space in Section 6.2, and describe how well we can predict different stellar parameters, including age, from the latent space representation of the spectra in Section 6.3. Finally, we discuss the application of our model to the whole APOGEE DR17 data set to obtain ages for a large sample of stars.

### Power spectral density reconstruction

After training the encoder-decoder on the APOGEE/_Kepler_ pairs, we can directly check the performance of the model by generating reconstructed PSDs for spectra in a test set of pairs that was not used during the training. We expect any information in the PSD that is predictable from (or correlates with) the APOGEE spectra to be present in the output. PSD information that is not contained in the APOGEE spectra cannot be recovered by the encoder-decoder; instead, the model will predict the mean value of those PSD pixels in the sample to minimize the objective function (Equation 1). Information in the APOGEE spectra that is not necessary to reconstruct the PSD will, similarly, not be part of the latent space (this crucial last point is discussed in detail in Section 6.2).

In Figure 3, we compare the actual power spectra computed from _Kepler_ light curves with the encoder-decoder reconstructions from the APOGEE spectra for pairs that are not included in the training or validation sets, such that the model has never seen those pairs before. The figure includes stars with large and small \(\nu_{\rm max}\) and of different evolutionary states. We see that, overall, the encoder-decoder reconstructs the p-mode envelope remarkably well. The p-mode envelope is the area of interest for our purposes, because it contains most of the seismic information relevant to age determination. The encoder-decoder is also able to reconstruct the locations and heights of individual oscillation modes to high precision, which is remarkable considering that the information comes from spectroscopic observations that only contain information on the surface conditions of stars. Thus, the encoder-decoder appears to have learnt global seismic properties like \(\nu_{\rm max}\) and \(\Delta\nu\).

The encoder-decoder is able to reconstruct the p-mode envelope well regardless of the value of \(\nu_{\rm max}\). The top-left panel of Figure 3 demonstrates that even for a star with low \(\nu_{\rm max}\) (\(<20\,\mu\)Hz), where the amplitude of the p-mode envelope relative to the background is low, the model still successfully reconstructs the envelope, with an overall peak at approximately the correct location and with individual oscillation modes at approximately the correct locations as well (note that the absolute amplitudes of the p modes are higher at low \(\nu_{\rm max}\), but lower relative to the background granulation and activity noise, so PSDs of low \(\nu_{\rm max}\) stars appear "noisier"). In general, the encoder-decoder correctly identifies the location of the p-mode envelope and of the individual modes for a wide range of stars and reconstructs them correctly. The model does not reproduce the noise in the PSD (e.g., photon noise, instrumental systematics, and stochastic granulation), which is expected because the noise is unpredictable from the APOGEE spectra. Increasing the latent space dimension beyond the five dimensions used by the model does not improve the PSD reconstruction for the testing set. The reconstructions are likely limited by the spectroscopic information available in APOGEE spectra.

The use of a variational method in the encoder-decoder also allows us to generate new PSD samples directly from the latent space (i.e., without starting from an APOGEE spectrum). In Figure 4, we show PSDs generated from latent space locations not directly associated with APOGEE spectra but within the parameter space of the training set, and one location outside of the training set's parameter distribution. The generated PSD samples look reasonable across a wide range of \(\nu_{\rm max}\), except the one generated in the latent-space region associated with the RC, which does not have the correct envelope shape, and the one (cyan) that is generated outside of the parameter space of the training set, which is very noisy.

### The latent space

The latent space of the trained encoder-decoder contains the information learnt by the encoder in its quest to reconstruct the PSD from the near-infrared APOGEE spectrum. It consists of the information that the model found to be crucial for predicting the PSD. Unlike checking the performance of the encoder-decoder as a whole by validating the quality of the PSD reconstruction using test data, the latent space is unsupervised, in that we do not use any data to directly inform or restrict its structure. The only constraint placed on the latent space is the variational constraint that pushes it towards having a Gaussian distribution (see Section 2.1).
Thus, the best we can do is to study the latent space and make sure that it has the properties that we expect and desire when going from spectra to PSD: (i) that it clearly encodes crucial seismic information such as \(\nu_{\rm max}\) and \(\Delta\nu\), and (ii) that it does not contain extraneous spectral information such as elemental abundance ratios that are unnecessary for reconstructing the PSD but that have galactic-evolution correlations with age. We find that we can satisfactorily predict the PSD using a five-dimensional latent space.

Figure 5: This figure shows the latent space predicted by our encoder-decoder model when running all the APOGEE spectra in the encoder-decoder training set through the encoder, colored by different labels (taken from external catalogs, not predicted by our neural networks). The top left panel shows the latent space colored by evolutionary state, with blue markers for RGB stars and red markers for RC stars. RC stars clearly cluster together in most of the latent space dimensions. The top right panel shows the same latent space but colored by \(\nu_{\rm max}\). Smooth trends in \(\nu_{\rm max}\) are clearly seen in most of the dimensions, which is expected given that the encoder-decoder is able to reconstruct the PSD so well, as shown in Figure 3. The bottom right panel displays the latent space colored by excess \(\Delta\nu\), demonstrating that the model indeed learns the tiny shifts in the oscillation peaks related to stellar mass instead of just generating visually good-looking PSDs. The bottom left panel shows the latent space colored by \([{\rm Fe}/{\rm H}]\). There is no clear \([{\rm Fe}/{\rm H}]\) trend in any dimension, showing that the model did not learn \([{\rm Fe}/{\rm H}]\). This figure is discussed in detail in Section 6.2.

Figure 5 displays the distribution of the data in the five-dimensional latent space for all APOKASC stars, whether they are in the training set or not (we can use all data, because we are simply trying to understand what the model has learned as opposed to directly validating the model, and the large number of stars in the full sample makes it easier to spot clear trends). The figure contains four sub-figures that are color-coded by (i) the evolutionary state--whether a star is a red-giant branch (RGB) or a red-clump (RC) star--in the top, left, (ii) \(\nu_{\rm max}\) in the top, right, (iii) \(\rm[Fe/H]\) in the bottom, left, and (iv) excess \(\Delta\nu\) in the bottom, right (see Equation 7).

Figure 6: Jacobian of the fourth dimension z[3] of the latent space for each pixel in the spectra (i.e., how changes in each pixel of the APOGEE spectrum change the value of z[3]) for RGB stars with low \(\nu_{\rm max}\) (blue line; \(15\,\mu\)Hz \(\lesssim\nu_{\rm max}\lesssim 20\,\mu\)Hz) and high \(\nu_{\rm max}\) (orange dashed line; \(180\,\mu\)Hz \(\lesssim\nu_{\rm max}\lesssim 220\,\mu\)Hz), computed using only the encoder part of the encoder-decoder model. There are a few regions, especially around the hydrogen lines (the Brackett series) in the green and red regions of the APOGEE spectra, that are known to be sensitive to \(\log g\), with large sensitivity for the low \(\nu_{\rm max}\) RGB stars but little sensitivity for the high \(\nu_{\rm max}\) RGB stars in this particular latent space dimension. Overall, the information in the latent space appears to be spread over the entire spectrum.

Figure 7: Latent-space abundance predictions.
This figure shows density plots on a logarithmic scale of the APOGEE abundance predictions from the latent space compared to the ground truth for \(\rm[Fe/H]\), \([\alpha/{\rm M}]\), \(\rm[N/H]\), and \([{\rm C/N}]\), from left to right, with the overall scatter around the one-to-one line shown in the top-left of each panel. It is clear that the latent space contains no information on \([\alpha/{\rm M}]\), a small amount of information on \(\rm[N/H]\) and \(\rm[Fe/H]\), and some information on \([{\rm C/N}]\).

In the top, left panel of Figure 5, we see that the RC stars in red are clearly clustered in most of the latent space dimensions. This is expected for the following two reasons. Firstly, the PSDs of RC stars look very different from those of RGB stars due to effects like mode mixing (e.g., Grosjean et al., 2014; Mosser et al., 2014), such that one can distinguish RC from RGB stars with PSDs alone. The effect of these mixed modes can be seen more clearly when converting the PSD to an echelle diagram--transforming the PSD into a two-dimensional image by stacking parts of the PSD separated by the large frequency separation \(\Delta\nu\)--such as those shown in Metcalfe et al. (2014). Secondly, a purely spectroscopic separation of RC and RGB stars is also possible (e.g., Bovy et al., 2014; Hawkins et al., 2018; Ting et al., 2018; He et al., 2022) and it is possible to obtain samples of RC stars with very high purity (\(\sim 5\%\) contamination) at low completeness. In the latent space, the RC cluster is not entirely separated from the RGB cluster, showing that there is a fundamental limit on the spectroscopic separability of the RC and the RGB for stars at the edges of both clusters. A clean seismic separation of RGB and RC stars requires the use of the period spacings expected from gravity modes (Bedding et al., 2011). Because we are not primarily interested in classifying RC/RGB stars, we do not pursue that here, but a better spectroscopic classification may be possible using a similar encoder-decoder approach applied to period spacings.

The latent space color-coded by \(\nu_{\rm max}\) in the top, right panel of Figure 5 shows a smooth color gradient in some of the latent space dimensions. This is not surprising, as we have already demonstrated that our model can reconstruct power spectra well (Figure 3, as discussed in Section 6.1). The parameter \(\nu_{\rm max}\) scales with the acoustic cut-off frequency \(\nu_{\rm ac}\), which in turn scales with surface parameters such as \(T_{\rm eff}\) and \(\log g\) (Stello et al., 2009) that can easily be determined from APOGEE spectra.

When color-coding the latent space by \(\rm[Fe/H]\) in the bottom, left panel of Figure 5, we find no strong trend, even though there are numerous \(\rm[Fe/H]\) lines in the APOGEE spectrum and \(\rm[Fe/H]\) is one of the easiest parameters to obtain from APOGEE spectra. There is information in the PSD that potentially correlates with metallicity: for example, Corsaro et al. (2017) demonstrate that the amplitude of the granulation activity is significantly affected by metallicity. However, we are not sensitive to this, because we explicitly remove the background level of the PSD, as discussed in Section 4.2. If the model is working as expected, we should not see metallicity information in the latent space, and this is exactly what we see in the bottom, left panel of Figure 5. We perform a more detailed assessment of the amount of abundance information in the latent space in Section 6.3 below.
The bottom, right panel of Figure 5 color-codes the latent space by excess \(\Delta\nu\), which is the basic seismic parameter that is most sensitive to stellar mass. Excess \(\Delta\nu\) manifests itself in the PSD as small shifts in the separations of the oscillation modes compared to the expectation from Equation (6). We see clear trends with excess \(\Delta\nu\) in the latent space, indicating that the encoder has learnt to extract seismic information related to mass and to predict the detailed locations of the oscillation peaks, rather than simply generating visually good-looking power spectra.

To further understand the latent space, we show in Figure 6 the Jacobian of the fourth dimension z[3] of the latent space with respect to each pixel in the APOGEE spectra (i.e., how changes in each pixel of the APOGEE spectra affect the value of z[3]) for RGB stars with low \(\nu_{\rm max}\) and high \(\nu_{\rm max}\). Regions like the hydrogen lines (the Brackett series) in the APOGEE spectral range are known to be sensitive to \(\log g\), and we see that z[3] for RGB stars with low \(\nu_{\rm max}\) is sensitive to the hydrogen lines, while this is not the case for stars with high \(\nu_{\rm max}\). Overall, though, there is no single region in the infrared spectrum that contains the seismic information encoded in the latent space; instead, it appears to be spread throughout the APOGEE spectral range. A sketch of this Jacobian computation is given below.
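A minimal sketch of this computation with tf.GradientTape, assuming `encoder` is the trained encoder mapping spectra to the latent means and `spectra` is a batch of continuum-normalized APOGEE spectra:

```python
import tensorflow as tf

x = tf.convert_to_tensor(spectra, dtype=tf.float32)  # shape (n_stars, n_pix, 1)
with tf.GradientTape() as tape:
    tape.watch(x)
    z3 = encoder(x)[:, 3]      # the fourth latent dimension, z[3]
# d z[3] / d pixel for every star; each star's output depends only on its own
# spectrum, so the batch gradient gives one Jacobian row per star
jac = tape.gradient(z3, x)
mean_jac = tf.reduce_mean(jac, axis=0).numpy().squeeze()
```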
### Abundances, seismic parameters and ages

As we discussed in Section 3.2, we use a modified version of the random forest method to map from the latent space to physical parameters such as abundances and stellar ages. Our primary goal is to determine ages from the latent space, but we also train random-forest regressors to determine abundances, seismic parameters, and the evolutionary state, to better understand the information content of the latent space. We train separate random forest models for all of these cases.

_Stellar parameters and abundances:_ To quantitatively determine the abundance information content of the latent space, we train different random forest regressors to predict \(\rm[Fe/H]\), \([\alpha/{\rm M}]\), \(\rm[N/H]\), and \([{\rm C/N}]\) from the latent space. We compare these predictions to the APOGEE ASPCAP values in Figure 7. We see in particular that the latent space is entirely unable to predict \([\alpha/{\rm M}]\), as the latent-space prediction is simply the mean of the sample regardless of the true \([\alpha/{\rm M}]\). Thus, the latent space has no information about \([\alpha/{\rm M}]\). This in turn means that when we use the latent space to determine ages, these ages are entirely uninformed by \([\alpha/{\rm M}]\).

Figure 8: Latent-space seismic parameter predictions. This figure shows density plots on a logarithmic scale of the global seismic parameters predicted using the latent space compared to the ground truth for \(\nu_{\rm max}\) in the left panel, \(\Delta\nu\) in the middle panel, and excess \(\Delta\nu\) in the right panel. These stars are not contained in the training set of the latent space model, but some are contained in the encoder-decoder training set. The red points show additional stars not included in the training set of either the latent space model or the encoder-decoder model; these red points follow approximately the same distribution as the density plot. We are able to determine the important seismic parameters from the latent space to high precision.

The \(\rm[Fe/H]\) and \(\rm[N/H]\) abundances show a small trend that still falls short of the one-to-one relation, and the amount of information in the latent space about these abundances is small. While there is only a weak trend in \(\rm[N/H]\) (and, similarly, in \(\rm[C/H]\)), the latent space does appear to be able to predict \([{\rm C/N}]\). This is consistent with the fact that the \([{\rm C/N}]\) ratio is a good mass proxy for giants (Masseron & Gilmore, 2015; Martig et al., 2016), because mass correlates with the seismic parameters. However, the precision to which the latent space can predict \([{\rm C/N}]\) is still much worse than the precision to which it can be measured from the APOGEE spectra directly (\(\sim 0.05\) dex). Thus, the \([{\rm C/N}]\) prediction here is not limited by how well one can determine \([{\rm C/N}]\) directly; rather, the encoder-decoder did not actually learn \([{\rm C/N}]\) itself, but instead learned other seismic parameters that correlate with \([{\rm C/N}]\).

In addition to the abundances, we check how well we can recover \(T_{\rm eff}\) from the latent space alone, as \(T_{\rm eff}\) is needed by traditional asteroseismic age determinations. We recover \(T_{\rm eff}\) from the latent space with a precision of \(\sim 110\) K, similar to the precision with which one can recover \(T_{\rm eff}\) by simply predicting it from \(\log g\) for giants. Thus, the latent space does not contain much information on \(T_{\rm eff}\).

_Global seismic parameters:_ We check how accurately we can predict the global seismic parameters \(\nu_{\rm max}\), \(\Delta\nu\), and excess \(\Delta\nu\) from the latent space. A comparison between the seismic parameters taken from the APOKASC catalog and those predicted from the latent space is shown in Figure 8. It is clear that in all three cases we predict the global seismic parameters with high fidelity from the latent space. Compared to previous work on obtaining data-driven seismic parameters from PSDs, our work here has comparable accuracy, although the nature of the previous work is very different. For example, Ness et al. (2018) predict \(\nu_{\rm max}\) and \(\Delta\nu\) at an accuracy level of \(\sim 15\%\) from the auto-correlation function (ACF) of the light curve using a very simple but interpretable polynomial model. Hon et al. (2018) developed a "quick-look" deep learning method using convolutional neural networks to detect the presence of solar-like oscillations in 2D images of PSDs and then estimate \(\nu_{\rm max}\) at a level of \(\sim 5\%\). It is worth noting that converting a PSD to a low-resolution 2D image results in the loss of positional information on the oscillation modes, thus making the performance worse than it could in principle be. Hence, it is not surprising that our seismic parameter predictions from spectra are comparable to those from previous data-driven methods applied to PSDs. Compared to the values from APOKASC, our latent-space predictions of \(\nu_{\rm max}\) and \(\Delta\nu\) are accurate at the level of \(\sim 10\%\) and \(\sim 7\%\), while APOKASC's \(\nu_{\rm max}\) and \(\Delta\nu\) precisions are \(\sim 0.02\%\) and \(\sim 0.17\%\), respectively. This shows that the PSDs contain far more precise seismic information than can be obtained from the APOGEE spectra.
_Evolutionary state classification:_ We saw in Figure 5 that RC and RGB stars cluster separately in the latent space, which means that we should be able to classify giants as RC or RGB stars using the latent space. Thus, we train a naive random forest classifier to perform this classification using the latent space. The confusion matrix of the resulting classification is shown in Figure 9. Unlike works such as Bovy et al. (2014) and Ting et al. (2018), our classifier does not aim at high purity at the expense of completeness, so it is expected that our purity is lower, while our completeness should be higher. Overall, we obtain good classification results based on the latent space, misclassifying only \(\sim 10\%\) of the stars. We see that our classifier is indeed able to obtain high completeness (only \(\sim 4\%\) of RC stars are misclassified as RGB stars) at the expense of somewhat higher contamination (\(\sim 11\%\)) than obtained in studies that focus on purity. We can improve the classification performance along all axes by \(\sim 1\) to \(2\%\) by augmenting the latent space with \(T_{\rm eff}\) and \(\rm[Fe/H]\).

Figure 9: Latent-space RC-vs-RGB classification. This figure shows the confusion matrix of a naive random forest model trained on the evolutionary state, predicting whether a given APOGEE star is an RC or RGB star based on the latent space only. The model performs very well, misclassifying only \(\sim 10\%\) of the stars. The RC misclassification rate (RC stars misclassified as RGB) is \(\sim 4\%\), while the RGB misclassification rate is \(\sim 8\%\).

Figure 10: Latent-space age predictions. This figure shows the age predictions for stars based on the latent space augmented by \(T_{\rm eff}\) and \(\rm[Fe/H]\) compared to the ground-truth values from Miglio et al. (2021) for stars in the test set. Points are colored by the uncertainty of the Miglio et al. (2021) age, with the marker shape indicating whether a star is an RC (triangle) or RGB (circle) star. The model is trained on only \(\sim 1,200\) stars selected randomly from Miglio et al. (2021). Stars with very low ground-truth uncertainty and ages around \(0-5\) Gyr are RC stars, and we are able to predict their ages to high precision. The standard deviations around the one-to-one line for RC and RGB stars separately are given in the top-left corner and are \(\sim 11\%\) and \(\sim 22\%\), respectively.

_Ages:_ Finally, we check how well we can predict age, which is the primary goal of this study. To obtain our fiducial age determinations, we augment the latent space with \(T_{\rm eff}\) and \(\rm[Fe/H]\), because these are the spectroscopic parameters that a traditional asteroseismic age determination needs in addition to the quantities derived from the light curves (i.e., \(\nu_{\rm max}\) and \(\Delta\nu\)), and we have shown above that they are not available in the latent space. In Figure 10, we compare the ages that we obtain in this way with the ages from Miglio et al. (2021). We see that we are able to predict ages well over the entire age range of the sample: the predicted ages cluster around the one-to-one line. The overall bias is \(\sim 3\%\) with an overall dispersion of \(\sim 22\%\). However, the accuracy of our age predictions depends strongly on the evolutionary state: RC stars, which are generally younger than RGB stars (with ages typically in the range \(0-5\) Gyr), have highly accurate age predictions with a scatter of only \(\approx 11\%\).
For RGB stars, on the other hand, the age accuracy is worse, with a scatter of \(\approx 22\%\). Nevertheless, this number represents a clear improvement over other data-driven spectroscopic ages, including those from Mackereth et al. (2019), which uses a Bayesian neural network that is directly trained on APOKASC ages to obtain an age precision of \(\sim 30\%\), and those from Lu et al. (2022), which uses the Cannon (Ness et al., 2015; Casey et al., 2016). Compared to Mackereth et al. (2019), we also do not observe any plateauing at \(\sim 8\) Gyr; instead, we are able to obtain precise ages for old stars. Additionally, unlike the previous methods, we have demonstrated above that our ages are not informed by information coming from abundance ratios like \([\alpha/{\rm M}]\), because this information is not contained in the latent space.

We also obtain uncertainties on our predicted ages from the random forest regressor. To check the quality of these uncertainties, we compute the distribution of \((\hat{y}-y)/\sqrt{\sigma_{y}^{2}+\sigma_{\hat{y}}^{2}}\), where \(y\) is the ground truth age, \(\hat{y}\) is the model age, and \(\sigma_{y}\) and \(\sigma_{\hat{y}}\) are their respective uncertainties. If the uncertainties are correct, this distribution should be a standard normal distribution. Instead, we find that the distribution is normal with a width of \(\sim 0.75\), indicating that we are overestimating the uncertainties in our predicted ages; a sketch of this check is given below. We provide further discussion of the predicted latent-space ages in Section 8.1 below.
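A minimal sketch of this calibration check (variable names are illustrative):

```python
import numpy as np

def pulls(y_true, y_true_err, y_pred, y_pred_err):
    """Normalized residuals: a unit-normal distribution indicates well-
    calibrated uncertainties; a width below one indicates over-estimation."""
    return (y_pred - y_true) / np.sqrt(y_true_err**2 + y_pred_err**2)

# e.g., np.std(pulls(age, age_err, age_model, age_model_err)) gives ~0.75 here
```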
## 7 The APOGEE DR17 latent-space age catalog and the age structure of the Milky Way disk

### Age catalog

We have applied our models to the whole APOGEE DR17 catalog to obtain latent space ages and produce a publicly available catalog. First, we compute the latent-space representation of APOGEE DR17 stars using the trained encoder-decoder for stars within the parameter space of the training set, that is, for stars with \(1.5<\log g<3.6\) and \(\mathrm{SNREV}>30\), where \(\mathrm{SNREV}\) is an alternative signal-to-noise measurement recommended by APOGEE that takes detector persistence issues into account. This allows 308,119 stars to have their latent representation computed. To compute latent space ages, we further restrict the sample to stars with \(T_{\rm eff}\) and \(\rm[Fe/H]\) available, as required by our pipeline, as well as lying in the same \(2.5<\log g<3.6\) range as the Miglio et al. (2021) training set used in this work. A data model and download link are available in Table 1. For science analyses, we recommend using stars with no STARFLAG and ASPCAPFLAG flags set, as well as requiring a latent space age uncertainty of less than 40%.

When compared to our previous work on data-driven spectroscopic ages with neural networks (Mackereth et al., 2019), as shown in Figure 11, the latent space ages from this work extend to significantly older ages for old stars because, unlike our previous ages, they do not exhibit a plateau at around 8 Gyr. Otherwise, the two works are quite consistent with each other, with a scatter of 25%. When plotting \(\rm[Fe/H]\)-\([\alpha/{\rm M}]\) colored by latent space age, as well as latent space age versus \([\alpha/{\rm M}]\), as shown in Figure 12, our latent space ages show the expected trends of low \([\alpha/{\rm M}]\) sequence stars being young and high \([\alpha/{\rm M}]\) sequence stars being old. The oldest stars in the low \([\alpha/{\rm M}]\) sequence are as old as the youngest high \([\alpha/{\rm M}]\) sequence stars. Although we have a number of young high \([\alpha/{\rm M}]\) stars, some of them can be removed by further restricting to \(\log g>2.55\) dex instead of 2.5 dex, to avoid the edge of the training set parameter space, and by using our recommended \(\sigma_{\rm age}<40\%\) instead of the 50% used in the figure for completeness.

For the purpose of studying spatial age-abundance trends in the Milky Way disk in the next subsection, we use the spectro-photometric distances from Leung and Bovy (2019) and convert from heliocentric to Galactocentric coordinates assuming \(R_{0}=8.23\) kpc, \(v_{\odot}=249.44\) km s\({}^{-1}\) (Leung et al., 2023), and \(z_{\odot}=20.8\) pc (Bennett and Bovy, 2019). Orbital parameters are calculated with galpy (Bovy, 2015) using the standard MWPotential2014 potential.

### The age structure of the Milky Way disk

As a first application of our new age catalog, we present here a brief investigation of spatial age-abundance trends in the Milky Way disk. In Figure 12, the left panel shows the \(\rm[Fe/H]\)-\([\alpha/{\rm M}]\) distribution colored by the mean latent space age in each bin. We see a clear separation between the old high and young low \([\alpha/{\rm M}]\) sequences. The right panel shows the age-\([\alpha/{\rm M}]\) distribution colored by angular momentum \(L_{z}\), with the cyan dashed line representing the median relation between age and \([\alpha/{\rm M}]\); the \([\alpha/{\rm M}]\) scatter around this relation is \(\approx 0.05\) dex at any age. The overall trend is a slowly increasing \([\alpha/{\rm M}]\) with age for the low \([\alpha/{\rm M}]\) sequence and a steeper trend when transitioning to the high \([\alpha/{\rm M}]\) sequence, similar to what is seen in local samples (Haywood et al., 2013). The trend with angular momentum shows that the \([\alpha/{\rm M}]\)-age trend is steeper in the high-angular-momentum, outer disk than it is in the inner disk.

Figure 11: Latent-space ages compared to the spectroscopic ages from astroNN (Mackereth et al., 2019). This figure shows the logarithmic density of a comparison of the latent-space ages from this work to the spectroscopic ages obtained using a Bayesian neural network trained directly on APOGEE spectra, without physical constraints against using abundance information. The overall scatter between these age predictions is \(\sim 25\%\). Unlike the previous ages, the latent-space ages do not plateau around 8 Gyr.

In the right panel of Figure 12, we also see that there are young, high \([\alpha/{\rm M}]\) stars with ages \(\sim 6\) Gyr, similar to previous works (e.g., Martig et al., 2015; Chiappini et al., 2015); these are likely high \([\alpha/{\rm M}]\) stars with unusual elemental abundance ratios (e.g., Hekker & Johnson, 2019) that seem to be over-massive (and thus appear young) due to binary stellar evolution (Zhang et al., 2021; Jofre et al., 2022). It is worth noting that when we adopt the definition of young, high \([\alpha/{\rm M}]\) stars used in the previous literature (e.g., Martig et al., 2015)--a flat cut of \([\alpha/{\rm M}]>0.15\) dex and an age younger than 6 Gyr--we find that 9% of the high \([\alpha/{\rm M}]\) sequence population is young. This can be compared to \(\approx 6\%\) in Martig et al. (2015) and Zhang et al. (2021). Further restricting to \(\log g>2.55\) dex, the fraction of young high \([\alpha/{\rm M}]\) stars approaches 7%. Alternatively, defining high \([\alpha/{\rm M}]\) as \([\alpha/{\rm M}]>0.18\) dex, similar to values adopted in asteroseismic analyses, the fraction is 6%.
These results are interesting, because there are no young high \([\alpha/{\rm M}]\) stars in our training set, yet we recover similar fractions of them as independent, previous analyses.

The spatial distribution of stars in Galactocentric radius \(R\) and vertical height \(z\) is shown in Figure 13. We clearly see the vertical flaring of the disk in age with radius. The outer disk is uniformly younger than \(\approx 5\) Gyr.

The age-\(\rm[Fe/H]\)-\([\alpha/{\rm M}]\) distribution of stars in Galactocentric radius \(R\) and vertical height \(|z|\) bins is displayed in Figure 14. The dashed black line that we use to separate the low and high \([\alpha/{\rm M}]\) stars is given by the combination of the following conditions:

\[\begin{split}[\alpha/\mathrm{M}]&>-0.2211\times[\mathrm{Fe}/\mathrm{H}]+0.0442\,\mathrm{dex},\\ [\alpha/\mathrm{M}]&>0.05\,\mathrm{dex}\,.\end{split} \tag{8}\]

We see that the high \([\alpha/{\rm M}]\) sequence is old wherever it appears, with age declining from \(\approx 12\) Gyr at its low-metallicity end to \(\approx 8\) Gyr at its high-metallicity end; the smattering of young high \([\alpha/{\rm M}]\) stars is uniformly mixed in with these, again demonstrating that these are likely anomalously massive rather than anomalously young stars. The low \([\alpha/{\rm M}]\) stars are generally younger than 8 Gyr. While the age-abundance trends seen in this figure are generally what has been found before, the precision of our ages for a large sample of stars sharpens the picture significantly compared to previous work.

A more detailed view of the age distribution and its overall radial and abundance trends is presented in Figure 15. While the giant sample selected by our cuts does not sample age uniformly, calculations similar to those in Section 5 of Bovy et al. (2014) show that the relative age bias between stars of \(\approx 2\) to 5 Gyr and stars of \(\approx 10\) Gyr is a factor of \(\approx 2.5\), with the age bias being relatively flat between 1 and 5 Gyr and then slowly decreasing towards larger ages.

Figure 12: Ages inferred from the latent space model applied to the whole APOGEE DR17 data set. After various quality cuts--spectral signal-to-noise ratio \(\mathrm{SNREV}>30\), latent space age uncertainty \(\sigma_{\rm age}<50\%\), \(2.5<\log g<3.3\) (the range covered by the training set), as well as cuts on the STARFLAG and ASPCAPFLAG flags--\(\sim 56,000\) giants are included in both panels. The left panel displays the \(\rm[Fe/H]\)-\([\alpha/{\rm M}]\) bimodality colored by the average latent space age in each bin. The right panel shows the latent space age vs. \([\alpha/{\rm M}]\) colored by the average angular momentum \(L_{z}\) in each bin, with a cyan dashed line representing the median age-\([\alpha/{\rm M}]\) relation. Both panels clearly demonstrate that high \([\alpha/{\rm M}]\) sequence stars are significantly older than low \([\alpha/{\rm M}]\) sequence stars. In the right panel, there is a very small population of young (\(\leq 6\) Gyr) \([\alpha/{\rm M}]\)-enriched stars that is not present in the training set (see Figure 2) and that was previously observed using asteroseismic ages, but not usually in data-driven spectroscopic ages.

Figure 13: Spatial distribution of stars in APOGEE DR17 in Galactocentric radius \(R\) and vertical height \(z\), colored by the median latent space age. The spatial distribution clearly shows the vertical flaring in age of the disk with radius.
Given this, the overall age distribution in the left panel is indicative of a roughly uniform intrinsic age distribution, or a flat star-formation history. Also from the left panel, we see that the inner disk is more heavily weighted towards old ages, while the outer disk barely extends beyond 5 Gyr (see also the rightmost panels of Figure 14). The middle and right panels split these radial age trends by \([\alpha/\mathrm{M}]\) abundance, defining high and low \([\alpha/\mathrm{M}]\) sequences using the separation from Equation (8). The age distribution of the high \([\alpha/\mathrm{M}]\) population is approximately the same regardless of radius and peaks at \(\approx 10\) Gyr. Accounting for the age bias, there is a tail towards younger ages, but most of these are actually the likely anomalously-massive high \([\alpha/\mathrm{M}]\) stars discussed above. Figure 14 demonstrates that the high \([\alpha/\mathrm{M}]\) sequence ends at \(\approx 8\) Gyr. The low \([\alpha/\mathrm{M}]\) sequence has a clear radial age trend, with the inner disk being older than the outer disk. Regardless of radius, the low \([\alpha/\mathrm{M}]\) disk is younger than \(\approx 9\) Gyr, or likely younger than \(\approx 8\) Gyr once we account for age errors (we do not, however, attempt a deconvolution of the age distribution in this first look). This all indicates that the high and low \([\alpha/\mathrm{M}]\) sequences formed at different times, with the high \([\alpha/\mathrm{M}]\) sequence corresponding to the first \(\approx 4\) Gyr of the disk's existence and the low \([\alpha/\mathrm{M}]\) sequence corresponding to the last \(\approx 8\) Gyr. There does not appear to be a large hiatus between the two, although properly understanding the transition would require a proper deconvolution of the age distribution and a good understanding of the anomalous young high \([\alpha/\mathrm{M}]\) stars. A similar picture has emerged previously from observations near the Sun (e.g., Haywood et al., 2013; Bonaca et al., 2020).

## 8 Discussion

### Latent Space Ages

In the previous section and in Figure 10, we have demonstrated that we can obtain precise ages from the latent space augmented by \(T_{\mathrm{eff}}\) and \(\mathrm{[Fe/H]}\). To understand how important each of these three ingredients is to the prediction of the age, we have tested a few other combinations of them. In Figure 16, we use three separate models to predict age. The first model, shown in the left panel, uses the latent space only; the second model, in the middle, uses the latent space and \(T_{\mathrm{eff}}\); and the third model, in the right panel, uses the latent space and \(\mathrm{[Fe/H]}\). These three models allow us to assess the relative importance of the latent space and the augmented parameters in the age prediction. The age predictions of all three of these models are significantly worse than our fiducial latent-space ages. Interestingly, the predicted age plateaus at \(\sim 8\) Gyr, similar to what we observed in Mackereth et al. (2019) when directly predicting ages from APOGEE spectra. When only using the latent space, the age prediction still follows the one-to-one relation relatively well, albeit with large scatter, indicating that the seismic information in the latent space is rich enough to provide a rough estimate of the age.
While the scatter in the predicted ages decreases when adding \(T_{\mathrm{eff}}\) and \(\mathrm{[Fe/H]}\) separately, it is clear from comparing the middle and right panels to the left panel that adding these parameters separately does not qualitatively improve the age prediction over that from the latent space alone. Thus, adding both \(T_{\mathrm{eff}}\) and \(\mathrm{[Fe/H]}\) to the latent space is crucial to the good performance in Figure 10.

In Figure 16, it is also clear that RC stars get good ages regardless of which combination of the latent space, \(T_{\mathrm{eff}}\), and \(\mathrm{[Fe/H]}\) we use. This is a direct consequence of the scaling relations combined with the fact that RC stars have a narrow range of absolute magnitudes.

Figure 14: The age-\(\mathrm{[Fe/H]}\)-\([\alpha/\mathrm{M}]\) distribution of stars in the Milky Way disk. This figure--similar to Figure 4 in Hayden et al. (2015)--shows the \(\mathrm{[Fe/H]}\)-\([\alpha/\mathrm{M}]\) distribution color-coded by age for stars in spatial \(R\)-\(z\) bins spanning 3 kpc \(<R<13\) kpc in Galactocentric radius and \(|z|<2\) kpc in height from the Galactic midplane. The dashed line roughly divides the high- and low-\([\alpha/\mathrm{M}]\) sequences and is the same in all subplots. It is clear that the high-\([\alpha/\mathrm{M}]\) sequence is older than the low-\([\alpha/\mathrm{M}]\) sequence, with little age overlap between the two.

The mass scaling relation in Equation (3) can be expressed in terms of
the luminosity instead of \(T_{\rm eff}\), as shown in Equation (7) of Miglio et al. (2012). Because of the narrow luminosity spread of RC stars (Paczynski & Stanek, 1998), good ages for RC stars can thus be obtained with seismic data alone, assuming no significant mass loss. This means that we can obtain good RC ages using just the latent space.

That not using both \(T_{\rm eff}\) and \(\rm[Fe/H]\) degrades the age predictions is expected, because traditional asteroseismic age pipelines require at least the following parameters to predict the age: seismic information (typically in the form of \(\nu_{\rm max}\) and \(\Delta\nu\)), \(T_{\rm eff}\), and \(\rm[Fe/H]\). This is because we can obtain the stellar mass using the seismic information and \(T_{\rm eff}\) (Equation 3), and converting mass to age mostly relies on the \(\rm[Fe/H]\)-dependent main-sequence lifetime of the star. While we expect the latent space to contain all the seismic information that can be extracted from the APOGEE spectra, it does not contain \(T_{\rm eff}\), as we have discussed in Section 6.3, or \(\rm[Fe/H]\), as we have shown in Figure 5. Thus, it is not surprising that the age precision degrades when dropping one of \(T_{\rm eff}\) and \(\rm[Fe/H]\).

Finally, we investigate how the latent space ages change when we also include \([\alpha/{\rm M}]\) in the prediction. Figure 17 shows the age prediction for a model that includes \([\alpha/{\rm M}]\) directly, in addition to the latent space, \(T_{\rm eff}\), and \(\rm[Fe/H]\). Although including \([\alpha/{\rm M}]\) does not cause age to correlate much more strongly with \([\alpha/{\rm M}]\), its inclusion does significantly smooth the age trend in \(\rm[Fe/H]\)-\([\alpha/{\rm M}]\) space while only improving the performance of our model by about 2%. Therefore, even though \([\alpha/{\rm M}]\) is also a parameter sometimes used by stellar models to obtain ages, we choose not to include \([\alpha/{\rm M}]\), to avoid the adverse effects that result from its inclusion.

Figure 16: Latent space age predictions using different combinations of parameters. The left panel shows the prediction based on the latent space only, without any additional parameters. The middle panel displays the prediction based on the latent space augmented with \(T_{\rm eff}\), and the right panel's age is based on the latent space with \(\rm[Fe/H]\). These can be compared to our fiducial latent space ages, which are based on the latent space augmented with both \(T_{\rm eff}\) and \(\rm[Fe/H]\) and for which this comparison is shown in Figure 10. In all three cases, the age prediction is significantly worse than in our fiducial model, with much larger scatter compared to the ground-truth ages and with a plateau arising at old ages (\(\gtrsim 10\) Gyr).

Figure 15: Stellar age distributions in the Milky Way. Each panel in this figure displays the age distribution of stars across the entire disk ("all") and of those in the inner and outer disks, defined as \(3\,{\rm kpc}<R<6\,{\rm kpc}\) and \(10\,{\rm kpc}<R<13\,{\rm kpc}\), respectively. The left panel shows all stars, regardless of their abundances, while the middle and right panels split the stars into high \([\alpha/{\rm M}]\) and low \([\alpha/{\rm M}]\) using the dashed line in Figure 14. The left panel demonstrates that outer disk stars are significantly younger than inner disk stars. The middle panel shows that high \([\alpha/{\rm M}]\) stars are old, but extend down to ages of 5 Gyr. The right panel demonstrates that low \([\alpha/{\rm M}]\) sequence stars are young across the disk and do not exceed 9 Gyr (accounting for uncertainties, the low \([\alpha/{\rm M}]\) upper age cut-off is 8 Gyr).

Our current latent space age model suffers from the limitations of the age training set, because the model can only be applied to new stars in the same parameter space as that covered by the training sets of the encoder-decoder and of the ages. In this case, the most limiting factor is the narrow \(2.5<\log g<3.6\) range of the age training set.

### Application with TESS and Beyond

The NASA Transiting Exoplanet Survey Satellite (_TESS_) mission is an all-sky survey that allows for the detection of solar-like oscillations in at least an order of magnitude more giants than _Kepler_ (Silva Aguirre et al., 2020; Mackereth et al., 2021; Hon et al., 2021, 2022). Compared to _Kepler_ or even the K2 mission (Howell et al., 2014), _TESS_ light curves are typically much shorter and less precise, making it more difficult to do precise asteroseismology. We can exploit the flexibility of the encoder-decoder model to include _TESS_ light curves. To train our method, we require pairs of APOGEE spectra and light curves that cover a larger parameter space than that spanned by the training set of the ages (i.e., we cannot train the latent space \(\rightarrow\) age regression with stars outside of the parameter space of the training set of the encoder-decoder; the parameter space spanned by the training set of the encoder-decoder is much larger than that of the latent space ages). Including light curves from _TESS_ can help enlarge the parameter space with stars with a wider range of chemical abundance patterns from different parts of the Galaxy, especially stars with low metallicity (\(\lesssim-1.5\) dex) that are not abundant in the _Kepler_ field.
Data-driven models, and in particular deep neural networks, are susceptible to small changes in the data. For example, a trained neural network that predicts abundances from spectra cannot typically be applied to data taken by different instruments, even if they cover the same wavelength range. Also, it is difficult to train a model on synthetic data (e.g., synthetic spectra) and then apply it to observational data (e.g., observed spectra); often this type of analysis will show a "synthetic gap" between the synthetic and true data that adversely affects the performance of the method (e.g., Fabbro et al., 2018). In the application at hand here, _TESS_ and _Kepler_ have different instrumental noise properties, different photometric precision, different observing cadences, etc. Naively combining _TESS_ and _Kepler_ light curves will likely make our model worse (see, e.g., Hon et al., 2021 for a discussion of the difference between _TESS_ and _Kepler_ for machine learning models). Making use of better algorithms to generate the PSD (e.g., the multi-taper algorithm of Patil et al., 2022) may result in more uniform PSDs derived from observational data. We could also use different styles of transfer learning that can map data taken with different instruments onto a uniform scale (e.g., O'Briain et al., 2021). For example, we could employ another encoder-decoder to map _TESS_ PSDs to _Kepler_ PSDs, trained on overlapping observations between _TESS_ and _Kepler_. Future missions like PLATO (Rauer et al., 2014) and HAYDN (Miglio et al., 2021) will also have their own photometric precisions, baselines, and cadences and would therefore also benefit from such an approach.

### Prospects for SDSS-V

The Milky Way Mapper (MWM) in SDSS-V is an ongoing panoptic survey similar to the APOGEE survey that uses the same spectrograph, but combines it with a robotic focal plane system (FPS) with 300 robotic fiber positioners, allowing flexible, high-cadence targeting and high target densities in a small field of view (Pogge et al., 2020). The Galactic Genesis Survey (GGS) program within MWM will produce a densely sampled panoptic spectroscopic stellar map covering a large volume of the Milky Way by targeting very luminous giants with low \(\log g\) (typically lower \(\log g\) than APOGEE observations). As we have discussed in Figure 12 and Section 8.1, currently the latent space age model only applies to the parameter space covered by the ages of Miglio et al. (2021), with the major limitation coming from the narrow range of \(\log g\) present in that sample. Because many stars in GGS have lower \(\log g\) than this, our model will not be directly applicable to GGS/MWM observations. In particular, the \(\log g\) limitation makes it difficult to reach the bulge with current APOGEE-like observations (see Figure 13). The minimum useful frequency we can obtain from the Lomb-Scargle PSD of _Kepler_ light curves is about 3 \(\mu\)Hz, and this minimum has been adopted by many previous papers (e.g., Ness et al., 2018; Hon et al., 2018). While we might be able to determine \(\nu_{\rm max}\) down to this minimum frequency, realistically we can only obtain precise determinations of additional global seismic parameters such as \(\Delta\nu\) at \(\nu_{\rm max}\gtrsim 7\,\mu\)Hz. This cut-off corresponds to giants with \(\log g\sim 1.8\) dex using standard scaling relations.
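The correspondence between this frequency cut-off and the \(\log g\) limit follows directly from the \(\nu_{\rm max}\) scaling relation, \(\nu_{\rm max}\propto g/\sqrt{T_{\rm eff}}\); a minimal sketch of the inversion, with assumed solar reference values and an assumed typical giant \(T_{\rm eff}\):

```python
import numpy as np

# Assumed solar references: nu_max,sun ~ 3090 muHz, Teff,sun = 5777 K, log g,sun = 4.44.
NUMAX_SUN, TEFF_SUN, LOGG_SUN = 3090.0, 5777.0, 4.44

def logg_from_numax(numax_uhz, teff=4800.0):
    """Invert the nu_max scaling relation, nu_max ~ g / sqrt(Teff), for log g."""
    g = 10**LOGG_SUN * (numax_uhz / NUMAX_SUN) * np.sqrt(teff / TEFF_SUN)
    return np.log10(g)

print(logg_from_numax(7.0))   # ~1.8 dex for a typical giant Teff, matching the quoted cut-off
```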
Aside from the fact that estimating mass and age using the empirical scaling relations might break down at very low \(\nu_{\rm max}\), the minimum frequency sets a lower \(\log g\) limit on applying our encoder-decoder methodology to luminous giant stars. The \(\log g\) limitation, along with the lack of low-metallicity stars in the _Kepler_ field, currently makes it difficult to train and apply our method to obtain age estimates for interesting objects such as the Gaia-Enceladus merger remnant (Helmi et al., 2018), which shows up in APOGEE in significant numbers only at low \(\log g\) and low metallicity.

### Using alternative mass measurements to train

The two-component nature of our methodology, where we first extract seismic information that contains mass information into the latent space using the encoder-decoder and then map the latent space to age, means that we have flexibility in how we train the latent space \(\rightarrow\) age regression. In the second step, we do not need to use ages derived from the PSDs used to train the encoder-decoder, but we could instead use ages obtained from other parts of the sky (e.g., the _TESS_ continuous viewing zone), as long as the age sample has similar stellar populations as the sample used to train the encoder-decoder, because we need the latent-space representation of the age sample.

Figure 17: The impact of including \([\alpha/\mathrm{M}]\) in the latent-space age model. This figure is similar to the left panel of Figure 12, but the latent space age here is obtained with a model using the combination of the latent space, \(T_{\rm eff}\), \([\mathrm{Fe}/\mathrm{H}]\), and \([\alpha/\mathrm{M}]\). The additional \([\alpha/\mathrm{M}]\) on top of the other parameters smooths the \([\alpha/\mathrm{M}]\) bi-modality plot significantly compared to the left panel of Figure 12, with additional artifacts appearing at \([\alpha/\mathrm{M}]\sim 0.13\) dex because of the \([\alpha/\mathrm{M}]\) selection in our training set (see Figure 2). This demonstrates that including \([\alpha/\mathrm{M}]\) information in spectroscopic age determination strongly impacts the derived age–abundance trends.

We do not even need to use seismic ages at all in the second step, although we do so here because currently asteroseismology is the only method that provides precise ages for large-ish samples of stars. As we have shown in this work, we only need about a thousand accurate age measurements as the training set for the latent space age. Because we can convert mass to age relatively precisely along the giant branch, we could use alternative mass determinations to train the latent space age. For example, we could use stars in a cluster with masses determined using stellar evolution models, and we could even require that they return the same age for all stars in the cluster. We could also make use of masses determined for eclipsing binaries using Kepler's laws, as these allow mass determinations at the few percent level compared with the \(\sim 8\%\) masses for giants from asteroseismology. That these methods can obtain such highly accurate masses and ages has been demonstrated by Brogaard et al. (2012) in estimating a cluster's age using binary members, as well as by Gaulme et al. (2013) and Brogaard et al. (2018) in estimating age with eclipsing binary systems.
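For the eclipsing-binary route, the dynamical mass comes from Kepler's third law, \(M_{1}+M_{2}=4\pi^{2}a^{3}/(GP^{2})\); a minimal sketch with hypothetical orbital parameters (the Sun-Earth case is included only as a sanity check):

```python
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
AU = 1.496e11        # astronomical unit [m]

def total_mass_msun(a_au, period_days):
    """Total binary mass from Kepler's third law: M1 + M2 = 4 pi^2 a^3 / (G P^2)."""
    a, P = a_au * AU, period_days * 86400.0
    return 4 * np.pi**2 * a**3 / (G * P**2) / M_SUN

print(total_mass_msun(1.0, 365.25))   # Sun-Earth sanity check: ~1 M_sun
```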
As long as we can get APOGEE spectra for giants with such alternative mass measurements, we can include them in the training to go from the latent space to age, because we can determine their latent space parameters using the encoder (we do not need the decoder). Precise astrometry from _Gaia_ can potentially detect tens of thousands of resolved wide binaries from which precise masses can be determined (e.g., Andrews et al., 2017; Kochoska et al., 2017; Mowlavi et al., 2017; El-Badry et al., 2021; Gaia Collaboration et al., 2022) for an all-sky sample.

## 9 Conclusions

We are living in an era of abundant asteroseismic and spectroscopic data. Surveys such as _TESS_ and the upcoming PLATO mission (Rauer et al., 2014) will allow for large sets of ages for giant stars to be determined, which are essential for Galactic archaeology. At the same time, even larger spectroscopic surveys are ongoing, like SDSS-V and the upcoming 4MOST (de Jong et al., 2012) survey, which collect stellar spectra densely sampled across the sky and covering large parts of the Milky Way disk and halo. Thus, being able to determine ages using spectroscopic data allows for detailed investigations into our Galaxy's formation and evolution. In this paper, we have applied a well-developed methodology in deep learning called a variational encoder-decoder to create a data-driven model that can determine more precise ages from APOGEE spectra compared to other data-driven methods by leveraging available asteroseismic data, provided that there is a large sample of spectrum-light-curve pairs (which do not require age determinations). We train a model on \(\sim 10,000\) pairs of APOGEE spectra and _Kepler_ light curves to reduce the dimensionality of APOGEE spectra while simultaneously extracting mass and age information without contamination from abundance information besides \(\rm[Fe/H]\). Reducing the dimensionality of APOGEE spectra in a latent space is crucial for being able to train an age model, because precise giant ages are rare and it is, therefore, difficult to train a complex model to infer spectroscopic age. We then trained a simple random forest model to determine ages from the latent space of the encoder-decoder model. We showed that we are able to reduce the dimensionality of APOGEE spectra to just five dimensions for the purpose of reconstructing the relevant information in the light curve's PSD. The decoder is able to reconstruct the PSD in the region where pressure modes are located. From the resulting latent space, we are able to infer ages precise to \(\sim 22\%\) by training only with \(\sim 1,200\) stars with good ages, using the latent space augmented by \(T_{\rm eff}\) and \(\rm[Fe/H]\); for red clump stars we approach a precision of \(\sim 10\%\). We have applied our methods to the whole APOGEE DR17 catalog for stars that fall within the parameter space of the training data. The resulting latent space ages are overall consistent with the data-driven spectroscopic ages from Mackereth et al. (2019), except that old stars are much older (there is no plateau at the old end as in previous work) and the \(\rm[\alpha/M]\)-rich stars are significantly older than the oldest \(\rm[\alpha/M]\)-poor stars. Because our latent space does not include information on \(\rm[\alpha/M]\) and only weak information about other abundance ratios, our latent space ages are independent of abundance ratios, yet remain precise.
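The second stage of the pipeline is deliberately simple; below is a minimal sketch of a latent space \(\rightarrow\) age regression of the kind described above, where the five latent dimensions, \(T_{\rm eff}\), \(\rm[Fe/H]\), and the ages are all random placeholders rather than the paper's actual training data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1200  # roughly the size of the age training set quoted in the text

# Placeholder features: 5 latent dimensions from the encoder, plus Teff [K] and [Fe/H] [dex].
X = np.column_stack([
    rng.normal(size=(n, 5)),        # latent space z1..z5 (stand-in)
    rng.normal(4800.0, 150.0, n),   # Teff (stand-in)
    rng.normal(0.0, 0.2, n),        # [Fe/H] (stand-in)
])
age = rng.uniform(1.0, 13.0, n)     # placeholder ages [Gyr]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, age)
print(model.predict(X[:3]))         # predicted ages [Gyr] for the first three stars
```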
Using the APOGEE DR17 age catalog that we create in this paper, we have mapped the age-abundance distribution across the radial range spanned by the Galactic disk. We find that the high \(\rm[\alpha/M]\) sequence has the same age distribution at any radius, extending from ages of \(\approx 12\,\rm Gyr\) to \(\approx 8\,\rm Gyr\); at younger ages, we find a small fraction of young, high \(\rm[\alpha/M]\) stars similar to what has previously been found. The low \(\rm[\alpha/M]\) disk is younger than \(\approx 8\,\rm Gyr\) everywhere, with a radial gradient in the oldest low \(\rm[\alpha/M]\) stars: the outer disk (\(R\gtrsim 10\,\rm kpc\)) is almost entirely low \(\rm[\alpha/M]\) and younger than \(\approx 5\,\rm Gyr\). The high and low \(\rm[\alpha/M]\) disks appear to be temporally separated, with the high \(\rm[\alpha/M]\) disk representing the early evolution of the disk that transitioned to the later low \(\rm[\alpha/M]\) evolution \(\approx 8\,\rm Gyr\) ago. The PSD reconstruction provided by our encoder-decoder has interesting applications of its own. For example, it can be used as a "sanity check" for the light curve, because we can quickly check whether the directly observed PSD deviates greatly from that reconstructed from the APOGEE spectra. Any deviations could be due to observational or reduction issues in the light curves, or they could result from information in the PSD that is not predictable from photospheric observations like stellar spectra. Examples of the latter are mode mixing or strong internal magnetic fields (e.g., Fuller et al., 2015). In the near future, the all-sky _TESS_ light curves provide a great opportunity to expand this model, especially with the two ecliptic continuous viewing zones. Proposed missions like PLATO and HAYDN (Miglio et al., 2021) can provide even more accurate ages to use in the latent space age training set, as can more detailed asteroseismic modeling of individual modes in current data (e.g., Montalban et al., 2021). More generally, advances in machine learning using artificial neural networks allow more opportunities for training with cross-domain data with few available labels. In astronomy, it is very common to have observations across multiple domains, for example in multi-messenger astronomy (e.g., the famous multi-messenger gravitational-wave event GW170817; Abbott et al., 2018). Many of the objects on the sky are observed in multiple large-scale surveys; stellar spectra and light curves are the example in this paper. The large amount of overlap between surveys in different domains can be exploited using methods such as the one used in this paper, because the combined data contain more information than any one survey in one domain alone.

## Acknowledgements

We thank the anonymous referee for helpful and constructive comments. HL and JB acknowledge financial support from NSERC
2303.05750
A brief review of mathematical foundation for analyzing topological characteristics of quantum electronic states and matter phases
We briefly review the advanced mathematical language of fiber bundle structures and how they can be used to classify two-level quantum systems based on the analysis of the topological properties of their sets of state vectors. The topological classes of quantum electronic states and matter phases are characterized by topological invariants, which can be defined geometrically as the integral of differential forms on the base manifold of the fiber bundle structure. Specifically, we demonstrate that for one-dimensional systems described by the Su-Schrieffer-Heeger (SSH) model, the set of state vectors does not always have a fiber bundle structure directly on the Brillouin zone. To classify the SSH systems, we use a technique based on the concept of composite maps to decompose the set of electronic state vectors. As a result, the SSH systems are classified based on the geometrical properties of principal fiber bundles with different base manifolds.
V. Nam Do
2023-03-10T07:23:24Z
http://arxiv.org/abs/2303.05750v1
A brief review of mathematical foundation for analyzing topological characteristics of quantum electronic states and matter phases

###### Abstract

We briefly review the advanced mathematical language of fiber bundle structures and how they can be used to classify two-level quantum systems based on the analysis of the topological properties of their sets of state vectors. The topological classes of quantum electronic states and matter phases are characterized by topological invariants, which can be defined geometrically as the integral of differential forms on the base manifold of the fiber bundle structure. Specifically, we demonstrate that for one-dimensional systems described by the Su-Schrieffer-Heeger (SSH) model, the set of state vectors does not always have a fiber bundle structure directly on the Brillouin zone. To classify the SSH systems, we use a technique based on the concept of composite maps to decompose the set of electronic state vectors. As a result, the SSH systems are classified based on the geometrical properties of principal fiber bundles with different base manifolds.

## I Introduction

For nearly two decades, there has been a growing use of topological invariant concepts to characterize electronic and photonic states in various media.[1; 2; 3] This has led to the understanding of two types of semiconductors/superconductors, ordinary and topological insulators/superconductors, as well as essential features in the electronic structure of semimetals, among other examples.[4] Electronic states with nontrivial topological features, such as quantized states localized at the edges of quantum Hall systems, are incredibly robust and are not destroyed by perturbations, even by changing the atomic lattice of material samples.[5; 6] Therefore, nontrivial topological states are expected to have the potential to create efficient qubits for quantum computers.[7] Topology is a term that may be familiar to those learning mathematics but not necessarily to those studying physics, particularly in the field of condensed matter physics (CMP). Nevertheless, topology is now commonly used in CMP and has implications for modern technologies.[7; 8] Why is this? In fact, topology and related concepts are not unfamiliar to physicists working in the domains of quantum field theories and general relativity theories, as they are used to establish abstract structures of space-times and the transformation of matter fields. The birth of topology is attributed to Euler, who used graph theory to solve the famous problem of the seven bridges of Königsberg in 1735.[9] It was then systematically developed by Poincaré in his 1895 study of a series of fundamental problems.[10] Many aspects of topology were also reflected in physics through Gauss's law (1835) and Ampère's law (1825).[11; 12] In the 20th century, since the birth of quantum mechanics, topology has been used to solve many fundamental issues in quantum mechanics, including the Dirac magnetic monopole (1931),[13; 14; 15] the Aharonov-Bohm effect (1959),[16; 17] and the quantum Hall effect (1980).[5] The mystery of these phenomena was revealed through Berry's description of the adiabatic evolution of physical systems (1984).[18; 19] Through this description, it is clear that topological and geometrical structures govern the quantum world.[20; 21] Fundamentally, there are two problems in physics that we are concerned with. The first is to characterize the existence of a system of matter, and the second is to characterize its evolution over time.
To tackle these issues, the concept of degrees of freedom is introduced as a means of parameterizing the system's states. Using phase space transforms the physical problem into a mathematical one, where each state is represented by a point in this space, and the set of states of the system is established as a domain in this phase space. However, it is essential to note that a geometric representation of physical states is inadequate, since different points in phase space may correspond to the same physical state. The concept of "equivalence" is used in mathematics to describe this situation, revealing the complex structure of the set of physical states as a set of points in phase space. Consequently, comprehending the transition of a system from one state to another necessitates consideration of the structural properties of the set of points in phase space. While calculus methods are typically used to perform specific calculations, the frequent use of calculus may cause us to overlook the natural meaning of the underlying operations. Calculus is built on the fundamental concept of continuity, which is not a natural concept but depends on the set of objects under consideration. Issues regarding the structure of sets have been noticed and studied by mathematicians from early on, leading to the formation of topology as a branch of modern mathematics. Although topology theories are usually presented at advanced levels, topology is often described as the study of the invariant properties of geometric objects under continuous deformation. While a common illustration of this is the topological similarity between a coffee cup and a doughnut or tire, this explanation is insufficient for abstract objects such as sets of physical system states. Nevertheless, the analogy of deforming one object into another may relate to the dynamics of the physical system, as the transition of a system from one state to another requires changing some parameters of the physical system. In CMP, the theory of the energy band structure of electrons in periodic lattices is a direct and significant achievement of quantum mechanics. The success of this theory allows us to distinguish between two types of materials, metals and insulators. However, this theory is built on the infinite extent of the periodic lattice, so there are difficulties in describing certain physical phenomena such as the charge polarization in insulators. In fact, the periodicity of the atomic lattice has been used to calculate the energy band structure of electrons, which, mathematically, is a way of compactifying infinite space. When rewriting the theory of charge polarization, a quantity similar to the geometrical Berry phase appears, which is defined as the integral of a vector field along a closed path in the Brillouin zone of the crystalline lattice. This vector field is determined through the states of the electronic system and has the meaning of guiding a "parallel motion" within the set of electronic states.[22] Combining all of these threads, one may wonder how all of these physical aspects have been resolved. Specifically, how has mathematics been applied, and what reasoning method is used here? The purpose of this paper is to provide a brief review of the minimum basic mathematical foundation and to demonstrate the application of topological theories in condensed matter quantum physics.
We employ two classical toy models, one for general two-level systems and the Su-Schrieffer-Heeger (SSH) model for one-dimensional electronic lattices, to demonstrate a procedure for topologically characterizing and classifying quantum systems. These models are commonly used in lectures and overview articles,[1; 3; 4; 22] but they are usually presented in a mathematically loose manner. By employing precise and rigorous mathematical language to correctly state physical problems, we show that the set of state vectors does not always have the fiber bundle structure defined on a compact manifold, such as the Brillouin zone, as intuitively thought. Therefore, the analysis of the topological structure of the set of state vectors is rather tricky. We aim to present the rigorous and advanced mathematical language in a familiar way to resolve obstacles in the "classical mindset". We highlight the interpretation of expressions such as "investigating behaviors of a physical system" as "the investigation of features of a map" or "a set of state vectors of a physical system" as a vector field in a space of parameters where a set of equivalent states is assigned at each point. Such a field of state vectors is determined by a continuous map from a parameter space to a special space of the fiber bundle structure. The movement in this space is guided by a quantity called the connection, which is mathematically defined as a differential form in the parameter space. The invariant characters of the set of quantum states characterizing the movement are the geometrical properties of the fiber bundle, which are usually determined by an integer number, the value of an integral of the differential form over the entire parameter space. This article, aside from the introductory and concluding sections, is divided into two main parts. Part I, presented in Sec. II, provides an overview of the preliminary mathematical concepts in topology that are necessary to understand and apply the methodology. Part II, consisting of Secs. III and IV, presents the methodology, along with detailed instructions, for analyzing two physical models, each of which is presented in a separate section. ## II Fundamental mathematical concepts The aim of this section is to present a non-exhaustive list of fundamental mathematical concepts that commonly appear in the analysis of the topological properties of physical systems. Each concept is briefly introduced, but for further details, readers are encouraged to refer to textbooks on topology such as [23; 24; 25]. These concepts serve as a basic vocabulary for discussions in this field. ### Topological spaces and continuous maps **Topology:** It is the primary concept used to define other fundamental mathematical concepts that describe physical objects as mathematical structures from the viewpoint of the set of elementary elements. The topology defines the neighborhood of an element in a set and the continuity in the variation of a quantity with respect to another. The formal definition of topology can be stated as follows: Given a set \(X\) of objects, a topology \(T_{X}\) of \(X\) is a collection of subsets \(\{U_{i}\}\) of \(X\) such that: 1. Both \(X\) and the empty set belong to \(T_{X}\). 2. Any union of elements in \(T_{X}\) belongs to \(T_{X}\). 3. A finite intersection of elements in \(T_{X}\) belongs to \(T_{X}\). **Open sets:** The subsets of \(X\) that belong to the topology \(T_{X}\) are called open sets. 
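As an aside, for a finite set these axioms can be checked mechanically; a minimal sketch (the sets and collections below are invented for illustration):

```python
from itertools import combinations

def is_topology(X, T):
    """Check the three topology axioms for a finite collection T of subsets of X."""
    T = {frozenset(s) for s in T}
    if frozenset() not in T or frozenset(X) not in T:
        return False
    # Closure under unions and intersections; for a finite collection,
    # pairwise closure suffices by induction.
    for a, b in combinations(T, 2):
        if a | b not in T or a & b not in T:
            return False
    return True

X = {1, 2, 3}
T_ok = [set(), {1}, {1, 2}, X]    # a valid topology on X
T_bad = [set(), {1}, {2}, X]      # {1} U {2} = {1, 2} is missing
print(is_topology(X, T_ok), is_topology(X, T_bad))   # True False
```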
**Topological spaces:** Sets of objects endowed with an appropriate topology are called topological spaces. A topological space is usually written as a pair \((X,T_{X})\) or simply as the set \(X\) if \(T_{X}\) is a normal topology or if the context makes it clear. Each object in a topological space is called a point. **Neighborhood:** A subset \(N\) of a set \(X\) is called a neighborhood of a point \(p\in X\) if \(N\) contains at least one open set that contains the point \(p\), i.e., \(N\supset U_{i}\) such that \(p\in U_{i}\) and \(U_{i}\in T_{X}\). If \(N\) is an open set, it is called an open neighborhood. **Coverings:** A collection of subsets \(U_{i}\) of a set \(X\) is called a covering of \(X\) if \(\bigcup_{i}U_{i}=X\). If all \(U_{i}\) belong to \(T_{X}\), then the covering is called an open covering. **Compactness:** A set is called compact if every open covering of it contains a finite subcovering, i.e., finitely many of the open sets in the covering already suffice to cover the set. **Connectedness:** A set is called connected if it cannot be partitioned into two non-empty disjoint open subsets. In other words, the set cannot be separated into two open pieces without breaking its continuity. **Continuous maps:** A map \(f\) between two topological spaces \(X\) and \(Y\) is called continuous if the preimage of an open set in \(Y\) is an open set in \(X\). **Homeomorphisms:** A map \(f\) between two topological spaces \(X\) and \(Y\) is called a homeomorphism if \(f\) and its inverse are both continuous. This means that there is a correspondence not only between elements in the two sets \(X\) and \(Y\), but also between open sets in the two topologies \(T_{X}\) and \(T_{Y}\). If there is a homeomorphism between two topological spaces, we say that the two topological spaces are homeomorphic to each other. Homeomorphism defines an equivalence relation on the set of all topological spaces. As a consequence, we can classify all topological spaces into equivalence classes under this relation. **Topological invariants:** Integer numbers are often used to characterize common features of topological spaces in the same equivalence class under the homeomorphism relation. These integer values are known as topological invariants. **Homotopy and deformation:** Homotopy is a relation between two continuous functions that captures the idea of "continuous deformation". More specifically, given two continuous functions \(f,g:X\to Y\) between two topological spaces \(X\) and \(Y\), a homotopy between \(f\) and \(g\) is a continuous function \(H:X\times[0,1]\to Y\) such that \(H(x,0)=f(x)\) and \(H(x,1)=g(x)\) for all \(x\) in \(X\). ### Manifolds **Manifold:** A manifold is a generalization of the concept of smooth curves and surfaces to arbitrary dimensional objects. The smoothness of a topological space allows the description of an arbitrary open neighborhood of a point by an open set in an \(n\)-dimensional vector space \(\mathbb{R}^{n}\). This allows calculus to be carried over to the topological space, thanks to the definition of coordinates in \(\mathbb{R}^{n}\). Formally, we have the following definition: The topological space \(\mathbb{M}\) is called a manifold if: 1. There exists an open covering \(\{U_{i}\}\) such that \(\mathbb{M}=\bigcup_{i}U_{i}\). 2. For each open set \(U_{i}\) in the \(\mathbb{M}\)-covering, there exists a homeomorphism \(f_{i}:U_{i}\to\mathbb{R}^{n}\). This map allows \(U_{i}\) to be described by an open set in \(\mathbb{R}^{n}\). 3. 
On the overlap of two open sets \(U_{i}\) and \(U_{j}\), the composite map \(t_{ij}=f_{i}\circ f_{j}^{-1}\), which maps \(f_{j}(U_{i}\cap U_{j})\subset\mathbb{R}^{n}\) to \(f_{i}(U_{i}\cap U_{j})\subset\mathbb{R}^{n}\), is differentiable. This is the compatibility condition for the description using two different maps \(f_{i}\) and \(f_{j}\) in \(U_{i}\cap U_{j}\). The pairs \((U_{i},f_{i})\) are called charts, and the set of all charts is called an atlas of the manifold. \(U_{i}\) is called the neighborhood, and the map \(f_{i}\) is called the coordinate function. **Dimension:** The dimension of the vector space \(\mathbb{R}^{n}\) that the manifold locally resembles is called the dimension of the manifold. **Functions:** Maps \(f:\mathbb{M}\to\mathbb{R}\), where \(p\in\mathbb{M}\mapsto f(p)\in\mathbb{R}\), are called functions defined on the manifold. Each bijective map \(f_{i}:U_{i}\to\mathbb{R}^{n}\) is represented by \(n\) functions \(p\in U_{i}\mapsto f_{i}(p)=(x^{1}(p),x^{2}(p),\ldots,x^{n}(p))\). **Curves:** A differentiable curve in the manifold \(\mathbb{M}\) is a \(C^{\infty}\)-map of an interval of \(\mathbb{R}\) to \(\mathbb{M}\), i.e., \(c:[a,b]\subset\mathbb{R}\to\mathbb{M}\), where \(t\in[a,b]\mapsto c(t)=f_{i}^{-1}(x^{1}(c(t)),x^{2}(c(t)),\ldots,x^{n}(c(t)))\). **Tangent vectors and tangent spaces:** A tangent vector is an object \(V(p)\) defined at each point of the manifold \(\mathbb{M}\) that acts as the derivative on functions at the point \(p\), i.e., \(V(p)=V^{\mu}(p)\partial/\partial x^{\mu}\), where \(V[f(p)]=V^{\mu}(p)\partial f(p)/\partial x^{\mu}\). Here \(p\in U_{i}\) and \(f_{i}(p)=(x^{1}(p),x^{2}(p),\ldots,x^{n}(p))\). The set of \(n\) numbers \(V^{\mu}\in\mathbb{R}\) is called the set of coordinates of the vector \(V(p)\). **Tangent bundle:** The union of all tangent vectors over the manifold \(\mathbb{M}\) is called the tangent bundle, \(T(\mathbb{M})=\bigcup_{p\in M}T_{p}(\mathbb{M})\). **Vector fields:** A vector field \(V\) is a continuous map from \(\mathbb{M}\) to the tangent bundle \(T(\mathbb{M})\), i.e., \(V:\mathbb{M}\to T(\mathbb{M})\), where \(p\mapsto V(p)\in T_{p}(\mathbb{M})\). **One-forms and cotangent spaces:** A one-form is a linear object \(\omega\) defined at each point \(p\in\mathbb{M}\) that maps tangent vectors at \(p\) to a number, i.e., \(\langle\omega(p),V(p)\rangle\in\mathbb{R}\). The set of all possible one-forms defined at a point \(p\in\mathbb{M}\) forms a vector space called the cotangent space, denoted by \(T_{p}^{\star}(\mathbb{M})\). If the partial derivatives \(\partial/\partial x^{\mu}\) are seen as the basis vectors of the tangent space, the differential \(dx^{\mu}\) plays the role of the basis vectors of the cotangent space, i.e., \(\omega(p)=\omega_{\mu}(p)dx^{\mu}\) where \(\omega_{\mu}(p)\in\mathbb{R}\), because \(\langle dx^{\mu},\partial/\partial x^{\nu}\rangle=\delta_{\nu}^{\mu}\). Here, \(\delta_{\nu}^{\mu}\) denotes the Kronecker delta symbol. **Tensor fields:** A tensor of rank \((q,r)\) is a multilinear object that takes \(q\) elements of \(T_{p}^{\star}(\mathbb{M})\) and \(r\) elements of \(T_{p}(\mathbb{M})\) and returns a number. The set of all tensors of rank \((q,r)\) defined at the point \(p\) on the manifold \(\mathbb{M}\) is denoted by \(\mathcal{T}_{r,p}^{q}(\mathbb{M})\). The union \(\bigcup_{p\in\mathbb{M}}\mathcal{T}_{r,p}^{q}(\mathbb{M})\) is called the tensor bundle. A tensor field of rank \((q,r)\) is a continuous map \(T:\mathbb{M}\to\mathcal{T}_{r}^{q}(\mathbb{M})\). **Connection:** Intuitively, this concept is an instruction for a special kind of motion in a manifold, i.e., the so-called parallel transport. 
The connection is a central quantity to describe the geometrical properties of manifolds. The definition is quite technical, so readers should consult textbooks of topology.[23; 24; 25] **Curvature:** Curvature is a concept used to characterize a geometrical feature of manifolds. The general definition of this concept is quite technical, so we ask readers to consult textbooks on geometry and topology.[23; 25; 24] ### Fiber bundles **Fiber bundle:** The tangent bundle and cotangent bundle are two natural geometric objects associated with a manifold that allow for the definition of vector fields on the manifold. A fiber bundle is a generalization and a natural mathematical setting to describe physical fields. Technically, it is a concept used to refer to a special kind of manifold that locally looks like the direct product of two manifolds. Conversely, we can construct a fiber bundle from some other manifolds. The formal definition is as follows: A manifold \(\mathbb{E}\) is said to have a fiber bundle structure over the base manifold \(\mathbb{B}\) with fiber manifold \(\mathbb{F}\) if there exists a surjective map \(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B}\) satisfying the following conditions: 1. For all points \(p\in\mathbb{B}\), the preimage \(\hat{\pi}^{-1}(p)\) of \(p\) by the map \(\hat{\pi}\) is homeomorphic to the manifold \(\mathbb{F}\). 2. For each open set \(U_{i}\) of an open covering \(\{U_{i}\}\) of \(\mathbb{B}\), its preimage \(\hat{\pi}^{-1}(U_{i})\) is simply described as a direct product \(U_{i}\times\mathbb{F}\) by a diffeomorphism \(\phi_{i}:U_{i}\times\mathbb{F}\rightarrow\hat{\pi}^{-1}(U_{i})\). The maps \(\phi_{i}\) are called the local trivializations. 3. The description of \(\hat{\pi}^{-1}(U_{i})\) as \(U_{i}\times\mathbb{F}\) must be consistent. This means that the composite map \(\phi_{i}^{-1}\circ\phi_{j}:(U_{i}\cap U_{j})\times\mathbb{F}\rightarrow(U_{i}\cap U_{j})\times\mathbb{F}\) must satisfy the condition \(\phi_{i}^{-1}\circ\phi_{j}(p,f)=(p,g_{ij}(p)f)\), where \(g_{ij}(p)\) are the functions determined by the map \(g_{ij}:U_{i}\cap U_{j}\rightarrow\mathbb{F}\) that have the properties: \(g_{ii}(p)=id_{U_{i}}\) and \(g_{ij}(p)g_{jk}(p)g_{ki}(p)=id_{U_{i}}\). The set \(\mathbb{E}\), called the total/entire space, is thus decomposed into the fibers \(\mathbb{F}(p)\), i.e., \(\mathbb{E}=\bigcup_{p\in\mathbb{B}}\mathbb{F}(p)\), where \(\mathbb{F}(p)=\hat{\pi}^{-1}(p)\) is called the fiber attached to the base manifold \(\mathbb{B}\) at the point \(p\). **Principal bundles:** Principal bundles are fiber bundles in which the fiber \(\mathbb{F}\) is identical to the structure group \(G\). In the next section, we will work with this type of fiber bundle with the structure group \(G=U(1)\) (the gauge \(U(1)\) group). **Cross-sections:** Let \(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B}\) be a fiber bundle. A cross-section of the fiber bundle is a smooth map \(s:\mathbb{B}\rightarrow\mathbb{E}\) such that \(\hat{\pi}\circ s=\mathrm{id}_{\mathbb{B}}\). Then \(p\mapsto s(p)\) is an element of \(\mathbb{F}_{p}=\hat{\pi}^{-1}(p)\). **Connections:** As mentioned in the previous subsection, it is rather technical to define the connection. However, because of the special structure of the fiber bundles, the idea behind defining the connection is to separate the tangent vector of the total space into the vertical (along the fiber) and horizontal (along the base space) directions. 
Again, readers are asked to consult textbooks.[23; 24; 25]

## III General model for two-level systems

This section presents an analysis of a toy model for generic two-level systems to highlight the application of the fiber bundle structure in characterizing a set of quantum states. Mathematically, this section illustrates the analysis of a set of points defined by an appropriate map.

### Physical model

The general model for the dynamics of an electron in a two-level system is defined by a two-dimensional Hamiltonian matrix. Based on the Hermitian property of physical observables, the Hamiltonian matrix reads \[H =\begin{pmatrix}d_{0}+d_{z}&d_{x}-id_{y}\\ d_{x}+id_{y}&d_{0}-d_{z}\end{pmatrix} \tag{1}\] \[=d_{0}\sigma_{0}+d_{x}\sigma_{x}+d_{y}\sigma_{y}+d_{z}\sigma_{z},\] where \(\sigma_{0}\) is the \(2\times 2\) identity matrix, and \(\sigma_{x},\sigma_{y},\sigma_{z}\) are three conventional Pauli matrices. The parameters \(d_{x},d_{y},d_{z}\) and \(d_{0}\) are real and determine the properties of the system. Since the first term in Eq. (1) can be seen as the energy reference, it can be ignored in the following analysis. Therefore, the Hamiltonian matrix is determined by the remaining three terms. Defining the 3-dimensional vectors \(\mathbf{d}=(d_{x},d_{y},d_{z})\) and \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\), the Hamiltonian matrix can be expressed in a compact form as the scalar product of two vectors: \[H=\mathbf{d}\cdot\mathbf{\sigma}. \tag{2}\] The vector \(\mathbf{d}\) represents a point in the 3-dimensional linear space \(\mathbb{R}^{3}\). In this space, we do not consider the origin point \((0,0,0)\), as it does not define a genuine two-level system: there the Hamiltonian vanishes and the two levels are degenerate. Any other point in \(\mathbb{R}^{3}\) defines a particular 2-level system. In other words, the existence condition of the 2-level systems is defined by the parameter vector \(\mathbf{d}\).

### Determination of eigen-states

An eigen-state of a quantum system is determined by an energy value and a set of objects known as state vectors. Mathematically, the eigen-energies and the associated eigen-state vectors are determined by the eigenvalue equation of the Hamiltonian \(H|\psi\rangle=E|\psi\rangle\). Specifically, the eigen-values \(E\) are determined by the secular equation: \[\det\begin{pmatrix}d_{z}-E&d_{x}-id_{y}\\ d_{x}+id_{y}&-d_{z}-E\end{pmatrix}=0 \tag{3}\] \[\rightarrow E^{2}-d_{z}^{2}-(d_{x}+id_{y})(d_{x}-id_{y})=0.\] This equation has two solutions for the unknown \(E\) given by the formulae: \[E=\pm\sqrt{d_{x}^{2}+d_{y}^{2}+d_{z}^{2}}=\pm|\mathbf{d}|=\pm d=E_{\pm}(\mathbf{d}). \tag{4}\] The eigen-values of the Hamiltonian \(H\) depend on the Euclidean length \(d\) of the vector \(\mathbf{d}\). The parameter space is therefore identified with the Euclidean space \(\mathbb{R}^{3}\). Since the point \(\mathbf{d}=(0,0,0)\) is not considered, we see that these two eigen-values are always separated from each other by a finite amount: \[\Delta E(\mathbf{d})=E_{+}(\mathbf{d})-E_{-}(\mathbf{d})=2d>0. \tag{5}\] This confirms that the model (2) is appropriate for describing two-level systems. To determine the eigenvectors associated with the two eigenvalues, we define the generic state vector of the two-level systems as follows: \[|\Psi\rangle=\begin{pmatrix}\phi_{1}\\ \phi_{2}\end{pmatrix}, \tag{6}\] where \(\phi_{1}\) and \(\phi_{2}\) can take on complex values. 
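As a quick numerical check of Eqs. (4)-(5), one can diagonalize \(H=\mathbf{d}\cdot\boldsymbol{\sigma}\) for a random parameter vector; a minimal sketch (numpy assumed, the random \(\mathbf{d}\) is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.normal(size=3)                      # a generic parameter vector d != 0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = d[0]*sx + d[1]*sy + d[2]*sz             # Eq. (2), with d0 dropped
evals = np.linalg.eigvalsh(H)
print(evals, np.linalg.norm(d))             # evals = (-|d|, +|d|), so the gap is 2|d|
```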
The eigenvectors of the Hamiltonian matrix are denoted by \(|+,\mathbf{d}\rangle\) and \(|-,\mathbf{d}\rangle\), corresponding to the eigenvalues \(E_{\pm}(\mathbf{d})\), respectively. The equation \([H-E_{-}(\mathbf{d})]|-,\mathbf{d}\rangle=0\) is specified as follows: \[\begin{pmatrix}d_{z}+d&d_{x}-id_{y}\\ d_{x}+id_{y}&-d_{z}+d\end{pmatrix}\begin{pmatrix}\phi_{1}\\ \phi_{2}\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}\] \[\Rightarrow(d_{z}+d)\phi_{1}+(d_{x}-id_{y})\phi_{2}=0. \tag{7}\] To find \(\phi_{1}\) and \(\phi_{2}\), we need to consider the following cases: (1) If \(d_{z}=-d\), then \(d_{x}\) and \(d_{y}\) are both zero, i.e., \(d_{x}=d_{y}=0\). In this case, \(\phi_{1}\) and \(\phi_{2}\) can be arbitrarily chosen, and the eigenvector is therefore ill-defined. (2) If \(d_{z}\neq-d\), Eq. (7) allows us to find the relationship, but not a specific value, between the two components \(\phi_{1}\) and \(\phi_{2}\) of the eigenvector. Specifically, we can write: \[\left\{\begin{array}{l}\phi_{1}=\gamma(-d_{x}+id_{y})\\ \phi_{2}=\gamma(d+d_{z})\end{array}\right., \tag{8}\] where \(\gamma\) is a non-zero undetermined complex factor. Normalizing the length of \(|-,\mathbf{d}\rangle\), we calculate: \[|\phi_{1}|^{2}+|\phi_{2}|^{2}=2|\gamma|^{2}d(d+d_{z}). \tag{9}\] The expression of the eigenvector \(|-,\mathbf{d}\rangle\) thus reads: \[|-,\mathbf{d}\rangle=\frac{e^{i\chi}}{\sqrt{2d(d+d_{z})}}\begin{pmatrix}-d_{x}+id_{y}\\ d+d_{z}\end{pmatrix}, \tag{10}\] where \(e^{i\chi}=\gamma/|\gamma|\) is the phase factor of the complex number \(\gamma\); \(\chi\) is a real parameter. So, the eigen-vector \(|-,\mathbf{d}\rangle\) is not uniquely defined, but only up to a phase factor. In other words, we can state that given a vector \(\mathbf{d}\in\mathbb{R}^{3}\backslash\{(0,0,0)\}\), Eq. (7) does not define one state vector, but a set of state vectors that differ from each other by a \(U(1)\) gauge factor \(e^{i\chi}\). Physically, all state vectors in this set describe the same state of the system. Hence, they are classified into a unique equivalence class with the \(U(1)\) equivalence relation, i.e., \([|-,\mathbf{d}\rangle_{0}]=\{g(\mathbf{d})|-,\mathbf{d}\rangle_{0}\,|\,g(\mathbf{d})\in U(1)\}\), where the representative element \(|-,\mathbf{d}\rangle_{0}\) is chosen as: \[|-,\mathbf{d}\rangle_{0}=\frac{1}{\sqrt{2d(d+d_{z})}}\begin{pmatrix}-d_{x}+id_{y}\\ d+d_{z}\end{pmatrix}. \tag{11}\] The discussion is entirely analogous for the eigen-vector \(|+,\mathbf{d}\rangle\), which is given by the formula: \[|+,\mathbf{d}\rangle=\frac{e^{i\chi}}{\sqrt{2d(d-d_{z})}}\begin{pmatrix}d_{x}-id_{y}\\ d-d_{z}\end{pmatrix}. \tag{12}\] Because the eigen-energies depend only on the length of the \(\mathbf{d}\) vector, we can change the parameterization of \(\mathbf{d}\) from Cartesian coordinates \((d_{x},d_{y},d_{z})\) to spherical coordinates \((d,\varphi,\theta)\), where \(d>0\), \(\varphi\in[0,2\pi]\), and \(\theta\in[0,\pi]\). The eigen-vectors \(|\pm,\mathbf{d}\rangle\) can then be rewritten in the form: \[|+,\varphi,\theta\rangle =e^{i\chi}\begin{pmatrix}e^{-i\varphi}\cos(\theta/2)\\ \sin(\theta/2)\end{pmatrix}, \tag{13a}\] \[|-,\varphi,\theta\rangle =e^{i\chi}\begin{pmatrix}e^{-i\varphi}\sin(\theta/2)\\ -\cos(\theta/2)\end{pmatrix}. \tag{13b}\] Using spherical coordinates, we find that the eigen-state vectors of the 2-level system do not depend on the length of the model parameter \(\mathbf{d}\) vector, but only on the two angular coordinates. 
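Equations (13a) and (13b) can be verified symbolically; a minimal sketch with sympy, taking \(\chi=0\) and \(d=1\):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
dx = sp.sin(theta) * sp.cos(phi)
dy = sp.sin(theta) * sp.sin(phi)
dz = sp.cos(theta)

H = sp.Matrix([[dz, dx - sp.I*dy],
               [dx + sp.I*dy, -dz]])

plus = sp.Matrix([sp.exp(-sp.I*phi)*sp.cos(theta/2), sp.sin(theta/2)])    # Eq. (13a)
minus = sp.Matrix([sp.exp(-sp.I*phi)*sp.sin(theta/2), -sp.cos(theta/2)])  # Eq. (13b)

print(sp.simplify(H*plus - plus))    # zero matrix -> eigenvalue +1
print(sp.simplify(H*minus + minus))  # zero matrix -> eigenvalue -1
```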
We can therefore classify the set of all possible values of the vector \(\mathbf{d}\) into a unique equivalence class represented by the unit sphere \(\mathbb{S}^{2}\) centered at the origin of the Cartesian axes in the Euclidean \(\mathbb{R}^{3}\) space (this two-dimensional surface is embedded in the Euclidean space \(\mathbb{R}^{3}\)). Each point on the unit sphere is parameterized by only two real variables \(\varphi\in[0,2\pi]\) and \(\theta\in[0,\pi]\). A point on the sphere \(\mathbb{S}^{2}\) is determined by a point \((\varphi,\theta)\) in the rectangular domain \([0,2\pi]\times[0,\pi]\). Topologically, the sphere \(\mathbb{S}^{2}\) is the quotient of this rectangle obtained by identifying \(\varphi=0\) with \(\varphi=2\pi\) and collapsing the edges \(\theta=0\) and \(\theta=\pi\) to the two poles; it is not homeomorphic to the rectangle itself.

### Investigation of topological features of quantum states

Physically, we would like to know what happens when we drive the system. According to the model given by Eq. (1), the condition for the existence of a two-level quantum system is encoded in the parameter vector \(\mathbf{d}\). Eq. (4) shows the linear dependence of the energy values of the two eigen-states of a system on the length of the \(\mathbf{d}\) vector. Mathematically, the energy spectrum of a system is determined by scalar fields on the manifold of parameters. Some important features of such scalar fields can manifest in the picture of the density of states, such as the van Hove singularities. However, these are local geometrical features of the energy isosurfaces, i.e., the existence of local extremal and/or saddle points of the surfaces. Driving the \(\mathbf{d}\) vector on the unit sphere \(\mathbb{S}^{2}\), the energy spectrum of the system does not change. Meanwhile, the state vectors explicitly depend on the angle coordinates of the \(\mathbf{d}\) vector, see Eqs. (13a) and (13b), so they vary when driving \(\mathbf{d}\) on \(\mathbb{S}^{2}\). Accordingly, it is natural to ask: what information can the state vectors provide as the system is driven? To proceed with the discussion, we consider the set of state vectors \[\mathbb{E}=\left\{|-,\mathbf{d}\rangle=e^{i\chi}|-,\mathbf{d}\rangle_{0}\ \middle|\ \chi\in\mathbb{R},\ \mathbf{d}\in\mathbb{S}^{2}\right\}. \tag{14}\] We consider this set due to the fact that physical systems tend to stay in their lowest-energy state. The detailed discussion is presented in the following subsections. #### ii.2.1 The fiber-bundle structure of the set of state vectors Let us denote \[\mathbb{F}(\mathbf{d})=\left\{|-,\mathbf{d}\rangle=\frac{e^{i\chi}}{\sqrt{2d(d+d_{z})}}\begin{pmatrix}-d_{x}+id_{y}\\ d+d_{z}\end{pmatrix}\ \middle|\ \chi\in\mathbb{R}\right\}. \tag{15}\] The set of state vectors \(\mathbb{E}\) can thus be rewritten as: \[\mathbb{E}=\bigcup_{\mathbf{d}\in\mathbb{B}}\mathbb{F}(\mathbf{d}), \tag{16}\] where \(\mathbb{B}=\{\mathbf{d}\in\mathbb{R}^{3}\,|\,\|\mathbf{d}\|=1\}\equiv\mathbb{S}^{2}\). We now show that the set \(\mathbb{E}\) has a fiber bundle structure with the fiber being the manifold \(\mathbb{F}=\mathbb{S}^{1}\) endowed with the \(U(1)\) structure group. Indeed, first of all, let us show that there exists a surjective map \(\hat{\pi}\) projecting \(\mathbb{E}\) onto \(\mathbb{B}\), i.e., \(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B}\). To do so, we pick a generic element \(|-\rangle=(\phi_{1},\phi_{2})^{T}\) in \(\mathbb{E}\) and identify it with an element \(|-,\mathbf{d}\rangle\) in a subset \(\mathbb{F}(\mathbf{d})\). From Eq. 
(10), we can construct the map \(\hat{\pi}\), given by the following explicit rule: \[\hat{\pi}:(\phi_{1},\phi_{2})\in\mathbb{E}\mapsto\left\{\begin{array}{l}d_{x}=-2\mathrm{Re}\left(|\phi_{2}|^{2}\frac{\phi_{1}}{\phi_{2}}\right)\\ d_{y}=2\mathrm{Im}\left(|\phi_{2}|^{2}\frac{\phi_{1}}{\phi_{2}}\right)\\ d_{z}=2|\phi_{2}|^{2}-1\end{array}\right.\in\mathbb{B}. \tag{17}\] This means that the set \(\mathbb{F}(\mathbf{d})\) is the preimage of \(\mathbf{d}\) under the map \(\hat{\pi}\), i.e., \(\mathbb{F}(\mathbf{d})=\hat{\pi}^{-1}(\mathbf{d})\). Next, we will show that the manifold \(\mathbb{E}\) can be locally described as the direct product of an open set of \(\mathbb{B}\) and another manifold. We first notice that the set \(\mathbb{F}(\mathbf{d})\) is homeomorphic to the set \(\mathbb{F}=\mathbb{S}^{1}\) because all pairs of complex numbers \((\phi_{1},\phi_{2})\) that differ from each other by a phase factor \(e^{i\chi}\) are mapped onto the same \(\mathbf{d}\) point in \(\mathbb{S}^{2}\). We also notice that \(\mathbb{F}(\mathbf{d})\) is ill-defined at the south pole \(\mathbf{d}=(0,0,-1)\) because of the singularity of the factor \(1/\sqrt{2d(d+d_{z})}\) in the expression of the state vectors. However, this singularity, as shown below, can apparently be avoided by choosing an appropriate scheme of local parameterization. Indeed, using the spherical coordinates as a specific parameterization of the \(\mathbb{S}^{2}\) sphere seems to eliminate the singularity because of the disappearance of the factor \(1/\sqrt{2d(d+d_{z})}\). Actually, the condition \(d_{z}=-d\) exactly corresponds to the value \(\theta=\pi\), but \(\varphi\) is still not uniquely determined. Hence, the state vector \(|-,\mathbf{d}\rangle_{0}\) is ill-defined at the south pole of the sphere \(\mathbb{S}^{2}\). For other state vectors, \(|-,\mathbf{d}\rangle=\exp(i\chi)|-,\mathbf{d}\rangle_{0}\), we see that if \(\chi\) is identical to \(\varphi\) then: \[e^{i\chi}|-,\mathbf{d}\rangle_{0}=e^{i\varphi}\begin{pmatrix}e^{-i\varphi}\sin(\theta/2)\\ -\cos(\theta/2)\end{pmatrix}=\begin{pmatrix}\sin(\theta/2)\\ -e^{i\varphi}\cos(\theta/2)\end{pmatrix}. \tag{18}\] These state vectors are clearly defined at the south pole, eliminating the problem of ill-definition. However, we realize that the new state vectors become ill-defined at the north pole with \(\theta=0\) and arbitrary \(\varphi\). As we can see from Eq. (10), the singularity of the map \(|-\rangle\) is actually permanent and cannot be removed by choosing a particular parameterization and a gauge transformation. The gauge transformation simply moves the singularity point of the map \(|-\rangle_{0}:\mathbb{S}^{2}\rightarrow\mathbb{F}(\mathbf{d})\) from one point on the domain manifold \(\mathbb{S}^{2}\) to another point. In other words, the map \(|-\rangle_{0}\) is only locally, not globally, defined on the manifold \(\mathbb{S}^{2}\). To locally describe the manifold \(\mathbb{E}\), we must use some open sets to cover the manifold \(\mathbb{B}=\mathbb{S}^{2}\). Concretely, we use two open sets \(U_{N}=[0,2\pi]\times[0,\pi/2+\epsilon)\) and \(U_{S}=[0,2\pi]\times(\pi/2-\epsilon,\pi]\) to cover the northern and southern hemispheres, respectively. The overlap of these two open sets is the ribbon covering the equator, i.e., \(U_{N}\cap U_{S}=[0,2\pi]\times(\pi/2-\epsilon,\pi/2+\epsilon)\). 
We can then define the maps \(|-\rangle_{N/S}:U_{N/S}\times U(1)\rightarrow\hat{\pi}^{-1}(U_{N/S})\) in each open set as follows: \[|-\rangle_{N}:(\varphi,\theta;e^{i\chi})\mapsto|-,\varphi,\theta\rangle_{N}=e^{i\chi}\begin{pmatrix}e^{-i\varphi}\sin(\theta/2)\\ -\cos(\theta/2)\end{pmatrix}, \tag{19}\] \[|-\rangle_{S}:(\varphi,\theta;e^{i\chi})\mapsto|-,\varphi,\theta\rangle_{S}=e^{i\chi}\begin{pmatrix}\sin(\theta/2)\\ -e^{i\varphi}\cos(\theta/2)\end{pmatrix}. \tag{20}\] Clearly, there is no problem of singularity of these maps on each chart \(U_{N}\) and \(U_{S}\). These maps are diffeomorphisms since their inverse maps, e.g., \(|-\rangle_{N}^{-1}\), given by the functions: \[\begin{pmatrix}\phi_{1}\\ \phi_{2}\end{pmatrix}\mapsto\left(-\tfrac{i}{2}\ln\left[\left(\tfrac{\phi_{1}}{\phi_{2}}\right)^{2}\cdot\tfrac{|\phi_{2}|^{2}}{1-|\phi_{2}|^{2}}\right],2\arccos|\phi_{2}|;e^{i\chi}\right), \tag{21}\] are analytic. So, the pairs \((U_{N},|-\rangle_{N})\) and \((U_{S},|-\rangle_{S})\) are identified as the local trivializations. The last point we would like to show is that the transition function \(t_{NS}\) is an element of the gauge \(U(1)\) group. Indeed, let \(t_{NS}(\varphi,\theta)\) denote the function connecting \(|-,\varphi,\theta\rangle_{S}\) to \(|-,\varphi,\theta\rangle_{N}\) for \((\varphi,\theta)\in U_{N}\cap U_{S}\). From the requirement: \[e^{i\chi_{N}}\begin{pmatrix}e^{-i\varphi}\sin(\theta/2)\\ -\cos(\theta/2)\end{pmatrix}=t_{NS}(\varphi,\theta)e^{i\chi_{S}}\begin{pmatrix}\sin(\theta/2)\\ -e^{i\varphi}\cos(\theta/2)\end{pmatrix}, \tag{22}\] it is easy to deduce the identification: \[t_{NS}(\varphi,\theta)=e^{-i\varphi+i\Delta\chi_{NS}}, \tag{23}\] where \(\Delta\chi_{NS}=\chi_{N}-\chi_{S}\). Clearly, \(t_{NS}(\varphi,\theta)\) belongs to \(U(1)\) as expected. To sum up, we have shown that the set \(\mathbb{E}\) of state vectors can be represented as a sphere \(\mathbb{S}^{3}\) embedded in the space \(\mathbb{C}^{2}\). This set has a fiber bundle structure with the base manifold \(\mathbb{B}=\mathbb{S}^{2}\) and the fiber \(\mathbb{F}=\mathbb{S}^{1}\) with the structure group \(U(1)\). The projection map \(\hat{\pi}:\mathbb{S}^{3}\rightarrow\mathbb{S}^{2}\) is given by Eq. (17). The local trivializations of the bundle are given by the pairs \((U_{N},|-\rangle_{N})\) and \((U_{S},|-\rangle_{S})\), which are identified on the overlap \(U_{N}\cap U_{S}\) by the transition function \(t_{NS}(\varphi,\theta)=\exp(-i\varphi)\) (up to a constant phase), which belongs to the gauge group \(U(1)\). #### ii.1.2 The state transition as the parallel motion in the principal bundle Physical processes always involve the transition of a system from one state to another. The transition induced by varying the parameter vector \(\mathbf{d}\) is mathematically translated into movement from one point to another in the fiber bundle (\(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B},\mathbb{F},U(1)\)). While moving in a flat space, a velocity field is enough to guide the motion, but in a curved manifold, another quantity, named the connection, is needed to keep the motion according to the curvature of the manifold, i.e., the parallel motion. The parallel motion in the fiber bundle structure can be described as the horizontal lift of the parallel motion in the base space. 
This description highlights the holonomy, a typical geometrical phenomenon in nontrivial fiber bundles.[25; 26; 23] To determine the connection \(\mathcal{A}_{-}\) in a fiber bundle, we need a vector field defined as a local cross-section of the bundle and then track its variation along some smooth curves. A concrete vector field \(|-,\mathbf{d}\rangle_{N/S}\), locally defined on the covering \(U_{N/S}\), i.e., \(|-\rangle_{N/S}:(\varphi,\theta)\in U_{N/S}\mapsto e^{i\chi(\varphi,\theta)}|-,\varphi,\theta\rangle_{0,N/S}\), is determined when a smooth function \(\chi(\varphi,\theta)\) on \(U_{N/S}\) is given. Due to the scalar product of the state vectors defined in the Hilbert space, the so-called Berry connection is determined as follows:[26; 4] \[\mathcal{A}_{-}(\mathbf{d})=i\langle-,\mathbf{d}|\hat{d}|-,\mathbf{d}\rangle, \tag{24}\] where \(\hat{d}\) denotes the exterior derivative[23]. Using the parameterization of spherical coordinates \((\varphi,\theta)\), we obtain: \[\mathcal{A}_{-}(\varphi,\theta) =i\langle-,\varphi,\theta|\partial_{\varphi}|-,\varphi,\theta\rangle d\varphi+i\langle-,\varphi,\theta|\partial_{\theta}|-,\varphi,\theta\rangle d\theta. \tag{25}\] On each chart \(U_{N,S}\) partially covering \(\mathbb{S}^{2}\) we obtain: \[\partial_{\varphi}|-,\varphi,\theta\rangle_{N} =e^{i\chi}\begin{pmatrix}-ie^{-i\varphi}\sin(\theta/2)\\ 0\end{pmatrix}, \tag{26}\] \[\partial_{\theta}|-,\varphi,\theta\rangle_{N} =e^{i\chi}\frac{1}{2}\begin{pmatrix}e^{-i\varphi}\cos(\theta/2)\\ \sin(\theta/2)\end{pmatrix}. \tag{27}\] Then, it yields the expression of the connection: \[\mathcal{A}_{-}(\varphi,\theta)_{N}=\sin^{2}(\theta/2)d\varphi=\frac{1}{2}(1-\cos\theta)d\varphi. \tag{28}\] Similarly, we have: \[\mathcal{A}_{-}(\varphi,\theta)_{S}=-\cos^{2}(\theta/2)d\varphi=-\frac{1}{2}(1+\cos\theta)d\varphi. \tag{29}\] We see that the connection is not uniquely defined, but it is associated with each vector field under consideration. Since the vector fields are not globally defined, the connection in each covering of the base manifold also takes on its own form. However, in the overlap region \(U_{N}\cap U_{S}\), the local connections should be related to each other. It is easy to verify that \(\mathcal{A}_{-}(\varphi,\theta)_{N}=\mathcal{A}_{-}(\varphi,\theta)_{S}+d\varphi\). This relation does not depend on \(d\chi\) but only on \(d\varphi\), as it is actually the consequence of the gauge \(U(1)\) transformation.[23] This relation therefore ensures the existence of a globally defined connection one-form on the whole fiber bundle. #### ii.1.3 Topological characters of the set of state vectors The topological features of the set of state vectors \(\mathbb{E}\) of two-level systems are mathematically translated into the global geometrical features of the fiber bundle (\(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B},\mathbb{F},U(1)\)). The analysis presented in the previous subsection diagnoses such features: the problem of singularity of the vector fields on the base manifold \(\mathbb{B}\equiv\mathbb{S}^{2}\) of a fiber-bundle structure. This feature is quantitatively characterized by an index that is defined through an integral over the base manifold. 
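As an aside, this index can also be estimated numerically from the eigenvectors alone, ahead of the analytic construction that follows; a minimal sketch (numpy assumed) using the gauge-invariant plaquette discretization of Fukui, Hatsugai & Suzuki (2005), excluding the poles where the flux density vanishes:

```python
import numpy as np

def lower_state(theta, phi):
    """Eigenvector of the lower band of H = d·sigma with |d| = 1."""
    dx, dy, dz = np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)
    H = np.array([[dz, dx - 1j*dy], [dx + 1j*dy, -dz]])
    _, vecs = np.linalg.eigh(H)          # eigenvalues in ascending order
    return vecs[:, 0]

nth, nph = 120, 120
thetas = np.linspace(1e-3, np.pi - 1e-3, nth)   # avoid the poles
phis = np.linspace(0.0, 2*np.pi, nph, endpoint=False)
u = np.array([[lower_state(t, p) for p in phis] for t in thetas])

def link(a, b):
    z = np.vdot(a, b)
    return z / abs(z)                    # U(1) link variable between neighboring states

flux = 0.0
for i in range(nth - 1):
    for j in range(nph):
        jp = (j + 1) % nph               # phi is periodic
        plaq = (link(u[i, j], u[i+1, j]) * link(u[i+1, j], u[i+1, jp])
                * link(u[i+1, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
        flux += np.angle(plaq)           # Berry flux through each plaquette

print(flux / (2*np.pi))                  # magnitude 1; the sign depends on orientation
```

Up to the orientation convention of \(\mathbb{S}^{2}\), this reproduces the unit Chern number derived analytically below.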
Since the base manifold is a two-dimensional surface, we need a differential 2-form that is determined from the 1-form \(\mathcal{A}_{-}\) as follows:[23] \[\mathcal{F}_{-}(\varphi,\theta) =d\mathcal{A}_{-}(\varphi,\theta)_{N}=d\mathcal{A}_{-}(\varphi,\theta)_{S}=\partial_{\theta}\sin^{2}(\theta/2)d\theta\wedge d\varphi=\frac{1}{2}\sin\theta\, d\theta\wedge d\varphi, \tag{30}\] where "\(\wedge\)" denotes the wedge product of two basis one-forms \(d\theta\) and \(d\varphi\) such that \(d\theta\wedge d\varphi=-d\varphi\wedge d\theta\). The 2-form \(\mathcal{F}_{-}(\varphi,\theta)\) is an antisymmetric rank-2 tensor and does not depend on \(\chi\) due to the identity \(d^{2}\chi=0\). Geometrically, this tensor determines the local curvature of the total manifold \(\mathbb{E}\equiv\mathbb{S}^{3}\). Therefore, the total curvature of the fiber-bundle \(\mathbb{E}\) is obtained by integrating the local curvature over the entire base manifold: \[\mathcal{C}_{-} =\frac{1}{2\pi}\int_{\mathbb{S}^{2}}\mathcal{F}_{-}=\frac{1}{2\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\frac{1}{2}\sin\theta\, d\theta\wedge d\varphi=-\frac{1}{4\pi}\int_{0}^{\pi}\sin\theta\, d\theta\int_{0}^{2\pi}d\varphi=-1, \tag{31}\] where the overall sign follows from the orientation convention in which \(d\varphi\wedge d\theta\) is taken as positive on \(\mathbb{S}^{2}\). This non-zero integer value of the integral characterizes the fact that the vector field \(|-,\mathbf{d}\rangle\) is not globally defined on the manifold \(\mathbb{S}^{2}\), but rather locally in each covering. The difference in the connection \(\mathcal{A}_{-}\) on each covering directly results in the non-zero value of the integral. Geometrically, we can understand this value as follows: by moving around the whole manifold \(\mathbb{S}^{2}\) of the parameter vector \(\mathbf{d}\) along the positive direction, the maps \(|-\rangle_{N/S}\) allow us to move around the whole sphere \(\mathbb{E}\equiv\mathbb{S}^{3}\) in one round, but in the negative direction. Thus, \(\mathcal{C}_{-}\) (usually called the Chern number) plays the role of the winding number characterizing the topological features of the set of state vectors \(\mathbb{E}\) of all two-level systems described by the model (1).

## IV Su-Schrieffer-Heeger model for one-dimensional lattices

In this section, we illustrate the analysis of deforming the band structure of an atomic chain. We also demonstrate that the application of the fiber bundle structure can be flexible and somewhat complex. Specifically, we will show that the set of electronic state vectors of the atomic lattice does not always possess the fiber bundle structure directly on the Brillouin zone.

### Physical model

Consider a one-dimensional lattice defined by two parameters \(v\) and \(w\), which represent the hopping energies between the two nearest lattice nodes. The lattice is periodic, with a unit cell containing two lattice nodes labeled as \(A\) and \(B\). Let \(a\) be the length of the unit cell. Assume that each lattice node has only one electron orbital, denoted as \(|\alpha,n\rangle\) for node \(\alpha\) (\(\alpha=A,B\)) in cell \(n\), as shown in Fig. 1. The Hamiltonian of an electron in the lattice is given by the following tight-binding model: \[H =\sum_{n}|A,n\rangle\left(v\langle B,n|+w\langle B,n+1|\right)+\sum_{n}|B,n\rangle\left(v\langle A,n|+w\langle A,n-1|\right). 
\tag{32}\] Using the Bloch theorem, we can construct the so-called Bloch state vectors \(|\alpha,k\rangle\) for each value of \(k\) in the Brillouin zone \(BZ=\{k\in\mathbb{R}\,|\,-\pi/a\leq k\leq\pi/a\}=[-\pi/a,\pi/a]\): \[|\alpha,k\rangle=\frac{1}{\sqrt{N}}\sum_{n}e^{ikan}|\alpha,n\rangle, \tag{33}\] Conversely, \[|\alpha,n\rangle=\frac{1}{\sqrt{N}}\sum_{k}^{BZ}e^{-ikan}|\alpha,k\rangle. \tag{34}\] Substituting Eq. (34) into the tight-binding Hamiltonian, we get: \[H=\sum_{k}^{BZ}\sum_{\alpha,\beta}^{\{A,B\}}|\alpha,k\rangle H_{\alpha\beta}(k)\langle\beta,k|, \tag{35}\] where \(H_{\alpha\beta}(k)\) are the elements of the so-called Bloch-Hamiltonian matrix that takes the following form: \[H(k)=\mathbf{d}(k)\cdot\mathbf{\sigma}. \tag{36}\] Here the dependence on \(k\) is through a 2D vector \(\mathbf{d}\): \[\mathbf{d}(k)=\left\{\begin{array}{l}d_{x}(k)=v+w\cos(ka)\\ d_{y}(k)=w\sin(ka)\end{array}\right. \tag{37}\] The Bloch-Hamiltonian is defined by three parameters \(v,w\) and \(ka\). While the latter is real and given in the Brillouin zone \(BZ=[-\pi,\pi]\simeq\mathbb{S}^{1}\), the two former parameters can take complex values. For simplicity, we assume here that both \(v,w\) are positive real parameters. We need to distinguish the role of these parameters: \(v\) and \(w\) define the physical system, and \(k\) defines the state space of the physical system; which is why we denote explicitly the dependence of the Hamiltonian and the vector \(\mathbf{d}\) on \(k\). Figure 1: (a) The figure shows a one-dimensional lattice with two sub-lattices labeled \(A\) and \(B\), with lattice constant \(a\). Each lattice node has one electronic orbital, and the kinetic energy of the electron is characterized by two hopping parameters, \(v\) and \(w\), denoted in the figure. The analysis presented in the text is consistent with the chosen unit cell. (b) and (c) The images of the Brillouin zone \(BZ=[-\pi,\pi]\) through the map given by Eq. (37) in the cases where \(w/v>1\) and \(w/v<1\), respectively. In the former case, the polar angle \(\varphi\) can take on all values in the range \([-\pi,\pi]\), while in the latter case it takes on values in the range \([-\varphi_{0},\varphi_{0}]\), where \(\varphi_{0}=\arcsin(w/v)\). ### Determination of eigen-states The eigenvalues of the Hamiltonian can be obtained through the diagonalization procedure. We can express the two eigenvalues \(E_{\pm}(k)\) as follows: \[E_{\pm}(k)=\pm\sqrt{d_{x}^{2}(k)+d_{y}^{2}(k)}=\pm\|\mathbf{d}(k)\|=\pm d(k), \tag{38}\] where \(d(k)=\sqrt{v^{2}+w^{2}+2vw\cos(ka)}\). Knowing the eigen-values, the corresponding eigen-vectors are determined by the equation \((H-E)|\psi\rangle=0\), which is specified as: \[\begin{pmatrix}\mp d(k)&d_{x}(k)-id_{y}(k)\\ d_{x}(k)+id_{y}(k)&\mp d(k)\end{pmatrix}\begin{pmatrix}\phi_{1}(k)\\ \phi_{2}(k)\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}. \tag{39}\] This leads to the equation: \[\mp d(k)\phi_{1}(k)+[d_{x}(k)-id_{y}(k)]\phi_{2}(k)=0. \tag{40}\] To solve this equation, we need to consider two cases: (1) If \(d(k)=0\), it leads to \(d_{x}(k)=d_{y}(k)=0\). So, \(\phi_{1}(k)\) and \(\phi_{2}(k)\) can be arbitrarily chosen, and the state vectors are therefore ill-defined. Notice that in Sec. II, we did not consider the case of \(d=0\) because the \(\mathbf{d}\) vector determines the existence conditions of a particular physical system directly. Here, the two parameters \(v\) and \(w\) play this role, while the parameter \(k\) determines the states of a particular 1D lattice.
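This distinction can be checked numerically. The sketch below (ours, not from the paper; it assumes numpy, sets \(a=1\), and uses illustrative parameter values) diagonalizes the Bloch Hamiltonian of Eq. (36) on a \(k\)-grid, confirms the spectrum of Eq. (38), and shows that the minimal gap \(2\min_{k}d(k)\) vanishes only for \(v=w\):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def bloch_h(k, v, w):
    dx, dy = v + w * np.cos(k), w * np.sin(k)   # Eq. (37), with a = 1
    return dx * sx + dy * sy                    # Eq. (36)

ks = np.linspace(-np.pi, np.pi, 4001)
for v, w in [(1.0, 0.8), (1.0, 1.0), (0.8, 1.0)]:
    evals = np.array([np.linalg.eigvalsh(bloch_h(k, v, w)) for k in ks])
    d = np.sqrt(v**2 + w**2 + 2 * v * w * np.cos(ks))    # Eq. (38)
    assert np.allclose(evals[:, 0], -d) and np.allclose(evals[:, 1], d)
    print(f"v={v}, w={w}: minimal gap 2*min d(k) = {2 * d.min():.4f}")
# A finite gap appears for v != w; the gap closes (at ka = +-pi) for v = w.
```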
Therefore, \(d(k)=0\) can occur for a given physical system (i.e., with a given set of \(v\) and \(w\)) for a value of \(k\) in the Brillouin zone. With \(d_{x}(k)\) and \(d_{y}(k)\) given by Eq. (37), the case \(d(k)=0\) implies that: \[\left\{\begin{array}{l}v+w\cos(ka)=0\\ w\sin(ka)=0\end{array}\right.\rightarrow\left\{\begin{array}{l}v=w\\ ka=\pm\pi\end{array}\right. \tag{41}\] It shows that \(d(k)=0\) occurs only for the configuration with \(v=w\) at \(ka=\pm\pi\), i.e., at the edges of the Brillouin zone. (2) If \(d(k)\neq 0\), we obtain the following expressions for the state vectors: \[\left|\pm,k\right\rangle=\frac{e^{i\chi}}{\sqrt{2}}\begin{pmatrix}\pm\frac{d_{x}(k)-id_{y}(k)}{d(k)}\\ 1\end{pmatrix}, \tag{42}\] where \(\chi\) is an arbitrary real parameter defining the \(U(1)\) gauge factor. Since \(\mathbf{d}(k)\in\mathbb{S}^{1}\) (a circle of radius \(w\) embedded in the plane \(\mathbb{R}^{2}\) with the point \((v,0)\) as its center), it can be parameterized by only one real parameter. There are many ways to parameterize the circle \(\mathbb{S}^{1}\), but, as we will see, the use of polar angle coordinates \((d,\varphi)\) is more useful. Accordingly, \(d_{x}=d\cos\varphi,d_{y}=d\sin\varphi\). From Eq. (37) we deduce the equation expressing the constraint of \(d\) and \(\varphi\) (see Fig. 1): \[(d\cos\varphi-v)^{2}+d^{2}\sin^{2}\varphi=w^{2}\] \[\rightarrow d^{2}-2dv\cos\varphi+(v^{2}-w^{2})=0. \tag{43}\] Using this parameterization, we can rewrite the state vector \(\left|\pm,\mathbf{d}(k)\right\rangle\) as: \[\left|\pm,k\right\rangle=\frac{e^{i\chi}}{\sqrt{2}}\begin{pmatrix}\pm e^{-i\varphi(k)}\\ 1\end{pmatrix}. \tag{44}\] This result shows that the state vectors do not depend on the radial coordinate \(d\), but only on the polar angle coordinate \(\varphi\), which is determined as a function of \(k\in BZ\), see Eq. (48). Normally, once the eigen-states are known, observable quantities that determine the physical properties of a system can be calculated. However, as mentioned in the previous section, we are interested in what happens when we drive the physical system from one state to another, i.e., vary the state parameter \(k\) in the Brillouin zone. The answer to this question is presented in the next subsection. ### Investigation of topological features of quantum states #### ii.3.1 The fiber-bundle structure of the set of state vectors In Subsection II.2, we consider the set of state vectors for all possible two-level systems. In contrast, in this subsection, we are interested in the features of the set of state vectors for a generic 1D SSH system. We will then compare the features of such sets to classify 1D systems described by the SSH model into different classes. Let us denote \[\mathbb{F}(k)=\left\{\left|-,k\right\rangle=\frac{e^{i\chi}}{\sqrt{2}}\begin{pmatrix}-\frac{d_{x}(k)-id_{y}(k)}{d(k)}\\ 1\end{pmatrix}\,\middle|\;\forall\,\chi\in\mathbb{R}\right\}. \tag{45}\] This set of state vectors describes the same eigen-state with the energy \(E(k)=-d(k)\), as shown in Eq. (38). It is important to note that the set \(\mathbb{F}(k)\) is not defined for \(k\) such that \(d(k)=0\), as is clear from Eq. (42). This condition is satisfied only for configurations with \(v=w\) at \(k=\pm\pi/a\), as shown in Eq. (41). For configurations where \(v\neq w\), the sets \(\mathbb{F}(k)\) are well-defined for all \(k\in BZ\).
Therefore, we can consider the set \[\mathbb{E}=\bigcup_{k\in\mathbb{B}}\mathbb{F}(k), \tag{46}\] where \(\mathbb{B}=BZ\simeq\mathbb{S}^{1}\subset\mathbb{R}^{2}\). Since \(\mathbb{F}(k)\) is the set of all physically equivalent state vectors, \(\mathbb{E}\) mathematically represents the set of equivalence classes. To decompose \(\mathbb{E}\) according to \(\mathbb{B}\), we need to determine a surjective map \(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B}\). To do so, we take an element \(|-\rangle=(\phi_{1},\phi_{2})^{T}\) in \(\mathbb{E}\) and identify it with one element in a subset \(\mathbb{F}(k)\): \[\left\{\begin{array}{l}\phi_{1}=-\frac{e^{i\chi}}{\sqrt{2}}\frac{d_{x}(k)-id_{y}(k)}{d(k)}\\ \phi_{2}=\frac{e^{i\chi}}{\sqrt{2}}\end{array}\right. \tag{47}\] It results in the equation for \(ka\): \[\sin(ka+\varphi)=-\frac{v}{w}\sin(\varphi), \tag{48}\] where \(\varphi=\arg(\phi_{1}/\phi_{2})\in[-\pi,\pi]\). It is clear that this equation does not always have a solution for \(ka\) because of the factor \(v/w\). Specifically, there are two cases to consider: 1. \(v/w<1\): Eq. (48) always has a solution for \(ka\) for all \(\varphi\in[-\pi,\pi]\). It means that when taking an arbitrary element of \(\mathbb{E}\), we can always identify it as belonging to a subset \(\mathbb{F}(k)\). This identification procedure defines the map \(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B}\), which is given by the rule: \(ka=-\varphi-\arcsin\left(\frac{v}{w}\sin\varphi\right)\). 2. \(v/w>1\): Eq. (48) only has a solution for elements \((\phi_{1},\phi_{2})\) such that \(|\sin(\varphi)|\leq w/v<1\). In other words, the map \(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B}\) is not globally defined. Therefore, based on this rough analysis, 1D SSH systems can be classified into three categories according to the values of the ratio \(v/w\), i.e., (1) \(v/w=1\), (2) \(v/w<1\), and (3) \(v/w>1\). The first case is special because the two energy bands \(E_{-}(k)\) and \(E_{+}(k)\) touch each other at the two edge points of the Brillouin zone, and the energy value \(E=0\) is degenerate. Physically, the SSH configuration with \(v=w\) behaves as a semi-metallic system. To quantify the classification of the second and third cases, we move away from considering the set \(\mathbb{E}\) given by Eq. (46) and instead consider: \[\mathbb{E}=\bigcup_{\varphi\in\mathbb{B}}\mathbb{F}(\varphi), \tag{49}\] where \(\mathbb{B}\) is any open set in the interval \([-\pi,\pi]\) and \[\mathbb{F}(\varphi)=\left\{\left|-,\varphi\right\rangle=\frac{e^{i\chi}}{\sqrt{2}}\begin{pmatrix}-e^{-i\varphi}\\ 1\end{pmatrix}\,\middle|\;\forall\,\chi\in\mathbb{R}\right\}. \tag{50}\] To decompose the set \(\mathbb{E}\) according to \(\mathbb{B}\) we determine a map \(\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B}\). It is easy to find: \[\hat{\pi}:(\phi_{1},\phi_{2})\mapsto\varphi=i\ln\left(-\frac{\phi_{1}}{\phi_{2}}\right). \tag{51}\] This map is surjective, and it maps all pairs of complex variables \((\phi_{1},\phi_{2})\) and \((\phi_{1}^{\prime},\phi_{2}^{\prime})=e^{i\chi}(\phi_{1},\phi_{2})\), \(\chi\in\mathbb{R}\), to the same value of \(\varphi\). It means that, given a value of \(\varphi\), its preimage is \(\hat{\pi}^{-1}(\varphi)=\mathbb{F}(\varphi)=\mathbb{S}^{1}\). The fiber bundle structure of \(\mathbb{E}\) is confirmed by a trivialization map \(\phi:\mathbb{B}\times\mathbb{S}^{1}\rightarrow\hat{\pi}^{-1}(\mathbb{B})\).
This map is easily defined by the assignment: \[\phi:(\varphi,e^{i\chi})\mapsto\left|-,\varphi\right\rangle=\frac{e^{i\chi}}{\sqrt{2}}\begin{pmatrix}-e^{-i\varphi}\\ 1\end{pmatrix}. \tag{52}\] It is a diffeomorphism because its inverse \(\phi^{-1}:\hat{\pi}^{-1}(\mathbb{B})\rightarrow\mathbb{B}\times\mathbb{S}^{1}\), determined by the decomposition \(\phi^{-1}(\left|-,\varphi\right\rangle)=(\varphi,e^{i\chi})\), is also differentiable. Note that, in the problem under consideration, we do not need a local description of the set \(\mathbb{E}\) but a global one. We thus conclude that the set \(\mathbb{E}\) is a fiber bundle over the base manifold \(\mathbb{B}\) with the fiber \(\mathbb{F}=\mathbb{S}^{1}\simeq U(1)\) according to the decomposition map \(\hat{\pi}\). The issue now is to determine the set \(\mathbb{B}\) as the image of the Brillouin zone \(BZ\) through some map \(\varphi:BZ\rightarrow\mathbb{B}=\varphi(BZ)\). From Eq. (37) we have: \[\varphi=\arg\left[v+w\cos(ka)+iw\sin(ka)\right]. \tag{53}\] We see that depending on the value of \(v/w\) the range of \(\varphi\) is determined as follows (see Fig. 2): 1. \(v/w>1\): \(\varphi\in[-\varphi_{0},\varphi_{0}]\equiv\varphi(BZ)\subset[-\pi,\pi]\), where \(\varphi_{0}=\arcsin(w/v)\). 2. \(v/w<1\): \(\varphi\in[-\pi,\pi]\simeq\mathbb{S}^{1}\). To summarize, we distinguish two different fiber bundle structures for the set of state vectors \(\mathbb{E}\). One has the base manifold \(\mathbb{B}=[-\pi,\pi]\) (for \(v/w<1\)) homeomorphic to the circle \(\mathbb{S}^{1}\), and the other has the base manifold \(\mathbb{B}=[-\varphi_{0},\varphi_{0}]\) (for \(v/w>1\)) simply a part of the interval \([-\pi,\pi]\). #### iv.2.2 The connection of the fiber bundle After the determination of the fiber bundle structure of the set of state vectors \(\mathbb{E}\) we can proceed to analyze its geometrical features by performing the "exploration travelling" in \(\mathbb{E}\). Because of the principal fiber bundle structure, the parallel motion in \(\mathbb{E}\) is realized by a horizontal lift of a parallel motion in the base manifold \(\mathbb{B}\) up to \(\mathbb{E}\). So, we need to perform a motion in \(\mathbb{B}\) first. We consider separately two cases: 1. \(v/w>1\): When varying \(k\) from \(-\pi/a\) to \(\pi/a\), \(\varphi\) continuously moves from \(0\) to \(-\varphi_{0}\) then back to \(0\). After that it moves from \(0\) to \(\varphi_{0}\) and then to \(0\). The entire path that \(\varphi\) draws is \(\varphi:0\rightarrow-\varphi_{0}\rightarrow 0\rightarrow\varphi_{0}\rightarrow 0\). This path can be seen as consisting of two loops of opposite direction, one is \(\varphi:0\rightarrow\varphi_{0}\to 0\) with the positive direction, and the other is \(\varphi:0\rightarrow-\varphi_{0}\to 0\) with the negative direction. 2. \(v/w<1\): When varying \(k\) from \(-\pi/a\) to \(\pi/a\), \(\varphi\) continuously moves from \(-\pi\) to \(\pi\), drawing a loop homotopic to the circle \(\mathbb{S}^{1}\). In Fig. 2 we graphically illustrate the range of \(\varphi\) and the movement of \(\varphi\) with respect to \(ka\in[-\pi,\pi]\) according to these two cases. After choosing a vector field as a smooth map \(\left|-\right\rangle:\mathbb{B}\rightarrow\mathbb{E}\), given by \(\left|-\right\rangle:\varphi\mapsto\left|-,\varphi\right\rangle\), we can determine a connection in order to horizontally lift the motion in \(\mathbb{B}\) up to \(\mathbb{E}\).
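Before writing the connection down, the two cases above can be illustrated numerically. The following sketch (ours, assuming numpy; parameter values are illustrative) tracks the polar angle \(\varphi(k)\) of Eq. (53) across the Brillouin zone and measures its range and net winding:

```python
import numpy as np

def phi_of_k(ka, v, w):
    # Polar angle of d(k), Eq. (53), with a = 1.
    return np.angle(v + w * np.cos(ka) + 1j * w * np.sin(ka))

ks = np.linspace(-np.pi, np.pi, 20001)
for v, w in [(1.25, 1.0), (0.8, 1.0)]:
    phi = np.unwrap(phi_of_k(ks, v, w))
    span = phi.max() - phi.min()
    winding = (phi[-1] - phi[0]) / (2 * np.pi)
    print(f"v/w={v / w:.2f}: span of phi = {span:.3f}, net winding = {winding:+.2f}")
# v/w > 1: span 2*phi_0 < 2*pi and zero net winding (two back-and-forth loops);
# v/w < 1: span 2*pi and net winding one, i.e. phi covers the whole circle S^1.
```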
The so-called Berry connection is then given by \(\mathcal{A}_{-}=i\langle-,\varphi|\hat{d}|-,\varphi\rangle\), where \(\hat{d}\) denotes the exterior derivative. Using Eq. (37), we can write: \[\hat{d}|-,\varphi\rangle=\frac{e^{i\chi}}{\sqrt{2}}\begin{pmatrix}ie^{-i\varphi}\\ 0\end{pmatrix}d\varphi, \tag{54}\] which allows us to obtain the expression for the Berry connection: \[\mathcal{A}_{-}(\varphi)=\frac{1}{2}d\varphi. \tag{55}\] Figure 2: The variation of \(\varphi\) with respect to \(ka\) is shown. The blue and red curves correspond to the cases \(v/w=0.8<1\) and \(v/w=1.25>1\), respectively. #### iv.2.3 Topological characters Some topological features of the set of state vectors \(\mathbb{E}\) are characterized by the global geometrical features of the fiber-bundle \((\hat{\pi}:\mathbb{E}\rightarrow\mathbb{B},\mathbb{F},U(1))\). Since the base manifold \(\mathbb{B}\) is one-dimensional, all the higher-rank differential forms deduced from the one-form \(\mathcal{A}_{-}\) vanish, e.g., \(d\mathcal{A}_{-}=0\). A topological quantity can be defined as an integral of \(\mathcal{A}_{-}\) over the whole base manifold \(\mathbb{B}\) to characterize the geometrical properties of the manifold \(\mathbb{E}\), as follows: \[\gamma=\int_{\mathbb{B}}\mathcal{A}_{-}=\left\{\begin{array}{ll}0&\frac{v}{w}>1,\\ \pi&\frac{v}{w}<1.\end{array}\right. \tag{56}\] This integral is physically named the Zak phase, which determines the phase accumulated by the state vector when it completes its motion in the fiber bundle \(\mathbb{E}\) according to the motion of \(k\) in the entire Brillouin zone. The values of the Zak phase are the topological characteristics of the two insulating categories of 1D lattices. If we consider only the energy band structure, we realize only two phases: the semiconducting phase with a finite band gap and the semimetallic phase with a zero band gap. However, by looking deeply into the set of state vectors, we realize that the semiconducting phase should be classified further into two different categories, characterized by two different values of the Zak phase. #### iv.2.4 A view from symmetry as global constraints Assuming that the two parameters \(v\) and \(w\) are real, the Hamiltonian of the SSH atomic chain does not change under time-reversal operations. This is known as time-reversal symmetry, and it implies that the eigenvalues of the Bloch-Hamiltonian matrix \(H(k)\) are even functions of \(k\), as shown in Eq. (38). The time-reversal symmetry plays this role specifically for the SSH model. If we assume further that the hopping parameters \(v\) and \(w\) are equal, then the Hamiltonian also possesses an inversion symmetry. Together with the chiral symmetry of the Bloch-Hamiltonian matrix due to the nearest-neighbor approximation for the hopping, we find a zero-energy state with a doubly degenerate band structure. This means that the SSH atomic chain is a metal whose conduction and valence bands touch each other at \(ka=\pm\pi\), cf. Eq. (41). The inversion symmetry is rigorously enforced for all possible configurations of the chain as long as \(v=w\). The touching point of the two energy bands is a consequence of this symmetry. It is therefore said to be protected by inversion symmetry. Breaking this inversion symmetry simply requires deforming the atomic chain such that \(v\neq w\). In this case, the band touching is lifted as expected, an energy gap appears, and the system becomes an insulator.
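The Zak phase of Eq. (56) distinguishes the two resulting insulators, and it can be reproduced with a discretized Wilson loop over the Brillouin zone. The sketch below (ours, assuming numpy; parameters illustrative, \(a=1\), gauge \(\chi=0\)) multiplies the overlaps of the lower-band eigenvectors of Eq. (42) around the closed \(k\)-loop; this product is gauge invariant and its phase gives \(\gamma\) modulo \(2\pi\):

```python
import numpy as np

def lower_state(k, v, w):
    dx, dy = v + w * np.cos(k), w * np.sin(k)
    d = np.hypot(dx, dy)
    # Eq. (42) with chi = 0: |-,k> = (-(dx - i dy)/d, 1)/sqrt(2)
    return np.array([-(dx - 1j * dy) / d, 1.0]) / np.sqrt(2)

def zak_phase(v, w, n=2000):
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    states = [lower_state(k, v, w) for k in ks]
    states.append(states[0])            # close the loop over the BZ
    prod = 1.0 + 0j
    for u1, u2 in zip(states[:-1], states[1:]):
        prod *= np.vdot(u1, u2)
    return -np.angle(prod)              # gauge invariant modulo 2*pi

for v, w in [(1.25, 1.0), (0.8, 1.0)]:
    print(f"v/w={v / w:.2f}: Zak phase = {zak_phase(v, w):+.4f}")
# Prints approximately 0 for v/w > 1 and +-pi (equivalent mod 2*pi) for v/w < 1.
```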
With only two parameters, \(v\) and \(w\), there are two distinct insulating phases, obtained by setting \(v/w<1\) or \(v/w>1\). The question is whether these two insulating phases are equivalent or have any distinguishing features. The answer, as shown in the previous sections, is that there is a difference, and it is expressed in the structure of the set of state vectors of the system. This difference is characterized by a topological invariant quantity named the Zak phase. ## V Conclusion Topology is a fundamental concept that not only establishes the foundation for mathematical theories but also enables physicists to analyze the structure of the physical world, including space-time. However, the abstract nature and high degree of generalization of topology often pose challenges when applying topological concepts and related methods to analyze physical problems, particularly in the domain of condensed matter physics where the practical application is of high interest. Therefore, it is crucial to use mathematical concepts and rigorous mathematical language accurately when stating and solving physical problems. We have carefully selected and presented a concise list of essential mathematical concepts as the vocabulary to discuss this topic. We have demonstrated that topological methods can be used to study, through a systematic procedure, problems induced by driving a quantum system. To illustrate this idea, we have analyzed two classical toy models. The first model is general for two-level quantum systems, and the second model is for one-dimensional crystalline lattices consisting of two sub-lattices. The analysis procedure starts with determining the system's states as the objects under study. While the energies associated with quantum states are realized as scalar fields defined on a manifold of the model parameters, the state vectors are treated as points in a special manifold with a fiber bundle structure. The set of state vectors is thus realized as vector fields defined as the cross-sections of the fiber bundle. The transition of the system from one state to another is thus translated into the connectivity of points in a topological space through a mechanism of parallel transport. This movement in a curved manifold is guided by a quantity called a connection. Mathematically, the connection is defined as a one-form, i.e., a differential form of rank one, which is a field of covectors in the dual space of the tangent bundle of the manifold. A topological index can be defined by integrating a differential form of appropriate rank over the entire manifold of parameters to characterize the geometric features of the fiber bundle. The two models considered exhibit different levels of sophistication in characterizing the nature governing the set of physical states. In the general two-level model, the topological properties are evident in the singularity of the state vector fields defined over the entire manifold of the model parameters. In contrast, for the SSH model, greater sophistication is required to discern the presence of a topological structure in a set of state vectors at an intermediate level of analysis. Additionally, we show that the set of state vectors does not always exhibit the fiber bundle structure directly over the Brillouin zone, contrary to intuition. Therefore, semiconducting atomic chains can be classified by the topological properties of two fiber bundles with the same total and fiber spaces but different base spaces.
This brief article cannot encompass all aspects of topology and its applications in physics, nor can it address the diverse range of physical problems to which topology can be applied. However, we hope that it has provided insight into the use of topological theories to characterize the fundamental structure of material phases in the quantum realm.
2307.06976
On the Complexity of Target Set Selection in Simple Geometric Networks
We study the following model of disease spread in a social network. At first, all individuals are either infected or healthy. Next, in discrete rounds, the disease spreads in the network from infected to healthy individuals such that a healthy individual gets infected if and only if a sufficient number of its direct neighbors are already infected. We represent the social network as a graph. Inspired by the real-world restrictions in the current epidemic, especially by social and physical distancing requirements, we restrict ourselves to networks that can be represented as geometric intersection graphs. We show that finding a minimal vertex set of initially infected individuals to spread the disease in the whole network is computationally hard, already on unit disk graphs. Hence, to provide some algorithmic results, we focus on simpler geometric graph classes, such as interval graphs and grid graphs.
Michal Dvořák, Dušan Knop, Šimon Schierreich
2023-07-13T13:55:20Z
http://arxiv.org/abs/2307.06976v4
# Establishing Herd Immunity is Hard Even in Simple Geometric Networks ###### Abstract We study the following model of disease spread in a social network. At first, all individuals are either _infected_ or _healthy_. Next, in discrete rounds, the disease spreads in the network from infected to healthy individuals such that a healthy individual gets infected if and only if a sufficient number of its direct neighbours are already infected. We represent the social network as a graph. Inspired by the real-world restrictions in the current epidemic, especially by social and physical distancing requirements, we restrict ourselves to networks that can be represented as geometric intersection graphs. We show that finding a minimal vertex set of initially infected individuals to spread the disease in the whole network is computationally hard, already on unit disk graphs. Hence, to provide some algorithmic results, we focus on simpler geometric graph classes, such as interval graphs and grid graphs. **Keywords:** disease spread, Target Set Selection, intersection graphs, computational complexity This is an extended and revised version of a preliminary conference report that was presented at WAW 2023 (Dvorak et al., 2023). ## 1 Introduction In this work, we study the following deterministic model of disease spread. We are given a social network represented as a simple, undirected graph \(G=(V,E)\), a threshold function \(t\colon V\to\mathbb{N}\) that associates each agent with her _immunity_ (or _threshold_), and a _budget_ \(k\in\mathbb{N}\). Our goal is to select a group \(S\subseteq V\), \(|S|\leq k\), of initially infected agents (a _target set_) such that all agents get infected by the following activation process: \[S_{0} =S,\] \[S_{i} =S_{i-1}\cup\{v\in V\mid t(v)\leq|N(v)\cap S_{i-1}|\}.\] In other words, the disease spreads in discrete rounds. A healthy agent \(v\) becomes infected if the number of neighbours already infected reaches the agent's immunity value \(t(v)\). We note that once an agent is infected, she remains in this state for the rest of the process. Dreyer and Roberts (2009) studied a similar model under the name Irreversible \(k\)-threshold Process. Unlike our setting, in their work, the immunity value is the same for all agents. Therefore, the presented model is more general and corresponds, in fact, to the Target Set Selection problem (TSS for short) where thresholds can be agent-specific. The Target Set Selection problem was introduced by Richardson and Domingos (2002) in the context of viral marketing on social networks. Kempe et al. (2015) later refined the problem in terms of thresholds, which is the model we follow in this work, and showed that the problem is NP-hard. The first attempts to tackle the complexity of the problem were aimed at restricting the threshold function. However, Chen (2009) showed that TSS remains NP-hard even if all thresholds are at most two, which extends the previous result of Dreyer and Roberts (2009) who showed that the problem is NP-hard even if all thresholds are bounded by a constant \(c\geq 3\). NP-hardness for majority thresholds (for every \(v\in V\) we have \(t(v)=\lceil\deg(v)/2\rceil\)) is due to Peleg (1996). It is easy to see that the TSS problem is solvable in polynomial time if the underlying graph has diameter one, that is, it is a complete graph (Nichterlein et al., 2013; Reddy et al., 2010).
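To make the activation process defined above concrete, here is a minimal simulation, a plain-Python sketch of our own (the graph and all names are illustrative). It iterates until a fixed point, which yields the same final infected set as the round-by-round definition because the process is monotone:

```python
def spreads_everywhere(neighbors, thresholds, seed):
    """neighbors: dict vertex -> set of neighbours; thresholds: dict vertex -> int."""
    infected = set(seed)
    changed = True
    while changed:  # iterate to the fixed point of the activation process
        changed = False
        for v in neighbors:
            if v not in infected and len(neighbors[v] & infected) >= thresholds[v]:
                infected.add(v)
                changed = True
    return infected == set(neighbors)

# A path a-b-c with unanimous thresholds t(v) = deg(v):
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
deg = {v: len(path[v]) for v in path}
print(spreads_everywhere(path, deg, {"b"}))  # True: a and c each see their only neighbour infected
print(spreads_everywhere(path, deg, {"a"}))  # False: b would need both a and c infected
```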
Chen (2009) showed that the problem remains polynomial-time solvable when the underlying graph is a tree. Later, Chiang et al. (2013) proposed linear-time algorithms for block-cactus graphs, chordal graphs with all thresholds at most two, and Hamming graphs with all thresholds equal to two. Bessy et al. (2019) showed that the TSS problem is solvable in polynomial time on interval graphs if all thresholds are bounded by a constant. On the other hand, the problem becomes NP-hard on graphs of diameter two (Nichterlein et al., 2013). The problem becomes solvable in polynomial time when the input graph is \(3\)-regular and all thresholds are equal to \(2\). This setting is in fact equivalent to the Feedback Vertex Set problem (Takaoka and Ueno, 2015), which is solvable in polynomial time on \(3\)-regular graphs (Ueno et al., 1988). The setting with thresholds equal to \(2\) was further examined by Kyncl et al. (2017). They extended the tractability result for TSS with thresholds equal to \(2\) when the input graph has degree at most \(3\) and showed NP-hardness when the input graph has maximum degree at most \(4\). Restriction of the underlying graph structure was further investigated by Ben-Zwi et al. (2011). They gave an algorithm running in \(n^{\mathcal{O}(\omega)}\) time for networks with \(n\) vertices and _tree-width_ bounded by \(\omega\), and showed that, under reasonable theoretical assumptions, there is no algorithm for TSS running in \(n^{o(\sqrt{\omega})}\) time. The parameterized complexity perspective, initiated by Ben-Zwi et al. (2011), was later used in multiple subsequent works (Chopin et al., 2014; Dvorak et al., 2022; Hartmann, 2018; Mathieson, 2010; Nichterlein et al., 2013). Finally, Cicalese et al. (2014) proposed a study of the TSS problem where the process must stabilize within a prescribed number of rounds. They gave a polynomial-time algorithm for graphs of bounded _clique-width_ and a linear-time algorithm for trees. Inspired by the actual restrictions in the current epidemic, especially by social and physical distancing requirements, we study the Target Set Selection problem restricted to instances where the underlying graph is a _(unit) disk graph_. Unit disk graphs were initially used as a natural model for the topology of ad-hoc wireless communication networks (Huson and Sen, 1995). For a given graph, it is NP-hard to recognize whether the graph is a unit disk graph (Breu and Kirkpatrick, 1998; Hlineny and Kratochvil, 2001; Kang and Muller, 2012). On the other hand, many computationally hard problems, such as Independent Set or Colouring, can be efficiently approximated for this graph class (Matsui, 2000). Clique can be solved even in polynomial time if the disk representation is given as part of the input (Clark et al., 1990). In our case, the disk representation models two different situations. In the first situation, the disk represents the distances that individuals must keep. In the second case, the disk represents the area in which the disease is spread by an infected individual. As the Target Set Selection problem is notoriously hard from both the exact-computation and the approximation points of view, it is natural to ask whether any positive result can be given if we restrict TSS to instances where the underlying graph is a unit disk graph or if we need to restrict ourselves to even simpler graph classes. ### Preventing Disease Spread In what we discussed above, our goal was to spread the disease throughout the network.
This goal corresponds to establishing herd immunity. However, herd immunity is not the only way to tackle the pandemic. For example, there may be a group of individuals who are very vulnerable to the disease, and the likelihood of them dying from the disease is very high. Or, in the case of a disease with very high mortality, our goal can be to minimize the number of infected individuals. The first case presented, where we have a group of agents that must be protected, is known as Group Identification (Dimitrov, 2011; Kasher and Rubinstein, 1997). Despite the fact that the problem is mostly studied in the context of opinion spread, we can easily utilize it to prevent disease spread. For us, the most significant results are the works of Yang and Dimitrov (2018) and Erdelyi et al. (2019), who study the control of agents to manipulate an outcome. One such control is agent deletion, which can be translated into the vaccination or quarantining of an agent. Deciding whether there are \(k\) agents whose deletion leads to protection of a given subset of agents is solvable in polynomial time. See also the work of Blazej et al. (2022) for more complicated settings of Group Identification. If we want to minimize the spread of the disease, we can employ the Firefighter Problem (Finbow and MacGillivray, 2009; Hartnell, 1995). Here, we are given a graph. Then, the fire breaks out on a set of vertices, and our goal is to protect as many vertices as possible. In each round of the process, we defend a selected vertex from being burnt, and the fire spreads to all undefended neighbours. Once the vertex is on fire or is defended, it remains so. The process stops when it stabilizes, that is, when there is no new burnt vertex in any round. Fomin et al. (2016) showed that the Firefighter Problem is NP-hard even on unit disk graphs, while it is solvable in polynomial time on interval graphs. The setting of preventing disease spread is thus largely resolved, and hence we do not assume any disease-spread prevention in the rest of this paper. ### Our Contribution In this paper, we show that the Target Set Selection problem is computationally hard even on very simple geometric graph classes. In particular, we show that TSS is NP-complete in the class of unit disk graphs even if the threshold function is bounded by a constant \(c\geq 2\), is equal to majority, or is unanimous. Hence, we focus on the study of grid graphs, which is a subclass of unit disk graphs. For grid graphs, we show that TSS is NP-complete for constant and majority thresholds, while it is polynomial-time solvable for unanimous thresholds. Note that our results for constant thresholds establish a clear dichotomy between tractable and intractable subclasses of intersection graph classes, as TSS is known to be solvable in polynomial time on interval graphs with constant thresholds (Bessy et al., 2019). Moreover, we show that our problem is solvable in polynomial time on interval graphs even if the threshold function is unanimous. As a byproduct of our theorems we also obtain the full complexity picture of TSS in the class of planar graphs. For an overview of our results, we refer the reader to Tab. 1. We note that all NP-hardness results hold even if the maximum degree of the input graph is at most \(4\). Lastly, we provide an NP-hardness proof for the general setting of TSS when \(t(v)\leq 2\) and \(\Delta G\leq 3\). ### Paper Organization The remainder of this paper is organized as follows.
In Section 2, we introduce all the definitions and notation used throughout the paper. In Section 3, we show the hardness and algorithmic results for Target Set Selection restricted to unanimous thresholds. Section 4 is dedicated to the variant where the maximum threshold value is bounded by a constant. First, we give a somewhat straightforward NP-hardness proof for thresholds at most \(3\). Later, we give a more involved proof showing NP-hardness even in a special case with thresholds of at most \(2\). In Section 5, we study a variant of the Target Set Selection problem with majority thresholds. In Section 6, we give some remarks on how Target Set Selection behaves on graphs with small maximum degree, and we conclude the paper with open problems and future research directions in Section 7. ## 2 Preliminaries For \(n\in\mathbb{N}\) we denote \([n]=\{1,\ldots,n\}\); in particular, \([0]=\emptyset\). For a set \(X\) and a constant \(c\in\mathbb{N}\), the symbol \(X^{\geq c}\) denotes the set of all \(d\)-tuples from \(X\) where \(d\geq c\). A _simple undirected graph_ is a pair \(G=(V,E)\), where \(V\) is a set of _vertices_, and \(E\subseteq\binom{V}{2}\) is a set of _edges_. Let \(u\) and \(v\) be two distinct vertices. If \(\{u,v\}\in E\), then we call \(u\) a _neighbour_ of \(v\) and vice versa. We denote the _open neighborhood_ of a vertex \(v\) by \(N(v)\), and \(|N(v)|=\deg(v)\) is the _degree_ of the vertex \(v\). The _closed neighborhood_ of a vertex \(v\) is \(N[v]=N(v)\cup\{v\}\) and for \(A\subseteq V\), the closed neighborhood of \(A\) is \(N[A]=\bigcup_{v\in A}N[v]\). A vertex of degree \(1\) is called a _leaf_. The _maximum degree_ of a graph \(G\) is denoted by \(\Delta G\). Let \(r\in\mathbb{N}\) be a constant. We say that a graph \(G\) is \(r\)-_regular_ if every vertex \(v\in V\) has degree exactly \(r\). A graph is _regular_ if it is \(r\)-regular for some constant \(r\in\mathbb{N}\). **Definition 1** (Unit disk graph).: A graph \(G=(V,E)\) with \(V=\{v_{1},\ldots,v_{n}\}\) is a _disk graph_ if there exists a collection \(\mathcal{D}=(D_{1},\ldots,D_{n})\) of \(n\) closed disks in the Euclidean plane such that \(\{v_{i},v_{j}\}\in E\) if and only if \(D_{i}\cap D_{j}\neq\emptyset\). If all disks \(D_{i}\in\mathcal{D}\) have the same diameter, we call the graph a _unit disk graph_. Let \(G\) and \(H\) be two graphs. The _Cartesian product_ of the graphs \(G\) and \(H\) is a graph \(G\square H\) such that \(V(G\square H)=V(G)\times V(H)\) and \(\{(u,u^{\prime}),(v,v^{\prime})\}\) is an edge if and only if either \(u=v\) and \(u^{\prime}\) is a neighbour of \(v^{\prime}\) in \(H\), or \(u^{\prime}=v^{\prime}\) and \(u\) is a neighbour of \(v\) in \(G\). **Definition 2** (Grid graph).: An \(n\times m\) _grid_ is the Cartesian product of the path graphs \(P_{n}\) and \(P_{m}\). A graph \(G\) is a _grid graph_ if and only if it is an induced subgraph of a grid. \begin{table} \begin{tabular}{l l l l l} \hline & constant & majority & unanimous & unrestricted \\ \hline **interval graphs** & \(\mathsf{P}\) (\(\dagger\)) & ? & \(\mathsf{P}\) (Thm. 8) & ? \\ **grid graphs** & \(\mathsf{NP}\)-c (Thm. 31) & \(\mathsf{NP}\)-c (Cor. 40) & \(\mathsf{P}\) (Thm. 6) & \(\mathsf{NP}\)-c (Thm. 31) \\ **unit disk graphs** & \(\mathsf{NP}\)-c (Cor. 33) & \(\mathsf{NP}\)-c (Cor. 41) & \(\mathsf{NP}\)-c (Thm. 5) & \(\mathsf{NP}\)-c (Thm. 5) \\ **planar graphs** & \(\mathsf{NP}\)-c (Thm. 21) & \(\mathsf{NP}\)-c (Thm. 38) & \(\mathsf{NP}\)-c (Thm. 7) & \(\mathsf{NP}\)-c (Thm. 7) \\ \hline \end{tabular} \end{table} Table 1: Overview of our results. The first row contains individual restrictions of the threshold function, and the first column contains assumed graph classes. In the table, "\(\mathsf{NP}\)-c" stands for "\(\mathsf{NP}\)-complete", "\(\mathsf{P}\)" stands for polynomial-time solvable cases, and "\(?\)" indicates an open question. The new results from this paper are marked with a reference to the appropriate statement; the result marked \(\dagger\) is from Bessy et al. (2019). Let \(G=(V,E)\) be a graph. We call a set \(C\subseteq V\) a _vertex cover_ of \(G\) if at least one endpoint of each edge is a member of \(C\). In the Vertex Cover (VC for short) problem, we are given a graph \(G\) and an integer \(k\in\mathbb{N}\), and our goal is to decide whether there is a vertex cover \(C\) of size at most \(k\). A set \(I\subseteq V\) is called an _independent set_ if for every pair of distinct vertices \(u,v\in I\) there is no edge connecting \(u\) and \(v\). In the Independent Set problem (IS for short), we are given a graph \(G\) and an integer \(k\in\mathbb{N}\), and our goal is to decide whether there is an independent set \(I\) of size at least \(k\). Some of our NP-hardness reductions come from a variant of the Sat problem. In this problem, we are given a propositional formula \(\varphi\) in conjunctive normal form (CNF) on the set \(X=\{x_{1},\ldots,x_{n}\}\) of _variables_. The set of clauses is denoted \(\mathcal{C}=\{C_{1},\ldots,C_{m}\}\). Our goal is to decide whether there is a truth assignment \(\pi\colon X\to\{0,1\}\) that satisfies \(\varphi\). In 3-Sat, all clauses are restricted to be of size at most \(3\). An _incidence graph_ for formula \(\varphi\) is a bipartite graph \(G_{\varphi}=(X\cup\mathcal{C},E)\) with an edge between \(x_{i}\in X\) and \(C_{j}\in\mathcal{C}\) if and only if the variable \(x_{i}\) occurs in the clause \(C_{j}\). A variant of Sat where \(G_{\varphi}\) is planar is called Planar Sat. Planar 3-Sat is a combination of the two settings mentioned above. In Restricted Planar \(3\)-Sat it is further assumed that each variable \(x_{i}\) occurs exactly \(3\) times: twice as a positive literal and once as a negative literal. ## 3 Unanimous Thresholds In this section, we study the special case of the Target Set Selection problem where all thresholds are equal to the degree of a vertex, that is, for every \(v\in V\) we have \(t(v)=\deg(v)\). Such thresholds are called _unanimous_. We strongly rely on the following easy-to-see and well-known equivalence between the Target Set Selection problem with unanimous thresholds and the Vertex Cover problem. A proof can be found, for example, in the work of Chen (2009), although for the sake of completeness, we provide our own proof of the lemma. **Lemma 3**.: _The Target Set Selection problem with unanimous thresholds is equivalent to the Vertex Cover problem._ Proof.: Let \((G,k)\) be an instance of the Vertex Cover problem. We construct an equivalent instance \((G^{\prime},t,k^{\prime})\) of the Target Set Selection problem as follows. We set \(G^{\prime}=G\), \(k^{\prime}=k\), and for every \(v\in V(G^{\prime})\) we set the threshold value equal to the degree of the vertex \(v\), that is, \(t(v)=\deg_{G}(v)\). Let \((G,k)\) be a _yes_-instance, and \(C\subseteq V(G)\) be a vertex cover of size at most \(k\). We claim that the set \(S=C\) is a solution of \((G^{\prime},t,k^{\prime})\). It is easy to see that \(|S|\leq k^{\prime}\).
Furthermore, in the first round of activation, all the vertices in \(V(G^{\prime})\setminus S\) become infected as all their neighbours are already infected. Therefore, \((G^{\prime},t,k^{\prime})\) is indeed a _yes_-instance. In the opposite direction, let \((G^{\prime},t,k^{\prime})\) be a _yes_-instance and \(S\subseteq V(G^{\prime})\) be a target set of size at most \(k^{\prime}\). For every vertex \(v\in V(G^{\prime})\setminus S\) we have \(N_{G^{\prime}}(v)\cap S=N_{G^{\prime}}(v)\), i.e., \(N_{G^{\prime}}(v)\subseteq S\). If not, take \(u\in N_{G^{\prime}}(v)\setminus S\); due to the unanimous thresholds, each of \(u\) and \(v\) could become infected only after the other one, so neither \(v\) nor \(u\) becomes infected by the activation process. Thus, \(V(G^{\prime})\setminus S\) is an independent set, and \(S\) is a vertex cover of \(G\) of size at most \(k\). It is not hard to see that TSS is indeed in the class NP. A valid target set is a valid NP certificate. **Observation 4**.: \(\textsc{TSS}\in\textsf{NP}\)_._ In fact, all NP-hard problems we deal with in this work are trivially in NP. We always state NP-completeness; however, we omit the part with NP containment because it is trivial. It is known that the Vertex Cover problem is NP-complete even on unit disk graphs (Clark et al., 1990). By Lemma 3 we have that the Vertex Cover problem is equivalent to the TSS problem with unanimous thresholds. Combining this with Observation 4 we obtain the following theorem. **Theorem 5**.: Target Set Selection _is NP-complete even if the underlying graph is a unit disk graph, and all thresholds are unanimous._ We note that the latter NP-hardness proof for unit disk graphs holds even if the underlying unit disk representation of the graph is given on the input. This means that the NP-hardness of the problem on unit disk graphs does not come from the NP-hardness of recognizing this class. The same holds for all other reductions presented in this paper. All our reductions are constructive. It follows from Theorem 5 that the general case of TSS on unit disk graphs with unrestricted threshold function is NP-complete too. As the Target Set Selection problem is, under reasonable complexity assumptions, computationally intractable on unit disk graphs, we focus on a subclass of unit disk graphs called grid graphs. For the class of grid graphs, it is known that the Vertex Cover problem is solvable in polynomial time (Clark et al., 1990). Combining this result with Lemma 3, we obtain the following theorem. **Theorem 6**.: Target Set Selection _can be solved in polynomial time if the underlying graph is a grid graph and all thresholds are unanimous._ While Target Set Selection is tractable on grid graphs, it becomes hard for planar graphs with maximum degree \(3\). This is again a consequence of the NP-hardness of Vertex Cover in this class of graphs. **Theorem 7**.: Target Set Selection _is NP-complete even if the underlying graph is planar with maximum degree \(3\) and all thresholds are unanimous._ Proof.: It is known that the Independent Set problem is NP-hard even on planar \(3\)-regular graphs. It follows that the same NP-hardness result holds for the Vertex Cover problem. By using Lemma 3 the theorem follows. Finally, Bessy et al. (2019) showed that TSS can be solved in polynomial time on interval graphs if all thresholds are bounded by a constant. We complement this result by showing that a set of initially infected agents can be found in linear time if all thresholds are unanimous.
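As a sanity check of the Lemma 3 equivalence underlying Theorems 5-7 (and the result stated next), the following brute-force sketch, our own plain-Python illustration on a small illustrative graph, confirms that the minimum target set under unanimous thresholds and the minimum vertex cover have the same size:

```python
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]          # a small illustrative graph
n = 4
nbrs = {v: {u for e in edges for u in e if v in e and u != v} for v in range(n)}

def is_target_set(seed):
    # Unanimous thresholds t(v) = deg(v): v activates once all neighbours are infected.
    infected, changed = set(seed), True
    while changed:
        changed = False
        for v in range(n):
            if v not in infected and nbrs[v] <= infected:
                infected.add(v)
                changed = True
    return len(infected) == n

def is_vertex_cover(c):
    return all(u in c or v in c for u, v in edges)

min_ts = min(k for k in range(n + 1)
             if any(is_target_set(set(s)) for s in combinations(range(n), k)))
min_vc = min(k for k in range(n + 1)
             if any(is_vertex_cover(set(s)) for s in combinations(range(n), k)))
print(min_ts, min_vc)   # both equal 2 for this graph, as Lemma 3 predicts
```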
**Theorem 8**.: Target Set Selection _can be solved in linear time if the underlying graph is an interval graph and all thresholds are unanimous._ Proof.: We proceed in the same way as in the proof of Theorem 5. By Lemma 3 we find that TSS is equivalent to the Vertex Cover problem, which can be solved in linear time on interval graphs (Farber, 1982). ## 4 Constant Thresholds The Target Set Selection problem seems to be intractable on unit disk graphs when the threshold function is unrestricted or unanimous. It is now natural to ask whether the problem remains NP-hard even under some natural restrictions of the threshold function. In this section, we show that TSS is \(\mathsf{NP}\)-complete when the underlying graph is a unit disk graph and all thresholds are bounded by a constant. Note that the case where all thresholds are at most \(1\) is trivial. It is sufficient (and necessary) to choose one vertex per connected component without a vertex of threshold \(0\) of the input graph. This can be accomplished in linear time. In the following we will employ reductions from problems involving planar graphs. To effectively apply such reductions, it will be essential to have some kind of 'nice' representation of the planar graph. One such representation useful for our purposes is the so-called _rectilinear embedding_. **Definition 9**.: Given a planar graph \(G=(V,E)\), a _rectilinear embedding (of \(G\))_ is a planar drawing of \(G\) such that vertices occupy integer coordinates, and all edges are made of (possibly several) line segments parallel to the coordinate axes. Formally, a rectilinear embedding of a planar graph \(G\) is a pair of mappings \((\mathcal{E}_{V},\mathcal{E}_{E})\), where \(\mathcal{E}_{V}\colon V\to\mathbb{Z}^{2}\) maps vertices to grid points and \(\mathcal{E}_{E}\) maps every edge to a tuple of grid points, and the following conditions must hold: 1. The mapping \(\mathcal{E}_{V}\) is injective. 2. For any edge \(e\in E\), the tuple \(\mathcal{E}_{E}(e)=(p_{1},\ldots,p_{g})\) induces a simple polygonal chain, and for all \(i\in[g-1]\) the points \(p_{i},p_{i+1}\) are adjacent grid points. More precisely, if \(p_{i}=(x,y)\) and \(p_{i+1}=(x^{\prime},y^{\prime})\), then either \(|x-x^{\prime}|=1\) and \(y=y^{\prime}\), or \(x=x^{\prime}\) and \(|y-y^{\prime}|=1\). 3. For any edge \(e=\{u,v\}\in E\), the points \(\mathcal{E}_{V}(u)\) and \(\mathcal{E}_{V}(v)\) are the endpoints of the polygonal chain induced by \(\mathcal{E}_{E}(e)\). 4. For any two distinct edges \(e,e^{\prime}\in E\), the simple polygonal chains induced by \(\mathcal{E}_{E}(e)\) and \(\mathcal{E}_{E}(e^{\prime})\) are disjoint except possibly at the endpoints. To simplify notation, we will use only one mapping \(\mathcal{E}\) to represent a rectilinear embedding. In other words, \(\mathcal{E}(v)=\mathcal{E}_{V}(v)\) and \(\mathcal{E}(e)=\mathcal{E}_{E}(e)\). Since \(\mathcal{E}_{V}\) is by definition injective, for a grid point \(p\) in its image, we let \(\mathcal{E}^{-1}(p)\) denote the vertex \(v\) such that \(\mathcal{E}(v)=p\). The _area_ of a rectilinear embedding \(\mathcal{E}\), denoted \(\mathrm{Area}(\mathcal{E})\), is the minimal area of an axes-parallel closed rectangle \(R\) such that the embedding is contained in \(R\). We utilize the following theorem of Valiant, which establishes a sufficient condition for the existence of a rectilinear embedding of a planar graph. **Theorem 10** (Valiant (1981)).: _Given a planar graph \(G\) with maximum degree \(4\), there exists a rectilinear embedding \(\mathcal{E}\) of \(G\) satisfying \(\mathrm{Area}(\mathcal{E})\leq\mathcal{O}(|V(G)|^{2})\). Moreover, \(\mathcal{E}\) can be computed in polynomial time with respect to the size of \(G\)._ ### Thresholds bounded by \(3\) We begin with an auxiliary result showing NP-hardness of the Independent Set problem in \(3\)-regular and \(4\)-regular unit disk graphs which can be of independent interest. **Theorem 11**.: Independent Set _is \(\mathsf{NP}\)-complete even if the underlying graph is an \(r\)-regular unit disk graph, where \(r\in\{3,4\}\)._ We divide the proof of Theorem 11 into two parts. First, we explain the construction of the reduction. In the second part, we explain how to represent the resulting graph as a unit disk graph. ### Construction The reduction is from the Independent Set problem on \(r\)-regular planar graphs. Let \((G,k)\) be an instance of the Independent Set problem where \(G=(V,E)\) is a planar \(r\)-regular graph.
First, subdivide each edge \(e=\{u,v\}\in E\) exactly \(6q_{e}\) times, creating a path \(ux_{1}^{e}x_{2}^{e}\ldots x_{6q_{e}}^{e}v\). The number \(q_{e}\), which depends on the edge \(e\), will be explained later. For the construction, it is only important that the number of subdivisions is a multiple of \(6\). Next, for all \(i\in[2q_{e}]\), replace every vertex \(x_{3i-1}^{e}\) with a clique \(K_{r-1}\) and connect all its neighbors to the clique. In other words, create \(r-2\) additional copies of the vertex \(x_{3i-1}^{e}\) and connect these copies into a complete graph (independently for each \(i\)). Let \(X_{e}\) denote the set of vertices created by subdividing the edge \(e\) (including the copies of all vertices \(x_{3i-1}^{e}\)) (see Fig. 1). Let \(G^{\prime}\) denote the resulting graph. Note that \(G^{\prime}\) is \(r\)-regular. To finish the construction, we set \(k^{\prime}=k+\sum_{e\in E}3q_{e}\). We now establish the equivalence of the instances \((G,k)\) and \((G^{\prime},k^{\prime})\). This is the content of Claims 12 and 13. **Claim 12**.: If \((G,k)\) is a _yes_-instance of Independent Set, then \((G^{\prime},k^{\prime})\) is a _yes_-instance of Independent Set. **Proof:** Assume that \((G,k)\) is a _yes_-instance and let \(I\) be an independent set in \(G\) of size at least \(k\). We build an independent set \(I^{\prime}\) in \(G^{\prime}\) of size at least \(k^{\prime}\). Let \(E(G)=\{e_{1},e_{2},\ldots,e_{m}\}\) be an enumeration of all edges of \(G\) in arbitrary order. We inductively build independent sets \(I_{0},I_{1},\ldots,I_{m}\) and we set \(I^{\prime}=I_{m}\). Set \(I_{0}=I\). Now let \(\ell\geq 1\) and assume that the set \(I_{\ell-1}\) is already built and is independent. We describe how to build \(I_{\ell}\). Let \(\{u,v\}=e_{\ell}\). Since \(I_{\ell-1}\) is, by assumption, independent, we have \(u\notin I_{\ell-1}\) or \(v\notin I_{\ell-1}\). We now distinguish two cases: **Case 1**: If \(u\notin I_{\ell-1}\), we set \(I_{\ell}=I_{\ell-1}\cup\{x_{2i-1}^{e_{\ell}}\mid i\in[3q_{e_{\ell}}]\}\). **Case 2**: If \(v\notin I_{\ell-1}\), we set \(I_{\ell}=I_{\ell-1}\cup\{x_{2i}^{e_{\ell}}\mid i\in[3q_{e_{\ell}}]\}\). It is straightforward to verify that for all \(\ell\in[m]\) the set \(I_{\ell}\) is independent. Since \(|I_{0}|=|I|\geq k\) and \(|I_{\ell}|=|I_{\ell-1}|+3q_{e_{\ell}}\) for all \(\ell\in[m]\), it indeed holds that \(|I^{\prime}|=|I_{m}|=|I_{0}|+\sum_{e\in E(G)}3q_{e}\geq k+\sum_{e\in E(G)}3q_{e}=k^{\prime}\). It follows that \((G^{\prime},k^{\prime})\) is a _yes_-instance. **Claim 13**.: If \((G^{\prime},k^{\prime})\) is a _yes_-instance of Independent Set, then \((G,k)\) is a _yes_-instance of Independent Set. Proof.: Assume that \((G^{\prime},k^{\prime})\) is a _yes_-instance and let \(I^{\prime}\) be an independent set in \(G^{\prime}\) of size at least \(k^{\prime}\). Let \(E(G)=\{e_{1},\ldots,e_{m}\}\) be an enumeration of all edges of \(G\) in arbitrary order. We inductively build independent sets \(I_{0},I_{1},\ldots,I_{m}\) and we set \(I=I_{m}\). Set \(I_{0}=I^{\prime}\). Now let \(\ell\geq 1\) and assume that the set \(I_{\ell-1}\) is already built and is independent. We describe how to build \(I_{\ell}\). Let \(\{u,v\}=e_{\ell}\). There are two cases to consider: **Case 1**: At least one of \(u\) and \(v\) is not in \(I_{\ell-1}\). In this case, since \(I_{\ell-1}\) is independent, we have \(|I_{\ell-1}\cap X_{e_{\ell}}|\leq 3q_{e_{\ell}}\) by the pigeonhole principle. We set \(I_{\ell}=I_{\ell-1}\setminus X_{e_{\ell}}\).
**Case 2**: Both \(u\) and \(v\) are in \(I_{\ell-1}\). In this case, \(|I_{\ell-1}\cap X_{e_{\ell}}|\leq 3q_{e_{\ell}}-1\) by the same argument as above. We set \(I_{\ell}=I_{\ell-1}\setminus(X_{e_{\ell}}\cup\{u\})\). The resulting set \(I=I_{m}\) is indeed independent. Let \(e=\{u,v\}\in E(G)\) be arbitrary. Notice that at the time of processing the edge \(e_{\ell}=e\) one of \(u,v\) was already missing in \(I_{\ell-1}\) (Case 1) or we explicitly removed \(u\) from \(I_{\ell-1}\) (Case 2). Note that \(|I_{\ell-1}|-|I_{\ell}|\leq 3q_{e_{\ell}}\) for all \(\ell\in[m]\). It follows that \(|I|=|I_{m}|\geq|I_{0}|-\sum_{e\in E(G)}3q_{e}=k^{\prime}-\sum_{e\in E(G)}3q_{e }=k\). Thus, \((G,k)\) is a _yes_-instance. We now turn our attention to how to represent the graph \(G^{\prime}\) as a unit disk graph. We also explain how to compute the constants \(q_{e}\) for each edge \(e\), and we show that they can be bounded by a polynomial in the size of \(G\). ### Unit disk representation We let \(d=\frac{1}{7}\) be the diameter of the disks in the representation. Since \(G\) is planar and in both cases (i.e., \(G\) is \(3\)- or \(4\)-regular) we have \(\Delta G\leq 4\), by Theorem 10 there is a rectilinear embedding \(\mathcal{E}\) of \(G\) of polynomial area and computable in polynomial time. We now describe how to represent the vertices of \(G^{\prime}\) with disks. First, the vertices \(v\in V(G^{\prime})\) corresponding to vertices of \(G\) will have their disk centered at the grid point \(\mathcal{E}(v)\). We now show how to construct the subdivisions of the edges. We proceed independently for each edge \(e\in E(G)\)(1). Let \(\mathcal{E}(e)=(p_{1},\ldots,p_{g})\). We place disks \(D_{2},\ldots,D_{g-1}\) centered at the points \(p_{2},\ldots,p_{g-1}\). Let \(D_{1}\) and \(D_{g}\) denote the disks corresponding to vertices \(\mathcal{E}^{-1}(p_{1})\) and \(\mathcal{E}^{-1}(p_{g})\), respectively. Our task is now to insert a certain number of disks between \(D_{i},D_{i+1}\) for all \(i\in[g-1]\) such that the total number of disks between \(D_{1}\) and \(D_{g}\) is a multiple of \(6\), that is, the total number of disks between \(D_{1}\) and \(D_{g}\) (excluding \(D_{1}\) and \(D_{g}\)) should be equal to \(6q_{e}\). Let \(w_{i}\) denote the number of disks inserted between \(D_{i},D_{i+1}\). We specify the numbers \(w_{i}\) later. The total number of disks between \(D_{1},\ldots,D_{g}\) is therefore given by \(y_{e}=g-2+\sum_{i=1}^{g-1}w_{i}\). Our aim is now to choose the numbers \(w_{i}\) such that \(y_{e}=6q_{e}\). Footnote 1: All variables introduced from this point should have additional superscript \(e\) to signify their dependence on the edge \(e\). Nonetheless, for the sake of readability, we omit it. To achieve this, we do the following. First, we learn how to insert \(\ell\in\{6,7,8,9\}\) disks between a single pair of adjacent disks. We prove this in the following lemma. Note that \(D_{i}\) and \(D_{i+1}\) are centered at neighboring grid points since, by definition, \(p_{i},p_{i+1}\) are neighboring grid points. We simplify the scenario and assume that \(D_{i}\) and \(D_{i+1}\) are centered at \(p_{i}=(0,0)\) and \(p_{i+1}=(1,0)\), respectively. It is not hard to generalize this idea to general points \(p_{i},p_{i+1}\). **Lemma 14**.: _Let \(L\) be the line segment with endpoints \((0,0),(1,0)\) and let \(\ell\in\{6,7,8,9\}\). 
There exist \(\ell\) disks \(E_{1},\ldots,E_{\ell}\) with diameter \(d=\frac{1}{7}\) and centers \(s_{1},\ldots,s_{\ell}\) all lying on \(L\) such that:_ * \(s_{1}=(d,0)\)_,_ * \(s_{\ell}=(1-d,0)\)_,_ * _any disk_ \(E_{j}\) _intersects exactly its neighbors_ \(E_{j-1}\) _and_ \(E_{j+1}\) _(if they exist)._ Proof.: We prove this by construction and specify the centers of the \(\ell\) disks. As all centers shall lie on the line \(L\), they are of the form \(s_{j}=(a_{j},0)\). For fixed \(\ell\) and \(j\in[\ell]\), the numbers \(a_{j}\) are given by the formula: \[a_{j}=\frac{5j+\ell-6}{7(\ell-1)}.\] It can be verified by a straightforward calculation that the properties i), ii) and iii) hold. To verify iii), it is enough to check that \(a_{j+1}-a_{j}\leq d\) and \(a_{j+2}-a_{j}>d\) for appropriate \(j\). Now we know how to insert \(\ell\in\{6,7,8,9\}\) disks between a pair of adjacent disks. It remains to choose the numbers \(w_{i}\), given \(g\geq 2\), such that \(y_{e}=g-2+\sum_{i=1}^{g-1}w_{i}\) is a multiple of \(6\). We prove that this is possible in the following lemma. **Lemma 15**.: _For any \(g\geq 2\) there exist \(g-1\) numbers \(w_{1},\ldots,w_{g-1}\in\{6,7,8,9\}\) such that_ \[g-2+\sum_{i=1}^{g-1}w_{i}=0\mod 6.\] Proof.: We divide the proof into six cases according to the residue class of \(g\) modulo \(6\). **Case 1**: If \(g=0\mod 6\), set \(w_{1}=8\) and \(w_{i}=6\) for \(i\in\{2,\ldots,g-1\}\). **Case 2**: If \(g=1\mod 6\), set \(w_{1}=7\) and \(w_{i}=6\) for \(i\in\{2,\ldots,g-1\}\). **Case 3**: If \(g=2\mod 6\), set \(w_{i}=6\) for all \(i\in[g-1]\). **Case 4**: If \(g=3\mod 6\), set \(w_{1}=9,w_{2}=8\) and \(w_{i}=6\) for all \(i\in\{3,\ldots,g-1\}\). **Case 5**: If \(g=4\mod 6\), set \(w_{1}=w_{2}=8\) and \(w_{i}=6\) for all \(i\in\{3,\ldots,g-1\}\). **Case 6**: If \(g=5\mod 6\), set \(w_{1}=9\) and \(w_{i}=6\) for all \(i\in\{2,\ldots,g-1\}\). It is a straightforward computation to verify that the chosen numbers \(w_{i}\) work in every case. Note that \(g\geq 3\) in Cases 4 and 5. The formula for \(q_{e}\) is thus given by \[q_{e}=\frac{1}{6}y_{e}=\frac{1}{6}\left(g-2+\sum_{i=1}^{g-1}w_{i}\right).\] The number \(g\) is given by the polygonal chain induced by \(\mathcal{E}(e)\) and Lemma 15 tells us how to choose the numbers \(w_{i}\). It remains to show how to represent the cliques that replace the vertices \(x_{3i-1}^{e}\) for \(i\in[2q_{e}]\). Simply create \(r-2\) additional copies of the corresponding disk in the representation. This completes the description of the unit disk representation for \(G^{\prime}\). Note that Lemma 14 ensures that the disks are placed in between the disks \(D_{i},D_{i+1}\) starting from \(s_{1}=(d,0)\) and ending at \(s_{\ell}=(1-d,0)\). This implies that for any edge \(e=\{u,v\}\) the disks adjacent to \(D_{1}\) and \(D_{g}\) will not intersect any other disks representing other subdivided edges (in particular those with endpoints \(u\) or \(v\)). We are now ready to finally prove Theorem 11. **Proof of Theorem 11:** Reduce from Independent Set on \(r\)-regular planar graphs. This setting is \(\mathsf{NP}\)-hard (Fleischner et al., 2010). Let \((G,k)\) be an instance of Independent Set where \(G\) is an \(r\)-regular planar graph. Construct an instance \((G^{\prime},k^{\prime})\) of Independent Set where \(G^{\prime}\) is an \(r\)-regular unit disk graph as described above. Combining Claims 12 and 13, the instances \((G,k)\) and \((G^{\prime},k^{\prime})\) are equivalent.
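Before bounding the size of the reduction, the placement formulas can be checked mechanically. This sketch (ours, assuming numpy; ranges illustrative) verifies the properties of Lemma 14 for all \(\ell\in\{6,7,8,9\}\) and the divisibility condition of Lemma 15 for many values of \(g\):

```python
import numpy as np

d = 1 / 7                                # disk diameter used in the construction
for ell in (6, 7, 8, 9):
    a = np.array([(5 * j + ell - 6) / (7 * (ell - 1)) for j in range(1, ell + 1)])
    assert np.isclose(a[0], d) and np.isclose(a[-1], 1 - d)   # properties i) and ii)
    assert np.all(np.diff(a) <= d + 1e-12)                    # consecutive disks intersect
    assert np.all(a[2:] - a[:-2] > d)                         # non-consecutive ones do not

def lemma15_w(g):
    # The case analysis of Lemma 15, keyed by g mod 6.
    first = {0: [8], 1: [7], 2: [], 3: [9, 8], 4: [8, 8], 5: [9]}[g % 6]
    return first + [6] * (g - 1 - len(first))

for g in range(2, 1000):
    w = lemma15_w(g)
    assert len(w) == g - 1 and all(x in (6, 7, 8, 9) for x in w)
    assert (g - 2 + sum(w)) % 6 == 0
print("Lemmas 14 and 15 check out on these ranges")
```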
It remains to argue that the reduction is polynomial. Computation of the rectilinear embedding \(\mathcal{E}\) for \(G\) can be done in polynomial time by Theorem 10. Computation of the numbers \(w_{i}\), \(g\) and \(q_{e}\) can also be done in polynomial time. What is left to show is that the numbers \(q_{e}\) are also polynomially bounded by the size of \(G\). By Theorem 10, the area of \(\mathcal{E}\) satisfies \(\mathrm{Area}(\mathcal{E})\leq\mathcal{O}(|V(G)|^{2})\). For any edge \(e\), the number of grid points contained in the polygonal chain induced by \(\mathcal{E}(e)\) is at most \(\mathrm{Area}(\mathcal{E})\). It follows that \(g\leq\mathcal{O}(|V(G)|^{2})\) for any edge \(e\). By construction we have \(w_{i}\leq 9\) for any \(g\geq 2\) and \(i\in[g-1]\). Thus, we have

\[q_{e}\leq\frac{1}{6}\left(g-2+9(g-1)\right)\leq\frac{1}{6}(10g-11)\leq\frac{10}{6}g\leq\mathcal{O}(|V(G)|^{2}).\]

The number of newly added vertices is at most \(\sum_{e\in E(G)}10q_{e}\), and since \(q_{e}\) is polynomial in \(|V(G)|\), the reduction is indeed polynomial.

Using Theorem 11, we can easily show that Target Set Selection remains \(\mathsf{NP}\)-complete even if the threshold function is bounded by a constant. We show a construction for the case where all thresholds are exactly \(3\); the result then holds for every constant \(c\geq 3\), since whenever thresholds are bounded by a constant \(c^{\prime}\), they are certainly bounded by any constant \(c\geq c^{\prime}\).

**Theorem 16**.: Target Set Selection _is \(\mathsf{NP}\)-complete even if the underlying graph is a unit disk graph and all thresholds are bounded by a constant \(c\geq 3\)._

**Proof:** By Theorem 11, Independent Set is \(\mathsf{NP}\)-complete when restricted to the class of \(3\)-regular unit disk graphs. The same \(\mathsf{NP}\)-hardness holds for Vertex Cover. We reduce from Vertex Cover restricted to such instances. Let \((G,k)\) be an instance of Vertex Cover where \(G\) is a \(3\)-regular unit disk graph. Set \(G^{\prime}=G,k^{\prime}=k\) and \(t(v)=3\) for each \(v\in V(G)\). Since \(G^{\prime}\) is \(3\)-regular, this is the case of unanimous thresholds. As noted in Lemma 3, instances of Target Set Selection with unanimous thresholds are equivalent to the Vertex Cover problem, so the theorem follows.

**Remark 17**.: The above result can be generalized for infinitely many constants \(r\). Given an instance \((G,k)\) of Independent Set where the underlying graph is \(r\)-regular, replacing each vertex by a clique \(K_{q}\) makes the graph \((q(r+1)-1)\)-regular. If we denote the new graph by \(G_{q}\), it is not hard to see that the instances \((G_{q},k)\) and \((G,k)\) of Independent Set are equivalent. Replacing vertices by cliques can be easily achieved in intersection graph classes, in particular, unit disk graphs.

**Corollary 18**.: _If Independent Set is NP-hard on the class of \(r\)-regular graphs, then Independent Set is NP-hard in the class of \((q(r+1)-1)\)-regular graphs for any positive integer \(q\)._

Combining Corollary 18 with Theorem 11 we obtain the following.

**Corollary 19**.: Independent Set _is NP-hard even if the underlying graph is an \(r\)-regular unit disk graph where \(r\) is a positive integer and \(r=-1\mod 4\) or \(r=-1\mod 5\)._

**Remark 20**.: We remark that this approach does not prove NP-hardness of Independent Set for _all_ constants \(r\geq 3\) (note that for \(r\leq 2\) the problem is in P). The first value of \(r\) unknown to us is \(r=5\).
Note that Independent Set is NP-hard on planar \(5\)-regular graphs (Akhoondian Amiri, 2021); however, Theorem 10 is not applicable since \(\Delta G=5\) in this case. By using Corollary 18 and the explicit proofs for \(r=3,4\), we obtained NP-hardness for infinitely many constants \(r\geq 3\). Unfortunately, this approach can never be used to show NP-hardness for _all_ constants \(r\). To see this, note that even if we explicitly show NP-hardness for any number of constants \(r_{1},\ldots,r_{k}\), we can pick a large enough prime number \(p\) satisfying \(p>r_{i}+1\) for all \(i\in[k]\). Observe that explicitly proving NP-hardness for \(r_{i}\)-regular graphs implies NP-hardness for \(r\)-regular graphs with \(r=-1\mod(r_{i}+1)\). Now, the NP-hardness for \(r=p-1\) is not implied by the NP-hardness for \(r_{1},\ldots,r_{k}\) together with Corollary 18, since otherwise \(p-1=-1\mod(r_{j}+1)\) for some \(j\in[k]\), which implies that \(r_{j}+1\) divides \(p\), contradicting the choice of \(p\).

### Thresholds bounded by \(2\)

Following the historical development of the study of TSS, it remains to determine the complexity of the problem if all thresholds are bounded by \(2\) and the underlying graph is a unit disk graph. We first establish the NP-hardness for planar graphs with \(\Delta G\leq 4\) and then utilize this reduction to show NP-hardness for the classes of grid graphs and unit disk graphs.

**Theorem 21**.: Target Set Selection _is NP-complete even when the underlying graph is planar with maximum degree \(\Delta G\leq 4\) and all thresholds are at most \(2\)._

Proof.: We reduce from the Restricted Planar \(3\)-Sat problem. Let \(\varphi\) be the input formula with variables \(x_{1},\ldots,x_{n}\) and clauses \(C_{1},\ldots,C_{m}\). The reduction consists of two types of gadgets:

**Variable gadget**: Given a variable \(x_{i}\), the variable gadget for \(x_{i}\) is the planar graph depicted in Fig. 2. We refer to this gadget as \(X_{i}\). The notable vertices of the gadget are \(T_{i},F_{i},t_{i},\) and \(f_{i}\). The idea is that the vertices \(T_{i}\) and \(F_{i}\) stand for the truth assignment of this particular variable, while the vertices \(t_{i}\) and \(f_{i}\) represent the positive and negative literals, respectively, and serve to connect the variable gadgets with the respective clause gadgets. Note that by the definition of Restricted Planar \(3\)-Sat we have \(\deg t_{i}=4\) and \(\deg f_{i}=2\).

**Clause gadget**: Given a clause \(C_{j}\), the clause gadget for \(C_{j}\) consists of a single vertex \(y_{j}\) which is connected to the corresponding literal vertices that are contained in the clause \(C_{j}\). We refer to this gadget as \(Y_{j}\).

We are now ready to construct an instance \((G,t,k)\) of Target Set Selection. Start with the incidence graph \(G_{\varphi}\). For every variable \(x_{i}\), we replace the vertex \(v_{x_{i}}\) by the variable gadget \(X_{i}\) and we identify each clause vertex \(v_{C_{j}}\) with the vertex \(y_{j}\), i.e., with the gadget \(Y_{j}\). Next, we connect all literal vertices of \(X_{i}\) with the corresponding clause gadgets. More precisely, we add an edge \(\{t_{i},y_{j}\}\) into \(E(G)\) if \(x_{i}\) occurs as a positive literal in the clause \(C_{j}\), and we add an edge \(\{f_{i},y_{j}\}\) to \(E(G)\) if \(x_{i}\) occurs as a negative literal in the clause \(C_{j}\). It remains to set the thresholds and \(k\). For the variable gadget, the gray vertices have threshold equal to \(2\), while the white vertices have threshold equal to \(1\).
In the clause gadget, we set \(t(y_{j})=1\). Finally, we set \(k=n\). Observe that \(G\) is a planar graph. To see this, note that we can start with a planar drawing of \(G_{\varphi}\) and replace the vertices of \(G_{\varphi}\) with the gadgets. Note that the only problem could be with the edges coming from the vertices \(t_{i}\) and \(f_{i}\). However, for a variable \(x_{i}\) that occurs in clauses \(C_{j_{1}},C_{j_{2}},C_{j_{3}}\), no matter what the order of the vertices \(y_{j_{1}},y_{j_{2}},y_{j_{3}}\) is (with respect to the planar drawing of \(G_{\varphi}\)), it is always possible to draw the edges from \(t_{i}\) and \(f_{i}\) to the corresponding clause gadgets in such a way that we do not create any crossings. For example, the edges coming from \(t_{i}\) can encircle the entire gadget in the drawing and leave the gadget to the right of the edge coming from \(f_{i}\). Moreover, we have \(\Delta G\leq 4\), and the thresholds are at most \(2\), as promised.

Before we show equivalence of the instances \((G,t,k)\) and \(\varphi\), we establish some basic properties of the variable gadget. Properties of the clause gadget are clear, since it is a single vertex with threshold \(1\).

Figure 2: Schematic representation of the variable gadget \(X_{i}\) for variable \(x_{i}\). The filled vertices have threshold \(2\), while the white vertices have threshold \(1\). Note also that the half-edges illustrate the fact that the gadget is connected with the rest of the graph only via \(t_{i}\) and \(f_{i}\).

**Lemma 22**.: _The gadget \(X_{i}\) has the following properties:_

1. _If the vertex_ \(T_{i}\) _is active, then after_ \(4\) _rounds, the vertices_ \(a_{i},b_{i},c_{i},d_{i},t_{i}\) _are necessarily active._
2. _If the vertex_ \(F_{i}\) _is active, then after_ \(4\) _rounds, all vertices on the_ \(f_{i}\)_-_\(F_{i}\)_-path inside_ \(X_{i}\) _are necessarily active._
3. _Even if the vertices in_ \(N[V(X_{i})]\setminus V(X_{i})\) _are active and no other vertex inside_ \(V(X_{i})\) _is active, then the vertices_ \(T_{i}\) _and_ \(F_{i}\) _will never be active._
4. _If the vertex_ \(F_{i}\) _is active and_ \(t_{i}\) _becomes active, then all vertices in_ \(X_{i}\) _are eventually active._
5. _If the vertex_ \(T_{i}\) _is active and_ \(f_{i}\) _becomes active, then all vertices in_ \(X_{i}\) _are eventually active._

**Proof:** The claims i) and ii) are clear from the construction of the gadget. For claim iii), observe that if the neighbors of \(t_{i}\) and \(f_{i}\) outside \(V(X_{i})\) are active, then \(t_{i}\) becomes active. However, \(t(a_{i})=t(T_{i})=2\), so the vertices \(a_{i}\) and \(T_{i}\) never become active. If \(f_{i}\) is also active, it only activates the neighbor of \(F_{i}\) with threshold \(1\), but never \(F_{i}\) itself, since \(t(F_{i})=2\) and \(T_{i}\) never becomes active.

For claim iv), suppose that \(F_{i}\) is active and \(t_{i}\) becomes active in round \(r\). Then, in round \(r+1\) the vertex \(T_{i}\) becomes active. In round \(r+2\), the vertex \(d_{i}\) becomes active. Next, in round \(r+3\), the vertices \(b_{i}\) and \(c_{i}\) become active. Finally, in round \(r+4\), the vertex \(a_{i}\) becomes active. Also, all vertices on the path from \(F_{i}\) to \(f_{i}\) become active during these rounds (if they are not already activated). Thus, all vertices in \(X_{i}\) are active.

For claim v), suppose that \(T_{i}\) is active and \(f_{i}\) becomes active in round \(r\).
After \(4\) rounds, the vertex \(F_{i}\) becomes active and, similarly to the proof of iv), the remaining vertices \(a_{i},b_{i},c_{i},d_{i}\) and \(t_{i}\) become active. Thus, all vertices in \(X_{i}\) are active.

We now establish the equivalence between the formula \(\varphi\) and the constructed instance \((G,t,k)\).

**Claim 23**.: If \(\varphi\) is satisfiable, then \((G,t,k)\) is a _yes_-instance of TSS.

Proof.: Let \(\varphi\) be satisfiable and let \(f\) be a satisfying assignment. We create a target set \(S\) as follows. For each variable \(x_{i}\) we add either \(T_{i}\) if \(f(x_{i})=1\) or \(F_{i}\) if \(f(x_{i})=0\). Observe that \(|S|=n=k\). It remains to show that \(S\) is a target set. To see this, observe that by the properties i) and ii) every \(T_{i}\) and \(F_{i}\) activates the corresponding \(t_{i}\) or \(f_{i}\) (respectively) in \(4\) rounds. In the fifth round, all clause vertices become active, since \(f\) is a satisfying assignment. In the sixth round, the vertices \(f_{i}\) or \(t_{i}\) become active. More precisely, if \(T_{i}\in S\), then \(f_{i}\) becomes active in the sixth round (and vice versa, if \(F_{i}\in S\), then \(t_{i}\) becomes active in the sixth round). By the properties iv) and v), the remaining vertices of all the variable gadgets become active. Thus, \(S\) is a target set. It follows that \((G,t,k)\) is a _yes_-instance.

**Claim 24**.: If \((G,t,k)\) is a _yes_-instance of TSS, then the formula \(\varphi\) is satisfiable.

Proof.: Suppose that \((G,t,k)\) is a _yes_-instance of Target Set Selection and let \(S\subseteq V(G)\) be a target set for \(G\) of size at most \(k\). First, we make several claims about the structure of \(S\).

**Claim 25**.: For every variable gadget \(X_{i}\) we have \(S\cap V(X_{i})\neq\emptyset\).

Proof.: Suppose otherwise, i.e., let \(X_{i}\) be a variable gadget such that \(S\cap V(X_{i})=\emptyset\). Note that by property iii) of the variable gadget, the vertices \(T_{i}\) and \(F_{i}\) never become active even if the vertices in \(N[V(X_{i})]\setminus V(X_{i})\) are active. This contradicts the assumption that \(S\) is a target set.

By Claim 25, \(S\) must contain at least one vertex from each variable gadget. Since \(k=n\), there is at most one vertex of each gadget \(X_{i}\) inside \(S\). Putting this together, we have the following claim.

**Claim 26**.: \(\forall i\in[n]:|S\cap V(X_{i})|=1\).

Let \(u_{i}\in S\cap V(X_{i})\) be the unique vertex for the \(i\)-th variable gadget. We now argue that we can assume, without loss of generality, that \(u_{i}\in\{F_{i},T_{i}\}\).

**Claim 27**.: There exists a target set \(S^{\prime}\) satisfying \(\forall i\in[n]:S^{\prime}\cap\{T_{i},F_{i}\}\neq\emptyset\) and \(|S^{\prime}|=|S|\).

Proof.: Process the variable gadgets independently one by one. Formally, start with \(S^{0}=S\) and inductively build the sets \(S^{i}\) for \(i\in[n]\) by processing the gadgets. We let \(S^{\prime}=S^{n}\). Let \(i\geq 1\) and consider the gadget \(X_{i}\). If \(S^{i-1}\cap\{T_{i},F_{i}\}\neq\emptyset\), there is nothing to do, i.e., we set \(S^{i}=S^{i-1}\). Otherwise, observe that \(S\cap V(X_{i})=S^{i-1}\cap V(X_{i})\). Recall that \(u_{i}\) was the unique vertex in \(S\cap V(X_{i})\). We distinguish two cases.

**Case 1**: The vertex \(u_{i}\) lies on the \(f_{i}\)-\(F_{i}\)-path in \(X_{i}\). Note that we can replace \(u_{i}\) by \(F_{i}\), and this does not change the fact that \(u_{i}\) eventually becomes active, by property ii).
I.e., in this case, we set \(S^{i}=S^{i-1}\setminus\{u_{i}\}\cup\{F_{i}\}\).

**Case 2**: The vertex \(u_{i}\) satisfies \(u_{i}\in\{a_{i},b_{i},c_{i},d_{i},t_{i}\}\). Note that we can replace \(u_{i}\) by \(T_{i}\), and this does not change the fact that \(u_{i}\) eventually becomes active, by property i). I.e., in this case, we set \(S^{i}=S^{i-1}\setminus\{u_{i}\}\cup\{T_{i}\}\). This finishes the proof.

By Claim 27 we can assume, without loss of generality, \(u_{i}\in\{F_{i},T_{i}\}\) for all \(i\in[n]\). We proceed to construct a satisfying assignment \(f\) for \(\varphi\) in the obvious way. For a variable \(x_{i}\) we set \(f(x_{i})=0\) if \(u_{i}=F_{i}\), and \(f(x_{i})=1\) otherwise (i.e., if \(u_{i}=T_{i}\)). The only thing left is to show that \(f\) is indeed a satisfying assignment for \(\varphi\).

**Claim 28**.: \(f\) is a satisfying assignment for \(\varphi\).

Proof.: For the sake of contradiction, suppose that \(f\) is not a satisfying assignment. Thus, there is a clause \(C_{j}\) not satisfied by \(f\). Without loss of generality, assume that \(|C_{j}|=2\) and that \(C_{j}\) contains one positive and one negative literal, say \(C_{j}=\neg x_{1}\lor x_{2}\); any other case can be proven analogously. By assumption, we have \(f(x_{1})=1\) and \(f(x_{2})=0\). Note that \(y_{j}\notin S\), because otherwise there would be a variable gadget \(X_{i}\) with \(S\cap V(X_{i})=\emptyset\), which is impossible by Claim 25. Since \(S\) is a target set and \(y_{j}\notin S\), there must be a round \(r\) in which one of the neighbors of \(y_{j}\) becomes active. We show that this is impossible. Recall that \(N(y_{j})=\{f_{1},t_{2}\}\). We examine the variable gadgets \(X_{1}\) and \(X_{2}\). By definition of \(f\) we have \(S\cap V(X_{1})=\{T_{1}\}\). Observe that in order to make the vertex \(f_{1}\) active, it is necessary to have at least one vertex from the path from \(f_{1}\) to \(F_{1}\) in the target set \(S\). This implies that \(|V(X_{1})\cap S|\geq 2\), which contradicts Claim 26. Thus, \(f_{1}\) is never active. Analogously, we have \(S\cap V(X_{2})=\{F_{2}\}\). Observe that in order to have \(t_{2}\) active, since \(t(t_{2})=2\) and one edge from \(t_{2}\) outside \(V(X_{2})\) leads to \(y_{j}\), we need at least one of \(\{a_{2},T_{2}\}\) to be active. Observe that this implies that one of \(a_{2},b_{2},c_{2},d_{2},T_{2},t_{2}\) must be in \(S\), otherwise \(t_{2}\) is never active. However, this implies that \(|S\cap V(X_{2})|\geq 2\), which again contradicts Claim 26. Thus, \(t_{2}\) is never active either. Putting this together, we observe that \(y_{j}\) never becomes active, which contradicts the assumption that \(S\) was a target set. This finishes the proof of Claim 24.

To conclude the proof of Theorem 21, it remains to combine Claims 23 and 24 and notice that the reduction is indeed polynomial, since \(G\) has exactly \(m+11n\) vertices.

### Grid graphs and Unit Disk Graphs

In the previous section, we showed \(\mathsf{NP}\)-hardness of Target Set Selection when the underlying graph is restricted to be planar with maximum degree at most \(4\) and the thresholds are at most \(2\). We utilize this result to show \(\mathsf{NP}\)-hardness in the same setting for the class of grid graphs. Let us begin with a few observations about how graph subdivisions affect target sets.
**Observation 29**.: _Let \(G=(V,E)\) be a graph and \(t\colon V\to\mathbb{N}\) a threshold function, \(S\subseteq V\) a target set, and let \(v\in S\) be a vertex with \(t(v)\leq 1\) and \(\deg v\geq 1\). Then for any \(u\in N(v)\), the set \(S\setminus\{v\}\cup\{u\}\) is also a target set._

**Observation 30**.: _Let \(G=(V,E)\) be a graph and \(t\colon V\to\mathbb{N}\) a threshold function, and let \(e\in E\). Let \(G^{\prime}\) be a graph that results from \(G\) by subdividing the edge \(e\) once, creating a new vertex \(v^{\prime}\notin V\). Let \(t^{\prime}\colon V(G^{\prime})\to\mathbb{N}\) be defined by \(t^{\prime}(v^{\prime})=1\) and \(t^{\prime}(v)=t(v)\) for \(v\neq v^{\prime}\). Then the following holds:_

1. _If_ \(S\) _is a target set for_ \(G\) _with respect to_ \(t\)_, then_ \(S\) _is also a target set for_ \(G^{\prime}\) _with respect to_ \(t^{\prime}\)_._
2. _If_ \(S^{\prime}\) _is a target set for_ \(G^{\prime}\) _with respect to_ \(t^{\prime}\)_, then there exists a target set_ \(S\) _for_ \(G\) _with respect to_ \(t\) _and_ \(|S|=|S^{\prime}|\)_._

**Theorem 31**.: Target Set Selection _is \(\mathsf{NP}\)-complete even if the underlying graph is a grid graph and all thresholds are at most \(2\)._

Proof.: We reduce from Target Set Selection on planar graphs with maximum degree \(4\) and thresholds at most \(2\). This setting is \(\mathsf{NP}\)-hard by Theorem 21. Let \((G,t,k)\) be an instance of TSS where \(G\) is planar and \(\Delta G\leq 4\). By Theorem 10 there is a rectilinear embedding of \(G\) of polynomial area and computable in polynomial time. Fix one such embedding and denote it by \(\mathcal{E}\). We now modify the graph \(G\) as follows. For an edge \(e\in E(G)\), let \(\mathcal{E}(e)=(p_{1},\ldots,p_{g})\). We subdivide the edge \(e\) exactly \(g-2\) times (see Fig. 3). Note that the case \(g=2\) vacuously corresponds to no subdivision. After this step, the graph is a (not necessarily induced) subgraph of a grid. To make it induced, we further simultaneously subdivide all edges exactly once (see Fig. 4). After this step, the resulting graph is indeed an induced subgraph of a grid (i.e., a grid graph). We set the thresholds of all newly created vertices to \(1\). Let \(G^{\prime}\) denote the resulting graph, \(t^{\prime}\colon V(G^{\prime})\to\mathbb{N}\) the new threshold function, and set \(k^{\prime}=k\).

**Claim 32**.: \((G,t,k)\) is a _yes_-instance of Target Set Selection if and only if \((G^{\prime},t^{\prime},k^{\prime})\) is a _yes_-instance of Target Set Selection.

Proof.: Let \((G,t,k)\) be a _yes_-instance and let \(S\subseteq V(G)\) be a target set of size at most \(k\). Inductively, for each subdivision, apply i) from Observation 30. It follows that \(S\) is also a target set with respect to \(t^{\prime}\) and is of size at most \(k=k^{\prime}\), thus \((G^{\prime},t^{\prime},k^{\prime})\) is a _yes_-instance.

On the other hand, let \((G^{\prime},t^{\prime},k^{\prime})\) be a _yes_-instance and let \(S^{\prime}\subseteq V(G^{\prime})\) be a target set of size at most \(k^{\prime}\). Inductively, for each subdivision, apply ii) from Observation 30. Observe that in each step we get a target set \(S\) with the same size. It follows that there is a target set \(S\) with respect to \(t\) of size at most \(k^{\prime}=k\), thus \((G,t,k)\) is a _yes_-instance.

To finish the proof, we notice that the rectilinear embedding can be computed in polynomial time by Theorem 10 and its area is at most \(\mathcal{O}(|V|^{2})\).
It follows that in both steps of the construction, we only added at most \(\mathcal{O}(|V|^{2})\) many new vertices, and thus the size of \(G^{\prime}\) is at most polynomial in the size of \(G\). This implies that the reduction is polynomial. The theorem follows.

Figure 3: Transformation of a planar graph with maximum degree \(4\) into a subgraph of a grid by subdividing edges at the internal points of the polygonal chains. Filled vertices correspond to the vertices of the original graph and the white ones are the newly created vertices.

Figure 4: Transformation of a graph that is a (not necessarily induced) subgraph of a grid into a graph that is an induced subgraph of a grid (i.e., a grid graph) by subdividing all edges exactly once. Filled vertices correspond to the vertices of the original graph and the white ones are the newly created vertices.

As the class of grid graphs is a subclass of the unit disk graphs, we obtain NP-hardness for unit disk graphs as a corollary of Theorem 31.

**Corollary 33**.: Target Set Selection _is NP-complete even when the underlying graph is a unit disk graph and all thresholds are at most \(2\)._

## 5 Majority Thresholds

The last natural restriction of the threshold function, which is widely studied in the literature, is the case of majority thresholds, that is, for every \(v\in V\) we have \(t(v)=\lceil\deg(v)/2\rceil\). Before delving into the specific graph classes discussed in this work, we first examine how the general case (i.e., when the underlying graph is unrestricted) is proven to be hard. The first proof of NP-hardness in this setting is due to Peleg (1996). The proof provided here is inspired by the proof of a related result concerning the inapproximability of Target Set Selection given by Chen (2009). The idea is as follows. If the threshold of a vertex is not at majority, then it is either larger or smaller. In the first case, it is enough to increase its degree by appending dummy leaf vertices to it. Whenever this vertex gets activated, it also activates the leaves for free. On the other hand, when a vertex has a small threshold (i.e., lower than majority), we need to increase it. We cannot safely decrease the degree of the vertex, so we make use of a gadget that supplies the vertex with sufficiently many active neighbors _for free_. This is achieved by the _cherry gadget_. A cherry gadget is a path on three vertices \(g^{\ell},g^{m},g^{r}\) (see Fig. 5).

**Theorem 34**.: Target Set Selection _remains NP-complete under the majority threshold setting._

Proof.: We know that Target Set Selection is NP-hard when the threshold function is unrestricted. We reduce from TSS with unrestricted thresholds. Let \((G,t,k)\) be an instance of TSS. We create a new instance \((G^{\prime},t^{\prime},k^{\prime})\) as follows. For each vertex \(v\) with \(t(v)\neq\left\lceil\frac{\deg_{G}v}{2}\right\rceil\), we perform the following:

**Case 1**: If \(t(v)>\left\lceil\frac{\deg_{G}(v)}{2}\right\rceil\), we add \(2t(v)-\deg_{G}(v)\) new vertices with threshold \(1\) adjacent to \(v\) and set \(t^{\prime}(v)=\left\lceil\frac{\deg_{G^{\prime}}(v)}{2}\right\rceil=t(v)\).

**Case 2**: If \(t(v)<\left\lceil\frac{\deg_{G}(v)}{2}\right\rceil\), we add \(\deg_{G}(v)-2t(v)\) cherry gadgets and attach them to \(v\) as depicted in Fig. 5. We set the threshold of the vertices in the gadget to be at majority, that is, \(t^{\prime}(g^{\ell})=t^{\prime}(g^{r})=1\) and \(t^{\prime}(g^{m})=2\).
We increase the threshold of \(v\) by \(\deg_{G}(v)-2t(v)\), i.e., we set \(t^{\prime}(v)=t(v)+\deg_{G}(v)-2t(v)=\left\lceil\frac{\deg_{G^{\prime}}(v)}{2}\right\rceil\).

Let \(V_{1}\subseteq V(G^{\prime})\) denote the vertices added in Case 1 of the construction. Let \(\alpha\) denote the number of cherries added in the construction, and let \(g_{i}^{\ell},g_{i}^{r},g_{i}^{m}\) be the three vertices of the \(i\)-th cherry added. Finally, set \(k^{\prime}=k+\alpha\). We now establish the equivalence of the instances \((G,t,k)\) and \((G^{\prime},t^{\prime},k^{\prime})\).

**Claim 35**.: If \((G,t,k)\) is a _yes_-instance of Target Set Selection, then \((G^{\prime},t^{\prime},k^{\prime})\) is a _yes_-instance of Target Set Selection.

Proof.: Let \((G,t,k)\) be a _yes_-instance and let \(S\subseteq V(G)\) be a solution with \(|S|\leq k\). We claim that \(S^{\prime}=S\cup\{g_{i}^{m}\mid i\in[\alpha]\}\) is a solution for \((G^{\prime},t^{\prime},k^{\prime})\). Certainly, \(|S^{\prime}|\leq k+\alpha=k^{\prime}\). First, we check that all original vertices get activated. The only vertices with their threshold values changed were the vertices \(v\in V(G)\) for which \(t(v)<\left\lceil\frac{\deg_{G}(v)}{2}\right\rceil\). We attached exactly \(\deg_{G}(v)-2t(v)\) cherries to these vertices and increased their threshold by exactly \(\deg_{G}(v)-2t(v)\). However, the cherries will become active because \(g_{i}^{m}\in S^{\prime}\), and only \(t^{\prime}(v)-(\deg_{G}(v)-2t(v))=t(v)\) more neighbors of \(v\) need to be active for \(v\) to be active. But that corresponds to the original activation process arising from \(S\) in \(G\). It follows that \(S^{\prime}\) is a target set for \(G^{\prime}\) with respect to \(t^{\prime}\), thus \((G^{\prime},t^{\prime},k^{\prime})\) is a _yes_-instance.

**Claim 36**.: If \((G^{\prime},t^{\prime},k^{\prime})\) is a _yes_-instance of Target Set Selection, then \((G,t,k)\) is a _yes_-instance of Target Set Selection.

Proof.: Let \((G^{\prime},t^{\prime},k^{\prime})\) be a _yes_-instance and \(S^{\prime}\subseteq V(G^{\prime})\) a solution with \(|S^{\prime}|\leq k^{\prime}\).

**Claim 37**.: For each \(i\in[\alpha]\) we have \(S^{\prime}\cap\{g_{i}^{\ell},g_{i}^{m},g_{i}^{r}\}\neq\emptyset\).

**Proof:** Suppose, for the sake of contradiction, that there is an index \(i\in[\alpha]\) such that \(S^{\prime}\cap\{g_{i}^{\ell},g_{i}^{m},g_{i}^{r}\}=\emptyset\). Observe that the vertex \(g_{i}^{m}\) has exactly one neighbor outside the cherry gadget. It follows that it will never become active. This contradicts the fact that \(S^{\prime}\) is a target set.

We may further assume that \(S^{\prime}\) contains no vertices \(v\) with \(\deg v\geq 1\) and \(t(v)\leq 1\), by Observation 29. In particular, we can assume that \(S^{\prime}\cap V_{1}=\emptyset\). Denote the set of vertices of all cherries by \(V^{\prime}\). That is, \(V^{\prime}=\bigcup_{i=1}^{\alpha}\{g_{i}^{\ell},g_{i}^{m},g_{i}^{r}\}\), and let \(S=S^{\prime}\setminus V^{\prime}\). The task is now to show that \(S\) is a valid solution to \((G,t,k)\). Since we assumed that \(S^{\prime}\) does not contain vertices from \(V_{1}\), and we removed all vertices from the cherries, \(S\) contains only vertices of \(G\). Now, we argue that \(|S|\leq k\). By Claim 37, the set \(S^{\prime}\) contains at least one vertex from each cherry and since the cherries are pairwise vertex-disjoint, we have \(|S^{\prime}\cap V^{\prime}|\geq\alpha\). It follows that \(|S|=|S^{\prime}\setminus V^{\prime}|\leq k^{\prime}-\alpha=k\).
It remains to prove that \(S\) is a target set for \(G\) with respect to \(t\). Similarly to the proof of the opposite direction, the only interesting vertices are those for which \(t(v)<\left\lceil\frac{\deg_{G}(v)}{2}\right\rceil\). We decreased their threshold by \(\deg_{G}(v)-2t(v)\), but that is also the number of neighbors that \(v\) loses when passing from \(G^{\prime}\) back to \(G\). Thus, the activation of these vertices remains unchanged.

To finish the proof of Theorem 34 it remains to combine Claims 35 and 36 and notice that the reduction is indeed polynomial, since we added at most \(3\deg_{G}(v)\leq 3|V(G)|\) vertices for each vertex \(v\in V(G)\), i.e., we added at most \(3|V(G)|^{2}\) new vertices.

Figure 5: The cherry gadget (on the left). Connection of two cherry gadgets to a vertex \(v\in V(G)\) with original degree \(\deg_{G}(v)=4\) and original threshold \(t(v)=1\). The new threshold of \(v\) is \(t^{\prime}(v)=t(v)+\deg_{G}(v)-2t(v)=3\) and \(\deg_{G^{\prime}}(v)=6\), thus it is at majority. The half-edges going from \(v\) represent the connection of \(v\) to the rest of \(G\).

**Remark.** We remark that the cherry gadgets were not strictly necessary here. Instead, we could have started the reduction with a hard instance where for all vertices \(v\) we have \(t(v)>\left\lceil\frac{\deg_{G}(v)}{2}\right\rceil\). For example, in the proof of Theorem 7 the underlying graph in the hard instance of Target Set Selection is \(3\)-regular, and the thresholds are exactly \(3\). This means that only the first case from the proof of Theorem 34 would apply, and the proof would be much simpler. Although the cherry gadgets were not necessary there, we now use them to show NP-hardness of Target Set Selection under the majority setting for our desired graph classes. We start with the planar graphs.

**Theorem 38**.: Target Set Selection _is NP-complete under the majority threshold setting even when the underlying graph is planar with maximum degree \(\Delta G\leq 4\)._

**Proof:** The proof combines ideas from the proofs of Theorem 21 and Theorem 34. We reduce from the Restricted Planar \(3\)-Sat problem in a similar way as in the proof of Theorem 21, and we fix the thresholds
If this is not the case, i.e., \(\deg y_{j}=3\), we attach exactly one cherry to \(y_{j}\) (and increase the threshold of \(y_{j}\) by \(1\)). The modified clause gadget is depicted in Fig. 7. The placement of the variable and clause gadgets and connections between them remains the same as in proof of Theorem 21. Let \(\beta\) denote the number of clauses that contain exactly \(3\) literals (i.e., the number of cherries attached to clause gadgets). Total number of cherries attached is \(\alpha=n+\beta\). We set \(k=n+\alpha=2n+\beta\). Let \(G\) denote the constructed graph and \(t\) the threshold function. It is not hard to see that \(G\) is still planar, \(t(v)=\left\lceil\frac{\deg_{G}(v)}{2}\right\rceil\) for all \(v\in V(G)\) and \(\Delta G\leq 4\), as promised. **Claim 39**.: The formula \(\varphi\) is satisfiable if and only if \((G,t,k)\) is a yes-instance of Target Set Selection. Proof.: This can be shown by combining Claims 23 and 24 and Claims 35 and 36. More precisely, we already know by Claims 23 and 24 that the formula \(\varphi\) is satisfiable if and only if the originally constructed instance of Target Set Selection in the proof of Theorem 21 was a yes-instance. Let \((G_{\mathrm{old}},t_{\mathrm{old}},k_{\mathrm{old}})\) denote the constructed instance from the proof of Theorem 21. Now, since the reduction from \((G_{\mathrm{old}},t_{\mathrm{old}},k_{\mathrm{old}})\) to \((G,t,k)\) is essentially the same as in proof of Theorem 34, we get by Claims 23 and 24 that \((G,t,k)\) is a yes-instance if and only if \((G_{\mathrm{old}},t_{\mathrm{old}},k_{\mathrm{old}})\) is a yes-instance. The claim follows by combining these two equivalences. Notice that this reduction is a composition of two polynomial reductions, and hence it is also a polynomial reduction. It is now straightforward to prove the NP-hardness for the majority setting in the remaining graph classes. That is, grid graphs and unit disk graphs. We employ the same idea as in the proof of Theorem 31. **Corollary 40**.: Target Set Selection _is_ NP_-complete under the majority threshold setting even if the underlying graph is a grid graph._ Proof.: Apply the same reduction as in the proof of Theorem 31, but start from a planar instance with majority thresholds and \(\Delta G\leq 4\) which is NP-hard by Theorem 38. Observe that a vertex created by subdividing an edge has degree \(2\) and all other degrees are unchanged. Notice that since the thresholds of the vertices created by the subdivision is \(1\), the new threshold function is indeed majority. As the class of grid graphs is a subclass of unit disk graphs, we also obtain NP-hardness under the majority setting in unit disk graphs. **Corollary 41**.: Target Set Selection _is NP-complete under the majority threshold setting even if the underlying graph is a unit disk graph._ ## 6 Bounded-degree graphs In previous chapters, we established NP-hardness of Target Set Selection in all commonly studied restrictions of the threshold function - constant, unanimous, and majority in the classes of unit disk and planar graphs. Our proofs provided NP-hardness not for concrete graph classes but also for general graphs with very small degree. For example, Theorem 21 shows that Target Set Selection is NP-hard when the underlying graph \(G\) has maximum degree \(\Delta G\leq 4\) and thresholds are at most \(2\). Theorem 7 shows that when \(\Delta G\leq 3\) and thresholds are at most \(3\) (or exactly \(3\)), the problem is still NP-hard. 
Note that Target Set Selection is in P when thresholds are exactly \(2\) and \(\Delta G\leq 3\) by the result of Kyncl et al. (2017). They also show NP-hardness for \(t(v)=2\) and \(\Delta G\leq 4\). By a slight modification of their reduction, we can show NP-hardness for \(t(v)\leq 2\) and \(\Delta G\leq 3\) in the general case. We start with a simple observation that we can always upper-bound the threshold of a vertex by its degree.

**Lemma 42**.: _Let \((G,t,k)\) be an instance of Target Set Selection. Then there is an equivalent instance \((G^{\prime},t^{\prime},k^{\prime})\) with \(t^{\prime}(v)\leq\deg_{G^{\prime}}(v)\) for all \(v\in V(G^{\prime})\)._

Proof.: If \(v\) is a vertex with threshold \(t(v)>\deg v\), then it must be included in any target set. We thus set \(G^{\prime}=G-v\), decrease the threshold value of all neighbors of \(v\) by \(1\) (if not already at zero), and set \(k^{\prime}=k-1\). Certainly the new instance is equivalent to \((G,t,k)\). Repeat this step until there are no vertices with threshold \(t(v)>\deg v\).

Figure 6: Schematic representation of the variable gadget \(X_{i}\) for a variable \(x_{i}\) in the case of majority thresholds. The gray vertices have threshold \(2\), while the white vertices have threshold \(1\) (cf. Fig. 2).

Figure 7: Representation of the clause gadget \(Y_{j}\) for a clause \(C_{j}\) containing exactly \(3\) literals in the case of majority thresholds. Filled vertices have threshold \(2\), empty vertices have threshold \(1\). The three half-edges illustrate the fact that the gadget is connected with the rest of the graph only via \(y_{j}\).

**Theorem 43**.: Target Set Selection _is NP-hard even when the underlying graph has maximum degree \(\Delta G\leq 3\) and thresholds are at most \(2\)._

Proof.: We utilize the reduction that Kyncl et al. (2017) used to show the NP-hardness of Irreversible \(2\)-Conversion Set in graphs with maximum degree \(4\). Their problem exactly corresponds to Target Set Selection with the thresholds set to \(2\). In their reduction, they make use of leaves with threshold \(2\) to virtually decrease the thresholds of some vertices in the resulting graph. In their setting, thresholds other than \(2\) are not explicitly allowed. By Lemma 42, we can erase all these leaf vertices with threshold \(2\) and decrease the threshold of their neighbors to obtain an equivalent instance \((G,t,k)\). Observe that in their reduction, after erasing all these leaf vertices, we end up with a graph with maximum degree \(3\), i.e., we have \(\Delta G\leq 3\). Also, the thresholds are at most \(2\), as promised. The theorem follows.

This result suggests that it might be of interest to distinguish between _constant-bounded_ thresholds (i.e., \(t(v)\leq c\) for some fixed constant \(c\)) and _exact_ thresholds (i.e., \(t(v)=c\) for some fixed constant \(c\)). Going back to the class of unit disk graphs, we also have slightly weaker results for this class when the thresholds are exact. We rely on the result about NP-hardness of Independent Set on regular unit disk graphs (see Theorem 11 and Corollary 18).

**Theorem 44**.: _For infinitely many constants \(c\), Target Set Selection is NP-complete when restricted to the class of unit disk graphs and the thresholds are exactly \(c\).
In particular, the claim holds for \(c=2,3,4\)._

Proof.: For \(c\geq 3\) we reduce from Independent Set restricted to instances where the underlying graph is a \(c\)-regular unit disk graph, where \(c>0\) and \(c\equiv-1\mod 4\) or \(c\equiv-1\mod 5\). NP-hardness of this setting is implied by Corollary 18. We proceed in a similar way as in the proof of Theorem 16, but start with a \(c\)-regular graph for the appropriate \(c\).

For \(c=2\) we reduce from Target Set Selection with majority thresholds on grid graphs. NP-hardness of this setting is implied by Corollary 40. Let \((G,t,k)\) be such an instance and let \(V(G)=\{v_{1},\ldots,v_{n}\}\). We create a new instance \((G^{\prime},t^{\prime},k^{\prime})\) as follows. We are aiming for \(t^{\prime}(v)=2\) for all \(v\in V(G^{\prime})\). First, we obtain a disk representation \(\mathcal{D}=\{D_{1},\ldots,D_{n}\}\) for \(G\) in the obvious way. Recall that grid graphs are _induced_ subgraphs of a grid. The diameters of the disks are \(d=1\) and the center of the disk \(D_{i}\) corresponds to the grid point of the vertex \(v_{i}\). Now, we fix vertices \(v_{i}\) with threshold \(1\) by attaching a leaf vertex \(v^{\prime}_{i}\) with threshold \(2\) to \(v_{i}\), and we increase the threshold of \(v_{i}\) by \(1\). In this way we have \(t^{\prime}(v_{i})=t^{\prime}(v^{\prime}_{i})=2\). Let \(z\) denote the number of vertices \(v_{i}\in V(G)\) with \(t(v_{i})=1\). We set \(k^{\prime}=k+z\). Let \(G^{\prime}\) be the newly created graph.

**Claim**.: The instances \((G,t,k)\) and \((G^{\prime},t^{\prime},k^{\prime})\) are equivalent.

Proof.: Observe that by applying Lemma 42 to the instance \((G^{\prime},t^{\prime},k^{\prime})\) we obtain precisely the instance \((G,t,k)\). The claim follows.

It remains to say how to realize the attachment of a leaf vertex in the unit disk representation. Let \(s_{i}\in\mathbb{R}^{2}\) be the center of \(D_{i}\) and let \(\varepsilon=\frac{1}{5}\). Observe that the representation satisfies the following: every two disks have at most one point in common. Let \(v_{i}\) satisfy \(t(v_{i})=1\). Thus, \(\deg_{G}v_{i}\in\{1,2\}\) because \(t\) is majority. As all disks are embedded in an integer grid and \(\deg v_{i}\leq 2\), there exists a direction \(d_{i}\in\{(0,1),(1,0),(-1,0),(0,-1)\}\) such that \(s_{i}+d_{i}\) is not a center of any other disk \(D_{j}\). We add a new disk \(D_{i}^{\prime}\) with diameter \(1\) centered at \(s_{i}+\varepsilon d_{i}\) (see Fig. 8). It is not hard to see that \(D_{i}^{\prime}\cap D_{i}\neq\emptyset\) and that \(D_{i}^{\prime}\) does not intersect any other disks. In other words, this exactly corresponds to attaching a leaf vertex \(v_{i}^{\prime}\) to \(v_{i}\). We repeat this step for all other vertices \(v\in V(G)\) satisfying \(t(v)=1\). Observe that the selection \(\varepsilon=\frac{1}{5}\) ensures that no matter which direction \(d_{j}\) we choose for any other disk \(D_{j}\), the newly created disk \(D_{i}^{\prime}\) intersects only the disk \(D_{i}\). This can be checked directly by computing the distances of the centers of the corresponding disks. To conclude the proof, note that we added at most one new vertex per original vertex, thus the reduction is indeed polynomial.

**Remark**.: We remark that in the case \(c=2\), the resulting unit disk graph still satisfies \(\Delta G\leq 4\). Therefore, we have an NP-hardness result not only for \(t(v)\leq 2\) and \(\Delta G\leq 4\), but also for \(t(v)=2\) and \(\Delta G\leq 4\) in the class of unit disk graphs.
Notice that the very same proof also works for planar graphs, but without all the geometric technicalities.

**Corollary 45**.: Target Set Selection _is NP-hard even when the underlying graph is a planar or a unit disk graph with \(\Delta G\leq 4\) and the thresholds are exactly \(2\)._

**Remark**.: We remark that the first nontrivial constant for which we do not have an NP-hardness result for Target Set Selection in the class of unit disk graphs with thresholds set exactly to \(c\) is \(c=5\). This is because the proof relies heavily on the NP-hardness of Independent Set, for which the situation is much the same; refer back to Remark 17 for more details.

## 7 Conclusions

In this paper, we showed that Target Set Selection is computationally hard even on very simple geometric graph classes such as unit disk graphs. For more restricted graph classes, we showed that TSS is polynomial-time solvable on interval graphs and grid graphs if the threshold function is unanimous; however, in the case of grid graphs, we were not able to identify any other tractable restriction of the threshold function. In particular, we showed that TSS is NP-complete in the class of grid graphs even if restricted to constant-bounded and majority thresholds. En route, we established NP-hardness of the problem in the class of planar graphs. Refer to Fig. 9 for a graphical overview of our results.

In the last section, we showed that there is a slight difference between thresholds _set_ to a constant (\(t(v)=c\)) and thresholds _bounded_ by a constant (\(t(v)\leq c\)). More precisely, we showed that if \(\Delta G\leq 3\) and \(t(v)\leq 2\), TSS is NP-hard, whereas it is solvable in polynomial time if \(t(v)=2\) and \(\Delta G\leq 3\) by a previous result of Kyncl et al. (2017). We also returned to the class of unit disk graphs, addressed the exact threshold setting, and showed that for infinitely many \(c\) Target Set Selection remains NP-hard even in the classes of unit disk and planar graphs with \(t(v)=c\). For the case \(c=2\) we still preserved the tight upper bound on the maximum degree of the input graph, i.e., \(\Delta G\leq 4\).

### Future directions and open questions

The first obvious question is whether the upper bound on the maximum degree in the NP-hardness proofs when thresholds are at most \(2\) can be strengthened to \(\Delta G\leq 3\) even in the classes of planar, grid or unit disk graphs. In particular:

**Question.** Is Target Set Selection NP-hard when restricted to the classes of planar, grid, or unit disk graphs even if the maximum degree is at most \(3\) and the thresholds are at most \(2\)?

We do not have any results on Target Set Selection with exact thresholds for the class of grid graphs.

**Question.** Is Target Set Selection NP-hard when restricted to the class of grid graphs and the thresholds are exactly \(c\) for \(c\geq 2\)?

Another question that remains open even after this paper is related to the computational complexity of the Target Set Selection problem on interval graphs.
It was known that TSS is polynomial-time solvable on interval graphs when the threshold function is bounded by a constant, and we showed that the same holds even in the case of unanimous thresholds. We conjecture that the problem should be tractable with majority thresholds; however, we are more skeptical in the case of general thresholds.

Figure 9: Overview of the results for _unanimous_, _constant_, _majority_, and _unrestricted_ threshold functions. Red squares correspond to NP-hardness, green ones to polynomial-time solvability, and yellow ones indicate an open question. Squares with a black border correspond to results shown in this work.

**Question.** What is the complexity of Target Set Selection on the class of interval graphs in the majority and unrestricted threshold setting?

Finally, an interesting open problem, unrelated to Target Set Selection, concerns the NP-hardness of the Independent Set problem in \(r\)-regular unit disk graphs for all constants \(r\geq 5\). In this work, we established NP-hardness for \(r=3,4\) and infinitely many (but not all) constants (refer to Remark 17 for details).

**Question.** Is Independent Set NP-hard when restricted to the class of \(r\)-regular unit disk graphs for all constants \(r\geq 5\)?

A similar question arises for the NP-hardness of the exact threshold setting in the class of unit disk graphs, as our proof relies on the result about Independent Set.

**Question.** Given any \(c\geq 2\), is Target Set Selection NP-hard even when restricted to the class of unit disk graphs and all thresholds are exactly \(c\)?

### Acknowledgments.

The authors acknowledge the support of the Czech Science Foundation Grant No. 22-19557S. MD and SS were additionally supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS23/205/OHK3/3T/18. MD was also supported by the Student Summer Research Program 2021 of FIT CTU in Prague.
2302.12105
A subgradient method with constant step-size for $\ell_1$-composite optimization
Subgradient methods are the natural extension to the non-smooth case of the classical gradient descent for regular convex optimization problems. However, in general, they are characterized by slow convergence rates, and they require decreasing step-sizes to converge. In this paper we propose a subgradient method with constant step-size for composite convex objectives with $\ell_1$-regularization. If the smooth term is strongly convex, we can establish a linear convergence result for the function values. This fact relies on an accurate choice of the element of the subdifferential used for the update, and on proper actions adopted when non-differentiability regions are crossed. Then, we propose an accelerated version of the algorithm, based on conservative inertial dynamics and on an adaptive restart strategy, that is guaranteed to achieve a linear convergence rate in the strongly convex case. Finally, we test the performances of our algorithms on some strongly and non-strongly convex examples.
Alessandro Scagliotti, Piero Colli Franzone
2023-02-23T15:43:32Z
http://arxiv.org/abs/2302.12105v2
# A subgradient method with constant step-size for \(\ell_{1}\)-composite optimization

###### Abstract.

Subgradient methods are the natural extension to the non-smooth case of the classical gradient descent for regular convex optimization problems. However, in general, they are characterized by slow convergence rates, and they require decreasing step-sizes to converge. In this paper we propose a subgradient method with constant step-size for composite convex objectives with \(\ell_{1}\)-regularization. If the smooth term is strongly convex, we can establish a linear convergence result for the function values. This fact relies on an accurate choice of the element of the subdifferential used for the update, and on proper actions adopted when non-differentiability regions are crossed. Then, we propose an accelerated version of the algorithm, based on conservative inertial dynamics and on an adaptive restart strategy. Finally, we test the performance of our algorithms on some strongly and non-strongly convex examples.

**Keywords:** convex optimization, \(\ell_{1}\)-regularization, subgradient method, inertial acceleration, restart strategies.

## Introduction

In this paper we deal with convex _composite_ optimization, i.e., we consider objective functions \(f:\mathbb{R}^{n}\to\mathbb{R}\) of the form

\[f(x)=g(x)+h(x) \tag{0.1}\]

where \(g:\mathbb{R}^{n}\to\mathbb{R}\) is \(C^{1}\)-regular with Lipschitz-continuous gradient, and \(h:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) is a non-smooth convex function. We recall that the concept of _composite_ function was introduced by Nesterov in [9], and it usually denotes the splitting (0.1) in the case that the non-regular term \(h\) is _simple_. In this framework, possible examples of _simple_ functions include, e.g., the indicator of a closed convex set, or the supremum of a finite family of linear functions. The problem of minimizing such composite functions can be effectively addressed by means of _forward-backward_ methods (see, e.g., [5]), and their accelerated versions [3]. In this regard, we mention the recent contribution [15], where an accelerated method is considered that achieves linear convergence when \(g,h\) in (0.1) are strongly convex.

The aim of this paper is to develop a convergent subgradient method with constant step-size for the minimization of particular instances of (0.1). The subgradient method was first introduced in [19] and, given an initial guess \(x_{0}\in\mathbb{R}^{n}\), the algorithm produces a sequence \((x^{k})_{k\geq 0}\) with update rule

\[x^{k+1}=x^{k}-h_{k}\mathfrak{v}^{k}\quad k\geq 0, \tag{0.2}\]

where \(\mathfrak{v}^{k}\in\partial f(x^{k})\), i.e., it is an element taken from the subdifferential of the objective at the point \(x^{k}\), and where \(h_{k}>0\) denotes the step-size. If we set \(\nu_{k}=h_{k}|\mathfrak{v}^{k}|_{2}\), we can equivalently rephrase (0.2) as

\[x^{k+1}=x^{k}-\nu_{k}\frac{\mathfrak{v}^{k}}{|\mathfrak{v}^{k}|_{2}}, \tag{0.3}\]

where \(\nu_{k}\) represents the _step-length_ at the \(k\)-th iteration. It is possible to deduce the convergence \(\lim_{k\to\infty}f(x^{k})=f(x^{*})\) as soon as \((\nu_{k})_{k\geq 0}\) satisfies \(\lim_{k\to\infty}\nu_{k}=0\) and \(\sum_{k=1}^{\infty}\nu_{k}=\infty\) (see [20, Chapter 2]). A construction for \((\nu_{k})_{k\geq 1}\) with these properties is proposed in [14, Theorem 5.2].
The paper is organized as follows. In Section 1 we establish some preliminary results. In Section 2 we propose our subgradient method with constant step-size and we carry out its convergence analysis. In Section 3 we introduce a new version of the _restarted-conservative_ algorithm that has been heuristically outlined in [17]. Finally, in Section 4 we test our algorithms in strongly and non-strongly convex optimization problems with \(\ell_{1}\)-regularization.

## 1. Preliminary results

In this section we establish some auxiliary results that will be used later. Given a convex function \(f:\mathbb{R}^{n}\to\mathbb{R}\), for every \(x\in\mathbb{R}^{n}\) we denote with \(\partial f(x)\subset\mathbb{R}^{n}\) the subdifferential of \(f\) at the point \(x\).

**Definition 1**.: Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a convex function. For every \(x\in\mathbb{R}^{n}\), we define the vector \(\partial^{-}f(x)\in\mathbb{R}^{n}\) as follows

\[\partial^{-}f(x):=\arg\min\{|y|_{2}\mid y\in\partial f(x)\}. \tag{1.1}\]

_Remark 1_.: We observe that Definition 1 is always well-posed. Indeed, for every convex function \(f:\mathbb{R}^{n}\to\mathbb{R}\), for every \(x\in\mathbb{R}^{n}\) the subdifferential \(\partial f(x)\) is a non-empty, compact and convex subset of \(\mathbb{R}^{n}\). Namely, since we do not allow \(f\) to assume the value \(+\infty\), this fact descends directly from [10, Theorem 3.1.15]. Moreover, we can equivalently rephrase (1.1) as

\[\partial^{-}f(x):=\arg\min\{|y|_{2}^{2}\mid y\in\partial f(x)\},\]

i.e., as a positive-definite quadratic programming problem on a convex domain. Hence, we deduce that \(\partial^{-}f(x)\) is well-defined, and that it consists of a single element. Considering this last fact, in this paper we understand \(\partial^{-}f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) as a vector-valued operator, rather than a set-valued mapping.
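For the \(\ell_{1}\)-composite objectives studied below, the subdifferential is, coordinate-wise, an interval, so the minimization in (1.1) reduces to projecting the origin onto each interval. The following Python sketch is only an illustration of Definition 1 in this box-shaped case (the function name `minimal_norm_subgradient` and the two toy examples are ours):

```python
import numpy as np

def minimal_norm_subgradient(lo, hi):
    """Coordinate-wise projection of the origin onto the box [lo, hi].

    lo and hi are arrays containing, for each coordinate, the endpoints of
    the interval occupied by the subdifferential in that coordinate."""
    return np.clip(0.0, lo, hi)

# f(x) = |x| at x = 0: the subdifferential is [-1, 1], so the minimal-norm
# element is 0.
print(minimal_norm_subgradient(np.array([-1.0]), np.array([1.0])))  # -> [0.]

# f(x) = 2x + |x| at x = 0: the subdifferential is 2 + [-1, 1] = [1, 3], so
# the minimal-norm element is 1.
print(minimal_norm_subgradient(np.array([1.0]), np.array([3.0])))   # -> [1.]
```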
We now establish a non-smooth version of the celebrated Polyak-Lojasiewicz inequality (see [12] and [10, Theorem 2.1.10] for the classical statement in the smooth case). **Lemma 1.1** (Non-smooth Polyak-Lojasiewicz).: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a \(\mu\)-strongly convex function, and let \(x^{*}\) be its minimizer. Then, for every \(x\in\mathbb{R}^{n}\) and for every element of the subdifferential \(y\in\partial f(x)\) the following inequality holds:_ \[f(x)-f(x^{*})\leq\frac{1}{2\mu}|y|_{2}^{2},\] _and, in particular, we have_ \[f(x)-f(x^{*})\leq\frac{1}{2\mu}|\partial^{-}f(x)|_{2}^{2}. \tag{1.2}\] Proof.: Let us introduce the auxiliary function \(\psi:\mathbb{R}^{n}\to\mathbb{R}\) defined as \[\psi(x):=f(x)-\frac{\mu}{2}|x-x^{*}|_{2}^{2}.\] The fact that \(f\) is \(\mu\)-strongly convex guarantees that \(\psi\) is still a convex function. Moreover, for every \(x\in\mathbb{R}^{n}\) we have that \[\partial\psi(x)=\partial f(x)-\mu(x-x^{*}). \tag{1.3}\] This follows immediately from the fact that \(f(x)=\psi(x)+\frac{\mu}{2}|x-x^{*}|_{2}^{2}\) for every \(x\in\mathbb{R}^{n}\), and from the _sum rule for subdifferentials_ (see, e.g., [16, Theorem 23.8]), i.e., \(\partial f(x)=\partial\psi(x)+\mu(x-x^{*})\). For every \(x\in\mathbb{R}^{n}\) and for every \(y\in\partial f(x)\) we compute \[\begin{split}\psi(x^{*})&\geq\psi(x)+\langle y-\mu(x- x^{*}),x^{*}-x\rangle\\ &=f(x)+\frac{\mu}{2}|x-x^{*}|_{2}^{2}+\langle y,x^{*}-x\rangle\\ &\geq f(x)-\frac{1}{2\mu}|y|_{2}^{2},\end{split} \tag{1.4}\] where we used (1.3) and the subdifferential inequality for the convex function \(\psi\). Recalling that \(\psi(x^{*})=f(x^{*})\), from (1.4) we directly deduce the thesis. We now introduce the class of functions that will be the main object of our investigation. We consider a _composite_ objective (see [9]) \(f:\mathbb{R}^{n}\to\mathbb{R}\) of the form \[f(x)=g(x)+\gamma|x|_{1}, \tag{1.5}\] where \(g:\mathbb{R}^{n}\to\mathbb{R}\) is a \(C^{1}\)-regular convex function with Lipschitz-continuous gradient of constant \(L>0\), and where \(\gamma>0\) is a positive constant. We recall that \(|x|_{1}:=\sum_{i=1}^{n}|x_{i}|\) for every \(x\in\mathbb{R}^{n}\). We observe that \[\partial f(x)=\left\{\nabla g(x)+\gamma\sum_{i=1}^{n}\nu_{i}e_{i}\mid\nu_{i}= \operatorname{sign}(x_{i})\text{ if }x_{i}\neq 0,\ \nu_{i}\in[-1,1]\text{ if }x_{i}=0\right\} \tag{1.6}\] for every \(x\in\mathbb{R}^{n}\), where \(e_{i}\) is the \(i\)-th element of the standard basis of \(\mathbb{R}^{n}\). If we define \(\partial_{i}f(x):=\langle e_{i},\partial f(x)\rangle\), we have that \[\partial_{i}f(x)=\begin{cases}\{\partial_{i}g(x)+\gamma\nu_{i}\mid\nu_{i}= \operatorname{sign}(x_{i})\}&x_{i}\neq 0,\\ \{\partial_{i}g(x)+\gamma\nu_{i}\mid\nu_{i}\in[-1,1]\}&x_{i}=0,\end{cases} \tag{1.7}\] for every \(i=1,\ldots,n\), where \(\partial_{i}g(x):=\frac{\partial}{\partial x_{i}}g(x)\) denotes the usual partial derivative of the regular term \(g:\mathbb{R}^{n}\to\mathbb{R}\) at the right-hand side of (1.5). From (1.7) we read that the \(i\)-th component of \(\partial f(x)\) is affected only by \(\nu_{i}\). Therefore, in order to compute the operator \(\partial^{-}f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) introduced in Definition 1, we can find _separately_ the element of minimal absolute value of \(\partial_{i}f(x)\) for \(i=1,\ldots,n\). We use \(\partial_{i}^{-}f(x)\) to access the \(i\)-th component of \(\partial^{-}f(x)\). 
In particular, for every \(x\in\mathbb{R}^{n}\) we have that

\[\partial_{i}^{-}f(x)=\begin{cases}\partial_{i}g(x)+\gamma\,\text{sign}(x_{i})&x_{i}\neq 0,\\ \operatorname{sign}(\partial_{i}g(x))\max\{|\partial_{i}g(x)|-\gamma,0\}&x_{i}=0.\end{cases} \tag{1.8}\]

**Definition 2**.: Given \(x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\), we define the following partition of the components \(\{1,\ldots,n\}\) induced by the point \(x\):

\[\alpha_{x}^{+} :=\{i\in\{1,\ldots,n\}\mid x_{i}>0\}, \tag{1.9}\]
\[\alpha_{x}^{-} :=\{i\in\{1,\ldots,n\}\mid x_{i}<0\},\]
\[\beta_{x} :=\{i\in\{1,\ldots,n\}\mid x_{i}=0\}.\]

From now on, when making use of a partition \(\alpha^{1},\ldots,\alpha^{k}\) of the indexes of the components \(\{1,\ldots,n\}\), for every \(z=(z_{1},\ldots,z_{n})\in\mathbb{R}^{n}\) we write \(z=(z_{\alpha^{1}},\ldots,z_{\alpha^{k}})\), where \(z_{\alpha^{j}}\in\mathbb{R}^{|\alpha^{j}|}\) is the vector obtained by extracting from \(z\) the components that belong to \(\alpha^{j}\), i.e., \(z_{\alpha^{j}}=(z_{i})_{i\in\alpha^{j}}\) for every \(j=1,\ldots,k\). The next technical result is the key lemma of the convergence proof of Section 2.

**Lemma 1.2**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a convex function of the form (1.5). Given \(x\in\mathbb{R}^{n}\), let \(\alpha_{x}^{+},\alpha_{x}^{-},\beta_{x}\) be the partition of \(\{1,\ldots,n\}\) corresponding to the point \(x\) and prescribed by (1.9). Let us consider a vector \(v=(v_{1},\ldots,v_{n})\in\mathbb{R}^{n}\) such that_

\[\begin{cases}x_{i}+v_{i}\geq 0&\forall i\in\alpha_{x}^{+},\\ x_{i}+v_{i}\leq 0&\forall i\in\alpha_{x}^{-},\\ v_{i}=0&\forall i\in\beta_{x}\text{ s.t. }\partial_{i}^{-}f(x)=0,\\ v_{i}\geq 0&\forall i\in\beta_{x}\text{ s.t. }\partial_{i}^{-}f(x)<0,\\ v_{i}\leq 0&\forall i\in\beta_{x}\text{ s.t. }\partial_{i}^{-}f(x)>0.\end{cases} \tag{1.10}\]

_Then the following inequality holds:_

\[f(x+v)\leq f(x)+\langle\partial^{-}f(x),v\rangle+\frac{1}{2}L|v|_{2}^{2}, \tag{1.11}\]

_where \(L>0\) is the Lipschitz constant of the gradient of the regular term at the right-hand side of (1.5), and \(\partial^{-}f(x)\) is defined as in Definition 1._

_Remark 2_.: We recall that, in the case of a regular convex function \(\phi:\mathbb{R}^{n}\to\mathbb{R}\) with \(L\)-Lipschitz continuous gradient, we have

\[\phi(x+v)\leq\phi(x)+\langle\nabla\phi(x),v\rangle+\frac{1}{2}L|v|_{2}^{2} \tag{1.12}\]

for every \(x,v\in\mathbb{R}^{n}\) (see, e.g., [10, Theorem 2.1.5]). The crucial fact for the proof of Lemma 1.2 is that, when \(v\) satisfies the conditions (1.10), the segment joining \(x\) and \(x+v\) lies in a region where the restriction of the objective \(f\) is regular. Lemma 1.2 will be used to prove that, along proper directions, the objective function \(f\) is decreasing.

Proof.: Before proceeding, we introduce another partition of the set of indexes \(\beta_{x}\):

\[\beta_{x}^{+} :=\{i\in\beta_{x}\mid v_{i}>0\},\]
\[\beta_{x}^{-} :=\{i\in\beta_{x}\mid v_{i}<0\},\]
\[\beta_{x}^{0} :=\{i\in\beta_{x}\mid v_{i}=0\},\]

and we define

\[\zeta^{+}:=\alpha_{x}^{+}\cup\beta_{x}^{+},\qquad\zeta^{-}:=\alpha_{x}^{-}\cup\beta_{x}^{-},\qquad\zeta^{0}:=\beta_{x}^{0},\]

where \(\alpha_{x}^{+},\alpha_{x}^{-},\beta_{x}\) are defined according to (1.9).
If we consider the segment \(t\mapsto\eta(t)=x+tv\) for \(t\in[0,1]\), it turns out that \(\eta(t)\in C_{\zeta^{\pm,0}}\) for every \(t\in[0,1]\), where

\[C_{\zeta^{\pm,0}}:=\left\{z\in\mathbb{R}^{n}\mid z_{\zeta^{+}}\geq 0,z_{\zeta^{-}}\leq 0,z_{\zeta^{0}}=0\right\}.\]

Let us define the auxiliary function \(f^{\rm aux}:\mathbb{R}^{n}\to\mathbb{R}\) as

\[f^{\rm aux}:z=(z_{\zeta^{+}},z_{\zeta^{-}},z_{\zeta^{0}})\mapsto g(z_{\zeta^{+}},z_{\zeta^{-}},0_{\zeta^{0}})+\gamma\sum_{i\in\zeta^{+}}z_{i}-\gamma\sum_{i\in\zeta^{-}}z_{i}, \tag{1.13}\]

where \(g:\mathbb{R}^{n}\to\mathbb{R}\) is the smooth term at the right-hand side of (1.5). From the definition of \(f^{\rm aux}\), it follows that

\[\nabla f^{\rm aux}:z=(z_{\zeta^{+}},z_{\zeta^{-}},z_{\zeta^{0}})\mapsto\nabla g(z_{\zeta^{+}},z_{\zeta^{-}},0_{\zeta^{0}})+\gamma\sum_{i\in\zeta^{+}}e_{i}-\gamma\sum_{i\in\zeta^{-}}e_{i}. \tag{1.14}\]

We observe that the function \(f^{\rm aux}:\mathbb{R}^{n}\to\mathbb{R}\) is as regular as \(g\), i.e., it is of class \(C^{1}\) with \(L\)-Lipschitz continuous gradient. Indeed, the first term at the right-hand side of (1.14) is obtained as the composition \(\nabla g\circ\Pi_{\zeta^{0}}\), where \(\Pi_{\zeta^{0}}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is the linear (1-Lipschitz) orthogonal projection onto the subspace \(\{z\in\mathbb{R}^{n}\mid z_{\zeta^{0}}=0\}\subset\mathbb{R}^{n}\). Moreover, the last terms at the right-hand side of (1.14) are constant. Therefore, using the identity

\[f|_{C_{\zeta^{\pm,0}}}\equiv f^{\rm aux}|_{C_{\zeta^{\pm,0}}},\]

if we apply the estimate (1.12) to \(f^{\rm aux}\), we deduce that

\[f(x+v)\leq f(x)+\langle\nabla f^{\rm aux}(x),v\rangle+\frac{1}{2}L|v|_{2}^{2}.\]

Therefore, the thesis follows if we show that the following equalities hold:

\[\frac{\partial}{\partial x_{i}}f^{\rm aux}(x)v_{i}=\partial_{i}^{-}f(x)v_{i}\qquad\forall i=1,\ldots,n. \tag{1.15}\]

Using the partition of the components \(\{1,\ldots,n\}\) provided by the families of indexes \(\alpha_{x}^{+}\), \(\alpha_{x}^{-}\), \(\beta_{x}^{+}\), \(\beta_{x}^{-}\), and \(\beta_{x}^{0}\), we have the following possibilities:

* If \(i\in\alpha_{x}^{+}\), in virtue of (1.8) and (1.13), we obtain \(\partial_{i}^{-}f(x)=\frac{\partial}{\partial x_{i}}g(x)+\gamma=\frac{\partial}{\partial x_{i}}f^{\mathrm{aux}}(x)\).
* The case \(i\in\alpha_{x}^{-}\) is analogous to \(i\in\alpha_{x}^{+}\).
* If \(i\in\beta_{x}^{+}\), then \(x_{i}=0\) and \(v_{i}>0\), and, in virtue of (1.10), we deduce that \(\partial_{i}^{-}f(x)<0\). In particular, using again (1.8), this implies that \(\partial_{i}^{-}f(x)=\frac{\partial}{\partial x_{i}}g(x)+\gamma\). On the other hand, recalling the expression of \(f^{\mathrm{aux}}\) in (1.13) and the inclusion \(\beta_{x}^{+}\subset\zeta^{+}\), we finally deduce \(\frac{\partial}{\partial x_{i}}f^{\mathrm{aux}}(x)=\partial_{i}g(x)+\gamma\).
* The case \(i\in\beta_{x}^{-}\) is analogous to \(i\in\beta_{x}^{+}\).
* If \(i\in\beta_{x}^{0}\), then \(v_{i}=0\), and we immediately obtain \(\partial_{i}^{-}f(x)v_{i}=0=\frac{\partial}{\partial x_{i}}f^{\mathrm{aux}}(x)v_{i}\).

This argument shows that (1.15) is true, and it concludes the proof.

## 2. Subgradient method and convergence analysis

In this section we propose a subgradient method with constant step-size for the numerical minimization of a convex function \(f:\mathbb{R}^{n}\to\mathbb{R}\) with the _composite_ structure reported in (1.5).
We insist on the fact that the analysis presented here holds only when the non-smooth term at the right-hand side of (1.5) is an \(\ell_{1}\)-penalization. Before introducing formally the algorithm, we provide some insights that have guided us towards its construction. Let \(\bar{x}\in\mathbb{R}^{n}\) be the current guess for the minimizer of \(f\). We want to find a suitable direction in the subdifferential \(\mathfrak{v}\in\partial f(\bar{x})\) such that \(f(\bar{x}-h\mathfrak{v})\leq f(\bar{x})\), where \(h>0\) represents a _constant_ step-size. In order to accomplish this, a natural choice consists in setting \(\mathfrak{v}=\partial^{-}f(\bar{x})\), where \(\partial^{-}f(\bar{x})\) is defined as in (1.1). To see this, we first observe that, in virtue of the particular structure of \(\partial f\) reported in (1.7), we can choose _separately_ the components \(\mathfrak{v}_{1},\ldots,\mathfrak{v}_{n}\) of the direction of the movement. If \(\bar{x}_{i}\neq 0\), then \(\partial_{i}f(\bar{x})\) consists of a single element, hence the only possible choice is \(\mathfrak{v}_{i}=\partial_{i}^{-}f(\bar{x})\). If \(\bar{x}_{i}=0\) and \(\partial_{i}^{-}f(\bar{x})=0\), then any choice \(\mathfrak{v}_{i}\in\partial_{i}f(\bar{x})\) with \(\mathfrak{v}_{i}\neq 0\) would give \(f(\bar{x}-\mathfrak{v}_{i}e_{i})\geq f(\bar{x})\), resulting in an increase of the objective function. For this reason, it is convenient to set \(\mathfrak{v}_{i}=\partial_{i}^{-}f(\bar{x})=0\), and to move _tangentially_ to the non-differentiability region \(\{x\in\mathbb{R}^{n}\mid x_{i}=0\}\). On the other hand, if \(\bar{x}_{i}=0\) and, e.g., \(\partial_{i}^{-}f(\bar{x})>0\), then \(\partial_{i}f(\bar{x})\subset(0,+\infty)\), and for every choice of \(\mathfrak{v}_{i}\in\partial_{i}f(\bar{x})\) we have that \(\bar{x}_{i}-h\mathfrak{v}_{i}=-h\mathfrak{v}_{i}<0\). However, observing that \(\lim_{h\to 0^{+}}(f(\bar{x}-he_{i})-f(\bar{x}))/h=-\partial_{i}^{-}f(\bar{x})<0\), it looks natural to set once again \(\mathfrak{v}_{i}=\partial_{i}^{-}f(\bar{x})\). Besides the selection of the direction \(\mathfrak{v}=\partial^{-}f(\bar{x})\), the second crucial aspect is whether some of the coordinates change sign when moving from \(\bar{x}\) to \(\bar{x}-h\partial^{-}f(\bar{x})\). If not, the situation is entirely analogous to a step of the classical gradient descent in the smooth framework. On the other hand, if there is, e.g., a positive component \(\bar{x}_{i}\) that becomes negative, then we should carefully decide whether the _barrier_ \(\{x\in\mathbb{R}^{n}\mid x_{i}=0\}\) should be crossed or not. This is a key point for avoiding the oscillations that characterized the simple example in the Introduction. In this case, we first set to \(0\) the components involved in a sign change, and for these components we re-evaluate \(\partial^{-}f\). Finally, using this additional information, we complete the step. The implementation of the method is described in Algorithm 1. We now establish the linear convergence result for Algorithm 1 in the case of a strongly convex objective. **Theorem 2.1**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a function such that \(f(x)=g(x)+\gamma|x|_{1}\) for every \(x\in\mathbb{R}^{n}\), where \(\gamma>0\) and \(g:\mathbb{R}^{n}\to\mathbb{R}\) is \(C^{1}\)-regular. We further assume that there exist constants \(L>\mu>0\) such that \(g\) is \(\mu\)-strongly convex and \(\nabla g\) is \(L\)-Lipschitz continuous. Let \((x^{k})_{k\geq 0}\) be the sequence generated by Algorithm 1._
_Then, there exists \(\kappa=\kappa(L,\mu)\in(0,1)\) such that_

\[f(x^{k})-f(x^{*})\leq\kappa^{k}(f(x^{0})-f(x^{*})), \tag{2.1}\]

_where \(x^{*}\in\mathbb{R}^{n}\) denotes the unique minimizer of \(f\), and where we set the step-size \(h=\frac{1}{L}\)._

Figure 1. In this \(2D\)-example, we observe that the second component \(x_{2}\) changes sign when moving from \(x^{k}\) to \(x^{\text{temp}}=x^{k}-h\partial^{-}f(x^{k})\). Hence, we first consider \(x^{\prime}=(x_{1}^{k},0)\) and we evaluate \(\partial_{2}^{-}f(x^{\prime})\). If \(\partial_{2}^{-}f(x^{\prime})=0\) (see the picture on the left), then we complete the step by moving tangentially to the axis \(\{x_{2}=0\}\), in the direction \(-(\partial_{1}^{-}f(x^{k}),0)\). If \(\partial_{2}^{-}f(x^{\prime})\neq 0\) (right), then we complete the step using the direction \(-(\partial_{1}^{-}f(x^{k}),\partial_{2}^{-}f(x^{\prime}))\).

Proof.: We follow the procedure described in Algorithm 1. We prove that each iteration leads to a linear decrease of the value of the objective function. The first stage of each step is based on the following update:

\[x^{\text{temp}}:=x^{k}-h\partial^{-}f(x^{k}), \tag{2.2}\]

where \(h>0\) represents the step-size of the sub-gradient method. We distinguish two possible scenarios, corresponding to the if-else statement at the lines 5 and 7 of Algorithm 1.

**Case 1.** We have that

\[\text{sign}(x_{i}^{\text{temp}}\cdot x_{i}^{k})\geq 0,\quad\forall i=1,\ldots,n, \tag{2.3}\]

i.e., none of the components of \(x^{k}\) and of \(x^{\text{temp}}\) changes sign, in the sense that from strictly positive it becomes strictly negative, or vice-versa. If we set \(v:=-h\partial^{-}f(x^{k})\), we observe that the hypotheses of Lemma 1.2 are met for the point \(x^{k}\) and the vector \(v\). Indeed, using the partition introduced in (1.9) and induced by the point \(x^{k}\), from (2.3) it follows that \(i\in\alpha_{x^{k}}^{+}\) implies \(x_{i}^{k}+v_{i}\geq 0\). A similar argument holds for \(i\in\alpha_{x^{k}}^{-}\). Finally, if \(i\in\beta_{x^{k}}\), then \(v_{i}\) satisfies (1.10) by construction. Therefore, from (1.11) we deduce that

\[f(x^{\text{temp}})\leq f(x^{k})+\langle\partial^{-}f(x^{k}),v\rangle+\frac{1}{2}L|v|_{2}^{2}=f(x^{k})-\left(h-\frac{h^{2}}{2}L\right)\left|\partial^{-}f(x^{k})\right|_{2}^{2}.\]

Moreover, if \(h\leq\frac{2}{L}\), in virtue of Lemma 1.1, we obtain that

\[f(x^{\text{temp}})-f(x^{*})\leq\left(f(x^{k})-f(x^{*})\right)\left(1-2\mu\left(h-\frac{h^{2}}{2}L\right)\right).\]

In this case, we assign \(x^{k+1}:=x^{\text{temp}}\) and, choosing \(h=\frac{1}{L}\) in order to minimize the right-hand side of the previous inequality, we get

\[f(x^{k+1})-f(x^{*})\leq\left(1-\frac{\mu}{L}\right)\left(f(x^{k})-f(x^{*})\right). \tag{2.4}\]

**Case 2.** Recalling the definition of \(x^{\text{temp}}\) in (2.2), we are in the second scenario when

\[\exists i\in\{1,\dots,n\}:\,(x_{i}^{k}>0\text{ and }x_{i}^{\text{temp}}<0)\text{ or }(x_{i}^{k}<0\text{ and }x_{i}^{\text{temp}}>0), \tag{2.5}\]

i.e., there is at least one component that _strictly_ changes sign.
Before proceeding, we introduce the following partition of the components:

\[\begin{cases}i\in\xi_{x^{k}}^{+}&\text{if }(x_{i}^{k}>0\text{ and }x_{i}^{\text{temp}}>0),\\ i\in\xi_{x^{k}}^{-}&\text{if }(x_{i}^{k}<0\text{ and }x_{i}^{\text{temp}}<0),\\ i\in\xi_{x^{k}}^{0}&\text{if }\operatorname{sign}(x_{i}^{\text{temp}}\cdot x_{i}^{k})\leq 0,\end{cases} \tag{2.6}\]

and we define the following intermediate points:

\[\text{\emph{Phase (1)}}\qquad x^{\prime}:=(x_{\xi_{x^{k}}^{+}}^{k},x_{\xi_{x^{k}}^{-}}^{k},0_{\xi_{x^{k}}^{0}}), \tag{2.7}\]

and

\[\text{\emph{Phase (2)}}\qquad x^{\prime\prime}:=x^{\prime}+v^{\prime\prime}, \tag{2.8}\]

where

\[v^{\prime\prime}:=-h\left(\partial_{\xi_{x^{k}}^{+}}^{-}f(x^{k}),\partial_{\xi_{x^{k}}^{-}}^{-}f(x^{k}),\partial_{\xi_{x^{k}}^{0}}^{-}f(x^{\prime})\right). \tag{2.9}\]

We observe that (2.7) corresponds to the assignments of lines 9-10 in Algorithm 1, while (2.9) incorporates lines 11-12. Finally, \(x^{\prime\prime}\) is defined in (2.8) according to line 13. We insist on the fact that in the update (2.8) the vector \(v^{\prime\prime}\) is computed by re-evaluating \(\partial_{\xi_{x^{k}}^{0}}^{-}f\) at the point \(x^{\prime}\). This is because \(\partial_{\xi_{x^{k}}^{0}}^{-}f\) may exhibit sudden changes when considering the points \(x^{k}\) and \(x^{\prime}\). In this regard, our construction guarantees that we employ the most trustworthy values for the choice of the decrease direction \(v^{\prime\prime}\). We point out that, if \(x_{i}^{k}=0\), then \(i\in\xi_{x^{k}}^{0}\).

_Phase (1)._ From (2.7), we immediately observe that

\[x^{\prime}=x^{k}+v^{\prime},\]

with

\[v_{i}^{\prime}:=\begin{cases}0&\text{if }i\in\xi_{x^{k}}^{+}\cup\xi_{x^{k}}^{-},\\ -\eta_{i}h\partial_{i}^{-}f(x^{k})&\text{if }i\in\xi_{x^{k}}^{0},\end{cases} \tag{2.10}\]

and where, for every \(i\in\xi^{0}_{x^{k}}\), we set

\[\eta_{i}:=\begin{cases}\frac{x^{k}_{i}}{h\partial_{i}^{-}f(x^{k})}&\text{if }\partial_{i}^{-}f(x^{k})\neq 0,\\ 0&\text{if }\partial_{i}^{-}f(x^{k})=0.\end{cases}\]

We first notice that \(\eta_{i}\in[0,1]\). Indeed, assuming that \(\partial_{i}^{-}f(x^{k})\neq 0\) (otherwise there is nothing to prove), since \(i\in\xi^{0}_{x^{k}}\), recalling (2.6) and (2.2), we have

\[x^{k}_{i}\left(x^{k}_{i}-h\partial_{i}^{-}f(x^{k})\right)\leq 0, \tag{2.11}\]

which in turn gives \(x^{k}_{i}\partial_{i}^{-}f(x^{k})\geq 0\) and, as a matter of fact, \(\eta_{i}\geq 0\). On the other hand, in order to show that \(\eta_{i}\leq 1\), we assume without loss of generality that \(x^{k}_{i}\neq 0\). Then, using again (2.11), it follows that

\[0\geq\left(1-\frac{h\partial_{i}^{-}f(x^{k})}{x^{k}_{i}}\right)=\left(1-\frac{1}{\eta_{i}}\right),\]

which yields \(\eta_{i}\leq 1\). Therefore, we conclude that

\[0\leq\eta_{i}\leq 1\qquad\forall i\in\xi^{0}_{x^{k}}. \tag{2.12}\]

Finally, from (2.5) we deduce that there exists at least one index \(\hat{i}\in\xi^{0}_{x^{k}}\) such that \(\eta_{\hat{i}}>0\). Using the partition \(\alpha^{+}_{x^{k}},\alpha^{-}_{x^{k}},\beta_{x^{k}}\) of \(\{1,\dots,n\}\) induced by the point \(x^{k}\) and prescribed by (1.9), we obtain that the following conditions are satisfied:

* If \(i\in\alpha^{+}_{x^{k}}\), then either \(i\in\xi^{+}_{x^{k}}\) or \(i\in\xi^{0}_{x^{k}}\). In the first case, \(x^{\prime}_{i}=x^{k}_{i}>0\), then \(v^{\prime}_{i}=0\). In the second, \(x^{\prime}_{i}=0=x^{k}_{i}+v^{\prime}_{i}\). Hence, in any case, \(x^{k}_{i}+v^{\prime}_{i}\geq 0\).
* If \(i\in\alpha^{-}_{x^{k}}\), then an analogous reasoning as before yields \(x^{k}_{i}+v^{\prime}_{i}\leq 0\).
* If \(i\in\beta_{x^{k}}\), then \(x^{k}_{i}=0\). Hence, \(i\in\xi^{0}_{x^{k}}\), and \(x^{\prime}_{i}=0\). Therefore, \(v^{\prime}_{i}=0\).

The previous argument proves that the vector \(v^{\prime}\) introduced in (2.10) satisfies the assumptions of Lemma 1.2 at the point \(x^{k}\). Thus, we deduce that

\[\begin{split} f(x^{\prime})&\leq f(x^{k})+\langle\partial^{-}f(x^{k}),v^{\prime}\rangle+\frac{L}{2}|v^{\prime}|_{2}^{2}\\ &=f(x^{k})-\sum_{i\in\xi^{0}_{x^{k}}}\left(h\eta_{i}-h^{2}\eta_{i}^{2}\frac{L}{2}\right)|\partial_{i}^{-}f(x^{k})|^{2}.\end{split} \tag{2.13}\]

If we set \(\bar{\eta}:=\max\{\eta_{i}\mid i\in\xi^{0}_{x^{k}}\}\), we observe that (2.13) implies that \(f(x^{\prime})\leq f(x^{k})\) whenever \(h\in\left[0,\frac{2}{L\bar{\eta}}\right]\). We stress the fact that the condition (2.5) that characterizes the present scenario guarantees that \(\bar{\eta}>0\).

_Phase (2)._ We now investigate the update described in (2.8)-(2.9). Let \(\alpha^{+}_{x^{\prime}},\alpha^{-}_{x^{\prime}}\) and \(\beta_{x^{\prime}}\) be the partition of the components \(\{1,\dots,n\}\) induced by the point \(x^{\prime}\) and prescribed by (1.9). Recalling (2.6) and the definition of \(x^{\prime}\) in (2.7), we observe that \(\alpha^{+}_{x^{\prime}}=\xi^{+}_{x^{k}}\), \(\alpha^{-}_{x^{\prime}}=\xi^{-}_{x^{k}}\) and \(\beta_{x^{\prime}}=\xi^{0}_{x^{k}}\). Hence, since \(x^{\prime}_{i}=x^{k}_{i}\neq 0\) for every \(i\in\alpha^{+}_{x^{\prime}}\cup\alpha^{-}_{x^{\prime}}\), from (1.8) it follows that

\[\partial_{i}^{-}f(x^{\prime})-\partial_{i}^{-}f(x^{k})=\partial_{i}g(x^{\prime})-\partial_{i}g(x^{k})\qquad\forall i\in\alpha^{+}_{x^{\prime}}\cup\alpha^{-}_{x^{\prime}},\]

which, in virtue of (2.9), yields

\[v^{\prime\prime}_{i}+h\partial_{i}^{-}f(x^{\prime})=h\left(\partial_{i}g(x^{\prime})-\partial_{i}g(x^{k})\right)\qquad\forall i\in\alpha^{+}_{x^{\prime}}\cup\alpha^{-}_{x^{\prime}}. \tag{2.14}\]

Moreover, using (2.9), (2.2) and (2.6), we deduce that

\[\begin{cases}i\in\alpha^{+}_{x^{\prime}}\implies x^{\prime\prime}_{i}=x^{k}_{i}-h\partial_{i}^{-}f(x^{k})>0\implies x^{\prime\prime}_{i}=x^{\prime}_{i}+v^{\prime\prime}_{i}>0,\\ i\in\alpha^{-}_{x^{\prime}}\implies x^{\prime\prime}_{i}=x^{k}_{i}-h\partial_{i}^{-}f(x^{k})<0\implies x^{\prime\prime}_{i}=x^{\prime}_{i}+v^{\prime\prime}_{i}<0.\end{cases} \tag{2.15}\]

On the other hand, from (2.9) and recalling that \(\beta_{x^{\prime}}=\xi_{x^{k}}^{0}\), we have that

\[i\in\beta_{x^{\prime}}\implies v_{i}^{\prime\prime}=-h\partial_{i}^{-}f(x^{\prime}). \tag{2.16}\]

Combining (2.15) and (2.16), we obtain that the hypotheses of Lemma 1.2 are met when considering the point \(x^{\prime}\) and the direction \(v^{\prime\prime}\).
Hence, it follows that \[\begin{split} f(x^{\prime\prime})\leq& f(x^{\prime })+\langle\partial^{-}f(x^{\prime}),v^{\prime\prime}\rangle+\frac{L}{2}|v^{ \prime\prime}|_{2}^{2}\\ =& f(x^{\prime})-\left(h-\frac{L}{2}h^{2}\right)| \partial^{-}f(x^{\prime})|_{2}^{2}\\ &+(1-Lh)\langle\partial^{-}f(x^{\prime}),v^{\prime\prime}+h \partial^{-}f(x^{\prime})\rangle+\frac{L}{2}|v^{\prime\prime}+h\partial^{-}f (x^{\prime})|_{2}^{2}.\end{split} \tag{2.17}\] On the other hand, recalling (2.9) and (2.14), we have that \[\begin{split}|v^{\prime\prime}+h\partial^{-}f(x^{\prime})|_{2}^ {2}&=h^{2}\sum_{i\in\alpha_{x^{\prime}}^{+}\cup\alpha_{x^{\prime }}^{-}}|\partial_{i}g(x^{k})-\partial_{i}g(x^{\prime})|^{2}\leq h^{2}|\nabla g( x^{k})-\nabla g(x^{\prime})|_{2}^{2}\\ \leq& h^{4}L^{2}\sum_{i\in\xi_{x^{k}}^{0}}\eta_{i}^{ 2}|\partial_{i}^{-}f(x^{k})|^{2},\end{split} \tag{2.18}\] where we used the Lipschitz-continuity of \(\nabla g\), (2.10) and the fact that \(x^{\prime}-x^{k}=v^{\prime}\). If we set \(h=\frac{1}{L}\) in (2.17), owing to (2.18) we deduce that \[\begin{split} f(x^{\prime\prime})\leq f(x^{\prime})-\frac{1}{2L} |\partial^{-}f(x^{\prime})|_{2}^{2}+\frac{1}{2L}\sum_{i\in\xi_{x^{k}}^{0}}\eta _{i}^{2}|\partial_{i}^{-}f(x^{k})|^{2}.\end{split}\] Moreover, combining the last inequality with (2.13) (using again \(h=\frac{1}{L}\)), we obtain that \[\begin{split} f(x^{\prime\prime})\leq& f(x^{k})- \frac{1}{2L}|\partial^{-}f(x^{\prime})|_{2}^{2}+\frac{1}{L}\sum_{i\in\xi_{x^{k }}^{0}}(\eta_{i}^{2}-\eta_{i})|\partial_{i}^{-}f(x^{k})|^{2}\\ \leq& f(x^{k})-\frac{1}{2L}|\partial^{-}f(x^{\prime}) |_{2}^{2},\end{split} \tag{2.19}\] where we used (2.12) in the last passage. In virtue of Lemma 1.1, from (2.19) we deduce that \[f(x^{\prime\prime})-f(x^{*})\leq f(x^{k})-f(x^{*})-\frac{\mu}{L}\left(f(x^{ \prime})-f(x^{*})\right). \tag{2.20}\] We now distinguish two possibilities, corresponding to the if-else statement at lines 14 and 16 of Algorithm 1. * If \(f(x^{\prime})<f(x^{\prime\prime})\), then we set \(x^{k+1}:=x^{\prime}\). * If \(f(x^{\prime\prime})\leq f(x^{\prime})\), then we set \(x^{k+1}:=x^{\prime\prime}\). In any case, from (2.20) we obtain \[f(x^{k+1})-f(x^{*})\leq\left(1+\frac{\mu}{L}\right)^{-1}(f(x^{k})-f(x^{*})). \tag{2.21}\] **Conclusion.** Combining (2.4) and (2.21), if we set \[\kappa=\max\left(\left(1-\frac{\mu}{L}\right),\left(1+\frac{\mu}{L}\right)^{-1 }\right),\] we deduce the thesis. ## 3. Accelerated subgradient method In this section we propose a momentum-based acceleration of Algorithm 1 for an objective function \(f:\mathbb{R}^{n}\to\mathbb{R}\) with the \(\ell_{1}\)-composite structure introduced in (1.5). As observed in the Introduction, in the smooth-objective framework it is possible to design minimization schemes with momentum by discretizing second order ODEs of the form: \[\ddot{x}+\nabla V(x)=-A(x,t)\dot{x}\iff\begin{cases}\dot{x}=p\\ \dot{p}=-\nabla V(x)-A(x,t)\dot{x},\end{cases} \tag{3.1}\] where \(V:\mathbb{R}^{n}\to\mathbb{R}\) represents the objective function, and \(A(x,t)\in\mathbb{R}^{n\times n}\) is a positive semi-definite matrix that tunes the generalized viscosity friction. In [11] it was noticed that _adaptive restart strategies_ can further accelerate the convergence to the minimizer, since they are capable of eliminating the oscillations typical of under-damped mechanical systems. The term _adaptive restart_ denotes a procedure that resets to \(0\) the momentum/velocity variable (i.e., \(p\) in (3.1)), as soon as a suitable condition is satisfied. 
In [17], a _conservative_ dynamics was considered by dropping the viscosity term, i.e., by choosing \(A(x,t)\equiv 0\) in (3.1). Then, using the symplectic Euler scheme (see, e.g., [7]) to discretize the system, the following _conservative_ algorithm was proposed:

\[\begin{cases}p^{k+1}=p^{k}-h_{m}\nabla V(x^{k}),\\ x^{k+1}=x^{k}+h_{m}p^{k+1},\end{cases} \tag{3.2}\]

where \(h_{m}>0\) represents the discretization step-size. In the case of a regular and convex objective \(V\), the conservative scheme (3.2) achieves at each iteration a decrease of the function \(V\) greater than or equal to that of the classical gradient descent. This fact relies on the following restart strategy: "reset \(p^{k}=0\) whenever \(\langle\nabla V(x^{k+1}),p^{k}\rangle>0\)". In [17], a heuristic extension of (3.2) to the case of a non-smooth objective \(f:\mathbb{R}^{n}\to\mathbb{R}\) with \(\ell_{1}\)-composite structure was also investigated, where \(\partial^{-}f(x^{k})\) was used in (3.2) in place of \(\nabla V(x^{k})\), i.e.,

\[\begin{cases}p^{k+1}=p^{k}-h_{m}\partial^{-}f(x^{k}),\\ x^{k+1}=x^{k}+h_{m}p^{k+1}.\end{cases} \tag{3.3}\]

In this section, taking advantage of the observations done in Section 2 for the non-accelerated subgradient method, we propose a variant of the algorithm described in [17, Algorithm 4]. The main differences concern the way we manage the changes of sign in the components, and the condition for the reset of the momentum variable. Indeed, from (3.3) we deduce that

\[x^{k+1}=x^{k}-h\partial^{-}f(x^{k})+\sqrt{h}p^{k}, \tag{3.4}\]

where we set \(h=h_{m}^{2}\). Therefore, it is natural to divide every step of the accelerated algorithm into two phases:

* \(q\gets x^{k}-h\partial^{-}f(x^{k})\) (subgradient phase). If sign changes in the components occur, we adopt the same procedures as in Algorithm 1.
* \(q^{\prime}\gets q+\sqrt{h}p^{k}\) (momentum phase). Also in this phase, we take particular care of sign changes of the components.

Moreover, we use the general principle that "in the _momentum phase_ we do not modify null components". This is motivated by the fact that the momentum variable carries information about the previous values of \(\partial^{-}f\). However, since \(\partial_{i}^{-}f\) typically undergoes a sudden modification when the \(i\)-th component of the state variable \(x^{k}\) vanishes or changes sign, the information contained in \(p_{i}^{k}\) could be of little use, if not misleading. For this reason, in Algorithm 2 we set \(p_{i}^{k}=0\) if the \(i\)-th component of the state variable is null, or if it has been involved in a sign change. See, respectively, line 10 and line 17 of the accelerated subgradient method reported in Algorithm 2. Finally, in virtue of (3.4) and the remarks done above, we observe that a natural choice for the step-size is \(h=1/L\), where \(L\) is the Lipschitz constant of the gradient of the regular term \(g:\mathbb{R}^{n}\to\mathbb{R}\).
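Before presenting the complete method, the plain conservative iteration (3.3) with the restart test of [17] can be sketched in a few lines of Python. The following simplified version is our illustration only: it omits the sign-change safeguards that Algorithm 2 introduces below, and `dmf` stands for a user-supplied routine computing \(\partial^{-}f\).

```python
import numpy as np

def conservative_scheme(dmf, x0, h, iters):
    """Plain conservative iteration (3.3) with the adaptive restart of [17].

    dmf(x): minimal-norm subgradient d^-f(x) (Definition 1, Eq. (1.8));
    h:      squared symplectic step, h = h_m^2, e.g. 1/L.
    The sign-change handling of Algorithm 2 is deliberately omitted here.
    """
    hm = np.sqrt(h)
    x = x0.astype(float).copy()
    p = np.zeros_like(x)
    for _ in range(iters):
        p = p - hm * dmf(x)          # momentum update (symplectic Euler)
        x = x + hm * p               # position update, cf. (3.4)
        if np.dot(dmf(x), p) > 0.0:  # restart: momentum points "uphill"
            p = np.zeros_like(p)
    return x
```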
```
1:  \(x\gets x^{0}\)
2:  \(p\gets 0\in\mathbb{R}^{n}\)
3:  \(k\gets 1\)
4:  while \(k\leq\max_{\text{iter}}\) do
5:      \(x^{\text{temp}}\gets x-h\partial^{-}f(x)\)
6:      if \(\operatorname{sign}(x^{\text{temp}}_{i}\cdot x_{i})\geq 0,\ \forall i=1,\ldots,n\) then
7:          \(q^{\text{old}}\gets x\)
8:          \(x\gets x^{\text{temp}}\)
9:          \(q\gets x\)
10:         \(p_{i}\gets 0\), if \(x^{\text{temp}}_{i}=0\)
11:     else
12:         \(I\leftarrow\{i\mid\operatorname{sign}(x^{\text{temp}}_{i}\cdot x_{i})\leq 0\}\)
13:         \(x^{\prime}_{i}\gets x_{i},\ \forall i\in I\)
14:         \(x_{i}\gets 0,\ \forall i\in I\)
15:         \(v^{\prime}_{i}\gets-h\partial^{-}_{i}f(x),\ \forall i\in I\)
16:         \(v^{\prime\prime}_{i}\gets-h\partial^{-}_{i}f(x^{\prime}),\ \forall i\in I\)
17:         \(p_{i}\gets 0,\ \forall i\in I\)
18:         \(x^{\prime\prime}\gets x^{\prime}+v^{\prime\prime}\)
19:         \(q^{\text{old}}\gets x^{\prime}\)
20:         if \(f(x^{\prime})<f(x^{\prime\prime})\) then
21:             \(x\gets x^{\prime}\)
22:             \(q\gets x^{\prime}\)
23:             \(p\gets 0\)
24:         else
25:             \(x\gets x^{\prime\prime}\)
26:             \(q\gets x^{\prime\prime}\)
27:         endif
28:     endif
29:     \(q^{\prime}\gets q+\sqrt{h}p\)
30:     if \(\exists i=1,\ldots,n:\operatorname{sign}(q^{\prime}_{i}\cdot x_{i})<0\) then
31:         \(J\leftarrow\{i\mid\operatorname{sign}(q^{\prime}_{i}\cdot x_{i})<0\}\)
32:         \(q^{\prime}_{i}\gets 0,\ \forall i\in J\)
33:         \(p\leftarrow(q^{\prime}-q)/\sqrt{h}\)
34:     endif
35:     \(r\leftarrow\langle\widetilde{\partial}f(q^{\prime}),p\rangle\)
36:     if \(r\leq 0\) then
37:         \(p\gets p+(q-q^{\text{old}})/\sqrt{h}\)
38:     else
39:         \(q^{\prime}\gets q\)
40:         \(p\leftarrow(q-q^{\text{old}})/\sqrt{h}\)
41:     endif
42:     \(x\gets q^{\prime}\)
43:     \(k\gets k+1\)
44: endwhile
```
**Algorithm 2** Accelerated conservative subgradient method for \(\ell_{1}\)-composite optimization

_Remark 3_.: In line 35 of Algorithm 2 we have introduced the quantity \(\widetilde{\partial}f(q^{\prime})\). We recall that \(f(x)=g(x)+\gamma|x|_{1}\), where \(g\) is convex and \(C^{1}\)-regular, and \(\gamma>0\). Using the same notations as in Algorithm 2, \(\widetilde{\partial}f(q^{\prime})=(\widetilde{\partial}_{1}f(q^{\prime}),\ldots,\widetilde{\partial}_{n}f(q^{\prime}))\) is defined as follows:

\[\widetilde{\partial}_{i}f(q^{\prime}):=\begin{cases}\partial_{i}g(q^{\prime})+\gamma&\text{if }(q_{i}>0)\vee(q^{\prime}_{i}>0),\\ \partial_{i}g(q^{\prime})-\gamma&\text{if }(q_{i}<0)\vee(q^{\prime}_{i}<0),\\ \partial_{i}g(q^{\prime})&\text{if }(q_{i}=0)\wedge(q^{\prime}_{i}=0),\end{cases} \tag{3.5}\]

for every \(i=1,\ldots,n\). We observe that \(\widetilde{\partial}f(q^{\prime})\) is well-defined for every component since, by construction, \(\operatorname{sign}(q_{i}\cdot q^{\prime}_{i})\geq 0\) for every \(i=1,\ldots,n\).

_Remark 4_.: We observe that the computation of the quantity \(r\) at line 35 requires an evaluation of the subdifferential of \(f\) at the point \(q^{\prime}\). From a computational viewpoint, the demanding part is the evaluation of the gradient of the regular term, i.e., \(\nabla g(q^{\prime})\). However, if \(r\leq 0\), then \(x\gets q^{\prime}\) (line 42), and \(\nabla g(q^{\prime})\) can be stored and re-used for the construction of \(\partial^{-}f(x)\) at the subsequent iteration.

We can prove the following comparative result on the decrease of the objective function \(f\).

**Proposition 3.1**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a function such that \(f(x)=g(x)+\gamma|x|_{1}\) for every \(x\in\mathbb{R}^{n}\), where \(\gamma>0\) and \(g:\mathbb{R}^{n}\to\mathbb{R}\) is a \(C^{1}\) convex function such that \(\nabla g\) is \(L\)-Lipschitz continuous, with \(L>0\)._
Let us consider \(x^{0}\in\mathbb{R}^{n}\) as the initial point, and let \(q^{\prime}\) be the output produced by an iteration of Algorithm 2 and let \(q\) be the output of an iteration of Algorithm 1 (see line 29 of Algorithm 2). Then, we have that \(f(q^{\prime})\leq f(q)\)._ Proof.: Using the same notations as in Algorithm 2, we have that \(q\) is obtained from \(x^{0}\) with an iteration of Algorithm 1 (see line 22 and line 26 of Algorithm 2). If \(p=0\), then there is nothing to prove. On the other hand, owing to the if statement at lines 30-34, we have that \(\operatorname{sign}(q^{\prime}_{i}\cdot q_{i})\geq 0\) for every \(i=1,\ldots,n\). We further observe that \(q^{\prime}=q+\sqrt{h}p\) holds in every case (see line 29 and line 33). Let us define \[\xi^{+} =\{i=1,\ldots,n\mid(q_{i}>0)\vee(q^{\prime}_{i}>0)\},\] \[\xi^{-} =\{i=1,\ldots,n\mid(q_{i}<0)\vee(q^{\prime}_{i}<0)\},\] \[\xi^{0} =\{i=1,\ldots,n\mid(q_{i}=0)\wedge(q^{\prime}_{i}=0)\},\] and the set \[Z_{\xi^{\pm},\xi^{0}}:=\{z\in\mathbb{R}^{n}\mid z_{i}\geq 0\text{ if }i\in\xi^{+},z_{i} \leq 0\text{ if }i\in\xi^{-},z_{i}=0\text{ if }i\in\xi^{0}\}.\] Then, we have that \(q,q^{\prime}\in Z_{\xi^{\pm},\xi^{0}}\), and that the restrictions \(f|_{Z_{\xi^{\pm},\xi^{0}}}\equiv\tilde{f}|_{Z_{\xi^{\pm},\xi^{0}}}\), where \(\tilde{f}:\mathbb{R}^{n}\to\mathbb{R}\) is a \(C^{1}\)-regular and convex function that satisfies: \[z\mapsto\tilde{f}(z)=g(z)+\gamma\sum_{i\in\xi^{+}}z_{i}-\gamma\sum_{i\in\xi^{ -}}z_{i}.\] Moreover, from (3.5) we read that \(\nabla\tilde{f}(q^{\prime})=\widetilde{\partial}f(q^{\prime})\). Since \(\tilde{f}\) is convex, we have that \[\tilde{f}(q)\geq\tilde{f}(q^{\prime})+\langle\nabla\tilde{f}(q^{\prime}),- \sqrt{h}p\rangle\] and, recalling that \(f(q)=\tilde{f}(q)\) and \(f(q^{\prime})=\tilde{f}(q^{\prime})\), it follows that the condition \(\langle\nabla\tilde{f}(q^{\prime}),p\rangle\leq 0\) implies \(f(q^{\prime})\leq f(q)\). On the other hand, if \(\langle\nabla\tilde{f}(q^{\prime}),p\rangle>0\), then we reset \(q^{\prime}=q\) (see line 39), and \(f(q^{\prime})=f(q)\). _Remark 5_.: Under the same assumptions as Theorem 2.1, i.e., when \(g:\mathbb{R}^{n}\to\mathbb{R}\) is \(\mu\)-strongly convex, from Proposition 3.1 it follows that Algorithm 2 achieves a linear convergence rate. Indeed, if we denote by \((x^{k})_{k\geq 0}\) the sequence generated by Algorithm 2 setting the step-size \(h\) equal to the inverse of the Lipschitz constant of \(\nabla g\), then for every \(k\geq 0\) we have: \[f(x^{k+1})-f(x^{*})\leq f(q)-f(x^{*})\leq\kappa\left(f(x^{k})-f(x^{*})\right),\] where \(\kappa\in(0,1)\) is the constant appearing in Theorem 2.1, and \(q\in\mathbb{R}^{n}\) is the output of a single iteration of Algorithm 1 with starting point \(x^{k}\). However, we insist on the fact that Proposition 3.1 should be understood as a _qualitative_ result, relating the per-iteration decrease of Algorithm 2 with the one of Algorithm 1. ## 4. Numerical experiments In this section we present some numerical experiments involving composite objective functions with \(\ell_{1}\)-regularization. We tested Algorithm 1 and its accelerated version Algorithm 2 on objective functions of the form \(f(x)=g(x)+\gamma|x|_{1}\), where \(g:\mathbb{R}^{n}\to\mathbb{R}\) is convex and regular. We considered both the strongly convex and the non-strongly case. 
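All of the algorithms tested below rely on the minimal-norm subgradient (1.8) as their basic primitive. The following NumPy sketch (our illustration; `grad_g` is a user-supplied gradient of the smooth term) shows how compactly it can be evaluated:

```python
import numpy as np

def dminus_f(x, grad_g, gamma):
    """Minimal-norm subgradient d^-f(x) of f(x) = g(x) + gamma*|x|_1, cf. (1.8)."""
    dg = grad_g(x)
    out = dg + gamma * np.sign(x)   # components with x_i != 0
    zero = (x == 0)
    # components with x_i == 0: soft-thresholding of the partial derivative
    out[zero] = np.sign(dg[zero]) * np.maximum(np.abs(dg[zero]) - gamma, 0.0)
    return out
```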
For each class of problems, we compared the performances of our methods with ISTA, i.e., the standard _forward-backward_ thresholding algorithm for \(\ell_{1}\)-regularized problems (see, e.g., [5]). In [3] an accelerated version of ISTA (called Fast ISTA, or FISTA) was proposed, and in [11] it was observed that the convergence rate of FISTA can be further improved by means of adaptive restarts. We use the restarted FISTA described in [11] as the benchmark for the experiments of this part. We also report the performances of the conservative-restart algorithm introduced in [17]. The results are illustrated in Figure 2.

**Quadratic function with \(\ell_{1}\)-regularization.** We considered a function \(f:\mathbb{R}^{n}\to\mathbb{R}\) of the form

\[f(x)=\frac{1}{2}x^{T}Mx+b^{T}x+\gamma|x|_{1},\]

where \(M\in\mathbb{R}^{n\times n}\) is a symmetric positive definite matrix with eigenvalues sampled uniformly in the interval \([0.02,100]\), and \(b\in\mathbb{R}^{n}\) was generated with a Gaussian distribution \(\mathcal{N}(0,4)\). We set \(\gamma=0.25\,|b|_{\infty}\), and we sampled the starting point using \(\mathcal{N}(0,2)\). We fixed the dimension \(n=1000\). We observe that the objective function \(f\) is strongly convex.

**Quadratic regression with \(\ell_{1}\)-regularization.** We considered a _sparse_ quadratic regression problem. We generated a sparse random vector \(y\in\mathbb{R}^{n}\) whose components were non-zero with probability \(p=0.3\). These values were sampled using a uniform distribution over \([0,1]\). We took a matrix \(M\in\mathbb{R}^{m\times n}\) whose singular values were uniformly sampled in \([1,10]\), and we set \(b=My+w\), where \(w\in\mathbb{R}^{m}\) represents Gaussian noise distributed as \(\mathcal{N}(0,0.1)\). Finally, the objective function had the form

\[f(x)=\frac{1}{2}|Mx-b|_{2}^{2}+\gamma|x|_{1},\]

with \(\gamma=1\). We used \(n=1000\) and \(m=500\), and we sampled the components of the initial guess with \(\mathcal{N}(0,2)\). This problem is non-strongly convex.

**Logistic regression with \(\ell_{1}\)-regularization.** We considered a sparse logistic regression problem. We constructed \(x^{\rm real}\in\mathbb{R}^{n}\) with the following procedure: each component was zero with probability \(p=0.8\), and, if nonzero, its value was sampled using a standard normal \(\mathcal{N}(0,1)\). Then, we independently sampled the entries of \(b=(b_{1},\ldots,b_{m})\in\{0,1\}^{m}\) using the distribution \(\mathbb{P}(b_{i}=1)=\left(1+\exp(-\langle M_{i},x^{\rm real}\rangle)\right)^{-1}\) for every \(i=1,\ldots,m\), where \(M_{1},\ldots,M_{m}\in\mathbb{R}^{n}\) are the rows of a matrix \(M\in\mathbb{R}^{m\times n}\) with independent components generated with \(\mathcal{N}(0,1)\). Assuming that the matrix \(M\) and the measurements \(b\) are known, the sparse log-likelihood maximization can be formulated as the problem of minimizing

\[f(x)=g(x)+\gamma|x|_{1},\quad g(x)=\sum_{i=1}^{m}\Big{[}(1-b_{i})\langle M_{i},x\rangle+\log(1+\exp(-\langle M_{i},x\rangle))\Big{]},\]

where we set \(\gamma=0.25|\nabla g(0)|_{\infty}\). We used \(n=100\) and \(m=500\), and we sampled the components of the initial guess with \(\mathcal{N}(0,2)\). This problem is convex but not strongly convex.
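Under our reading of the setup just described, the sparse logistic regression instance can be generated as follows (a sketch with hypothetical variable names; the `dminus_f` routine from the previous snippet can then be plugged into any of the tested methods):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p_zero = 100, 500, 0.8

# ground-truth sparse vector: zero w.p. 0.8, standard normal otherwise
x_real = rng.standard_normal(n) * (rng.random(n) >= p_zero)

# design matrix and Bernoulli labels with P(b_i = 1) = sigmoid(<M_i, x_real>)
M = rng.standard_normal((m, n))
b = (rng.random(m) < 1.0 / (1.0 + np.exp(-M @ x_real))).astype(float)

def g(x):
    """Smooth term of the sparse logistic log-likelihood objective."""
    z = M @ x
    return np.sum((1.0 - b) * z + np.log1p(np.exp(-z)))

def grad_g(x):
    z = M @ x
    return M.T @ ((1.0 - b) - 1.0 / (1.0 + np.exp(z)))  # d/dz of each summand

gamma = 0.25 * np.abs(grad_g(np.zeros(n))).max()  # gamma = 0.25 * |grad g(0)|_inf
```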
**LogSumExp with \(\ell_{1}\)-regularization.** We considered the function \(f:\mathbb{R}^{n}\to\mathbb{R}\) defined as follows:

\[f(x)=g(x)+\gamma|x|_{1},\quad g(x)=r\log\left(\sum_{i=1}^{k}\exp\left(\frac{\langle M_{i},x\rangle-b_{i}}{r}\right)\right),\]

where \(M_{1},\ldots,M_{k}\in\mathbb{R}^{n}\) are the rows of the matrix \(M\in\mathbb{R}^{k\times n}\), and \(b\in\mathbb{R}^{k}\). The entries of \(M\) and \(b\) were independently sampled using a Gaussian \(\mathcal{N}(0,1)\), as well as the components of the starting point. We set \(r=5\), and we used \(n=200\) and \(k=500\). This is another example of a non-strongly convex problem.

Figure 2. Experiments with \(\ell_{1}\)-regularization. We compared the performances of Algorithm 1 (red) and Algorithm 2 (black) with ISTA (dashed blue), restarted FISTA (magenta) and the conservative-restart scheme proposed in [17] (dashed black). Each problem was solved 50 times, and the plots were obtained by taking the average. The convergence rate is measured by evaluating \(|\partial^{-}f|_{2}\) at each iteration. We observe that the non-accelerated algorithms, i.e., Algorithm 1 and ISTA, always have similar performances. Restarted FISTA is the best performer in the strongly convex case, while Algorithm 2 seems to be the most efficient with non-strongly convex objectives. If compared to the restart-conservative scheme of [17], we observe that Algorithm 2 is much faster in the early phases of the minimization process.

## Conclusions

In this paper, we considered _composite_ convex optimization problems with \(\ell_{1}\)-penalization, and we formulated a subgradient algorithm with constant step-size. In the case of a strongly convex objective, we established a linear convergence result for the method. Using dynamical system considerations, we proposed an accelerated version of the subgradient algorithm, and we observed in numerical experiments that it can effectively compete with one of the best-performing schemes for this kind of problem, i.e., FISTA combined with an adaptive restart strategy. For future work, it could be interesting to design subgradient algorithms for composite optimization involving a non-smooth term of the form \(x\mapsto|Ax|_{1}\). In this case, a challenging point consists in finding strategies for computing \(\partial^{-}f\) (or a suitable approximation) that could be practical for high-dimensional settings.

### Acknowledgments

A.S. acknowledges partial support from INdAM-GNAMPA.
2308.07235
KD-Club: An Efficient Exact Algorithm with New Coloring-based Upper Bound for the Maximum k-Defective Clique Problem
The Maximum k-Defective Clique Problem (MDCP) aims to find a maximum k-defective clique in a given graph, where a k-defective clique is a relaxation clique missing at most k edges. MDCP is NP-hard and finds many real-world applications in analyzing dense but not necessarily complete subgraphs. Exact algorithms for MDCP mainly follow the Branch-and-bound (BnB) framework, whose performance heavily depends on the quality of the upper bound on the cardinality of a maximum k-defective clique. The state-of-the-art BnB MDCP algorithms calculate the upper bound quickly but conservatively as they ignore many possible missing edges. In this paper, we propose a novel CoLoring-based Upper Bound (CLUB) that uses graph coloring techniques to detect independent sets so as to detect missing edges ignored by the previous methods. We then develop a new BnB algorithm for MDCP, called KD-Club, using CLUB in both the preprocessing stage for graph reduction and the BnB searching process for branch pruning. Extensive experiments show that KD-Club significantly outperforms state-of-the-art BnB MDCP algorithms on the number of solved instances within the cut-off time, with a much smaller search tree and shorter solving time on various benchmarks.
Mingming Jin, Jiongzhi Zheng, Kun He
2023-08-14T16:17:29Z
http://arxiv.org/abs/2308.07235v2
# KD-Club: An Efficient Exact Algorithm with New Coloring-based Upper Bound for the Maximum \(k\)-Defective Clique Problem

###### Abstract

The Maximum \(k\)-Defective Clique Problem (MDCP) aims to find a maximum \(k\)-defective clique in a given graph, where a \(k\)-defective clique is a relaxation clique missing at most \(k\) edges. MDCP is NP-hard and finds many real-world applications in analyzing dense but not necessarily complete subgraphs. Exact algorithms for MDCP mainly follow the Branch-and-bound (BnB) framework, whose performance heavily depends on the quality of the upper bound on the cardinality of a maximum \(k\)-defective clique. The state-of-the-art BnB MDCP algorithms calculate the upper bound quickly but conservatively as they ignore many possible missing edges. In this paper, we propose a novel Coloring-based Upper Bound (CLUB) that uses graph coloring techniques ingeniously to detect independent sets so as to detect missing edges ignored by the previous methods. We then develop a new BnB algorithm for MDCP, called KD-Club, using CLUB in both the preprocessing stage for graph reduction and the BnB searching process for branch pruning. Extensive experiments show that KD-Club significantly outperforms state-of-the-art BnB MDCP algorithms on the number of solved instances within the cut-off time, with a much smaller search tree and shorter solving time on various benchmarks.

## 1 Introduction

Investigating structured subgraphs is a practical task with numerous demands in many optimization problems and real-world applications. The clique is a famous and well-studied structured subgraph where any two distinct vertices are restricted to be adjacent. However, in many real-world applications, such as biological networks [22], social network analysis [1], and community detection [14, 15], dense subgraphs need not be complete but may miss a few connections. Therefore, many relaxations of the clique, such as the quasi-clique [1], \(k\)-plex [1], \(k\)-defective clique [23], etc., have been proposed. Among these relaxations, the \(k\)-defective clique allows missing at most \(k\) edges over a clique, and the \(k\)-plex allows missing at most \(k-1\) edges for each vertex. Obviously, the relaxation of the \(k\)-defective clique is between the clique and the \(k\)-plex, _i.e._, a clique must be a \(k\)-defective clique, and a \(k\)-defective clique must be a \((k+1)\)-plex.

The clique has been well studied in the past decades. Recently, the \(k\)-plex has also attracted much attention [1, 14, 15, 16, 17], yet there are relatively few studies on the \(k\)-defective clique [1, 18, 19], which also has wide applications, such as transportation [2], clustering [15], and protein interaction prediction [23]. In this paper, we address the Maximum \(k\)-Defective Clique Problem (MDCP), which aims to find the maximum \(k\)-defective clique in a given graph, and we focus on solving it exactly. Since the \(k\)-defective clique is a relaxation of the clique, solving MDCP is as hard as finding the maximum clique in a given graph, which is a famous NP-hard problem.

Some exact algorithms for MDCP have been proposed, coming up with a series of efficient techniques, such as reduction rules and upper bounds. Representative methods include the generic RDS algorithm [13, 14], a branch-and-price framework [15] for various relaxed clique problems including MDCP, and the most recent MADEC\({}^{+}\)[14] and KDBB [18] algorithms that proposed some new upper bounds and reduction rules.
As the state-of-the-art exact algorithms for MDCP, both MADEC\({}^{+}\) and KDBB follow the branch-and-bound (BnB) framework [16, 17]. A BnB algorithm for MDCP usually maintains a growing partial solution \(S\) (_i.e._, a \(k\)-defective clique) and the corresponding candidate vertex set \(C\). Reduction or pruning is performed if the upper bound on the size of the maximum \(k\)-defective clique containing \(S\) is no larger than the size of the maximum \(k\)-defective clique found so far (_i.e._, the lower bound). Therefore, the quality of the upper bound greatly influences the algorithm's efficiency.

MADEC\({}^{+}\) proposes an upper bound based on graph coloring. Note that an independent set in a graph is a vertex set where any two distinct vertices are non-adjacent. MADEC\({}^{+}\) uses graph coloring methods to assign each vertex a color under the constraint that adjacent vertices cannot share the same color, so as to partition the subgraph of \(G\) induced by \(V\backslash S\) into independent sets, and then calculates the upper bound. Such an upper bound regards \(S\) and \(V\backslash S\) as entirely independent and grows significantly as \(k\) increases. KDBB proposes some new upper bounds that focus on the missing edges between candidate vertices and vertices in \(S\), which grow only mildly as \(k\) increases. Thus, KDBB shows excellent performance on MDCP instances with large values of \(k\). The upper bounds in KDBB also lead to efficient reduction rules, helping KDBB reduce massive sparse graphs significantly. However, these upper bounds are still not very tight, since they ignore the missing edges between candidate vertices. In other words, they regard the entire candidate set \(C\) as a clique, which is overly conservative.

To address the above issues, we propose a new upper bound based on graph coloring, called **Co**Loring-based **U**pper **B**ound (**CLUB**), that considers the missing edges not only between vertices in \(C\) and vertices in \(S\), but also among the vertices in \(C\) themselves. The main idea is as follows: for a subset \(C^{\prime}\) of the candidate set \(C\), we use a graph coloring method to partition \(C^{\prime}\) into \(r\) independent sets. Then, when we want to add \(t\) vertices (suppose \(r<t\leq|C^{\prime}|\)) from \(C^{\prime}\) to \(S\), besides the missing edges that might exist between the \(t\) vertices and vertices in \(S\), there must be missing edges among the \(t\) added vertices, since some of the added vertices must belong to the same independent set. Since CLUB considers the missing edges more thoroughly, it is strictly no worse than the upper bounds proposed in KDBB.

Our proposed CLUB is a generic approach that can be used not only during the BnB searching process to prune branches, but also in preprocessing for graph reduction. Based on CLUB, we propose a new BnB algorithm for MDCP, called **KD-Club** (maximum **K**-Defective clique algorithm with **CLUB**). Since the upper bound in \(\text{MADEC}^{+}\) grows significantly as \(k\) increases, \(\text{MADEC}^{+}\) does not work well for MDCP instances with large \(k\) values. Since the reduction rules in KDBB cannot help it reduce dense graphs, KDBB does not work well for MDCP instances based on dense graphs. Our proposed CLUB significantly outperforms the previous upper bounds for MDCP by considering the connectivity between vertices more thoroughly, helping the BnB algorithm significantly reduce the search space.
Therefore, KD-Club has excellent performance and robustness for solving MDCP instances based on either massive sparse or dense graphs, with either small or large \(k\) values, as shown in our experiments.

## 2 Preliminaries

This section introduces some definitions and two reduction rules used in the preprocessing of the KDBB algorithm.

### Definitions

Given an undirected graph \(G=(V,E)\), where \(V\) is the vertex set and \(E\) the edge set, the density of \(G\) is \(2|E|/(|V|(|V|-1))\). Each edge \(e\in E\) is denoted by its two endpoints \((u,v)\). We define \(\overline{G}=(V,\overline{E})\) as the complementary graph of \(G\), _i.e._, \(\overline{E}=\{(u,v)|u\in V\wedge v\in V\wedge(u,v)\notin E\}\). We denote the set of vertices adjacent to \(v\) in \(G\) as \(N_{G}(v)\), where \(|N_{G}(v)|\) is the degree of \(v\) in \(G\), and the set of common neighbors of two vertices \(u,v\) as \(N_{G}(u,v)=N_{G}(u)\cap N_{G}(v)\). Moreover, we define \(N_{G}[v]=N_{G}(v)\cup\{v\}\) and \(N_{G}[u,v]=N_{G}(u,v)\cup\{u,v\}\). Given a vertex set \(V^{\prime}\subseteq V\), \(G[V^{\prime}]\) is defined as the subgraph induced by \(V^{\prime}\). Given a positive integer \(k\), \(V^{\prime}\subseteq V\) is a \(k\)-defective clique if \(G[V^{\prime}]\) has at least \(\binom{|V^{\prime}|}{2}-k\) edges.

We use \(S\subseteq V\) to denote a growing partial solution (_i.e._, a growing \(k\)-defective clique) in BnB algorithms, and \(C\subseteq V\backslash S\) as the corresponding candidate vertex set of \(S\). The size of the maximum \(k\)-defective clique in \(G\) that includes all vertices in \(S\) is denoted by \(\omega_{G,k}(S)\), and the size of the maximum \(k\)-defective clique in \(G\) is \(\omega_{G,k}(\emptyset)\).

### Revisiting the Reduction Rules of KDBB

The KDBB algorithm defines the function \(rem_{G,k}(S,v)=k-|E(\overline{G}[S\cup\{v\}])|\) as the remaining number of allowed missing edges of \(S\cup\{v\}\) to form a feasible \(k\)-defective clique, and the function \(res_{G,k}(S,v)=\min\{rem_{G,k}(S,v),|C\backslash N_{G[C\cup\{v\}]}[v]|\}\) as the maximum number of extra vertices in \(C\) non-adjacent to \(v\) that are allowed to be added to \(S\cup\{v\}\). KDBB proposes an upper bound of \(\omega_{G,k}(S\cup\{v\})\) calculated as \(UB_{G,k}(S,v)=|S\cup\{v\}|+|N_{G[C]}(v)|+res_{G,k}(S,v)\), which effectively treats \(C\) as a clique, so that adding a vertex from \(C\backslash N_{G[C\cup\{v\}]}[v]\) to \(S\) introduces only one more missing edge.

Similarly, KDBB defines the function \(rem_{G,k}(S,u,v)=k-|E(\overline{G}[S\cup\{u,v\}])|\) as the remaining number of allowed missing edges of \(S\cup\{u,v\}\) to form a feasible \(k\)-defective clique, and the function \(res_{G,k}(S,u,v)=\min\{rem_{G,k}(S,u,v),|C\backslash N_{G[C\cup\{u,v\}]}[u,v]|\}\) as the maximum number of extra vertices in \(C\) non-adjacent to \(u\) or \(v\) that are allowed to be added to \(S\cup\{u,v\}\). KDBB then proposes another upper bound of \(\omega_{G,k}(S\cup\{u,v\})\) calculated as \(UB_{G,k}(S,u,v)=|S\cup\{u,v\}|+|N_{G[C]}(u,v)|+res_{G,k}(S,u,v)\), again regarding \(C\) as a clique.

Given a lower bound \(LB\) of \(\omega_{G,k}(\emptyset)\), KDBB proposes the following two reduction rules based on the above bounds.

**Rule 1.** Remove vertex \(v\) from \(G\) if it satisfies \(UB_{G,k}(\emptyset,v)\leq LB\).

**Rule 2.** Remove edge \((u,v)\) from \(G\) if it satisfies \(UB_{G,k}(\emptyset,u,v)\leq LB\).

## 3 Coloring-based Upper Bound

This section introduces our proposed upper bound, CLUB.
We first introduce the main idea and definition of CLUB, then present an example for illustration. In the end, we introduce our ColorBound algorithm for calculating the CLUB.

### Main Idea

Suppose \(\{C_{0},\cdots,C_{k}\}\) is a partition of \(C\) _w.r.t._ \(S\), where \(C_{i}\) is the set of vertices with \(i\) non-adjacent vertices in \(S\), _i.e._, \(C_{i}=\{v|v\in C\wedge|N_{G}(v)\cap S|=|S|-i\}\). In other words, adding each vertex \(v\in C_{i}\) to \(S\) leads to \(i\) missing edges between \(v\) and vertices in \(S\). Then, we can summarize the following important lemma, which underlies our main idea.

**Lemma 1**.: _Suppose \(C_{i}\) can be partitioned into \(r_{i}\) independent sets \(\{I_{i,1},\cdots,I_{i,r_{i}}\}\); adding any \(1\leq t\leq|C_{i}|\) vertices in \(C_{i}\) to \(S\) leads to at least \(c\times\frac{d(d+1)}{2}+(r_{i}-c)\times\frac{d(d-1)}{2}\) more missing edges between the \(t\) added vertices, where \(c=t\mod r_{i}\) and \(d=\lfloor t/r_{i}\rfloor\)._

Proof.: Suppose the \(t\) vertices contain \(d_{j}\) vertices in \(I_{i,j}\); then the number of missing edges between the \(t\) vertices is at least \(\sum_{j=1}^{r_{i}}\frac{d_{j}(d_{j}-1)}{2}=\frac{1}{2}\left(\sum_{j=1}^{r_{i}}d_{j}^{2}-t\right)\), where equality occurs only when vertices of \(C_{i}\) in different independent sets are all adjacent, _i.e._, when any vertices from different independent sets form a clique. Obviously, to make the above lower bound as small as possible, every \(d_{j}\) is expected to be \(t/r_{i}\). Since \(d_{j}\) must be an integer, every \(d_{j}\) should be either \(\lceil t/r_{i}\rceil\) or \(\lfloor t/r_{i}\rfloor\) to minimize the lower bound. In this case, there are \(t\mod r_{i}\) independent sets containing \(\lceil t/r_{i}\rceil\) of the \(t\) vertices, and the remaining \(r_{i}-(t\mod r_{i})\) independent sets contain \(\lfloor t/r_{i}\rfloor\) of them, which results in the lower bound in Lemma 1.

Lemma 1 provides a lower bound on the number of increased missing edges. By referring to such a lower bound, we can calculate an upper bound on the number of vertices that \(C\) can provide for \(S\) to form a feasible \(k\)-defective clique with at most \(k\) missing edges, _i.e._, the proposed CLUB. Lemma 1 indicates that the smaller the value of \(r_{i}\), the more missing edges are incurred by adding vertices in \(C_{i}\) to \(S\). KDBB effectively takes \(r_{i}=|C_{i}|\), since it regards \(C\) as a clique for the bound calculation, while our CLUB uses graph coloring methods to partition \(C_{i}\) and determine the value of \(r_{i}\). So CLUB is strictly no worse than the upper bounds in KDBB.

### Definition of CLUB

For each set \(C_{i}\) that can be partitioned into \(r_{i}\) independent sets, we re-partition \(C_{i}\) into \(m_{i}=\lceil|C_{i}|/r_{i}\rceil\) disjoint subsets \(\{I^{\prime}_{i,1},\cdots,I^{\prime}_{i,m_{i}}\}\). If \(|C_{i}|\mod r_{i}=0\), each of the \(m_{i}\) sets contains \(r_{i}\) vertices in \(C_{i}\). Otherwise, \(I^{\prime}_{i,m_{i}}\) contains \(|C_{i}|\mod r_{i}\) vertices in \(C_{i}\) and each of the remaining \(m_{i}-1\) sets contains \(r_{i}\) vertices in \(C_{i}\). We assume that each subset \(I^{\prime}_{i,j}\) is a clique, which occurs only when the vertices in \(I^{\prime}_{i,j}\) belong to different independent sets. We further assume that vertices in \(I^{\prime}_{i,j}\) (\(1<j\leq m_{i}\)) can be added to \(S\) only when all vertices in \(\{I^{\prime}_{i,1},\cdots,I^{\prime}_{i,j-1}\}\) have been added to \(S\). Actually, these assumptions lead to the lower bound introduced in Lemma 1.
In this case, adding each vertex \(v\in I^{\prime}_{i,j}\) to \(S\) leads to \(j-1\) missing edges between \(v\) and vertices in \(\{I^{\prime}_{i,1},\cdots,I^{\prime}_{i,j-1}\}\), and \(i\) missing edges between \(v\) and vertices in \(S\), _i.e._, \(j-1+i\) new missing edges in total. Suppose \(P_{l}=\cup_{j-1+i=l}I^{\prime}_{i,j}\); then, following the above assumptions, adding each vertex \(v\in P_{l}\) to \(S\) leads to \(l\) more missing edges. We define the function \(Sub(v)=l\) for \(v\in P_{l}\), _i.e._, the subscript of the subset that \(v\in C\) belongs to, and define an ordered set of \(C\) as \(ord(C)=\{v_{1},\cdots,v_{|C|}\}\) such that for any pair of \(v_{i}\) and \(v_{j}\), we have \(Sub(v_{i})\leq Sub(v_{j})\) if \(i<j\). With such an ordered set, we can calculate a lower bound on the number of increased missing edges caused by adding any \(t\) vertices of \(C\) to \(S\), which is defined as \(LB_{inc}(ord(C),t)=\sum_{i=1}^{t}Sub(v_{i})\). Finally, we define our CLUB of \(\omega_{G,k}(S)\) as follows.

\[CLUB_{G,k}(S)=|S|+\max\{i|0\leq i\leq|C|\wedge LB_{inc}(ord(C),i)\leq k-|E(\overline{G}[S])|\}. \tag{1}\]

### An Example for Illustration

In this subsection, we provide an example to show how the upper bound in KDBB and CLUB are calculated. Figure 1 illustrates a growing partial 1-defective clique \(S=\{v_{0}\}\) and its candidate set \(C=\{v_{1},v_{2},v_{3},v_{4},v_{5},v_{6}\}\), which can be partitioned into \(C_{0}=\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\) and \(C_{1}=\{v_{6}\}\). Suppose the current lower bound \(LB\) of \(\omega_{G,1}(\emptyset)\) is 6.

KDBB regards \(C_{0}\) and \(C_{1}\) as cliques, _i.e._, adding all vertices in \(C_{0}\) to \(S\) does not increase any missing edges, and adding each vertex in \(C_{1}\) to \(S\) leads to one more missing edge. Thus, KDBB obtains the upper bound of \(\omega_{G,1}(S)\) as \(|S|+6=7>LB\), and the current branch cannot be pruned, _i.e._, \(v_{0}\) will not be removed from the graph.

In CLUB, \(C_{0}\) and \(C_{1}\) are partitioned into 3 and 1 independent sets, respectively (_e.g._, \(I_{0,1}=\{v_{1},v_{4}\},I_{0,2}=\{v_{2}\},I_{0,3}=\{v_{3},v_{5}\}\), and \(I_{1,1}=\{v_{6}\}\)). Thus, we can re-partition \(C_{0}\) into \(I^{\prime}_{0,1}=\{v_{1},v_{2},v_{3}\},I^{\prime}_{0,2}=\{v_{4},v_{5}\}\), and \(C_{1}\) into \(I^{\prime}_{1,1}=\{v_{6}\}\). And we have \(P_{0}=I^{\prime}_{0,1}=\{v_{1},v_{2},v_{3}\},P_{1}=I^{\prime}_{0,2}\cup I^{\prime}_{1,1}=\{v_{4},v_{5},v_{6}\}\). According to Eq. 1, \(CLUB_{G,1}(S)=|S|+4=5<LB\), and the current branch can be pruned, _i.e._, \(v_{0}\) will be removed.

### The ColorBound Algorithm

We propose the ColorBound algorithm for practically calculating the proposed CLUB, as summarized in Algorithm 1. The algorithm first uses \(|S|\) to initialize the upper bound \(UB\) (line 1), then uses the Extract() function to extract subsets \(\{P_{0},\cdots,P_{k}\}\) from the candidate set \(C\) (line 3), such that adding each vertex in \(P_{l}\) to \(S\) leads to at least \(l\) more missing edges when adding the vertices sequentially according to \(ord(C)\). Note that subsets \(P_{l}\) with \(l>k\) are ignored, since there are at most \(k\) missing edges in a \(k\)-defective clique. After that, CLUB is calculated by simulating the process of adding vertices sequentially, until the budget of allowed missing edges cannot afford one more vertex or all the candidate vertices have been added (lines 4-9).
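For intuition, the per-set lower bound of Lemma 1 and the prefix bound \(LB_{inc}\) are straightforward to compute. The following Python sketch (our illustration, not the authors' implementation) mirrors the two formulas:

```python
def lemma1_lower_bound(t: int, r: int) -> int:
    """Lemma 1: minimum number of new missing edges among t vertices
    taken from a set that splits into r independent sets."""
    c, d = t % r, t // r
    return c * d * (d + 1) // 2 + (r - c) * d * (d - 1) // 2

def lb_inc(sub_of_ord_c, t: int) -> int:
    """LB_inc(ord(C), t): cumulative cost of the first t vertices of ord(C),
    where sub_of_ord_c[i] = Sub(v_{i+1}) is non-decreasing."""
    return sum(sub_of_ord_c[:t])

# e.g. 4 vertices from a set with 3 independent sets force >= 1 missing edge
assert lemma1_lower_bound(4, 3) == 1
```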
Function Extract() is summarized in Algorithm 2. It first partitions \(C\) into subsets \(\{C_{0},\cdots,C_{k}\}\) according to the number of missing edges between each vertex and the vertices in \(S\), and initializes each set \(P_{i}\) to \(\emptyset\) (lines 1-3). Then, for each subset \(C_{i}\neq\emptyset\), the algorithm sequentially colors each vertex in \(C_{i}\) with the minimum feasible color index, satisfying that adjacent vertices cannot share a color, and thus obtains the value of \(r_{i}\) (lines 4-6). Finally, the algorithm iteratively moves \(r_{i}\) vertices, _i.e._, set \(I^{\prime}_{i,j}\), from \(C_{i}\) to \(P_{j-1+i}\) until \(C_{i}\) is empty or \(j-1+i>k\) (lines 8-12), which means adding any vertex in \(I^{\prime}_{i,j}\) to \(S\) according to \(ord(C)\) leads to more than \(k\) missing edges.

```
Input: a graph \(G\), an integer \(k\), the current \(k\)-defective clique \(S\), the candidate set \(C\)
Output: subsets \(\{P_{0},\cdots,P_{k}\}\) of \(C\)
1  for \(i\gets 0:k\) do
2    \(C_{i}\leftarrow\{v|v\in C\wedge|N_{G}(v)\cap S|=|S|-i\}\);
3    initialize \(P_{i}\leftarrow\emptyset\);
4  for \(i\gets 0:k\) do
5    if \(C_{i}\neq\emptyset\) then
6      partition \(C_{i}\) into \(r_{i}\) independent sets by sequentially coloring the vertices;
7      \(j\gets 1\);
8      while \(|C_{i}|\geq r_{i}\wedge j-1+i\leq k\) do
9        \(I^{\prime}_{i,j}\leftarrow\) a set of \(r_{i}\) vertices in \(C_{i}\);
10       \(C_{i}\gets C_{i}\backslash I^{\prime}_{i,j}\), \(P_{j-1+i}\gets P_{j-1+i}\cup I^{\prime}_{i,j}\);
11       \(j\gets j+1\);
12     if \(j-1+i\leq k\) then \(P_{j-1+i}\gets P_{j-1+i}\cup C_{i}\);
13 return \(\{P_{0},\cdots,P_{k}\}\);
```
**Algorithm 2** Extract\((G,k,S,C)\)

The time complexity of either the ColorBound algorithm or the Extract() function is dominated by the graph coloring process. Thus, their time complexities are both \(O(D|C|)\), where \(D\) is the maximum degree of vertices in \(G\).

## 4 Branch and Bound Algorithm

We propose a new BnB algorithm for MDCP, called KD-Club, where a new preprocessing method based on CLUB is introduced to reduce the graph, and CLUB is also used to prune branches during the BnB process. We first present the main framework of KD-Club, and then introduce the preprocessing method and the BnB process, respectively.

### General Framework

The framework of KD-Club is summarized in Algorithm 3. KD-Club maintains a lower bound \(LB\) of the size of the maximum \(k\)-defective clique in the input graph \(G\), initialized by a method called _FastLB_ (line 1), which is also used in the MADEC\({}^{+}\) (Chen et al., 2021) and KDBB (Gao et al., 2022) algorithms for calculating an initial lower bound. After calculating \(LB\), the Preprocessing() function is called to reduce the graph (line 2), and the reduced graph is sent to the BnB() function to find the maximum \(k\)-defective clique (line 3). During the BnB process, \(LB\) is updated whenever a larger \(k\)-defective clique is found. After the BnB() function traverses the entire search tree, we have \(LB=\omega_{G,k}(\emptyset)\).

### Preprocessing Method

Preprocessing plays an essential role in solving massive sparse instances. Given a lower bound \(LB\) of \(\omega_{G,k}(\emptyset)\), we propose two new reduction rules based on CLUB.

**Rule 3.** Remove vertex \(v\) from \(G\) if it satisfies \(CLUB_{G,k}(\{v\})\leq LB\).

**Rule 4.** Remove edge \((u,v)\) from \(G\) if it satisfies \(CLUB_{G,k}(\{u,v\})\leq LB\).
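Purely as an illustration of how these two rules might be checked, the following hedged sketch applies them with a generic CLUB oracle. The `club(G, k, S)` signature and the networkx graph interface are our assumptions, not the paper's implementation.

```python
import networkx as nx

def reduce_with_rules_3_and_4(G: nx.Graph, k: int, LB: int, club) -> None:
    """Apply Rules 3 and 4 until no more removals fire; club(G, k, S) is
    assumed to return CLUB_{G,k}(S). Removals can enable further removals,
    hence re-enqueueing the affected neighbourhood."""
    vq = list(G.nodes)
    while vq:
        v = vq.pop()
        if v in G and club(G, k, {v}) <= LB:                 # Rule 3
            neighbours = list(G.neighbors(v))
            G.remove_node(v)
            vq.extend(neighbours)
    eq = list(G.edges)
    while eq:
        u, v = eq.pop()
        if G.has_edge(u, v) and club(G, k, {u, v}) <= LB:    # Rule 4
            G.remove_edge(u, v)
            eq.extend(list(G.edges(u)) + list(G.edges(v)))
```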
Algorithm 4 shows the detailed procedure of the Preprocessing method, where four functions based on Rules 1-4 are used to reduce the input graph \(G\). The functions \(check\_vertex\_with\_Rule1\) and \(check\_edge\_with\_Rule2\) are derived from KDBB and use Rules 1 and 2 to remove vertices and edges from \(G\), respectively. Similarly, the functions \(check\_vertex\_with\_Rule3\) and \(check\_edge\_with\_Rule4\) use our proposed Rules 3 and 4 to remove vertices and edges from \(G\). Since Rules 1 and 2 ignore the connectivity between vertices in the candidate set \(C\) (we have \(C=V\backslash\{v\}\) when trying to remove vertex \(v\)), they are computationally efficient for removing vertices with small degrees or edges whose endpoints have small degrees. Hence, our Preprocessing method uses Rules 1 and 2 to quickly remove vertices and edges that are _easy_ to remove, before using our Rules 3 and 4 to further reduce the graph until no more vertices and edges can be removed. Similarly, before calculating our CLUB to reduce the graph, we also try Rules 1 and 2 first to save computation time (lines 14 and 22). In other words, CLUB is computed only when the vertex or edge cannot be removed by Rules 1 and 2.

```
Input: a graph \(G\), an integer \(k\)
Output: the reduced graph \(G\)
1  \(G\gets check\_vertex\_with\_Rule1(G,k,LB)\);
2  \(G\gets check\_edge\_with\_Rule2(G,k,LB)\);
3  \(G\gets check\_vertex\_with\_Rule3(G,k,LB)\);
4  while true do
5    \(G^{\prime}\gets check\_vertex\_with\_Rule3(G,k,LB)\);
6    \(G^{\prime}\gets check\_edge\_with\_Rule4(G^{\prime},k,LB)\);
7    if \(G^{\prime}\) and \(G\) are the same then break;
8    else \(G\gets G^{\prime}\);
9  return \(G\);
10 Function \(check\_vertex\_with\_Rule3(G,k,LB)\)
11   \(Q\leftarrow\emptyset\), add all vertices in \(V\) to \(Q\);
12   while \(Q\) is not empty do
13     \(v\gets pop(Q)\), \(N_{v}\gets N_{G}(v)\);
14     apply Rules 1 and 3 on \(v\) with \(LB\) and \(k\);
15     if \(v\) is removed then
16       add all vertices in \(N_{v}\) to \(Q\);
17   return \(G\);
18 Function \(check\_edge\_with\_Rule4(G,k,LB)\)
19   \(Q\leftarrow\emptyset\), add all edges in \(E\) to \(Q\);
20   while \(Q\) is not empty do
21     \((u,v)\gets pop(Q)\);
22     apply Rules 2 and 4 on \((u,v)\) with \(LB\) and \(k\);
23     if \((u,v)\) is removed then
24       add all edges adjacent to \(u\) or \(v\) in \(E\) to \(Q\);
25   return \(G\);
```
**Algorithm 4** Preprocessing\((G,k)\)

### Branch and Bound Process

The BnB process is depicted in Algorithm 5. The algorithm first calls the function \(reduction()\) used in KDBB (Gao et al., 2022) to reduce the input graph \(G\) according to the current solution \(S\) (line 1), which removes each vertex \(v\in V\backslash S\) (resp. edge \((u,v)\in E\)) if \(S\cup\{v\}\) (resp. \(S\cup\{u,v\}\)) is not a feasible solution. After that, we have the candidate set \(C=V\backslash S\) (line 3). Then, the algorithm calculates CLUB and checks whether it is larger than the current lower bound (lines 4-5). If so, the algorithm continues to search the subtree; otherwise, the branch is pruned. For the selection of the branching vertex, we select the vertex in \(C\) with the minimum degree (ties are broken randomly), so as to find a larger \(LB\) quickly and reduce the tree size for the subsequent search. After selecting a branching vertex \(u\), the algorithm uses a binary branching strategy, that is, it either adds \(u\) to \(S\) or removes it from \(G\) (lines 7-11).
```
Input: a graph \(G\), an integer \(k\), the current \(k\)-defective clique \(S\), the branching vertex \(v\), the lower bound \(LB\)
Output: \(\omega_{G,k}(S)\)
1  if \(v\neq null\) then \(G\gets reduction(G,k,S,v,LB)\);
2  if \(|V|\leq LB\) then return \(LB\);
3  \(C\gets V\backslash S\);
4  \(ub\leftarrow\) ColorBound\((G,k,S,C)\);
5  if \(ub>LB\) then
6    select a vertex \(u\) in \(C\) with the minimum degree;
7    \(size\leftarrow\) BnB\((G,k,S\cup\{u\},u,LB)\);
8    if \(size>LB\) then \(LB\gets size\);
9    remove \(u\) from \(G\);
10   \(size\leftarrow\) BnB\((G,k,S,u,LB)\);
11   if \(size>LB\) then \(LB\gets size\);
12 return \(LB\);
```
**Algorithm 5** BnB\((G,k,S,v,LB)\)

## 5 Empirical Evaluation

In this section, we first introduce the benchmarks and algorithms (also called solvers) used in the experiments, then present and analyze the experimental results. All the algorithms were implemented in C++ and run on a server with an AMD EPYC 7H12 CPU, running the Ubuntu 18.04 Linux operating system. Since our machine is about 5-10 times faster than the machine used in KDBB (Gao et al., 2022), which set the cut-off time to 10,800 seconds per instance, we set the cut-off time to 1,800 seconds in our experiments.

### Benchmark Datasets

We evaluated the algorithms on four public datasets that are widely used in existing works.

* **Facebook**1: This dataset contains 114 massive sparse graphs derived from Facebook social networks, which is used in KDBB (Gao et al., 2022). Footnote 1: [https://networkrepository.com/socfb.php](https://networkrepository.com/socfb.php)
* **Realworld**2: This dataset contains 139 massive sparse graphs from the Network Data Repository, which is frequently used in studies related to relaxation clique models, including the \(k\)-defective clique and \(k\)-plex. Footnote 2: [http://lcs.ios.ac.cn/%7Ecais/Resource/realworld%20graphs.tar.gz](http://lcs.ios.ac.cn/%7Ecais/Resource/realworld%20graphs.tar.gz)
* **SNAP**3 and **DIMACS10**4: This dataset contains 39 graphs with up to \(1.05\times 10^{6}\) vertices from the Stanford large network dataset collection (SNAP) and the 10th DIMACS implementation challenge, which are used in both KDBB and MADEC\({}^{+}\). Footnote 3: [http://snap.stanford.edu/data/](http://snap.stanford.edu/data/)
* **DIMACS2**5: This dataset contains 49 dense graphs with up to 1,500 vertices from the 2nd DIMACS implementation challenge, which is used in MADEC\({}^{+}\). Footnote 4: [https://www.cc.gatech.edu/dimacs10/downloads.shtml](https://www.cc.gatech.edu/dimacs10/downloads.shtml) Footnote 5: [http://archive.dimacs.rutgers.edu/pub/challenge/graph/benchmarks/clique/](http://archive.dimacs.rutgers.edu/pub/challenge/graph/benchmarks/clique/)

For each graph, we generated six MDCP instances with \(k=1,3,5,10,15,20\), as KDBB did. Therefore, there are a total of \(6\times(114+139+39+49)=2046\) MDCP instances.

### Solvers

To evaluate the performance of our proposed KD-Club algorithm, we select the state-of-the-art BnB MDCP algorithms, MADEC\({}^{+}\) (Chen et al., 2021) and KDBB (Gao et al., 2022), as the baseline algorithms. To evaluate the effectiveness of CLUB in different stages of KD-Club, we generate two variant algorithms of KD-Club, called KD-Club\({}^{-}_{\text{Pre}}\) and KD-Club\({}^{-}_{\text{BnB}}\). Details are as follows.

* **MADEC\({}^{+}\)**: A BnB MDCP algorithm with a rough coloring-based upper bound that increases sharply with the increment of \(k\). It shows good performance on instances based on dense DIMACS2 graphs with small \(k\) values. The source code is available online.6
Footnote 6: [https://github.com/chenxiaoyu233/k-defective](https://github.com/chenxiaoyu233/k-defective)

* **KDBB**: The best-performing BnB MDCP algorithm for instances based on massive sparse graphs and instances with large \(k\) values. Since its code has not been open-sourced, we use our own implementation in the experiments, which shows better performance than the results reported in the literature.
* **KD-Club**: An implementation of our proposed algorithm.7 Footnote 7: Codes will be open-sourced upon publication.
* **KD-Club\({}^{-}_{\text{Pre}}\)**: A variant of KD-Club without CLUB during preprocessing, _i.e._, replacing the preprocessing in KD-Club with that in KDBB; equivalently, a variant of KDBB with the BnB searching process of KD-Club.
* **KD-Club\({}^{-}_{\text{BnB}}\)**: A variant of KD-Club without CLUB during BnB searching, _i.e._, replacing the BnB searching process in KD-Club with that in KDBB; equivalently, a variant of KDBB with the preprocessing method of KD-Club.

### Performance Comparison

We first compare KD-Club, KDBB, and MADEC\({}^{+}\) on all four benchmarks to evaluate the overall performance of these solvers. The results are summarized in Table 1, which presents the number of instances that each algorithm can solve within the cut-off time on each benchmark, grouped according to the \(k\) values. From the results, one can see that KD-Club solves considerably more instances than the baselines on most groups of benchmark instances, especially on instances with larger \(k\) values. Since the upper bound in MADEC\({}^{+}\) increases significantly with the increment of \(k\), it fails to solve most instances with \(k>5\). With the increment of \(k\), the reduction rules in KDBB can hardly reduce the graph, so its performance also declines significantly. With the benefit of our CLUB, which accounts for the missing edges more thoroughly, the preprocessing and BnB stages in KD-Club are both very efficient, and thus KD-Club shows excellent performance on various benchmarks, even with large \(k\) values. Moreover, although the number of instances solved by KD-Club is not much larger than that of the baselines on instances with small \(k\) values, the running time of KD-Club is usually much shorter than that of the baselines, as indicated by the follow-up experiments.

We further present the detailed results of KD-Club and KDBB in solving 32 representative instances from the four benchmarks with \(k=3\) and \(k=10\), as shown in Table 2. The results include the number of vertices (column \(|V|\)) and edges (column \(|E|\)) of each original graph, the number of vertices (column \(|V^{\prime}|\)) and edges (column \(|E^{\prime}|\)) of each graph after reduction by the preprocessing method of each algorithm, the running time in seconds (column _Time_), and the sizes of their entire search trees in units of \(10^{4}\) (column _Tree_). The symbol 'NA' means the algorithm cannot solve the instance within the cut-off time. The results show that when solving massive sparse graphs, such as the Facebook instances starting with 'socfb', the preprocessing method based on CLUB can help KD-Club reduce the graph size to an order of magnitude smaller than the graph reduced by the preprocessing in KDBB, indicating a significant reduction.
When solving dense graphs, such as the DIMACS2 instances C125.9, johnson8-4-4, and san200_0.7_1, the preprocessing cannot remove any vertex or edge, while the BnB process based on CLUB still allows KD-Club to solve these instances in running times far below the cut-off time, whereas KDBB cannot solve them within the cut-off time at all. As a result, both the search tree sizes and the running times of KD-Club are several orders of magnitude smaller than those of KDBB in solving these instances, with either small or large \(k\) values.

### Ablation Study

We then compare KD-Club with its two variants, KD-Club\({}^{-}_{\text{Pre}}\) and KD-Club\({}^{-}_{\text{BnB}}\), as well as KDBB, to evaluate the effectiveness of CLUB in the preprocessing and BnB searching stages. The results are shown in Figure 2, where we present the variation of the number of solved instances with \(k=3\) and \(k=10\) for each algorithm over all the 341 graphs as the running time (in seconds) increases.

\begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c} \multirow{2}{*}{\(k\)} & \multicolumn{3}{c|}{Facebook} & \multicolumn{3}{c|}{Realworld} & \multicolumn{3}{c|}{SNAP and DIMACS10} & \multicolumn{3}{c}{DIMACS2} \\ & KD-Club & KDBB & MADEC\({}^{+}\) & KD-Club & KDBB & MADEC\({}^{+}\) & KD-Club & KDBB & MADEC\({}^{+}\) & KD-Club & KDBB & MADEC\({}^{+}\) \\ \hline 1 & **112** & 110 & 12 & **130** & 124 & 81 & **39** & **39** & 24 & 31 & 17 & **32** \\ 3 & **112** & 110 & 0 & **125** & 116 & 62 & **38** & **38** & 23 & **27** & 12 & 14 \\ 5 & **111** & 109 & 0 & **121** & 110 & 52 & 37 & **38** & 23 & **25** & 12 & 12 \\ 10 & **109** & 108 & 0 & **106** & 94 & 28 & **34** & **34** & 11 & **19** & 11 & 6 \\ 15 & **108** & 104 & 0 & **94** & 70 & 22 & **30** & 28 & 9 & **13** & 11 & 2 \\ 20 & **104** & 86 & 0 & **85** & 59 & 16 & **27** & 24 & 6 & **12** & 11 & 1 \\ \end{tabular} \end{table} Table 1: Summarized results of KD-Club, KDBB, and MADEC\({}^{+}\) on four benchmarks. The best results appear in bold.

To present the comparison more clearly, we omit the results of easy instances solved within 10 seconds by each algorithm. In general, KD-Club yields better performance than the two variants, and the two variants perform better than KDBB, indicating that using CLUB in both the preprocessing and BnB stages improves the performance of the BnB algorithm. Moreover, when solving instances with \(k=10\), the performance of KD-Club and KD-Club\({}^{-}_{\text{Pre}}\) is similar, and the performance of KD-Club\({}^{-}_{\text{BnB}}\) and KDBB is similar. This is because the larger the value of \(k\), the smaller the portion of the graph that can be reduced by preprocessing, and the weaker the effect of preprocessing. On the other hand, when solving instances with large \(k\) values, the BnB searching process based on CLUB still shows excellent performance, helping KD-Club significantly outperform KD-Club\({}^{-}_{\text{BnB}}\), and KD-Club\({}^{-}_{\text{Pre}}\) significantly outperform KDBB.

## 6 Conclusion

For the NP-hard Maximum \(k\)-Defective Clique Problem (MDCP), we proposed a novel CoLoring-based Upper Bound (CLUB) and, based on it, a new BnB algorithm called KD-Club. CLUB considers the missing edges that are ignored by previous methods, and uses graph coloring techniques in an ingenious way to calculate lower bounds on the number of missing edges, so as to obtain a tighter upper bound.
By using CLUB for graph reduction and branch pruning, KD-Club significantly outperforms state-of-the-art BnB MDCP algorithms and exhibits excellent performance and robustness on instances based on either massive sparse or dense graphs, with either small or large \(k\) values. In future work, we plan to apply our approach to improve the upper bounds used in BnB algorithms for other combinatorial optimization problems, such as other relaxation clique problems. Indeed, existing BnB algorithms often ignore part of the relationships between elements of the problem to be solved, and they could likewise be greatly improved by detecting such relationships more thoroughly.
2307.08783
Demonstration of Niobium Tin in 218 MHz Low-beta Quarter Wave Accelerator Cavity
A 218 MHz quarter wave niobium cavity has been fabricated for the purpose of demonstrating Nb3Sn technology on a low-beta accelerator cavity. Niobium-tin has been established as a promising next-generation SRF material, but development has focused primarily on high-beta elliptical cell cavities. This material has a significantly higher T_C than niobium, allowing for the design of higher frequency quarter wave cavities (that are consequently smaller) as well as significantly lowered cooling requirements (possibly leading to cryocooler-based designs). The fabrication, initial cold testing, and Nb3Sn coating are discussed, as well as test plans and details of future applications.
T. B. Petersen, M. P. Kelly, T. Reid, M. Kedzie, B. Guilfoyle, G. Chen, S. Posen, B. Tennis, G. Eremeev
2023-07-17T18:54:22Z
http://arxiv.org/abs/2307.08783v1
# Demonstration of Niobium Tin in 218 MHz

###### Abstract

A 218 MHz quarter wave niobium cavity has been fabricated for the purpose of demonstrating Nb3Sn technology on a low-beta accelerator cavity. Niobium-tin has been established as a promising next-generation SRF material, but development has focused primarily on high-beta elliptical cell cavities. This material has a significantly higher T\({}_{C}\) than niobium, allowing for the design of higher frequency quarter wave cavities (that are consequently smaller) as well as significantly lowered cooling requirements (possibly leading to cryocooler-based designs). The fabrication, initial cold testing, and Nb3Sn coating are discussed, as well as test plans and details of future applications.

## 1 Introduction

Niobium-tin (Nb\({}_{3}\)Sn) has been identified as the most promising next-generation superconducting material for accelerator cavities. The main reason for this choice is its higher critical temperature (T\({}_{C}\) = 18.3 K compared to 9.2 K for pure niobium), which corresponds to a significantly lower surface resistance for a given temperature and frequency. This is a consequence of the dependence of the Bardeen-Cooper-Schrieffer (BCS) resistance on material characteristics such as the critical temperature T\({}_{C}\) [1]. The relationship between frequency, critical temperature, operating temperature, and R\({}_{\text{BCS}}\) (to which RF power losses, and hence the cryogenic load, are directly proportional) is illustrated in Figure 1 below (generated using the SRIMP code) [2].

Figure 1: Predicted BCS surface resistance of Nb and Nb3Sn vs operating temperature for two frequencies.

## 2 Cavity Design and Fabrication

The cavity design was aimed at demonstrating a useful low-beta cavity near 200 MHz with a peak magnetic field of 60 mT. The frequency was chosen as a multiple of the ATLAS clock, with the goal of installing two of these 218 MHz cavities in ATLAS. Cavity EM parameters are shown in Figure 3 [3].

Figure 2: Physical scale of 218 MHz cavity compared to similar geometry 72 MHz cavity in ATLAS. This technology could expand the useful range of different cavity geometries.

Figure 3: Cavity field distributions and EM parameters.

The 218 MHz cavity was fabricated from high RRR niobium with hydroforming techniques, utilizing a local vendor, Stuecklen Mfg. Individual parts (seen in Figure 4) were electron beam welded together at another local vendor, Sciaky Inc. The cavity was coarsely tuned by iterations of wire EDM cutting of the housing (on both the toroid and dome sides), followed by a fit-up with indium for a frequency check. The cavity was hand sanded up to 400 grit to remove large surface imperfections.

Figure 4: Cavity parts during fabrication (left, bottom) and electron beam welding (right).

## 3 Initial Surface Processing

Following tuning and fabrication, the cavity went through a bulk electropolishing process. This removed the surface layer of the niobium, with a target removal of 120 microns. The resulting mirror finish is seen in Figure 5.

Figure 5: Post-EP cavity surface finish.

## 4 Cavity Cold Testing

Before coating, a baseline cold test was required to ensure the bare niobium cavity performed satisfactorily. The baseline requirement was to reach a peak surface magnetic field of 60 mT. Before cold testing, the cavity was ultrasonically cleaned and high pressure rinsed, as seen in Figure 6. The cavity was placed in a stainless steel vessel designed for the test cryostat TC3 at Argonne, seen in Figure 7. The vessel fills entirely with liquid helium while allowing RF and vacuum pumping connections to be made.

Figure 6: Cavity fixturing used for ultrasonic cleaning and high pressure rinsing.

Figure 7: Helium vessel used for cavity cold testing.
This allowed for active pumping. Cavity testing saw numerous hurdles with RF connections and liquid helium refrigerator issues, with a final cold test completed in late May. The initial cold test suffered from RF cabling issues, with coupling becoming exceedingly weak at cryogenic temperatures. This limited the power input and the ultimate field level achieved, but allowed a low field Q measurement to be made. The second cold test was successful but was limited to a very low field level by breakdown (V\({}_{\rm acc}\) \(<\) 10 kV). This was later found to be caused by very low field multipacting, but the field breakdown looked different from multipacting that had been seen before. This breakdown prompted us to warm and re-clean the cavity and replace a suspicious antenna. The subsequent cooldown suffered identical low field breakdown. This prompted exploration of the next cavity mode, 3\(\lambda\)/4. This HOM of the cavity ran without a low field limit and saw conditioning through a multipacting band that looked like multipacting the group had seen before (steady cavity field level for increasing input power). After running in the 3\(\lambda\)/4 mode, the cavity was once again run at 218 MHz and still saw low field breakdown. However, when driving the cavity with high power without stepping up gradually, the low field breakdown was not seen. At this point the breakdown was determined to be very low field multipacting, and the cavity performed as expected at power levels above this multipacting band.

The cavity performed well, with a low-field Q\({}_{0}\) of 8.2 x 10\({}^{8}\), and reached an accelerating gradient of E\({}_{\rm ACC}\) = 8.7 MV/m. The Q curve is given in Figure 8. This corresponded to B\({}_{\rm peak}\) = 59.5 mT and E\({}_{\rm peak}\) = 49 MV/m. Some field emission was seen, but the ultimate field level was limited by RF amplifier power and coupling strength.

Figure 8: Q curve measurement of 218 MHz cavity.

One of the benefits of the multiple cooldowns performed was to see how cooldown time affected Q\({}_{0}\). The initial cooldown (470 minutes) was significantly longer and had a Q\({}_{0}\) of 5.7 x 10\({}^{8}\). The faster cooldowns were 130 minutes (Q\({}_{0}\) = 7 x 10\({}^{8}\)) and 190 minutes (Q\({}_{0}\) = 8.2 x 10\({}^{8}\)). We believe this indicates Q disease is present, which is expected with no post-EP bake out. Improvements between the faster cooldowns may also be explained by degaussing performed on the stainless steel vessel, reducing trapped magnetic flux.

Additional cavity performance factors were measured as well, particularly microphonics and df/dp (seen in Figure 9). These are informative data points taken with this bare cavity test, but final sensitivities will ultimately depend on the final configuration for use, including helium jacketing and attached cryostat features.

Figure 9: df/dp measurement showing -65.9 Hz/torr sensitivity.

## 5 Coating Process

After successful testing, the cavity was disassembled from the test setup and prepared for coating in the vacuum furnace at Fermilab. This required anodizing the niobium, done with a 30 V DC power supply, which turns the niobium blue (Figure 10). The cavity was ultrasonically cleaned and high pressure rinsed to eliminate contaminants. Additional cavity preparations for coating included fitting up flange covers (to cover NbTi), winding heater coils, and setting up Sn and SnCl\({}_{2}\) sources.

Figure 10: Bare cavity after anodization, a preparation for Nb3Sn coating.
The cavity coating process follows the temperature curve shown in Figure 11. This begins with a nucleation phase, with the temperature held at about 500\({}^{\circ}\)C, allowing the SnCl\({}_{2}\) to nucleate tin sites on the surface. Coating is then done with the cavity held at 1100\({}^{\circ}\)C and the Sn sources at \(\sim\)1200\({}^{\circ}\)C. The tin sources are heated above the cavity temperature with molybdenum wire heaters, seen in Figure 12 [3]. The cavity was successfully coated and awaits a cold test to verify RF performance. The coating process removes the blue anodization and leaves a more matte finish, seen in Figure 13.

Figure 11: Typical niobium-tin coating process heat cycle.

Figure 12: Cavity being prepared for coating in vacuum furnace.

Figure 13: Post-coating cavity surface.

## 6 Future Work

Planned additional work on this cavity mainly consists of cold testing following the surface coating. This includes the normal high pressure rinsing and clean assembly of cavity hardware, followed by a cold test to verify the performance of the cavity post-coating. If it is determined that the Nb\({}_{3}\)Sn coating was a failure (through visual inspection or from poor performance during testing), additional chemistry will be performed to remove the coated surface before an additional attempt.

Following a demonstration of a high quality factor and surface magnetic field, focus will shift to production of a 145 MHz cavity to be used as a re-buncher in ATLAS. This will necessitate building a coarse tuner system, with many approaches being considered. Traditional mechanical tuning may be difficult, as there is a material strength difference between Nb and Nb\({}_{3}\)Sn that can cause cracking of the SRF surface.

## 7 Summary

A 218 MHz quarter wave cavity has been fabricated and coated with Nb\({}_{3}\)Sn, the most promising next-generation SRF material. Initial bare Nb cavity performance was tested before the coating process, and a planned cold test will verify the coated cavity performance. Successful demonstration of SRF performance will lead to further work fabricating a 145 MHz cavity that will ultimately be installed in ATLAS as a re-buncher.

## 8 Acknowledgements

We would like to thank G. Chen for her initial work fabricating this cavity, working with vendors, and documenting the process. Additional thanks to our colleagues at Fermilab, particularly Sam Posen and Brad Tennis, who have been critical partners in the project and in facilitating the coating of this cavity.
2306.07799
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer
Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.
Dongqi Pu, Vera Demberg
2023-06-13T14:21:35Z
http://arxiv.org/abs/2306.07799v1
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer ###### Abstract Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.\({}^{1}\) Footnote 1: The project information of our study can be accessed at [https://dongqi.me/projects/ChatGPT_vs_Human](https://dongqi.me/projects/ChatGPT_vs_Human).

## 1 Introduction

Generative Pre-trained Transformer (GPT; _e.g.,_ ChatGPT) models, which produce results from given conditional input prompts, have exhibited exceptional performance on various natural language understanding (NLU) and generation (NLG) tasks (Jiao et al., 2023; Wang et al., 2023; Bang et al., 2023; Zhou et al., 2023; Dai et al., 2023). For instance, in NLU tasks, Qin et al. (2023) have proved that ChatGPT is comparable to state-of-the-art fine-tuning models in language reasoning. In NLG tasks, Yang et al. (2023) assessed four widely used benchmark datasets, such as QMSum, and confirmed ChatGPT's comparability to traditional fine-tuning methods. Peng et al. (2023) further investigated effective strategies for machine translation using ChatGPT and highlighted its strong translation ability. Additionally, ChatGPT can even facilitate multi-modal tasks (Yang et al., 2023; Shen et al., 2023), as well as the application of data augmentation (Dai et al., 2023). Although the studies mentioned above have demonstrated notable performance of ChatGPT across different domains, there remains a dearth of qualitative and quantitative evaluation of the texts generated by ChatGPT. Such an evaluation is vital to uncover the behavioral differences, potential limitations, and challenges associated with ChatGPT-generated texts, especially when compared with human-authored texts. Controllable text generation seems to be a task in which ChatGPT-like models could potentially excel. This task is driven by the desire to tailor text for a diverse array of target users (_e.g.,_ experts and laypersons) (Kumar et al., 2022; Cao et al., 2020; Luo et al., 2022), thereby enhancing the accessibility of textual information. In controllable text generation, one delineates a particular set of parameters or provides a prompt that defines the intended target style. This area has recently received growing interest from researchers in the field (Hu and Li, 2021; Li et al., 2022; Zhang et al., 2022; Dathathri et al., 2019; August et al., 2022; Carlsson et al., 2022; Gu et al., 2022; Li et al., 2022; Keskar et al., 2019; Dathathri et al., 2019).
The traditional natural language generation task (Pu and Sima'an, 2022), which focuses solely on adequately responding with respect to a given input, can be regarded as a special case of controllable natural language generation, wherein the control setting remains unconditioned. Considering ChatGPT as the most recent language generation capability, the assessment of its language generation proficiency, specifically in the realm of controllable language generation, remains largely uncharted. Therefore, our study delves into two distinct applications of ChatGPT, namely controllable summary generation and sentence style transfer. In the former, we examine ChatGPT's ability to generate summaries that cater to two distinct readerships, namely experts and non-experts, for a given piece of academic literature. Concerning sentence style transfer, we investigate ChatGPT's capability to generate both formal and informal sentences for the task of sentence formality. The objective of this study is to tackle the research question: **In relation to human-produced text, to what extent does ChatGPT-generated content diverge from human behavior, and how susceptible is it to inaccuracies?** Our primary contributions are enumerated below:

* To the best of our knowledge, we are the first to evaluate ChatGPT's effectiveness in controllable text generation.
* Our findings indicate that there are substantial performance disparities between the text generated by ChatGPT and that generated by humans.
* Our study exposes and quantifies the existence of numerous hard-to-spot errors in the text generated by ChatGPT, which have a tendency to amplify with successive transformations of the text.

## 2 Related Work

### Controllable Text Summarization

Controllable text summarization is a rapidly evolving field that aims to produce summaries with specific characteristics, such as length, style, or content [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. A range of approaches has been proposed for this task, including the use of sequence-to-sequence models such as the Transformer model [26]. These models have demonstrated promising progress in producing high-quality summaries that can be modulated according to specific requirements [23, 22, 30]. Additionally, other techniques have been proposed to enhance the controllability of the summaries, such as conditional generation [10, 31], prompt-based summarization [26, 32, 33], and multi-task learning [23, 34].

### Text Style Transfer

Text style transfer is a task that involves transforming an input sentence into a desired style while retaining its style-independent semantics [22, 24, 25, 26, 27, 28, 29, 30, 31, 32]. To achieve this, prior research has examined sequence-to-sequence learning strategies that utilize parallel corpora with paired source/target sentences in different styles [30, 22, 31, 32]. Owing to the considerable demand for human resources and material investments in data labeling, parallel data across diverse styles are scarce. This has led to an increased interest in exploring more pragmatic situations where only non-parallel stylized corpora are accessible [25, 26].

### ChatGPT

ChatGPT\({}^{2}\) is a large language model (LLM) built upon the innovations and improvements of its predecessors, such as GPT-3\({}^{3}\). In terms of training strategies, ChatGPT employs instruction learning and reinforcement learning from human feedback (RLHF; Ouyang et al., 2022) to enhance its overall performance and adaptability.
Footnote 2: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) Footnote 3: [https://openai.com/research/instruction-following](https://openai.com/research/instruction-following)

Upon its emergence, ChatGPT has garnered considerable attention from researchers, who have undertaken initial studies of the model. Scholars such as Baidoo-Anu and Owusu Ansh (2023); Rudolph et al. (2023); West (2023); Sobania et al. (2023); Gilson et al. (2023); Lai et al. (2023); Wang et al. (2023) have explored the notable strengths of ChatGPT in the fields of education, science, programming, healthcare, and text generation, respectively. However, Bang et al. (2023) discovered that ChatGPT suffers from hallucination issues in the context of logical reasoning. Due to its immense and inaccessible training corpus and parameters, and its inability to access external knowledge for reliable sources of support, it is imperative to question whether ChatGPT demonstrates the same hallucination issue as other LLMs when performing sentence generation. Based on these clues, we firmly assert that an in-depth analysis of the text generated by ChatGPT and of its behavioral patterns is both significant and valuable, and can provide meaningful insights to the readers of this paper.

## 3 Study on Controllable Summarization

### Prompt Formulation

In this section, our main objective is to test the zero-shot performance of ChatGPT on controllable summarization, with the goal to generate summaries for laymen vs. experts. To this end, we constructed several prompts as natural language instructions for ChatGPT. The prompts we tested include for the layman style: _Please give me a layman / simple / simplified and understandable / easy-to-comprehend / straightforward / general audience summary of X_, where \(X\) was replaced by the source text that should be summarized. Similarly, for the expert summary, we experimented with the prompts: _Please give me an expert / a technical / comprehensive and detailed / difficult-to-comprehend / in-depth / complicated summary of X_.

### Experimental Setup

For all experiments, we used ChatGPT _gpt-3.5-turbo_, which was, at the time of experimentation, the best-performing publicly accessible version provided by OpenAI. For the hyper-parameter setting, we set temperature = 0, top p = 1, frequency penalty = 0.2, and presence penalty = 0.2. For summary generation, we configured the maximum number of generated tokens to 512. The remaining hyper-parameters were set to their default values as recommended by OpenAI. It is noteworthy that ChatGPT can potentially return empty responses (i.e., empty strings) as the result of network transmission timeouts or API request overloads. Should this arise, we adhere to the established practice of resubmitting the request until ChatGPT provides a non-empty response. All of our experiments were conducted on the version of ChatGPT between 15 Feb 2023 and 30 Apr 2023 using OpenAI's ChatGPT API.\({}^{4}\) We should emphasize that, to prevent any potential interference from prior responses, we cleared the conversation history each time we submitted a new query to ChatGPT. Unless otherwise specified, we refrained from engaging in any further conversation with ChatGPT to modify its responses. Footnote 4: [https://platform.openai.com/overview](https://platform.openai.com/overview)
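For concreteness, the request configuration just described might look as follows. This is our own minimal sketch assuming the pre-1.0 `openai` Python client; the `query_chatgpt` helper and the retry logic are illustrative, not the authors' code.

```python
# Hedged sketch of the query setup described above (pre-1.0 `openai` client assumed).
import openai

def query_chatgpt(prompt: str, max_tokens: int = 512) -> str:
    # A fresh `messages` list per call plays the role of clearing the history.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        top_p=1,
        frequency_penalty=0.2,
        presence_penalty=0.2,
        max_tokens=max_tokens,
    )
    text = response["choices"][0]["message"]["content"]
    if not text.strip():              # resubmit on empty responses, as described
        return query_chatgpt(prompt, max_tokens)
    return text

source_text = "..."                   # the article to be summarized
summary = query_chatgpt(f"Please give me a layman summary of {source_text}")
```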
### Dataset

We selected the ELIFE (Goldsack et al., 2022) dataset for our experiments. It contains summaries of academic literature that exhibit varying levels of readability, tailored to suit either expert or non-expert audiences. By means of this dataset, we can examine to what extent ChatGPT can regulate the summary generation process in accordance with the intended target users, and compare its summaries to human summaries.

### Metrics

In order to assess automatically whether ChatGPT summaries substantially differ in terms of their audience design based on the given prompt, we opted for a set of three automatic readability metrics: Flesch Reading Ease (FRE; Kincaid et al., 1975), Coleman-Liau Index (CLI; Coleman and Liau, 1975), and Dale-Chall Readability Score (DCR; Chall and Dale, 1995). The Flesch Reading Ease (Kincaid et al., 1975) is a metric that gauges the comprehensibility of a given text. This index relies on the average number of syllables per word and the average number of words per sentence. A higher score signifies an easier-to-understand text. Additionally, the Coleman-Liau Index (Coleman and Liau, 1975) is a measure of the text's difficulty level, which considers the average number of characters and the average number of sentences per 100 words. A higher score indicates a more challenging text. The Dale-Chall Readability Score (Chall and Dale, 1995) is computed by comparing the number of complex words in the text with a list of common words. A higher score denotes a more challenging text.

We also employed Rouge scores (Lin, 2004) to evaluate the performance of ChatGPT in the task of text summarization, with the aim of comparing its efficacy against the state-of-the-art model. In order to assess the extent to which the summaries re-use word sequences from the original text, we furthermore evaluated N-gram novelty (See et al., 2017; Gehrmann et al., 2019; Pu et al., 2022). Finally, we quantified inconsistency based on the factual consistency checking metric SummaC (Laban et al., 2022), as well as a hallucination checking metric (Cao et al., 2022; Fischer et al., 2021). SummaC (Laban et al., 2022) uses sentence compression and summarization techniques to extract important information and improves the detection of inconsistencies with NLI models by segmenting documents and aggregating scores. Named entity hallucination (Fischer et al., 2021) flags potential hallucinations in named entities if they do not match the original sources. We here used BERT semantic similarity, rather than exact matching, when computing the named entity matching.

### Results on Controllable Summarization

#### 3.5.1 Effect of Prompt Formulation

Table 1 illustrates that different prompt versions are somewhat consistent regarding whether the instructions asking for layman summaries actually lead to more readable texts than those asking for expert summaries, with FRE ranging between scores of 31 and 38 for automatically generated layman summaries, and between 28 and 37 for automatically generated expert summaries. Conversely, human-written summaries exhibit very large differences according to the automatic metrics, with an FRE of 53.1 for layman summaries and 22.5 for expert summaries. Similar effects are observed for the CLI and DCR measures. This preliminary test was conducted on a subset of the ELIFE dataset, containing merely 500 random samples; for the rest of the tests, we proceeded to the entire dataset, selecting the prompts asking for "layman" and "expert" summaries, as responses for these prompts seemed to align with the right direction wrt. the readability measures.
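The readability comparisons reported here can be reproduced along the following lines. This is a hedged sketch assuming the third-party `textstat` package, which implements all three indices; it is not the authors' own evaluation code, and the two example sentences are invented for illustration.

```python
# Hedged sketch: scoring summaries with the three readability metrics (textstat assumed).
import textstat

def readability(text: str) -> dict:
    return {
        "FRE": textstat.flesch_reading_ease(text),           # higher = easier
        "CLI": textstat.coleman_liau_index(text),            # higher = harder
        "DCR": textstat.dale_chall_readability_score(text),  # higher = harder
    }

layman = "The researchers found that the gene helps cells repair damage."
expert = ("The investigators demonstrated that the locus mediates "
          "nucleotide-excision repair via polymerase recruitment.")
print(readability(layman))
print(readability(expert))
```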
#### 3.5.2 Reading Difficulty Control

Table 2 corroborates that the results on the whole dataset are consistent with the findings from the smaller sample. We conclude that ChatGPT can produce summaries with different levels of reading difficulty to a certain extent based on the provided prompts. Notably, ChatGPT-generated sentences for expert-style summaries show greater complexity than those for layman-style summaries. However, the magnitude of the difference in reading difficulty scores between the two types of summaries is considerably smaller than that observed in human-written summaries.

\begin{table} \begin{tabular}{l c c c} \hline \hline Prompt version & FRE & CLI & DCR \\ \hline layman & 37.26\({}^{\dagger}\) & 14.82\({}^{\dagger}\) & 11.21\({}^{\dagger}\) \\ simple & 31.92\({}^{\dagger}\) & 15.70\({}^{\dagger}\) & 11.54\({}^{\dagger}\) \\ simplified and understand. & 35.48\({}^{\dagger}\) & 15.17\({}^{\dagger}\) & 11.21\({}^{\dagger}\) \\ easy-to-comprehend & 36.59\({}^{\dagger}\) & 14.93\({}^{\dagger}\) & 11.32\({}^{\dagger}\) \\ straightforward & 31.74\({}^{\dagger}\) & 15.58\({}^{\dagger}\) & 11.42\({}^{\dagger}\) \\ general audience & 35.86\({}^{\dagger}\) & 14.98\({}^{\dagger}\) & 10.96\({}^{\dagger}\) \\ human answer (for layman) & 53.06 & 12.36 & 8.90 \\ \hline expert & 29.89\({}^{\dagger}\) & 15.91\({}^{\dagger}\) & 11.88\({}^{\dagger}\) \\ technical & 36.65\({}^{\dagger}\) & 13.76\({}^{\dagger}\) & 12.20\({}^{\dagger}\) \\ comprehensive and detailed & 31.62\({}^{\dagger}\) & 15.47\({}^{\dagger}\) & 11.15\({}^{\dagger}\) \\ difficult-to-comprehend & 28.95\({}^{\dagger}\) & 16.14\({}^{\dagger}\) & 11.71\({}^{\dagger}\) \\ in-depth & 34.37\({}^{\dagger}\) & 14.93\({}^{\dagger}\) & 10.82\({}^{\dagger}\) \\ complicated & 29.05\({}^{\dagger}\) & 15.76\({}^{\dagger}\) & 11.40\({}^{\dagger}\) \\ human answer (for expert) & 22.54 & 17.65 & 11.79 \\ \hline \hline \end{tabular} \end{table} Table 1: Reading difficulty on different prompts, tested on a set of 500 randomly selected items from the dataset. \({}^{\dagger}\) indicates statistical significance (p\(<\)0.05) against corresponding human answers via paired t-test.

\begin{table} \begin{tabular}{c c c c} \hline \hline Candidate & FRE & CLI & DCR \\ \hline Human Layman & 52.42 & 12.46 & 8.93 \\ Human Expert & 23.20 & 17.62 & 11.78 \\ ChatGPT Layman & 37.38\({}^{\dagger\ddagger}\) & 14.78\({}^{\dagger\ddagger}\) & 11.17\({}^{\dagger\ddagger}\) \\ ChatGPT Expert & 30.38\({}^{\dagger\ddagger}\) & 15.82\({}^{\dagger\ddagger}\) & 11.85\({}^{\dagger\ddagger}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Reading difficulty scores by automatic metrics; \({}^{\dagger}\) and \({}^{\ddagger}\) indicate statistical significance (p\(<\)0.05) against same-style human answers, and opposite-style ChatGPT answers via paired t-test, respectively.

#### 3.5.3 Comparison to Previous SOTA Model

We also compared summaries generated by ChatGPT to those of a previous state-of-the-art (SOTA) fine-tuned neural summarization model (Pu et al., 2023). On the same test split, the summaries produced by ChatGPT reached Rouge-1=25.53, Rouge-2=5.48, Rouge-L=13.30 under unsupervised learning, and Rouge-1=47.88, Rouge-2=13.75, Rouge-L=42.44 in few-shot learning using the training samples from the same subset as in Section 3.5.1, while the model by Pu et al. (2023) reached Rouge-1=48.70, Rouge-2=14.84, and Rouge-L=46.13.

#### 3.5.4 Disparities in Summarization Behavior

We next examined whether ChatGPT and humans are consistent with each other regarding the readability of summarization with respect to different items - it could be possible that some texts simply lead to less readable summaries than others. However, we discovered that the Pearson correlations of FRE scores for summaries by humans and ChatGPT were only 0.31 for expert summaries, and 0.2 for layman summaries. (Scores were similarly low for the CLI and DCR metrics.) In addition, statistical significance tests confirm a substantial divergence between the response styles produced by ChatGPT and the corresponding styles of the human-written answers. Following this, we contrasted the n-gram novelty of human vs. ChatGPT summaries wrt. the original texts. Figure 1 reveals that a significantly higher number of novel 4-grams are present in human-written summaries, particularly those aimed at laymen. This suggests that ChatGPT summaries are slightly more extractive compared to human summaries.

#### 3.5.5 Inconsistencies and Hallucinations

Given that ChatGPT has previously been reported to generate misinformation, we sought to evaluate its risk of hallucinating on our specific task. Figure 2 demonstrates that the SummaC consistency scores are lower for ChatGPT-generated summaries than for human-written summaries. A corresponding phenomenon is observed in the hallucination assessment. The precision scores provided in Table 3 demonstrate the extent to which ChatGPT-generated text contains named entities that are absent from the source text. A lower precision score suggests that the generated text has more named entities that lack support in the source text. The recall scores reflect the ability of ChatGPT to capture named entities from the source text. A lower recall score implies that ChatGPT has missed a considerable number of named entities from the source text. The F1 score represents the harmonic mean of the precision and recall scores. Examining Table 3, our findings demonstrate that ChatGPT generates a greater number of unsupported named entities after undergoing multiple iterations of text conversion and modification. For example, in an expert summary, ChatGPT misinterpreted the meaning of "Geocode" as "regional regulations".

### Intermediary Discussion

Our experiments show that ChatGPT-generated summaries do not adapt as strongly to the target audience as human-authored summaries. One possible reason could be that ChatGPT, given the zero-shot setting, had no way to "know" how strongly the texts should be adapted to the target style. Furthermore, we identified evidence for potential hallucinations generated during summarization. We therefore carried out two post-hoc experiments: (1) We modified the prompt to include an example from the dataset, so ChatGPT would have a chance to know the expected level of text adaptation. (2) We subjected the resulting summaries to several re-writing steps and tested whether this further intensifies the occurrence of hallucinations.

#### 3.6.1 Follow-up Experiment: Example Inclusion in Prompt

We experimented with prompts that also include a human summary example. Unlike the previous few-shot learning experiment, we do not adjust the parameters of ChatGPT, but just let the model perform unsupervised reasoning through the contents of the prompt.
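The exact wording of this example-augmented prompt is not reproduced here, so the following is only an illustrative guess at how such a one-shot prompt could be assembled, reusing the hypothetical `query_chatgpt` helper sketched earlier.

```python
# Illustrative one-shot prompt assembly; the wording is our assumption.
def one_shot_prompt(style: str, example_source: str, example_summary: str,
                    source_text: str) -> str:
    return (
        f"Here is an article and its {style} summary.\n\n"
        f"Article: {example_source}\n"
        f"{style.capitalize()} summary: {example_summary}\n\n"
        f"Please give me a {style} summary of the following article "
        f"in the same style.\n\nArticle: {source_text}"
    )

# e.g. query_chatgpt(one_shot_prompt("layman", ex_src, ex_sum, new_article))
```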
We observe (see Appendix Table 7) that when guided by a human example from the dataset, the summaries generated by ChatGPT indeed tend to be more aligned with human performance, particularly on the Flesch Reading Ease metric (49.23 for layman, 28.88 for expert summaries). However, no significant changes are detected in the other metrics. The degree of control over the summarization style has increased, yet it remains inferior to human capabilities.

Figure 1: Comparison of abstractiveness between ChatGPT and human-generated summaries.

Figure 2: Summary consistency detection. L stands for layman, E for expert.

\begin{table} \begin{tabular}{c c c c} \hline \hline Candidate & Precision & Recall & F1 \\ \hline Human Layman & 0.78 & 0.63 & 0.70 \\ Human Expert & 0.92 & 0.61 & 0.73 \\ ChatGPT Layman & 0.75\({}^{\ddagger}\) & 0.47\({}^{\dagger}\) & 0.58\({}^{\dagger}\) \\ ChatGPT Expert & 0.90\({}^{\ddagger}\) & 0.49\({}^{\dagger}\) & 0.63\({}^{\dagger}\) \\ ChatGPT L2E2L & 0.74\({}^{\ddagger}\) & 0.39\({}^{\dagger\ddagger}\) & 0.51\({}^{\dagger\ddagger}\) \\ ChatGPT E2L2E & 0.88\({}^{\ddagger}\) & 0.47\({}^{\dagger\ddagger}\) & 0.62\({}^{\dagger\ddagger}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Named entity hallucination on the Elife dataset. \({}^{\dagger}\) and \({}^{\ddagger}\) indicate statistical significance (p\(<\)0.05) against same-style human answers, and opposite-style ChatGPT answers via paired t-test, respectively. L stands for layman, E for expert.

#### 3.6.2 Follow-up Experiment: Repeated Re-writing

Summaries are further re-written based on the prompt _Please give me a layman/expert style version of \(X\)_, where \(X\) was the previously generated summary. Figure 2 and Table 3 display the performance of ChatGPT after re-writing in the entries "ChatGPT L2E2L" and "ChatGPT E2L2E", which stand for the order in which instructions were given (L stands for layman, and E for expert). The examinations point out that misinformation and hallucinations may be further increased during subsequent rewriting (lower SummaC scores, lower values on the named entity hallucination metric).

## 4 Study on Text Formality Transfer

### Prompt Formulation and Experimental Setup

Our subsequent set of experiments investigates ChatGPT's capacity for style transfer concerning language formality. Our prompt for this task was formulated as _Please give me a formal / an informal version of \(X\)_. We utilized the same experimental setup as for the summarization task; however, we restricted the maximum number of generated tokens to 32. We again experimented with various prompts, as shown in Table 4 below. Unless otherwise specified, all experiments used the same configuration.

### Dataset

We investigated whether ChatGPT can proficiently execute style transfer on sentences using data from the GYAFC (Rao and Tetreault, 2018) dataset. The dataset has two branches, Entertainment & Music (EM) and Family & Relationships (FR). With the aid of this dataset, we aim to evaluate ChatGPT's ability for sentence style transfer, examine the differences in vocabulary selection and syntactic structures between ChatGPT and human performance, and identify the limitations of ChatGPT.

### Metrics

To evaluate the level of formality in the generated text, we utilized the Text Formality Score (Heylighen and Dewaele, 1999) and the MTLD Lexical Diversity metric (McCarthy and Jarvis, 2010).
The Text Formality Score (Heylighen and Dewaele, 1999) is a metric that quantifies the degree of formality in language usage within a text, based on its adherence to formal linguistic norms. Another measure that evaluates language formality is the MTLD Lexical Diversity metric (McCarthy and Jarvis, 2010). This index measures the diversity and richness of the vocabulary used in the text, based on the frequency and number of unique words. A higher MTLD score indicates a greater variety of vocabulary, which typically corresponds to a more formal language style. We also utilized the BLEU (Papineni et al., 2002) score to draw a comparison between ChatGPT and the SOTA approach. We additionally assessed the distribution of POS tags in the generated styles, as well as the distribution of dependency labels5. For quantifying misinformation and hallucinations, we used DAE and named entity hallucination checking. The DAE algorithm (Goyal and Durrett, 2020) utilizes dependency arcs to identify entailment relationships between propositions and to identify inconsistencies in factual information based on syntactic and semantic structures. Footnote 5: [https://spacy.io/](https://spacy.io/)

### Results on Formality Control

#### 4.4.1 Effect of Prompt Formulation

Table 4 presents the results for a set of 500 random samples from the GYAFC dataset. We observe that the Formality scores are very similar for ChatGPT formal vs. informal texts. We note, however, that the difference in ratings for human-written texts is also small for this metric. The MTLD metric, on the other hand, shows higher values for ChatGPT-generated formal texts; in fact, the scores are substantially larger than those of human-written texts, but do not differ much from each other. We therefore proceed with the prompts using the formulation formal/informal for the rest of the experiments on the whole dataset.

#### 4.4.2 Sentence Formality Control

Table 5 offers supplementary evidence from the full dataset supporting ChatGPT's capacity to modify the formality level of sentences. Employing the Formality indicator (Heylighen and Dewaele, 1999), it is apparent that the generated text tends to manifest a higher level of formality overall. A primary factor contributing to this result is the
Specifically, the formality score for the informal style is 50.67, while it climbs to 52.13 for the formal style, with the MTLD score also displaying an increase from 14.81 for informal texts to 19.22 for formal texts.

#### 4.4.5 Disparities in Style Transfer Behavior

In terms of controlling the formality of sentence style, ChatGPT's performance still exhibits significant differences compared to human behavior. While the by-item correlation is slightly higher for this dataset than for the summary task (Pearson correlation of around 0.4 for formal style and 0.5 for informal style on the Formality metric; 0.3 for the MTLD measure), there are interesting disparities in the distributions of POS tags between ChatGPT and humans. Statistical significance testing further substantiates this observation, indicating a substantial disparity between the different response styles generated by the model, as well as between the model's answers and the same-style answers produced by humans.

Figure 3 illustrates the absolute differences in the distribution of Part-of-Speech (POS) tags. Based on this figure, it is evident that ChatGPT employs a higher frequency of adjectives, adpositions, determiners, and nouns in the generation of formal sentences when compared to those produced by human writers. Conversely, in the generation of informal sentences, ChatGPT tends to utilize more auxiliary words and punctuation marks. These variances in word choice between formal and informal styles are indicative of differences in ChatGPT's selected vocabulary for distinct stylistic modes compared with humans. By analyzing the distribution of dependency labels (Appendix Figures 5, 6, 7, 8), it is also clear that, in comparison to human-authored sentences, ChatGPT utilizes a greater frequency of adjectival modifiers, auxiliaries, determiners, objects of the preposition, and prepositional modifiers for formal sentences. Contrarily, compounds and dependents are infrequently employed in the generation of informal sentences by ChatGPT.

\begin{table} \begin{tabular}{l c c} \hline \hline Prompt version & Formality & MTLD \\ \hline informal & 51.09 & 13.22\({}^{\dagger}\) \\ unprofessional & 51.20 & 16.23\({}^{\dagger}\) \\ spoken version & 51.30\({}^{\dagger}\) & 14.47\({}^{\dagger}\) \\ easygoing & 51.43\({}^{\dagger}\) & 14.11\({}^{\dagger}\) \\ casual & 51.00 & 16.30\({}^{\dagger}\) \\ laid-back & 51.27 & 13.94\({}^{\dagger}\) \\ human answer (for informal) & 50.76 & 11.42 \\ \hline formal & 52.22\({}^{\dagger}\) & 31.23\({}^{\dagger}\) \\ professional & 51.96\({}^{\dagger}\) & 31.98\({}^{\dagger}\) \\ written & 51.62\({}^{\dagger}\) & 29.69\({}^{\dagger}\) \\ stately & 51.30\({}^{\dagger}\) & 34.43\({}^{\dagger}\) \\ grandiose & 52.85\({}^{\dagger}\) & 30.71\({}^{\dagger}\) \\ majestic & 52.23\({}^{\dagger}\) & 33.49\({}^{\dagger}\) \\ human answer (for formal) & 53.92 & 14.99 \\ \hline \hline \end{tabular} \end{table}
Table 4: Text formality on different prompts, tested on a set of 500 randomly selected items from the dataset. \({}^{\dagger}\) indicates statistical significance (p\(<\)0.05) against corresponding human answers via paired t-test.
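To make the two automatic style metrics concrete, the snippet below sketches both: a POS-based approximation of the Heylighen and Dewaele (1999) F-score and a single forward pass of MTLD. This is a simplified illustration rather than the exact implementations used in our experiments: the F-score formula counts articles, which we approximate here with spaCy's DET tag, and the full MTLD metric averages a forward and a backward pass.

```python
import spacy  # assumes en_core_web_sm is installed: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

# POS groups for the F-score; DET is an approximation of "articles".
FORMAL = {"NOUN", "ADJ", "ADP", "DET"}
DEICTIC = {"PRON", "VERB", "ADV", "INTJ"}

def formality_f_score(text: str) -> float:
    """F = (formal% - deictic% + 100) / 2; higher means more formal."""
    words = [t for t in nlp(text) if not (t.is_punct or t.is_space)]
    if not words:
        return 50.0  # neutral score for empty input
    formal = 100 * sum(t.pos_ in FORMAL for t in words) / len(words)
    deictic = 100 * sum(t.pos_ in DEICTIC for t in words) / len(words)
    return (formal - deictic + 100) / 2

def mtld(tokens: list[str], threshold: float = 0.72) -> float:
    """Forward-pass MTLD: count 'factors', i.e. stretches over which the
    running type-token ratio stays above the threshold."""
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok.lower())
        if len(types) / count <= threshold:
            factors += 1
            types, count = set(), 0
    if count:  # partial factor for the remainder of the text
        factors += (1 - len(types) / count) / (1 - threshold)
    return len(tokens) / factors if factors else float("inf")

sample = "I have held a fondness for them since the release of their first album."
print(formality_f_score(sample), mtld(sample.split()))
```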
\begin{table} \begin{tabular}{l l c c} \hline \hline Dataset & Candidate & Formality & MTLD \\ \hline \multirow{4}{*}{GYAFC-EM} & Human Informal & 49.87 & 15.20 \\ & Human Formal & 53.57 & 18.70 \\ & ChatGPT Informal & 50.77\({}^{\dagger\ddagger}\) & 14.60\({}^{\ddagger}\) \\ & ChatGPT Formal & 52.06\({}^{\dagger\ddagger}\) & 31.68\({}^{\dagger\ddagger}\) \\ \hline \multirow{4}{*}{GYAFC-FR} & Human Informal & 50.11 & 12.11 \\ & Human Formal & 53.76 & 15.82 \\ & ChatGPT Informal & 51.02\({}^{\dagger\ddagger}\) & 12.01\({}^{\ddagger}\) \\ & ChatGPT Formal & 51.98\({}^{\dagger\ddagger}\) & 29.80\({}^{\dagger\ddagger}\) \\ \hline \hline \end{tabular} \end{table}
Table 5: Text formality scores by automatic metrics; \({}^{\dagger}\) and \({}^{\ddagger}\) indicate statistical significance (p\(<\)0.05) against same-style human answers, and opposite-style ChatGPT answers via paired t-test, respectively.

#### 4.4.6 Inconsistencies and Hallucinations

In order to assess the risk of introducing erroneous information when ChatGPT performs sentence style transformation, we employed DAE (Goyal and Durrett, 2020) at the sentence level to examine factuality after the style transformation, and again compared the effect of multiple re-writes. As before, F denotes formal style, I signifies informal style, and X2X2X (X \(\in\) {F, I}) represents multiple rewriting transformations of the text. The outcomes of our inquiry are depicted in Figure 4 and Appendix Figure 14. We also again scrutinized the potential incorporation of hallucinatory information regarding named entities in the ChatGPT-generated text, and the findings are presented in Appendix Table 9.

Upon conducting factuality checking (see Figure 4 and Appendix Figure 14), it is discovered that ChatGPT's performance is inferior to that of humans in sentence-style rewriting. Interestingly, with an increasing number of text conversions and rewritings, ChatGPT's tendency to commit factual errors escalates, while the output increasingly deviates from the original text, compromising the fidelity of the final result. In a particular instance, the human-generated formal expression states "She is a poor vocalist", whereas the formal rendition provided by ChatGPT articulates "She does not possess the ability to sing". This discrepancy represents a significant semantic alteration, and the degree of dependency arc entailment is accordingly low in this case. Similarly, Appendix Table 9 reveals that recall scores on the named entity hallucination metric are lower for ChatGPT sentences than for human sentences.
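The named entity hallucination check used above can be illustrated with a minimal entity-overlap sketch. This is one plausible implementation based on spaCy NER and exact string matching; the actual metric's matching protocol may be more forgiving (e.g., partial or normalized matches).

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_precision_recall(source: str, generated: str):
    """Entity-overlap check: precision is the fraction of entities in the
    generated text that also occur in the source (low precision suggests
    hallucinated entities); recall is the fraction of source entities that
    were preserved in the generated text."""
    src = {e.text.lower() for e in nlp(source).ents}
    gen = {e.text.lower() for e in nlp(generated).ents}
    if not src or not gen:
        return None  # undefined when either side contains no entities
    overlap = src & gen
    return len(overlap) / len(gen), len(overlap) / len(src)
```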
#### 4.4.7 Qualitative Examples

To explore whether ChatGPT-generated sentences significantly alter the original semantics of the input text, we conducted a case study by randomly selecting 15 samples from each branch of the GYAFC dataset. Our findings indicate that ChatGPT poses a relatively severe risk of modifying the original semantics during sentence style transformation, with approximately 18% of the samples exhibiting noticeable semantic inconsistencies. The examples in Table 6 reveal that, during the process of sentence style transfer, ChatGPT erroneously modifies content words, resulting in significant semantic alterations. Furthermore, our examination of the visualized dependency trees (see Appendix Figures 11, 12, and 13), which relies primarily on the dependency arc entailment (DAE) algorithm for fact-checking, reveals that the text generated by ChatGPT contains a higher number of dependency arcs lacking support from the original text, when compared to human responses.

\begin{table} \begin{tabular}{l c} \hline \hline Formal to Informal & \\ \hline It is such a waste of TV space. & (Reference) \\ Yes, because it’s such a waste of TV space! & (Human) \\ What a total waste of TV time! & (ChatGPT) \\ \hline The other boy isn’t that great. & (Reference) \\ The other boy is not that good. & (Human) \\ The other kid’s not so hot. & (ChatGPT) \\ \hline I really enjoy how the composition has the tac... & (Reference) \\ I really like how they do like the whole techn... & (Human) \\ I’m diggin’ how the techno beat slows down in... & (ChatGPT) \\ \hline \hline Informal to Formal & \\ \hline Fatboy Slim - Right Here, Right Now Or any oth... & (Reference) \\ Fatboy Slim is right here and now. He Rocks! & (Human) \\ Fatboy Slim’s “Right Here, Right Now” is an ex... & (ChatGPT) \\ \hline loved them since their first album. & (Reference) \\ I have loved them since their first album. & (Human) \\ I have held a fondness for them since the rele... & (ChatGPT) \\ \hline if u occasionally doing it then u already r add... & (Reference) \\ If you occasionally do it, then you are already... & (Human) \\ If you are engaging in the activity on a regul... & (ChatGPT) \\ \hline \hline \end{tabular} \end{table}
Table 6: Case study of ChatGPT generated output

Figure 3: Absolute differences in the POS tag distribution of ChatGPT- and human-generated sentences: GYAFC-EM

## 5 Conclusion

This paper presents a broad assessment of ChatGPT's proficiency in generating controllable text. We conducted quantitative and qualitative examinations at the document level (summarization task) and the sentence level (text style transfer). The empirical findings show that ChatGPT outperforms the previous state-of-the-art models on automatic metrics, but that there are substantial disparities between its generated texts and human-written texts. These disparities are reduced by providing a target example of the human writing style. Furthermore, our investigations also confirm the previously reported problems of hallucinations and inaccuracies in the text generated by ChatGPT.

## 6 Limitations

The primary limitations of the current study pertain to the selection of prompts and evaluation metrics. The experimental cost of requesting API responses from OpenAI to assess ChatGPT's text generation abilities imposes significant constraints on our choice of datasets. Therefore, we had to limit our experimentation to only two related controllable text generation datasets.
While we have evaluated ChatGPT's performance at both the document and sentence levels, we cannot extrapolate that ChatGPT performs similarly on other text generation datasets. Additionally, the experimental cost prohibits us from conducting exhaustive experiments on the selection of hyperparameters. We relied on the default configuration recommended by OpenAI, and we maintained consistency in all hyperparameters to ensure the fairness of the experiments.

Secondly, although we have studied the impact of prompt engineering on ChatGPT, the selection of prompts is mainly guided by human understanding, and the number of potential prompts is infinite. Hence, we cannot guarantee that other prompts we did not select would yield the same conclusions as our experiments. Furthermore, ChatGPT is subject to continuous updates and iterations, which may lead to improved performance, making it difficult to predict whether future versions of ChatGPT will produce results similar to ours.

Finally, to select appropriate evaluation metrics, we have included both domain-related evaluation metrics (such as reading difficulty and text formality) and domain-independent evaluation metrics (such as fact-checking and hallucination detection). However, we acknowledge that automatic metrics may sometimes fail to capture all aspects of the intended construct correctly.

## 7 Ethics Considerations

All datasets utilized in this study are publicly available, and we have adhered to ethical considerations by not introducing any additional information into ChatGPT's inputs.

## Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 948878).
2303.15543
The Impact of Asynchrony on Parallel Model-Based EAs
In a parallel EA one can strictly adhere to the generational clock, and wait for all evaluations in a generation to be done. However, this idle time limits the throughput of the algorithm and wastes computational resources. Alternatively, an EA can be made asynchronous parallel. However, EAs using classic recombination and selection operators (GAs) are known to suffer from an evaluation time bias, which also influences the performance of the approach. Model-Based Evolutionary Algorithms (MBEAs) are more scalable than classic GAs by virtue of capturing the structure of a problem in a model. If this model is learned through linkage learning based on the population, the learned model may also capture biases. Thus, if an asynchronous parallel MBEA is also affected by an evaluation time bias, this could result in learned models to be less suited to solving the problem, reducing performance. Therefore, in this work, we study the impact and presence of evaluation time biases on MBEAs in an asynchronous parallelization setting, and compare this to the biases in GAs. We find that a modern MBEA, GOMEA, is unaffected by evaluation time biases, while the more classical MBEA, ECGA, is affected, much like GAs are.
Arthur Guijt, Dirk Thierens, Tanja Alderliesten, Peter A. N. Bosman
2023-03-27T18:49:22Z
http://arxiv.org/abs/2303.15543v1
# The Impact of Asynchrony on Parallel Model-Based EAs

###### Abstract.

In a parallel EA one can strictly adhere to the generational clock, and wait for all evaluations in a generation to be done. However, this idle time limits the throughput of the algorithm and wastes computational resources. Alternatively, an EA can be made asynchronous parallel. However, EAs using classic recombination and selection operators (GAs) are known to suffer from an evaluation time bias, which also influences the performance of the approach. Model-Based Evolutionary Algorithms (MBEAs) are more scalable than classic GAs by virtue of capturing the structure of a problem in a model. If this model is learned through linkage learning based on the population, the learned model may also capture biases. Thus, if an asynchronous parallel MBEA is also affected by an evaluation time bias, this could result in learned models to be less suited to solving the problem, reducing performance. Therefore, in this work, we study the impact and presence of evaluation time biases on MBEAs in an asynchronous parallelization setting, and compare this to the biases in GAs. We find that a modern MBEA, GOMEA, is unaffected by evaluation time biases, while the more classical MBEA, ECGA, is affected, much like GAs are.

Genetic Algorithms, Model-Based Evolutionary Algorithms, Linkage Learning, Parallel Algorithms, Asynchronous Algorithms

## 1. Introduction

In a parallel EA, one can either strictly adhere to the generational clock and let processors idle until all evaluations in a generation are done, or drop this synchronization to improve throughput (see Figure 1). However, not synchronizing does not guarantee better overall performance of the EA. For example, while in (Becker et al., 2015) the asynchronous configuration outperformed the synchronous configuration, in (Krishnan et al., 2017) performance degraded when using an asynchronous approach. Evaluation time biases were investigated in (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2019), in which it was shown that there exists an evaluation time bias, i.e., the distribution of the population is biased such that it is correlated with the corresponding evaluation times, in a way that is not explained by fitness-based selection. They note that this bias depends on both the distribution of evaluation times and the number of processors. More specifically, a preference towards both short and long evaluation times on a flat fitness landscape was observed.

Even so, based on the current literature it is difficult to determine when asynchronous execution of an EA is problematic. This is in part due to how comparisons are often performed. First, the selection procedure is often altered when switching from synchronous to asynchronous, making it impossible to distinguish effects caused by asynchrony from those caused by steady-state selection and variation. Selection and variation are key aspects of an EA, and should also be considered as an additional influence. Furthermore, in order for the effects of time biases to be interpretable, the time distributions of evaluations need to be known as well. More generally, for EAs the population size is important as well.
Different approaches may require different population sizes to perform best, especially if variation and selection are different. When switching between synchronous and asynchronous execution, the population size should therefore be tuned again to avoid giving preferential treatment to the approach for which the population size was originally tuned.

Given all of this, we are particularly interested in the impact of selection and variation on the behavior of the EA. Together they induce a bias towards higher-fitness solutions in the population. An oversight in how these operators work could very well induce, preserve, or halt evaluation time biases too. In this work, we will explicitly also consider Model-Based Evolutionary Algorithms (MBEAs). Through the use of linkage learning (LL) in MBEAs, variation can be performed based on inferred variable dependencies. This can result in significant performance improvements. To our knowledge, no prior work has studied the impact of asynchronous parallelization on MBEAs. Yet this is of interest: LL infers the structure of a problem through the use of the population. If the composition of the population is based not only on the fitness, but also on the evaluation time associated with these solutions, then this will also affect the structure learning process. Therefore, while LL is known to improve performance, this could be disrupted by biases, such as evaluation time biases. Our research questions are therefore:

1. How does selection affect performance, i.e., the ability to find a solution with target fitness, under various evaluation time distributions in an asynchronous setting?
2. How does variation affect performance under various evaluation time distributions in an asynchronous setting?
3. More specifically, how are MBEAs like GOMEA and ECGA affected by the evaluation time distribution when made (a)synchronous parallel?

The remainder of this work is structured as follows. First, in Section 2 we describe the EAs used in this work. Following that, Section 3 describes the artificial benchmark functions and the evaluation time distributions used, in addition to a real-world NAS benchmark. The remaining experimental considerations are described in Section 4. We discuss the results in Section 5 and conclude in Section 6.

## 2. Approaches

In this work, we include a Simple GA as described in Subsection 2.1. To study how linkage learning (LL) and evaluation time biases interact, we use both a classic MBEA named the Extended Compact Genetic Algorithm (ECGA) and a modern MBEA named the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA), as described in Subsections 2.2 and 2.3 respectively.

### 2.1. Genetic Algorithm (GA)

In previous works, selection in GAs is often altered simultaneously with the (a)synchronous nature of the approach. For example, in (Krishnan et al., 2018), the synchronous configuration uses a generational selection scheme, whereas the asynchronous configuration is steady-state. A steady-state configuration exhibits different behavior compared to GAs employing a generational selection scheme (Krishnan et al., 2018). We therefore also investigate a 'synchronous' steady-state variant and an asynchronous 'generational' approach in addition to the original two configurations. Pseudocode for these approaches can be found in the supplementary material.
For the synchronous steady-state approach we operate in batches of \(|P|\) offspring, generating each offspring at the start of the evaluation, and synchronizing until all \(|P|\) offspring solutions are sampled and evaluated. Steady-state selection is then performed once the evaluation of an offspring solution is finished. For this we opt to randomly select a solution from the population and replace this solution if it is worse than the newly evaluated offspring solution. This ensures that the population's average fitness cannot decrease, and thus stops any bias contrary to fitness improvement from taking over the population.

When using generational selection in an asynchronous setting, every solution that completes evaluation is first added to a selection pool, similar to the approach described in (Krishnan et al., 2018). Once this pool has reached the prerequisite size, we apply the generational selection operator, replacing the entire population with the end result. We perform generational selection using a Parent + Offspring (P+O) tournament of size 4, where the number of offspring is equal to the population size. Tournaments are created by shuffling, then splitting the population into blocks of the tournament size, repeating as often as necessary to select \(|P|\) solutions; a sketch of both selection schemes is given below.

Figure 1. Resource consumption of synchronous and asynchronous approaches under heterogeneous evaluation times. Ideally, without synchronization the target solution can be found earlier (in this example: the most expensive solution of the second ‘generation’).

For recombination, we use Uniform Crossover (UX) and Two-point Crossover (TPX), in addition to Subfunction Crossover (SFX) for problems for which subfunction information is available. While UX is included as a baseline, TPX and SFX are included as they suit the structure of at least one of the artificial benchmark functions described in Subsection 3.1. Specifically, in SFX each block of variables that forms a subfunction is exchanged with \(p=0.5\), perfectly mixing the blocks of the concatenated DT function, whereas TPX is especially suited for the ANKL problem due to its sequential adjacent structure. These operators showcase how the behavior changes when a well-suited operator is used from the start and throughout the search, also providing an idealized reference for approaches employing LL.
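The following is a minimal sketch of the two selection schemes, assuming a simple `Solution` container with a precomputed fitness; it is illustrative only and omits the scheduling around (a)synchronous evaluation.

```python
import random
from dataclasses import dataclass

@dataclass
class Solution:
    genotype: list
    fitness: float

def p_plus_o_tournament(parents, offspring, tsize=4):
    """P+O tournament selection: shuffle the joint pool, split it into blocks
    of `tsize`, and keep the best of each block; repeat until |P| solutions
    have been selected (here |offspring| == |parents|)."""
    pool, selected = parents + offspring, []
    while len(selected) < len(parents):
        random.shuffle(pool)
        for i in range(0, len(pool), tsize):
            selected.append(max(pool[i:i + tsize], key=lambda s: s.fitness))
            if len(selected) == len(parents):
                break
    return selected

def steady_state_accept(population, child):
    """Steady-state replacement: pick a uniformly random member and replace
    it if it is worse than the newly evaluated child, so the population's
    average fitness cannot decrease."""
    i = random.randrange(len(population))
    if population[i].fitness < child.fitness:
        population[i] = child
```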
### 2.2. Extended Compact Genetic Algorithm (ECGA)

ECGA is one of the first approaches employing LL. Unlike GOMEA, explained in the next subsection, this approach still has separate recombination/variation and selection steps. This allows us to use exactly the same selection procedures as for the GA. With ECGA, a marginal product model (MPM) is learned using a metric based on model complexity and population compression after applying selection (Beng et al., 2017). For this selection step, we apply tournament selection of size 4. In the resulting model, every variable is grouped into disjoint subsets, thereby modeling each subset of variables jointly. For example, given the MPM \(\mathcal{M}=\{\{0,1\},\{2,3\}\}\) and the population (after selection) \(P_{M}=\{0011,1100\}\), we can sample 00 and 11 for each subset of variables with equal likelihood, therefore allowing us to sample 0000, 0011, 1100, and 1111.

Learning a model is costly. Therefore, performing continuous updates of this model for every solution sampled is too computationally expensive to be benchmarked properly. As such, for the asynchronous configuration the model is updated only when \(|P|\) (population size) evaluations finish, thereby updating the model with the same frequency as the generational approach. While this reduced update frequency should not have too significant an impact on the learning of the MPM, solutions are sampled using out-of-date frequency estimates.

### 2.3. Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA)

While ECGA utilizes LL, it does so differently from most modern MBEAs. Modern model-based approaches like P3 (Beng et al., 2017; Chen et al., 2017), DSMGA-II (Chen et al., 2017), and GOMEA (Chen et al., 2017) all use an incremental change and accept/deny mechanism, reminiscent of local search. This allows these approaches to utilize non-disjoint models, like the Incremental Linkage Set (ILS) in DSMGA-II and the Linkage Tree (LT) in GOMEA and P3. In this work we will be using the LT, which is constructed through UPGMA hierarchical clustering applied to the normalized mutual information (NMI) of the variables in the population. Each of these models is often represented as a Family of Subsets (FOS), i.e., a list of subsets containing variables to which variation should be applied jointly. For the LT, the resulting tree is flattened such that each node in the tree corresponds to a subset of variables.

Variation and selection are performed using Gene-pool Optimal Mixing (GOM). In GOM, changes are made to subsets of variables as defined by a Family of Subsets, sampled from the population. After each change, solutions immediately compete against their parent. Because of this, changes are evaluated more directly, preventing changes to other variables from being a source of noise when assessing the quality of the current change (Chen et al., 2017). If no change was made to a solution through all steps in GOM, or the strict non-improvement stretch of a solution (\(1+\left\lfloor\log_{2}(|P|)\right\rfloor\)) was reached, Forced Improvements (FI) are applied. FI is GOM where the donor is the current best solution (elitist). If FI fails to improve a solution, the solution is replaced with the current elitist. While this integrated variation and selection operator prevents us from changing the selection operator, this combined recombination and selection process is interesting in itself.

#### 2.3.1. Synchronous

The synchronous approach closely follows the sequential version of GOMEA. Every generation starts with learning a linkage model. Then, an offspring population is made, initially containing a copy of each individual in the population. GOM and potentially FI are then scheduled to be applied in parallel on each of these offspring solutions, leaving the population from which donors are sampled during GOM unchanged. When processors are available, and there are still offspring solutions left on which GOM needs to be applied, GOM is applied to the offspring solutions in parallel. Each application of GOM is scheduled continuously until all involved evaluations have completed. Once GOM (and FI) complete, the processor is freed up again. A generation ends once all solutions have had GOM applied to them once.

#### 2.3.2. Asynchronous

When making GOMEA asynchronous, there is no longer a generational offspring population which is generationally improved. Instead, GOM is scheduled to be applied after initialization, and after completing GOM, using a queue. Once a processor has finished its current task, the next task from the queue gets executed. A condensed sketch of GOM itself is given below.
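The sketch assumes maximization, the `Solution` container from the earlier sketch, and an `evaluate` callback; it compresses FI into a simple fallback to the elitist, whereas the actual approach re-runs the mixing loop with the elitist as donor and tracks the no-improvement stretch.

```python
import random

def gom(parent, population, fos, evaluate, elitist):
    """Gene-pool Optimal Mixing (sketch): for each variable subset in the
    FOS, copy that subset from a random donor and keep the change only if
    fitness does not decrease; every change competes against the parent."""
    child = Solution(list(parent.genotype), parent.fitness)
    changed = False
    for subset in random.sample(fos, len(fos)):  # FOS elements in random order
        donor = random.choice(population)
        if all(child.genotype[i] == donor.genotype[i] for i in subset):
            continue  # donor is identical on this subset; nothing to try
        backup = [child.genotype[i] for i in subset]
        for i in subset:
            child.genotype[i] = donor.genotype[i]
        fitness = evaluate(child.genotype)
        if fitness >= child.fitness:   # accept equal-or-better changes
            child.fitness, changed = fitness, True
        else:                          # revert the rejected change
            for i, v in zip(subset, backup):
                child.genotype[i] = v
    if not changed:
        # Forced Improvements, compressed: fall back to the current elitist.
        child = Solution(list(elitist.genotype), elitist.fitness)
    return child
```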
At the start of (asynchronous) GOM, if GOM has been applied \(|P|\) (population size) times, a new FOS is learned. Furthermore, a copy of the population is made. This effectively turns every application of GOM into its own mini-generation. Consequently, at the end of GOM we copy the generated offspring to the population from which donors are sampled. We label this configuration "a/e", for _asynchronous end_. However, this configuration leads to significant use of out-of-date information: none of the accepted changes of solutions undergoing GOM are visible to other solutions. As such, we also evaluate an alternative configuration that copies the offspring to the shared population at _intermediate_ stages during GOM, not just at the end. This configuration is labelled "a/i".

## 3. Problems

Previous work has indicated that heritable heterogeneous evaluation times can negatively influence the behavior of an asynchronous EA, in particular when evaluation times depend on the solutions themselves rather than on external factors like machine load. A good example of a problem with this property is training a neural network, which we will discuss in Subsection 3.2. However, both the fitness landscape and evaluation times of such a problem are generally complex, which may complicate analysis. We will therefore first describe benchmark functions with varying kinds of landscapes and structure, for which we will also vary the evaluation times associated with the solutions, in Subsection 3.1.

### 3.1. Artificial Benchmark Functions

In (Han et al., 2017) it is indicated that evaluation times need to be heritable, and as such we will focus on this aspect. Similarly, evaluation times need to be preserved under variation too, as otherwise no time bias can develop. Furthermore, in (Han et al., 2017), settings in which the evaluation cost was perfectly positively and negatively correlated with fitness were investigated, and no significant difference in performance was found. As any bias contrary to the improvement of fitness is unlikely to survive selection, the performance of an EA is likely not impacted in such settings. We therefore select a distribution based on the genotype which is not a simple function of the fitness. Furthermore, as the results in (Han et al., 2017) indicate a bias towards extreme evaluation times, we will control where in our benchmark functions these extremes are located.

The evaluation time setting in this work is expressed as a ratio \(a:b\), where \(b\) is the cost of the optimum \(s^{*}\) and \(a\) is the cost of the bitwise complement \(\tilde{s^{*}}\), i.e., the solution with all bits flipped. The evaluation time \(E(s)\) of a solution \(s\), given the normalized Hamming distance \(H(s,\ s^{*})\) between this solution \(s\) and the optimum \(s^{*}\), is then:

\[E(s)=H\left(s,\ s^{*}\right)\ a+\left(1-H\left(s,s^{*}\right)\right)\ b \tag{1}\]

We choose to use the ratios \(100:1\), \(10:1\), \(2:1\), \(1:1\), \(1:2\), \(1:10\) and \(1:100\), that is, ranging from a cheaper optimum to a more expensive optimum; a small sketch of this timing function follows below.
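Equation (1) is simple enough to state directly in code; the sketch below assumes bit-list genotypes.

```python
def evaluation_time(s, optimum, a, b):
    """Eq. (1): interpolate between cost `b` at the optimum and cost `a` at
    its bitwise complement, via the normalized Hamming distance."""
    h = sum(x != y for x, y in zip(s, optimum)) / len(optimum)
    return h * a + (1 - h) * b

optimum = [1] * 20
print(evaluation_time([0] * 20, optimum, a=10, b=1))  # 10:1 setting, complement -> 10.0
print(evaluation_time(optimum, optimum, a=10, b=1))   # optimum -> 1.0
```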
#### 3.1.1. Concatenated Deceptive Trap (DT)

The first problem we will use is the concatenated DT function (Han et al., 2017; Goyal et al., 2017). It consists of \(n\) blocks of size \(k\) which are concatenated together, forming a full string of length \(\ell=nk\). In this work, we will only consider the case \(k=5\). To evaluate the function, the number of '1' bits within each block \(b\), its unitation \(u\), is first counted. Following that, the \(DT\) function is applied to the unitation of each block \(0\) through \(n-1\), and the results are aggregated by summation:

\[DT(u)=\left\{\begin{array}{ll}k&u=k\\ k-u-1&\text{otherwise}\end{array}\right. \tag{2}\]

\[f(x)=\sum_{b=0}^{n-1}DT\left(\sum_{i=bk}^{bk+k-1}x_{i}\right) \tag{3}\]

As there are many search paths indicating that more zeroes is better, high-fitness solutions consisting of zeroes are easiest to find. The solution consisting of all zeroes is referred to as the deceptive attractor. Yet the needle in a haystack where \(u=k\) has better fitness: \(k\), as opposed to \(k-1\) for the deceptive attractor. It is therefore the string of all ones that is actually the optimum of this function. In order to solve this problem in a scalable manner, variables should be exchanged at the level of these blocks (Han et al., 2017), for example by recognizing the block structure using linkage learning. The complement of the optimum to this problem consists of all zeroes, and is the solution consisting of only attractors. We have defined the range of evaluation times depending on these two solutions. A preference towards cheaper solutions could therefore make a cheap-to-evaluate attractor even more attractive.

#### 3.1.2. Adjacent NK-Landscapes (ANKL)

While the (additively decomposable) concatenated DT function is difficult to solve with a local searcher, it is separable. This makes the problem easy to solve if the separability is known, or when this separability can be inferred. We therefore also consider the non-separable problem of ANKL. This problem consists of overlapping blocks of \(k\) adjacent variables with some stride \(s\) from block to block. In this work, we consider \(k=5\) and \(s=2\). For each block \(i\), a randomly generated function \(f_{i}\) is defined that maps each genotype of this block to a value in \([0,1]\). Given random functions \(f_{0}\) through \(f_{n-1}\), where \(x\) is a genotype of length \(\ell=sn+k-1\):

\[f(x)=\sum_{i=0}^{n-1}f_{i}\left(x_{si},\ldots,\ x_{si+k-2},\ x_{si+k-1}\right) \tag{4}\]

Due to this more complex overlapping structure, as well as the random nature of the subfunctions, it is more difficult for linkage learning approaches to configure the linkage model appropriately. As a result, such approaches require more generations to obtain a suitable model. This may therefore provide more time for any evaluation-time biases to steer the population towards or away from the optimum, potentially causing structure to be found earlier, or preventing the structure from being found at all. Sketches of both benchmark functions follow below.
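The two benchmark functions can be sketched as follows; this is an illustrative implementation under the parameter choices above, with bit-list genotypes assumed.

```python
import random

def deceptive_trap(x, k=5):
    """Concatenated DT (Eqs. 2 and 3): sum DT(u) over the n = len(x)/k
    non-overlapping blocks, where u is the number of ones in a block."""
    total = 0
    for b in range(len(x) // k):
        u = sum(x[b * k:(b + 1) * k])
        total += k if u == k else k - u - 1
    return total

def make_ankl(n, k=5, s=2, seed=0):
    """Adjacent NK-landscape (Eq. 4): n overlapping blocks of k adjacent
    variables with stride s between block starts; each block i has its own
    random lookup table f_i with values in [0, 1]."""
    rng = random.Random(seed)
    tables = [[rng.random() for _ in range(2 ** k)] for _ in range(n)]

    def f(x):
        total = 0.0
        for i in range(n):
            bits = x[s * i:s * i + k]  # variables x_{si} .. x_{si+k-1}
            total += tables[i][int("".join(map(str, bits)), 2)]
        return total

    return f

print(deceptive_trap([1] * 20))  # optimum: 4 blocks * k = 20
print(deceptive_trap([0] * 20))  # deceptive attractor: 4 * (k - 1) = 16
f = make_ankl(n=8)               # last block uses indices 14..18
print(f([0, 1] * 10))
```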
### 3.2. NASBench 301

While benchmark functions are interesting and useful for analysis, they are not necessarily representative of practical problems. Real-world problems often contain elements that simpler benchmark functions do not account for. We will therefore apply the aforementioned approaches to the Neural Architecture Search benchmark NASBench 301 (Nagas et al., 2017). NASBench 301 is a benchmark for neural architecture search applied to the DARTS search space. In this search space, each network is trained from scratch. In order to make this less expensive as a benchmark, the benchmark provides both a surrogate model for the fitness and one for the corresponding evaluation time. This allows for an efficient simulation of a run on this problem without requiring a GPU for hours. We use version 0.9 of the XGB surrogate model for performance and the LGB model for runtime. As noisy objective functions are not a subject of research in this work, we have disabled noise for the performance model. For this experiment, only the runtime model is used as the evaluation time distribution; the runtime model does not contain noise.

## 4. Experimental Setup

For all problems, a run using a specific population size is stopped if it has reached the target value, or if it has converged, i.e., when all genotypes in the population are identical. As there will be configuration and problem pairs for which the target value is not reached within a reasonable amount of time, for each set of problems we also limit the amount of time spent on a full bisection run. If bisection was terminated prematurely, we select the smallest successful population size found by bisection within the time limit. Statistical tests are performed using the Mann-Whitney U-test (Mann and Whitney, 1947) with \(p=0.05\) and Holm-Bonferroni correction (Holm, 1979) where applicable. All experiments are carried out on a machine with two AMD EPYC 7282 16-Core Processors at 2.8GHz, with 252 GB of RAM. Source code will be made available.

### 4.1. Artificial Benchmark Functions

Prior work (Mohammad et al., 2017; Wang et al., 2018) investigated the distribution of evaluation times. However, if an approach is influenced such that the distribution of evaluation times is biased, this does not necessarily indicate a negative impact on the performance of the approach, such as the time required to reach the optimum. We will focus on performance under various evaluation time configurations. However, using standard performance metrics is problematic. When using the number of evaluations required to reach a target fitness, synchronous approaches are favored, as waiting does not incur any penalty, whereas utilizing these resources with additional evaluations does count as additional evaluations.

For our first experiment, we will be changing the distribution of evaluation times, and compare all investigated distributions. Using the wall time in this comparison is problematic, as the total wall time spent changes with the distribution. For example, scaling all evaluation times by 10 will not change the evaluation times with respect to one another; such a run will proceed identically, except with evaluation times that are 10 times larger. For more complex distributions it would be hard to say whether the approach is actually negatively affected by the change in evaluation times, or whether the solutions to be evaluated are simply more costly. We instead choose a measure that stays constant if behavior is the same, yet still indicates when performance worsens. For this we use the minimally required population size to reach a target value, as determined by bisection (a sketch of this search follows below). This measure works for a convergent EA, as the population acts as a buffer against diversity loss: the larger the population, the longer it takes for biases to take over the population. If a bias leads to a decrease in diversity such that the target fitness is no longer reached, increasing the population size will likely allow for solving the problem again (assuming no limits are hit). Furthermore, we perform these first experiments with the number of processors equal to \(|P|\) (the population size), as this maximizes the degree of parallelization proportional to the number of evaluations performed. Additionally, given this setting, the evaluations with a fast evaluation time will also be the first to complete, which is not guaranteed with low degrees of parallelization.
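A sketch of the bisection routine for the minimally required population size is given below, assuming that success is (approximately) monotone in the population size; the actual protocol additionally caps the total time of a full bisection run.

```python
def minimal_population_size(run, low=2, high=4096):
    """Bisection for the smallest population size at which `run(pop_size)`
    reaches the target fitness (`run` returns True on success). A sketch:
    assumes `low` fails and success is monotone in the population size."""
    while not run(high):       # grow the bracket until a success is found
        low, high = high, high * 2
    while high - low > 1:      # shrink [low, high]: low fails, high succeeds
        mid = (low + high) // 2
        if run(mid):
            high = mid
        else:
            low = mid
    return high
```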
The minimally required population size cannot be readily compared across different approaches for performance. Even if an approach has a smaller minimally required population size, it may still perform more evaluations and spend more time finding the optimum than another approach. Furthermore, the number of times offspring are generated also impacts the amount of resources used. Yet, as before, measuring this invariant to the evaluation time distribution is difficult. Instead, we compare how the minimally required population size scales across varying evaluation time distributions. For these experiments, each configuration will be evaluated for 100 random seeds, with each bisection run limited to 1 hour. This is orders of magnitude more than what most approaches need, and ensures even the worst-performing algorithms have a chance.

### 4.2. NASBench 301

While comparing how approaches scale across different time distributions allows us to study the impact of evaluation time biases, a practical problem often has only a single evaluation time distribution associated with it. Furthermore, the amount of computational hardware available is not unbounded in practice. If this were not the case, one could evaluate the entire search space in parallel at the cost of the most expensive solution. A more practical goal of parallelization for a specific problem is to minimize the amount of wall time spent to hit a target fitness given a limited amount of resources. As we only regard a single evaluation time distribution, the aforementioned concerns do not apply to the NASBench 301 experiment. Therefore, for the NASBench 301 experiment we measure the simulated wall time required to reach a target value. Here, first a range bounding this minimal time is determined, followed by a modified golden section search (Mohammad et al., 2017), detailed further in the supplementary material. This is to find the population size that minimizes the corresponding wall time, so that each approach can be compared fairly. For this experiment the number of processors is restricted to 64. Each configuration is evaluated for 20 different random seeds for at most 16 hours, due to the more costly nature of using the surrogate over a benchmark function. As this time limit was found to be insufficient in preliminary experiments, due to a significant amount of time being wasted on small population sizes, we additionally require an improvement to be found every \(2e+10P\) evaluations, where \(e\) is the number of evaluations issued at the last improvement and \(P\) is the population size.

## 5. Results and Discussion

First, we will discuss the results on the artificial benchmark functions (Table 1 for DT and Table 2 for ANKL). After this, we will discuss the results on NASBench 301.

### 5.1. Artificial Benchmark Functions

From Figure 2 it is apparent that asynchronous configurations experience an evaluation time bias that leads to a change in behavior. Specifically, the required population size is lower when the optimum is cheaper, i.e., faster to evaluate, and larger when the optimum is more expensive, i.e., takes longer to evaluate. Simultaneously, synchronous approaches are invariant to the distribution of evaluation times investigated.
This is in line with what would be expected based on the results in the literature (Mohammad et al., 2017; Wang et al., 2018). However, the extent of the differences in minimally required population size is highly dependent on the crossover used. When the most suitable crossover is used, i.e., SFX for DT and TPX for ANKL, the differences between the timing settings are small. At the same time, when an unsuitable crossover is used, differences in required population size range _across orders of magnitude_. In the worst case, the problem is not solved within the allotted time for more expensive optima; as such, evaluation time biases can negatively impact the performance of an approach.
The differences between the two selection methods is negligible if the distribution updates less frequently, as is the case for ECGA. First of all, there is the oddity that the constant time distribution requires a larger population size in order to find the optimum than the other asynchronous configurations for ECGA. The required population size is more in line with the synchronous configuration than the other time distributions. We explain this as follows. In an asynchronous setting with heterogeneous evaluation times, after the first few solutions finish evaluation, not enough solutions have finished evaluation yet to update the model. As such, the next solutions to be evaluated are still sampled from the initial random model. These solutions are therefore additional random solutions. Conversely, in the case where the evaluation time distribution is constant, all evaluations finish at the same time. This results in the model being updated. Therefore, the next solutions to be evaluated on the freed up processors, will be sampled using an _updated_ model, potentially with some (evaluation time) bias. After a model update, only the solutions that are currently being evaluated can originate from an older model. There are at most "number of processors" such solutions. In the case of the experiments above, this would be equal to the population size. This is reflected in the results: the minimally required population size is approximately twice as big, only for the constant evaluation time. The set of solutions that are currently evaluating may therefore act like an extension of the population itself. When observing how ECGA's minimally required population size scales for different evaluation time settings in Figure 2, note that it scales worse compared to a GA with a suitable crossover for ANKL, and even with a non-suitable crossover with generational selection. Variation is more than the subsets of variables that are captured in the MPM model. How the model is used is just as important. In this case, the global sampling for each subset seems to be worse than the recombinative crossover of a GA. In conclusion, though not necessarily the fault of LL, LL within an MBEA is no guarantee that variation will perform well. In contrast to ECGA and the GAs, GOMEA is found to be invariant to the evaluation time setting in most cases. There exists only a single time setting and problem combination for GOMEA for which the medians are statistically significantly different from the rest: 'a/e' on DT for 1:100, see Table 1. This is likely due to the combined variation and selection method used in GOMEA. As changes are made to individuals, they only compete against their parent. When parent and offspring are similar in evaluation time, this automatically results in niching behavior with respect to the evaluation times of solutions. In comparison to global selection, this approach stops solutions that evaluate quicker from taking over the population, in effect removing a large source of evaluation time bias. ### NASBench 301 For NASBench 301 the distribution of the objective and evaluation time of a solution is shown in Figure 3. From the positive correlation and observations on the benchmarks one would expect to Figure 3. Objective (accuracy of the trained network) versus evaluation time sampled by runs of the approaches for NASBench 301. Showing the experienced correlation between the two for the approaches. see asynchronous configurations to be outperformed by their corresponding synchronous approaches. 
Yet, if one refers to Table 3, this is not the case. As a matter of fact, some of the best-performing approaches are asynchronous. We explain this as follows. Referring to Figure 3, one may note that the correlation is not perfect. The target solution for this problem is not actually the most expensive or least expensive solution, as was the case for the artificial benchmark functions. Additionally, an EA does not utilize all of these solutions simultaneously. If we look at the correlation from a particular fitness value onwards, i.e., truncating the population, the correlation decreases as this fitness threshold increases. Eventually, this correlation even becomes slightly negative for these data points: a setting which would actually be considered beneficial for steady-state asynchronous GAs based on our previous results. For ECGA this is particularly notable. The approach itself is known to have trouble working with certain multi-modal functions (Beng et al., 2017), and it seems NASBench 301 is among these problems. Even so, the asynchronous approach has runs in which a solution with at least the target fitness was found, whereas the synchronous approach does not. This could be due to evaluation time biases helping the approach, much like what happens with the asynchronous steady-state GA for ANKL. It is therefore plausible that evaluation time biases may in part be responsible for the improvements in performance observed for asynchronous approaches, rather than only the improvements in throughput. Furthermore, we have observed ECGA to prematurely merge FOS elements together on NASBench 301. As the model is unlikely to merge variables if they are not correlated, such a merge seems to only further reinforce correlation for this problem. Combined with the high selection pressure, this is likely to result in premature convergence to a single mode. In contrast, the linkage tree used in GOMEA does not suffer from this issue. The LT FOS always includes the univariate FOS elements: subsets with each variable on their own. Combined with the niching behavior described above, this significantly reduces the possibility of premature convergence no matter the source. This further reinforces that an MBEA not only has to learn the right linkage from the population, but also use it in the right manner. For GOMEA we would expect a difference between the time required for the synchronous and asynchronous approach. Since with GOM many similar variation steps are done to a single solution sequentially, the variance in evaluation times is potentially amplified. Furthermore, population sizes are relatively small compared to the GA and ECGA. We would therefore expect the throughput for asynchronous GOMEA to be considerably higher, and as such the time required to be lower. However, no statistically significant difference in time between the asynchronous/e and synchronous approach is observed (\(p=0.52\), \(U=224\), \(20\) samples). Furthermore, a more detailed investigation of a single run does indicate that the amount of time spent idle for the synchronous approach is notable, on average 140059 s per processor over the entire run. In contrast, there is a statistically significant difference for the time required between the asynchronous/i and synchronous approach (\(p=0.007<0.05\), \(U=100\), \(20\) samples). Effectively, when using outdated parents, offspring are more likely to have a lower fitness value. This degrades the performance of the approach, counteracting any gains in throughput.
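As a side note on methodology, the significance statements above are of the kind produced by a two-sided Mann-Whitney U test on per-run wall-clock times. A minimal, self-contained Python sketch of such a comparison follows; the sample values are synthetic placeholders, not the measurements reported in this paper.

```python
# Minimal sketch of the kind of significance test quoted above: a two-sided
# Mann-Whitney U test on per-run wall-clock times. The sample values below are
# synthetic placeholders, not the measurements reported in this paper.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
t_sync = rng.normal(loc=1.00, scale=0.10, size=20)   # 20 runs per configuration
t_async = rng.normal(loc=0.92, scale=0.10, size=20)

stat, p = mannwhitneyu(t_sync, t_async, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3f}")  # difference is significant if p < 0.05
```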
For an asynchronous MBEA to actually gain performance compared to its synchronous counterpart, model updates should not be too infrequent, and the material with which recombination is performed should be recent. GOMEA could still be improved in this regard: each run of GOM keeps a copy of the population for sampling. As this population was created at the start of GOM, prior to the first evaluation, this population will gradually become outdated. Updating this population during GOM could result in recombination with more up-to-date solutions, potentially further improving performance. ## 6. Conclusions Answering RQ 1, we have observed that steady-state asynchronous EAs are much more vulnerable to the biases induced by heterogeneous evaluation times, compared to asynchronous EAs using a generational scheme. Furthermore, answering RQ 2, when paired with a variation operator that is not competent in terms of improving fitness, the impact on an algorithm's capability to solve a problem can be severe. In contrast, when using well-suited variation and selection operators, any differences between synchronous and asynchronous configurations become much smaller. For the first time we have investigated the impact of heterogeneous evaluation times on parallel MBEAs and linkage learning (LL). The addition of LL promises to automatically align a variation operator with the structure of a problem. However, answering RQ 3, there are significant differences in the impact of evaluation times on the performance of ECGA and GOMEA. While LL is not affected to the extent that evaluation time biases prevent the approach from finding high-quality solutions, its performance greatly depends on how variation and selection are performed. Finally, rather than negatively impacting the results, LL, when paired with the right variation and selection scheme, can be a useful tool for obtaining good performance in general, even when an EA is asynchronous. GOMEA is an example of such an EA. GOMEA gets selection and variation right for all the problems evaluated, and is invariant to the choice between a synchronous or asynchronous configuration. ###### Acknowledgements. This publication is part of the project "DAEDALUS - Distributed and Automated Evolutionary Deep Architecture Learning with Unprecedented Scalability" with project number 18373 of the research programme Open Technology Programme which is (partly) financed by the Dutch Research Council (NWO). Other financial contributions as part of this project have been provided by Elekta AB and Ortec Logiqcare BV.
2309.00673
Naked forward shock seen in the TeV afterglow data of GRB221009A
We explore the implications of the light curve of the early TeV gamma-ray afterglow of GRB221009A reported by the LHAASO collaboration. We show that the reported offset of the reference time, $T_*$, allows the determination of the relativistic jet activation time, which occurs approximately $200\,\mathrm{s}$ after the GBM trigger time and closely precedes the moment at which GBM was saturated. We find that while the LHAASO data do not exclude the homogeneous circumburst medium scenario, the progenitor wind scenario looks preferable, finding excellent agreement with the expected size of the stellar bubble. We conclude that the initial growth of the light curve is dominated by processes internal to the jet or by gamma-gamma attenuation on the photons emitted during the prompt phase. Namely, either the activation of the acceleration process or the decrease of internal gamma-gamma absorption can naturally explain the initial rapid flux increase. The subsequent slow flux growth phase observed up to $T_*+18\,\mathrm{s}$ is explained by the build-up of the synchrotron radiation -- the target for inverse Compton scattering, which is also supported by a softer TeV spectrum measured during this period. The duration of this phase allows an almost parameter-independent determination of the jet's initial Lorentz factor, $\Gamma_0\approx600$, and magnetic field strength, $B'\sim0.3\,\mathrm{G}$. These values appear to match well those previously revealed through spectral modeling of the GRB emission.
Dmitry Khangulyan, Felix Aharonian, Andrew M. Taylor
2023-09-01T18:00:02Z
http://arxiv.org/abs/2309.00673v1
# Naked forward shock seen in the TeV afterglow data of GRB221009A ###### Abstract We explore the implications of the light curve of the early TeV gamma-ray afterglow of GRB221009A reported by the LHAASO collaboration. We show that the reported offset of the reference time, \(T_{*}\), allows the determination of the relativistic jet activation time, which occurs approximately \(200\,\mathrm{s}\) after the GBM trigger time and closely precedes the moment at which GBM was saturated. We find that while the LHAASO data do not exclude the homogeneous circumburst medium scenario, the progenitor wind scenario looks preferable, finding excellent agreement with the expected size of the stellar bubble. We conclude that the initial growth of the light curve is dominated by processes internal to the jet or by gamma-gamma attenuation on the photons emitted during the prompt phase. Namely, either the activation of the acceleration process or the decrease of internal gamma-gamma absorption can naturally explain the initial rapid flux increase. The subsequent slow flux growth phase observed up to \(T_{*}+18\,\mathrm{s}\) is explained by the build-up of the synchrotron radiation -- the target for inverse Compton scattering, which is also supported by a softer TeV spectrum measured during this period. The duration of this phase allows an almost parameter-independent determination of the jet's initial Lorentz factor, \(\Gamma_{0}\approx 600\), and magnetic field strength, \(B^{\prime}\sim 0.3\,\mathrm{G}\). These values appear to match well those previously revealed through spectral modeling of the GRB emission. Gamma-rays (637) -- Gamma-ray transient sources (1853) -- Gamma-ray bursts (629) -- Gamma-ray sources (633)

Dmitry Khangulyan, Felix Aharonian, Andrew M. Taylor

## 1 Introduction Gamma-ray bursts (GRBs) result from gigantic explosions in the Universe occurring at redshifts on average \(z\sim 2-3\). These events are believed to be powered by ultra-relativistic outflows formed either by the collapse of massive stars or by binary system mergers. Thanks to the bright, non-thermal emission generated in their outflows, GRBs are detected in a broad range of frequency bands. Synchrotron emission is believed to dominate from the X-ray up to MeV gamma-ray energies, while the radiation in the GeV and TeV bands emerges from inverse Compton (IC) scattering. The detection of TeV gamma rays from GRBs is considered an important tool for constraining the physical conditions in the production region. Unfortunately, the attenuation by extragalactic background light (EBL) can severely hinder the detection of this component from GRBs at cosmological distances. So far, only a few GRBs have been detected in the TeV regime (MAGIC Collaboration et al., 2019; Abdalla et al., 2019; H. E. S. S. Collaboration et al., 2021), revealing that the GRB afterglow phase is characterized by bright TeV emission. The position of GRB221009A at the trigger time, \(T_{0}\), serendipitously appeared in the LHAASO field of view. Thanks to the extraordinarily high flux of TeV radiation and the superior sensitivity of the LHAASO detectors, the growth phase of the TeV emission associated with GRB221009A was detected in the background-free observation regime.
More than \(60,000\) very-high-energy (VHE) photons have been detected within the first \(3,000\,\mathrm{s}\) after the trigger, providing unprecedented TeV photon statistics allowing for the potential detection of \(\geq 10\%\) fluctuations on \(\sim 1\) min time scales. However, the TeV light curve demonstrated a very smooth evolution. Together with the accurately measured time delay, this smooth behavior of the light curve suggests that the TeV emission originates from the forward shock, i.e., represents the early afterglow emission phase. Note that GRB190114C was detected in the same early afterglow stage by MAGIC but with much lower statistics (MAGIC Collaboration et al., 2019). The two other TeV GRB afterglows, GRB180728B and GRB190829A, were detected with H.E.S.S. in the late afterglow phase (Abdalla et al., 2019; H. E. S. S. Collaboration et al., 2021). While light curves covering early afterglow phases have already been obtained for many GRBs in the X-ray, MeV, and occasionally also GeV gamma-ray bands, the prompt phase emission strongly dominates in these bands at the early epochs, making the study of the onset of the afterglow emission with X-ray and MeV/GeV data alone challenging (see, however, Ghirlanda et al., 2010). LHAASO observations of GRB221009A reveal no evidence for TeV emission at the prompt phase. Thus, the early TeV light curve provides us with a unique opportunity to study the early afterglow physics, i.e., the processes occurring at the forward shock, without contamination from the prompt emission. GRB221009A thus offers the unique opportunity to observe the "naked" launching of a shock into the circumburst medium. In particular, the TeV light curve helps us to understand the shock dynamics of the initial phase and the associated particle acceleration mechanism. Several factors may significantly affect the early afterglow light curve, while being less important during the late afterglow phase. For example, spectral modeling of afterglow emission favors a Gauss-strength magnetic field in the production region (e.g., Derishev & Piran, 2019), which requires that the magnetic field is significantly amplified. Processes related to the magnetic field evolution are believed to be key factors leading to particle acceleration at relativistic shocks (see Lemoine & Pelletier, 2012, for a review), and the time required for their activation may lead to a delay in the onset of TeV particle acceleration. Also, at the initial stage, the blast wave propagates through an inhomogeneous environment, which includes the SN shell, the stellar wind, its termination shock, and the stellar bubble (i.e., a layer of shocked stellar wind and compressed interstellar medium). As the blast wave's propagation proceeds in the ultrarelativistic regime, this inhomogeneity may cause an apparent delay with respect to the trigger time. In this study, we focus on effects related to the shock dynamics at the early afterglow stage and investigate the implications of the LHAASO observation of GRB221009A for the properties of the circumburst medium. ## 2 Shock dynamics The self-similar solution by Blandford & McKee (1976, hereafter BM76) provides a good description for the dynamics of the forward shock only when the mass of the swept-up shell, \(M\), is large enough that \(E_{0}\lesssim\Gamma_{0}^{2}Mc^{2}\), i.e., when \[M\gtrsim 6\times 10^{-5}{\rm M_{\odot}}\bigg{(}\frac{E_{0}}{10^{55}\,{\rm erg }}\bigg{)}\bigg{(}\frac{\Gamma_{0}}{300}\bigg{)}^{-2}\,.
\tag{1}\] An extrapolation of the BM76 solution to the very beginning of the explosion is, however, incorrect. During the initial explosion phase, the propagation of the blast wave is little affected by the circumburst medium (referred to as the "coasting phase", an analogue of the "ejecta-dominated phase" for supernova explosions). Thus the blast wave moves with a nearly constant Lorentz factor, and lags behind the photon front propagating out from the explosion origin differently than predicted by the BM76 solution (Kobayashi & Zhang, 2007). This lag leads to an offset of the self-similar solution reference time, \(T_{*}\), with respect to the trigger time, \(T_{0}\), even if the trigger time accurately determines the moment when the GRB jet starts propagating from its origin. We note additionally that the instruments can trigger on the GRB precursor, in which case the reported trigger time may refer (at least for some specific cases) to a time prior to the jet activation. Since the blast wave propagates in the ultrarelativistic regime, one needs to account for relativistic effects and kinematic delays of the signal. Let \(\tau\) be the time elapsed since the explosion in the progenitor reference frame; \(\tau^{\prime}\) the time in the blast co-moving frame; and \(t\) the detection time (measured relative to the trigger time) of (hypothetical) photons emitted at the shock front at time \(\tau\). If the blast wave Lorentz factor is \(\Gamma\), then \(d\tau=\Gamma\,d\tau^{\prime}\) due to relativistic time dilation. If two photons are emitted at the blast wave front at \(\tau\) and \(\tau+d\tau\), then their detection times are separated by the observer time interval \(dt=d\tau\,/(2\Gamma^{2})\), as the blast wave front moves with \(v\approx c(1-\nicefrac{{1}}{{(2\Gamma^{2})}})\). Since the blast wave Lorentz factor changes as the wave propagates, then \[t_{2}-t_{1}\approx\int\limits_{\tau_{1}}^{\tau_{2}}\frac{d\tau}{2\Gamma^{2 }(\tau)}\approx\int\limits_{R_{1}}^{R_{2}}\frac{dr}{2c\Gamma^{2}(r)}\,. \tag{2}\] If a photon is emitted from the blast wave position at the moment when the wave reaches a distance \(R\) from the explosion origin (the distance is measured in the progenitor frame), the observer detects this photon delayed with respect to the trigger time by \[t\approx\int\limits_{0}^{R}\frac{dr}{2c\Gamma^{2}(r)}\,. \tag{3}\] During the coasting phase, the bulk Lorentz factor of the forward shock is roughly constant, \(\Gamma_{0}\). Thus, Eq. (3) reduces to \(t\approx R/(2c\Gamma_{0}^{2})\). As the initial Lorentz factor is expected to be large, \(\Gamma_{0}\gg 10^{2}\), the blast wave overtakes the supernova shell within a very short time interval for the observer: \[t_{\rm shell}\sim 20\biggl{(}\frac{R_{\rm shell}}{10^{12}\,{\rm cm}}\biggr{)} \biggl{(}\frac{\Gamma_{0}}{300}\biggr{)}^{-2}\,{\rm ms}\,. \tag{4}\] Since the LHAASO photon statistics for GRB221009A correspond to \(0.02\,{\rm photon\,ms^{-1}}\), resolving such millisecond time-scales remains challenging even for present-day gamma-ray detectors. ### Interaction with stellar wind As illustrated in Fig. 1, after emerging through the SN shell, the blast wave starts interacting with the stellar wind. The typical stellar wind speed is \(v_{\rm w}\approx 2\times 10^{3}\,{\rm km\,s^{-1}}\) and the mass loss rate of a massive star can be quite high, say \(\dot{M}_{\rm w}\sim 10^{-7}{\rm M_{\odot}\,yr^{-1}}\).
Thus, the blast wave converges to the self-similar solution when it reaches a distance of \[\begin{split} R_{\rm w,ss}&\sim\frac{E_{0}v_{\rm w}}{ \Gamma_{0}^{2}c^{2}\dot{M}_{\rm w}}\\ &\sim 1\bigg{(}\frac{E_{0}}{10^{55}\,{\rm erg}}\bigg{)}\bigg{(}\frac{ \Gamma_{0}}{300}\bigg{)}^{-2}\times\\ &\bigg{(}\frac{v_{\rm w}}{2\times 10^{3}\,{\rm km\,s}^{-1}} \bigg{)}\bigg{(}\frac{\dot{M}_{\rm w}}{10^{-7}{\rm M}_{\odot}\,{\rm yr}^{-1}}\bigg{)}^{-1}\, {\rm pc}\,.\end{split} \tag{5}\]

Figure 1: Structure of the circumburst environment. The GRB jet initially propagates through the fast stellar wind, then subsequently interacts with the stellar bubble (shocked stellar wind and compressed ISM layer). After that the jet finally reaches the regular ISM.

Figure 2: Transition from the coasting phase to the BM76 self-similar solution results in an offset of the reference time with respect to the trigger time. Top panel: GRB jet interacting with the progenitor wind; middle panel: GRB jet interacting with a homogeneous medium; bottom panel: GRB jet reaching the stellar bubble.

In principle, this distance may exceed the inner size of the stellar bubble, \(R_{\rm bbl,r}\). However, for the present discussion we assume that it is smaller. For the range of distances \(R_{\rm w,ss}<R<R_{\rm bbl,r}\), the blast wave propagates in the self-similar regime within the circumburst density \(n_{\rm c}\propto R^{-2}\). In this case, the blast wave Lorentz factor is \[\Gamma\approx\sqrt{\frac{E_{0}v_{\rm w}}{R\dot{M}_{\rm w}c^{2}}}\,. \tag{6}\] Thus, according to Eq. (2), the blast wave front lags behind the trigger photon front by \[\begin{split} t=\begin{cases}\frac{R}{(2c\Gamma_{0}^{2})}& \quad\text{for}\quad\quad R<R_{\rm w,ss}\\ \frac{R_{\rm w,ss}}{(2c\Gamma_{0}^{2})}+\frac{(R^{2}-R_{\rm w,ss}^{2})\dot{M}_{ \rm w}c}{4E_{0}v_{\rm w}}&\quad\text{for}\,\,\,R_{\rm w,ss}<R<R_{\rm bbl,r }\,.\end{cases}\end{split} \tag{7}\] Normalizing the parameters in Eq. (7) one obtains \[\begin{split}\frac{R}{(2c\Gamma_{0}^{2})}&=6\times 1 0^{2}\bigg{(}\frac{R}{1\,{\rm pc}}\bigg{)}\bigg{(}\frac{\Gamma_{0}}{300} \bigg{)}^{-2}\,{\rm s}\\ \frac{\big{(}R^{2}-R_{\rm w,ss}^{2}\big{)}\dot{M}_{\rm w}c}{4E_{0} v_{\rm w}}&=2.2\times 10^{2}\bigg{(}\frac{R^{2}-R_{\rm w,ss}^{2}}{1\,{\rm pc }^{2}}\bigg{)}\times\end{split} \tag{8}\] \[\bigg{(}\frac{\dot{M}_{\rm w}}{10^{-7}{\rm M}_{\odot}\,{\rm yr}^{-1}}\bigg{)} \bigg{(}\frac{E_{0}}{10^{55}\,{\rm erg}}\bigg{)}^{-1}\bigg{(}\frac{v_{\rm w}}{ 2\times 10^{3}\,{\rm km\,s}^{-1}}\bigg{)}^{-1}\,{\rm s}\,.\] When the blast wave propagates in the self-similar regime, the corresponding reference time differs from the trigger time. Using Eq. (7) one can obtain this offset of the reference time with respect to the trigger time (see Fig. 2) as: \[\begin{split} T_{*}&\approx\frac{R_{\rm w,ss}}{2c \Gamma_{0}^{2}}-\frac{R_{\rm w,ss}^{2}\dot{M}_{\rm w}c}{4E_{0}v_{\rm w}}\approx \frac{R_{\rm w,ss}}{4c\Gamma_{0}^{2}}\\ &\approx 3\times 10^{2}\bigg{(}\frac{R_{\rm w,ss}}{1\,{\rm pc}} \bigg{)}\bigg{(}\frac{\Gamma_{0}}{300}\bigg{)}^{-2}\,{\rm s}\,,\end{split} \tag{9}\] where we also accounted for Eq. (5).
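As a quick sanity check of these scalings, the short Python sketch below evaluates Eqs. (5) and (9) in cgs units for the fiducial parameter values used in the text (this is our own numeric illustration, not code from the paper):

```python
# Quick numeric check of Eqs. (5) and (9) in cgs units, using the fiducial
# normalizations of the text (our illustration, not code from the paper).
M_SUN, YR, PC, C = 1.989e33, 3.156e7, 3.086e18, 2.998e10

E0 = 1e55                       # explosion energy, erg
Gamma0 = 300.0                  # initial bulk Lorentz factor
v_w = 2e8                       # wind speed: 2e3 km/s in cm/s
Mdot_w = 1e-7 * M_SUN / YR      # wind mass-loss rate, g/s

R_w_ss = E0 * v_w / (Gamma0**2 * C**2 * Mdot_w)   # Eq. (5)
T_star = R_w_ss / (4 * C * Gamma0**2)             # Eq. (9)
print(f"R_w,ss ~ {R_w_ss / PC:.1f} pc, T_* ~ {T_star:.0f} s")
# prints roughly 1 pc and a few hundred seconds, consistent with Eq. (9)
```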
Thus, the blast wave radius is \[R\approx\begin{cases}2c\Gamma_{0}^{2}t&\text{for}\ \ t<\frac{R_{\rm w,ss}}{2c\Gamma_{0}^{2}}\\ \sqrt{\frac{4E_{0}v_{\rm w}(t-T_{*})}{\dot{M}_{\rm w}c}}&\text{for}\ \ t>\frac{R_{\rm w,ss}}{2c\Gamma_{0}^{2}}\,,\end{cases} \tag{10}\] where \[2c\Gamma_{0}^{2}t\approx 5\times 10^{17}\bigg{(}\frac{\Gamma_{0}}{300}\bigg{)}^ {2}\frac{t}{100\,{\rm s}}\,{\rm cm} \tag{11}\] and \[\begin{split}\sqrt{\frac{4E_{0}v_{\rm w}(t-T_{*})}{\dot{M}_{\rm w}c}}&\approx 2\times 10^{18}\bigg{(}\frac{E_{0}}{10^{55}\,{\rm erg}}\bigg{)}^{\nicefrac{{1}}{{2}}}\bigg{(}\frac{v_{\rm w}}{2\times 10^{3}\,{\rm km\,s}^{-1}}\bigg{)}^{\nicefrac{{1}}{{2}}}\times\\ &\bigg{(}\frac{\dot{M}_{\rm w}}{10^{-7}{\rm M}_{\odot}\,{\rm yr}^{-1}}\bigg{)}^{-\nicefrac{{1}}{{2}}}\bigg{(}\frac{t-T_{*}}{100\,{\rm s}}\bigg{)}^{\nicefrac{{1}}{{2}}}\,{\rm cm}\,.\end{split} \tag{12}\] ### Interaction with homogeneous medium If the blast wave instead propagates through a homogeneous circumburst medium of density \(n\), the transition to the self-similar regime occurs at the distance \(R_{\rm h,ss}\) at which the swept-up mass satisfies Eq. (1). In this case the shock radius is \[R\approx\begin{cases}2c\Gamma_{0}^{2}t&\text{for}\ \ t<\frac{R_{\rm h,ss}}{2c\Gamma_{0}^{2}}\\ \sqrt[4]{\frac{6E_{0}(t-T_{*})}{\pi nm_{p}c}}&\text{for}\ \ t>\frac{R_{\rm h,ss}}{2c\Gamma_{0}^{2}}\,,\end{cases} \tag{18}\] where \[\begin{split}\sqrt[4]{\frac{6E_{0}(t-T_{*})}{\pi nm_{p}c}}&=4\times 10^{17}\bigg{(}\frac{E_{0}}{10^{55}\,\text{erg}}\bigg{)}^{\nicefrac{{1}}{{4}}}\times\\ &\Big{(}\frac{n}{1\,\text{cm}^{-3}}\Big{)}^{-\nicefrac{{1}}{{4}}}\Big{(}\frac{t-T_{*}}{100\,\text{s}}\Big{)}^{\nicefrac{{1}}{{4}}}\,\text{cm}\,.\end{split} \tag{19}\] ### Stellar bubble As shown in Fig. 1, before reaching the ISM, the shock should additionally propagate through the stellar bubble, which consists of two layers: the shocked stellar wind and the shocked ISM. The mass of the shocked stellar wind depends on the mass loss rate of the stellar wind and the star's lifetime, \(\tau_{\text{star}}\): \[\dot{M}_{\text{w}}\tau_{\text{star}}=0.1\text{M}_{\odot}\bigg{(}\frac{\dot{M} _{\text{w}}}{10^{-7}\text{M}_{\odot}\,\text{yr}^{-1}}\bigg{)}\bigg{(}\frac{ \tau_{\text{star}}}{10^{6}\,\text{yr}}\bigg{)}\,, \tag{20}\] which almost unavoidably appears larger than the amount of external gas required for the transition into the self-similar regime given by Eq. (1). Thus, even if the blast wave traveled in the wind zone with constant Lorentz factor, it should start slowing down within the shocked stellar wind, well before reaching the shocked ISM. It is also possible to estimate the blast wave Lorentz factor at the moment of reaching the contact discontinuity at \(R_{\text{bbl,r}}+\Delta R_{\text{bbl,r}}\): \[\Gamma\approx\sqrt{\frac{E_{0}}{\dot{M}_{\text{w}}\tau_{\text{star}}c^{2}}}\,, \tag{21}\] i.e., although the shock is still expected to be relativistic, it gets decelerated very significantly from its initial Lorentz factor, \(\Gamma_{0}\gg 10^{2}\).
The outer radius of the stellar bubble can be estimated using the self-similar solution for non-relativistic gas: \[\begin{split} R_{\text{bbl,f}}&\approx\sqrt[5]{\frac{L_{\text{w}}\tau_{\text{star}}^{3}}{m_{p} n_{\text{ism}}}}\\ &\sim 20\bigg{(}\frac{L_{\text{w}}}{10^{35}\,\text{erg}\,\text{s}^{ -1}}\bigg{)}^{\nicefrac{{1}}{{5}}}\bigg{(}\frac{\tau_{\text{star}}}{10^{6}\, \text{yr}}\bigg{)}^{\nicefrac{{3}}{{5}}}\Big{(}\frac{n_{\text{ism}}}{1\, \text{cm}^{-3}}\Big{)}^{-\nicefrac{{1}}{{5}}}\,\text{pc}\,.\end{split} \tag{22}\] Here \(L_{\text{w}}=\dot{M}_{\text{w}}v_{\text{w}}^{2}/2\) is the kinetic luminosity of the stellar wind and \(n_{\text{ism}}\) is the ISM density. The shocked ISM layer density can be estimated from the shock compression ratio. For a strong non-relativistic shock in a gas with polytropic index 5/3, a compression ratio of 4 is expected (see, e.g., Landau and Lifshitz, 1987). The layer thickness can be obtained from mass conservation, which for a spherically symmetric configuration yields a value of approximately \[\Delta R_{\text{bbl,f}}\approx 0.1R_{\text{bbl,f}}\,, \tag{23}\] i.e., it represents a quite thin compressed layer. The mass of the ISM gas swept up by the stellar wind is expected to be very significant: \[M_{\text{bbl}}\approx 10^{2}\text{M}_{\odot}\Big{(}\frac{n_{\text{ism}}}{1\, \text{cm}^{-3}}\Big{)}\bigg{(}\frac{R_{\text{bbl}}}{10\,\text{pc}}\bigg{)}^{3}\,. \tag{24}\] Expansion of this heavy external layer is supported by the inner layer that consists of stellar wind that has passed through the stellar bubble reverse shock at \(R_{\text{bbl,r}}\). This shock terminates the stellar wind and creates a layer of hot gas of approximately constant density. The density of the layer is determined by the jump conditions at a strong non-relativistic shock: \[\begin{split} n_{\text{bbl,r}}&\approx\frac{\dot{M}_{\text{w}}}{\pi R_{\text{bbl,r}}^{2}v_{\text{w}}m_{p}}\\ &\sim 10^{-3}\bigg{(}\frac{\dot{M}_{\text{w}}}{10^{-7}\text{M}_{\odot}\,\text{yr}^{-1}}\bigg{)}\bigg{(}\frac{v_{\text{w}}}{2\times 10^{3}\,\text{km}\,\text{s}^{-1}}\bigg{)}^{-1}\bigg{(}\frac{R_{\text{bbl,r}}}{1\,\text{pc}}\bigg{)}^{-2}\,\text{cm}^{-3}\,.\end{split} \tag{25}\] As shown above, the contact discontinuity is located at approximately \(R_{\text{bbl,f}}\), thus we can use the conservation of the mass ejected by the wind to obtain the inner radius of the stellar bubble, i.e., the radius of the wind termination shock: \[\frac{4\pi}{3}n_{\text{bbl,r}}m_{p}\big{(}R_{\text{bbl,f}}^{3}- R_{\text{bbl,r}}^{3}\big{)}=\dot{M}_{\text{w}}\tau_{\text{star}}\,. \tag{26}\] Solving this equation one obtains the approximate solution \[\begin{split}\frac{R_{\text{bbl,r}}}{R_{\text{bbl,f}}}&\approx\sqrt{\frac{4R_{\text{bbl,f}}}{3\tau_{\text{star}}v_{\text{w}}}}\\ &\approx 0.1\bigg{(}\frac{R_{\text{bbl,f}}}{30\,\text{pc}} \bigg{)}^{\nicefrac{{1}}{{2}}}\bigg{(}\frac{\tau_{\text{star}}}{10^{6}\,\text{yr }}\bigg{)}^{-\nicefrac{{1}}{{2}}}\bigg{(}\frac{v_{\text{w}}}{2\times 10^{3}\,\text{km}\, \text{s}^{-1}}\bigg{)}^{-\nicefrac{{1}}{{2}}}\,.\end{split} \tag{27}\] Thus, one should expect the following structure of the stellar bubble around a massive star: the forward shock at a typical radius of \(\sim 20\,\mathrm{pc}\), a contact discontinuity at almost the same distance, and the reverse shock at a few parsec from the explosion origin. When the relativistic blast wave reaches the reverse shock, it starts interacting with a homogeneous medium with the typical density given by Eq. (25).
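The bubble structure just described can be checked numerically; the following sketch (our illustration, in cgs units, with the fiducial wind parameters of the text) evaluates Eqs. (22), (25), and (27):

```python
# Numeric sketch of the stellar-bubble structure, Eqs. (22), (25) and (27),
# for the fiducial wind parameters of the text (our illustration, cgs units).
import math

M_SUN, YR, PC, M_P = 1.989e33, 3.156e7, 3.086e18, 1.673e-24

Mdot_w = 1e-7 * M_SUN / YR      # wind mass-loss rate, g/s
v_w = 2e8                       # wind speed, cm/s
tau_star = 1e6 * YR             # stellar lifetime, s
n_ism = 1.0                     # ISM density, cm^-3

L_w = 0.5 * Mdot_w * v_w**2                               # wind kinetic luminosity
R_f = (L_w * tau_star**3 / (M_P * n_ism)) ** 0.2          # Eq. (22): outer radius
R_r = R_f * math.sqrt(4 * R_f / (3 * tau_star * v_w))     # Eq. (27): reverse shock
n_bbl = Mdot_w / (math.pi * R_r**2 * v_w * M_P)           # Eq. (25): shocked wind
print(f"R_bbl,f ~ {R_f / PC:.0f} pc, R_bbl,r ~ {R_r / PC:.1f} pc, "
      f"n_bbl,r ~ {n_bbl:.1e} cm^-3")
# prints a ~20 pc bubble with a reverse shock at a few parsec and a shocked-wind
# density of order 1e-4 cm^-3, consistent with the estimates above
```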
Subsequently, once the explosion energy has been transferred to the shocked stellar wind, the further propagation of the shock proceeds in the self-similar regime expected for a homogeneous medium, \(n\propto\mathrm{const}\). In Appendix A we provide analytic expressions for the solution describing the shock propagation in the stellar bubble. In this case, the delay of the blast wave propagating through the shocked stellar wind behind the trigger photon front for \(R>R_{\mathrm{bbl,r}}\) is given by \[\begin{split} t\approx t(R_{\mathrm{bbl,r}})&+\frac{\dot{M}_{\mathrm{w}}cR_{\mathrm{bbl,r}}( R-R_{\mathrm{bbl,r}})}{2E_{0}v_{\mathrm{w}}}+\\ &\frac{\pi}{6}\frac{n_{\mathrm{bbl,r}}m_{p}c\Big{(}R^{4}-R_{ \mathrm{bbl,r}}^{4}\Big{)}}{E_{0}}-\\ &\frac{4\pi}{6}\frac{n_{\mathrm{bbl,r}}m_{p}c(R-R_{\mathrm{bbl,r}} )R_{\mathrm{bbl,r}}^{3}}{E_{0}}\,,\end{split} \tag{28}\] where \(t(R_{\mathrm{bbl,r}})\) is the delay accumulated in the unshocked wind: \[t(R_{\mathrm{bbl,r}})=\begin{cases}\frac{R_{\mathrm{bbl,r}}}{2c\Gamma_{0}^{2}}& \mathrm{if}\ \ R_{\mathrm{bbl,r}}<R_{\mathrm{w,ss}}\\ \frac{R_{\mathrm{w,ss}}}{4c\Gamma_{0}^{2}}+\frac{R_{\mathrm{bbl,r}}^{2}\dot{M}_{ \mathrm{w}}c}{4E_{0}v_{\mathrm{w}}}&\mathrm{if}\ \ R_{\mathrm{bbl,r}}>R_{\mathrm{w,ss}}\,.\end{cases} \tag{29}\] The size of the stellar bubble reverse shock determines the reference time for the self-similar solution (which should emerge for \(R\gg R_{\mathrm{bbl,r}}\)): \[\begin{split} T_{\mathrm{bbl,*}}&\approx t(R_{\mathrm{bbl,r}})-\frac{\dot{M}_{\mathrm{w}}cR_{\mathrm{bbl,r }}^{2}}{2E_{0}v_{\mathrm{w}}}+\frac{\pi}{2}\frac{n_{\mathrm{bbl,r}}m_{p}cR_{ \mathrm{bbl,r}}^{4}}{E_{0}}\\ &\approx\frac{R_{\mathrm{w,ss}}}{4c\Gamma_{0}^{2}}+\frac{R_{ \mathrm{bbl,r}}^{2}\dot{M}_{\mathrm{w}}c}{4E_{0}v_{\mathrm{w}}}=t(R_{\mathrm{bbl,r }})\,,\end{split} \tag{30}\] where we also used Eq. (25). The normalized terms in Eqs. (29) and (30) are given by Eqs. (8) and (16). We note here that, under our adopted simplified description, the reference time for the shock propagating through the stellar bubble coincides with the time when the shock reaches the inner boundary of the stellar bubble. Thus, one can use the light curve to determine this specific moment of time. Thus, the dependence of the blast wave radius (which determines the Lorentz factor) for \(R<R_{\mathrm{bbl,r}}\) follows Eq. (10), and for \(R\gg R_{\mathrm{bbl,r}}\) \[\begin{split} R&\approx\sqrt[4]{\frac{6E_{0}(t-T_{\mathrm{bbl,*}})}{\pi n_{\mathrm{ bbl,r}}m_{p}c}}\\ &\approx 2\times 10^{18}\bigg{(}\frac{E_{0}}{10^{55}\,\mathrm{erg}} \bigg{)}^{\nicefrac{{1}}{{4}}}\Big{(}\frac{n_{\mathrm{bbl,r}}}{10^{-3}\, \mathrm{cm}^{-3}}\Big{)}^{-\nicefrac{{1}}{{4}}}\bigg{(}\frac{t-T_{\mathrm{bbl,*}}}{100\,\mathrm{s}}\bigg{)}^{\nicefrac{{1}}{{4}}}\,\mathrm{cm}\,.\end{split} \tag{31}\] We note that in this case two different reference times appear in the solution: in the range \(R_{\mathrm{w,ss}}<R<R_{\mathrm{bbl,r}}\) the reference time is given by Eq. (9), and for \(R\gg R_{\mathrm{bbl,r}}\) the reference time is given by Eq. (30). ## 3 Light Curve In addition to the change of the bulk Lorentz factor discussed in the previous section, there are several basic processes operating simultaneously in the production region which can also imprint themselves onto the afterglow emission light curve. These include particle acceleration, formation of the target, gamma-gamma attenuation, and injection of energy into the production region. We next discuss each of these in turn.
### Acceleration The particle acceleration process operates in the shock co-moving frame (note that here, for the sake of simplicity, we do not distinguish between the downstream and shock reference frames), and the most basic acceleration time-scale depends on the magnetic field strength \(B^{\prime}\) in the downstream: \[\tau^{\prime}_{\mathrm{acc}}\sim\frac{\eta_{\mathrm{acc}}E^{\prime}}{eB^{\prime}c}\approx 0.1 \eta_{\mathrm{acc}}\bigg{(}\frac{E^{\prime}}{1\,\mathrm{TeV}}\bigg{)}\bigg{(} \frac{B^{\prime}}{1\,\mathrm{G}}\bigg{)}^{-1}\,\mathrm{s}\,. \tag{32}\] Here \(\eta_{\mathrm{acc}}=B^{\prime}/\mathcal{E}\) is a phenomenological factor determining the acceleration efficiency, where \(\mathcal{E}\) denotes the accelerating electric field, and \(E^{\prime}\) is the particle energy in the comoving frame. The acceleration time converts to a detection delay of \[t_{\mathrm{acc}}\approx\frac{\tau^{\prime}_{\mathrm{acc}}}{2\Gamma}\approx 0.2 \eta_{\mathrm{acc}}\bigg{(}\frac{E^{\prime}}{1\,\mathrm{TeV}}\bigg{)}\bigg{(} \frac{B^{\prime}}{1\,\mathrm{G}}\bigg{)}^{-1}\bigg{(}\frac{\Gamma}{300}\bigg{)} ^{-1}\,\mathrm{ms}\,, \tag{33}\] i.e., it cannot cause any considerable delay unless the magnetic field is very weak, \(B^{\prime}\sim\,\mathrm{mG}\), and the blast wave Lorentz factor is small, \(\Gamma<10\). We note, however, that this estimate does not account for the time required for the activation of the acceleration process, in particular for the magnetic field amplification. For example, H.E.S.S. observations of RS Ophiuchi revealed that this time-scale can be an important factor delaying the onset of TeV emission from shocks accelerating particles in the non-relativistic regime (H. E. S. S. Collaboration et al., 2022). ### Target development If one considers SSC as the radiation process dominating the gamma-ray emission in the VHE band, the typical time-scale to create the target is determined by the cooling time of the electrons that provide the synchrotron photons. To generate TeV emission in the observer frame (\(\hbar\omega\sim 1\,\mathrm{TeV}\)), the comoving energy of the emitting electrons can be roughly estimated as \(\hbar\omega/\Gamma\). These electrons up-scatter most efficiently target photons with energy \(\hbar\omega^{\prime}_{\mathrm{ph}}<\Gamma m_{e}^{2}c^{4}/(\hbar\omega)\approx 1 00(\hbar\omega/\mathrm{TeV})^{-1}(\Gamma/300)\,\mathrm{eV}\) (see, e.g., Khangulyan et al., 2023, and references therein). Electrons emitting these photons have energy \[E^{\prime}_{e}\lesssim 40\bigg{(}\frac{B^{\prime}}{1\,\mathrm{G}}\bigg{)}^{- \nicefrac{{1}}{{2}}}\bigg{(}\frac{\hbar\omega}{1\,\mathrm{TeV}}\bigg{)}^{- \nicefrac{{1}}{{2}}}\bigg{(}\frac{\Gamma}{300}\bigg{)}^{\nicefrac{{1}}{{2}}} \,\mathrm{GeV}\,. \tag{34}\] Thus the typical time-scale for developing the target field is \[\tau^{\prime}_{\mathrm{ph}}\gtrsim 10^{4}\bigg{(}\frac{B^{\prime}}{1\, \mathrm{G}}\bigg{)}^{-\nicefrac{{3}}{{2}}}\bigg{(}\frac{\hbar\omega}{1\, \mathrm{TeV}}\bigg{)}^{\nicefrac{{1}}{{2}}}\bigg{(}\frac{\Gamma}{300}\bigg{)} ^{-\nicefrac{{1}}{{2}}}\,\mathrm{s}\,. \tag{35}\] This time-scale corresponds to a detection delay of \[t_{\mathrm{ph}}\approx\frac{\tau^{\prime}_{\mathrm{ph}}}{2\Gamma}\approx 20 \bigg{(}\frac{B^{\prime}}{1\,\mathrm{G}}\bigg{)}^{-\nicefrac{{3}}{{2}}}\bigg{(} \frac{\hbar\omega}{1\,\mathrm{TeV}}\bigg{)}^{\nicefrac{{1}}{{2}}}\bigg{(} \frac{\Gamma}{300}\bigg{)}^{-\nicefrac{{3}}{{2}}}\,\mathrm{s}\,.
\tag{36}\] This delay is non-negligible even if the magnetic field in the production region has Gauss strength, and for weaker values the delay due to this process would become even larger. If the target development is indeed responsible for the formation of part "\(a\)" of the light curve, one would expect this to be accompanied by a corresponding spectral transformation of the IC spectrum. This transformation is caused by the change of the slope of the target photon field, which transitions from a slow-cooling to a fast-cooling spectrum. If the power-law index of the cooled electron distribution is \(\alpha\), then the photon index of the synchrotron slow-cooling spectrum is \(\alpha/2\), which softens to \((\alpha+1)/2\) in the fast-cooling regime. According to Khangulyan et al. (2023), in the former case the IC photon index should be \(\alpha/2+1\), which in the latter case becomes \(\alpha/2+1/2\), i.e., one expects a hardening of the IC spectrum once the photon target development is completed. ### Absorption High-energy photons can effectively interact with low-energy target photons to create electron-positron pairs. The maximum of the cross-section, \(\sigma_{\mathrm{e^{+}e^{-}}}\approx 0.26\sigma_{\mathrm{T}}\) (here \(\sigma_{\mathrm{T}}\) is the Thomson cross-section), is achieved when TeV gamma-rays interact with \[\varepsilon_{\mathrm{ph}}\approx\frac{2.8m_{\mathrm{e}}^{2}c^{4}}{(1-\cos\theta )\hbar\omega}\approx 0.1\bigg{(}\frac{\Gamma}{300}\bigg{)}^{2}\bigg{(} \frac{\hbar\omega}{1\,\mathrm{TeV}}\bigg{)}^{-1}\,\mathrm{MeV} \tag{37}\] target photons. Here we consider an interaction with a specific scattering angle, \(\theta\), and we have taken into account that for a relativistically moving shell the photons have a highly anisotropic angular distribution, thus \((1-\cos\theta)\approx 1/\big{(}2\Gamma^{2}\big{)}\). Using the standard expression for the optical depth, one obtains that the gamma-gamma absorption is important if \[\frac{0.26\sigma_{\mathrm{T}}(1-\cos\theta)L_{\mathrm{ph}}}{4\pi Rc\varepsilon _{\mathrm{ph}}}>1\,. \tag{38}\] Here \(L_{\mathrm{ph}}\) should be understood as the luminosity of the source responsible for target photons in the range of frequencies approximately from \(\approx\varepsilon_{\mathrm{ph}}/1.5\) to \(\approx 1.5\varepsilon_{\mathrm{ph}}\), i.e., spanning a factor of \(\approx 2\) in frequency. For the attenuation of TeV gamma-ray photons emitted from a shell moving with a bulk Lorentz factor of \(\simeq 300\), the MeV band plays the most important role. Thus, we obtain a lower limit for the luminosity of the target photon source above which gamma-gamma absorption becomes important: \[\begin{split}L_{\mathrm{ph}}(\varepsilon_{\mathrm{ph}})&>10^{3}\Gamma^{4}\frac{Rm_{\mathrm{e}}^{2}c^{5}}{\sigma_{\mathrm{ T}}\hbar\omega}\\ &\gtrsim 10^{52}\bigg{(}\frac{\Gamma}{300}\bigg{)}^{4}\bigg{(} \frac{R}{10^{17}\,\mathrm{cm}}\bigg{)}\bigg{(}\frac{\hbar\omega}{1\,\mathrm{TeV }}\bigg{)}^{-1}\,\mathrm{erg}\,\mathrm{s}^{-1}\,.\end{split} \tag{39}\] ### Energy supply Another key factor determining the light curve behavior is the rate at which energy is supplied to the production region. While the bulk Lorentz factor is sufficiently high, \(\Gamma>\theta_{\mathrm{jet}}^{-1}\) (where \(\theta_{\mathrm{jet}}\) is the GRB jet half-opening angle), the observer can detect photons emitted from only a small fraction of the blast wave, i.e., one can use a spherically symmetric approximation for the blast wave.
In this case the luminosity of the shock is simply \[L_{\mathrm{iso}}\approx 4\pi R^{2}\eta\Gamma^{4}m_{p}n_{\mathrm{c}}c^{3}\,, \tag{40}\] where \(\eta\) is the radiation efficiency, defined as the ratio of the energy emitted by the shocked gas to the kinetic energy of the gas entering the production region. Although Eq. (40) is very basic, it nevertheless allows one to derive insightful conclusions. If the blast wave interacts with the stellar wind, \(n_{\mathrm{c}}\propto R^{-2}\), then the luminosity is determined by the dependence on time of the Lorentz factor and the radiation efficiency: \[L_{\mathrm{w,iso}}\approx\frac{\eta\Gamma^{4}\dot{M}_{\mathrm{w}}c^{3}}{v_{ \mathrm{w}}}\,. \tag{41}\] During the coasting phase the Lorentz factor is constant, thus the luminosity is determined by the dependence of \(\eta\): \[\begin{split}L_{\rm w,iso}\approx 7\times 10^{51}\eta\Bigg{(}\frac{\dot{M}_{\rm w}}{10^{-7} \rm M_{\odot}\,yr^{-1}}\Bigg{)}\times\\ \left(\frac{v_{\rm w}}{2\times 10^{3}\,\rm km\,s^{-1}}\right)^{-1}\left(\frac{ \Gamma_{0}}{300}\right)^{4}\rm erg\,s^{-1}\,.\end{split} \tag{42}\] When the blast wave expansion enters the self-similar regime, one needs to account for the Lorentz factor dependence on \(t\): \[\Gamma=\sqrt[4]{\frac{E_{0}v_{\rm w}}{4\dot{M}_{\rm w}c^{3}(t-T_{*})}}\,. \tag{43}\] Thus, the luminosity is simply \[L_{\rm w,iso}=\frac{\eta}{4}\frac{E_{0}}{(t-T_{*})}\,. \tag{44}\] We note that here, for the sake of simplicity, we assume an \(\omega^{-2}\) emission spectrum. When the jet break occurs, i.e., the blast wave Lorentz factor drops below the inverse jet half-opening angle, \(\Gamma<\theta_{\rm j}^{-1}\), one needs to introduce an additional factor \((\Gamma\theta_{\rm j})^{2}\), which leads to a break by 0.5 in the light curve: \[L_{\rm w,jb}\propto(t-T_{*})^{-\nicefrac{{3}}{{2}}}\,. \tag{45}\] If the blast wave interacts with a homogeneous medium the situation is quite different. If the Lorentz factor is initially constant (coasting phase), the luminosity is \[L_{\rm h,iso}\approx 16\pi\eta\Gamma_{0}^{8}t^{2}m_{p}nc^{5}\,, \tag{46}\] where we have accounted for the relation between the radius of the blast wave and the time lag: \(R=2c\Gamma_{0}^{2}t\). Once the expansion rate approaches the self-similar solution, one needs to account for the change of the Lorentz factor and for the dependence of \(R\) on \(t\): \[\Gamma\approx\frac{1}{2}\sqrt[8]{\frac{3E_{0}}{8\pi m_{p}nc^{5}(t-T_{*})^{3}} }\,. \tag{47}\] Thus, one obtains \[L_{\rm h,iso}\approx\frac{3\eta}{8}\frac{E_{0}}{(t-T_{*})}\,, \tag{48}\] where one also accounted for \(R\approx\sqrt[3]{\frac{E_{0}}{(4\pi/3)m_{p}nc^{2}\Gamma^{2}}}\). After the jet break, one needs to account for an additional factor \((\theta_{\rm j}\Gamma)^{2}\propto(t-T_{*})^{-\nicefrac{{3}}{{4}}}\), thus one obtains \[L_{\rm h,jb}\propto(t-T_{*})^{-\nicefrac{{7}}{{4}}}\,. \tag{49}\] It is also important to emphasize that the offset of the reference time appears only in the self-similar phase of the blast wave propagation: thus in Eq. (46) \(t\) is measured from the moment of the jet launch, but in Eqs. (48) and (49) an offset of the reference time, \(T_{*}\), appears. ## 4 LHAASO Observations The obtained light curve is consistent with a broken power-law whose reference time is \(T_{*}=T_{\rm gbm,0}+226\,\rm s\), where \(T_{\rm gbm,0}\) is the trigger time reported by GBM.
For the first 4 s after the reference time (i.e., within 230 s after the trigger), the TeV emission (if any) was only marginally detected with LHAASO, implying a very rapid flux growth: \[F_{0}\approx 2\times 10^{-6}\bigg{(}\frac{t-T_{*}}{4\,\rm s}\bigg{)}^{14.9^{+5.7 }_{-3.9}}\rm erg\,s^{-1}\,cm^{-2}\,. \tag{50}\] We dub this part of the light curve "interval 0". This rapid initial growth was followed by a slower rising phase (interval "\(a\)" from LHAASO Collaboration et al., 2023), which lasted for \(\approx 14\,\rm s\): \[F_{\rm a}\approx 2\times 10^{-6}\bigg{(}\frac{t-T_{*}}{4\,\rm s}\bigg{)}^{1.8 ^{+0.21}_{-0.18}}\rm erg\,s^{-1}\,cm^{-2}\,. \tag{51}\] After that the flux saturated at the level of \(F_{\rm max}=F_{\rm b}\approx 2\times 10^{-5}\rm erg\,s^{-1}\,cm^{-2}\) (interval "\(b\)" from LHAASO Collaboration et al., 2023). Following the peak, the light curve started its initial fading phase (intervals "_c+d_" from LHAASO Collaboration et al., 2023). During \(\approx 10^{3}\,\rm s\), the TeV flux evolved as \[F_{\rm c}\approx 2\times 10^{-5}\bigg{(}\frac{t-T_{*}}{18\,\rm s}\bigg{)}^{-1.1 15^{+0.012}_{-0.012}}\rm erg\,s^{-1}\,cm^{-2}\,. \tag{52}\] Finally, after \(T_{*}+670^{+230}_{-110}\,\rm s\) this decay accelerated: \[F_{\rm e}\propto(t-T_{*})^{-2.21^{+0.30}_{-0.83}}\,, \tag{53}\] with an index change of \(1.1^{+0.8}_{-0.3}\). We refer to this faster decay period as interval "\(e\)". ## 5 Discussion The detection of GRB221009A with LHAASO revealed a surprisingly smooth light curve. This light curve reveals several features that constrain the parameters of this burst and can shed light on the afterglow physics of GRBs. These features include: (i) the offset of the reference time with respect to the trigger time, (ii) a range of power-law indices of the light curve, (iii) the positions of the breaks in these power-laws. We interpret the VHE light curve assuming that the jet interaction occurs either with the stellar wind or with a homogeneous circumburst medium. We also note that the reported trigger time doesn't necessarily determine the moment of the jet activation, which may be delayed with respect to the trigger time. Thus, we assume that the physical trigger time, \(T_{0}\), is to some extent uncertain, \(T_{0}>T_{\rm{gbm,0}}\). ### Interaction with homogeneous circumburst medium The GRB jet interacting with a homogeneous ISM was proposed as the most natural explanation for the LHAASO observations (LHAASO Collaboration et al., 2023). The key feature that reportedly supports this scenario is the growth of the light curve during interval "\(a\)", which was interpreted as emission from the coasting phase during which the blast wave propagates within a homogeneous ISM (LHAASO Collaboration et al., 2023). In this case, the transition to the self-similar regime occurs at the observer time \(T_{\rm{ss}}\approx R_{\rm{h,ss}}/(2c\Gamma_{0}^{2})\) after the trigger time, and the offset of the reference time of the self-similar solution is \(T_{*}=3T_{\rm{ss}}/4\). Therefore the transition to the self-similar phase occurs at \(T_{\rm{ss}}/4\approx R_{\rm{h,ss}}/(8c\Gamma_{0}^{2})\) after the reference time, and the jet activation (see Fig. 3) was at \[T_{0}\approx T_{*}-\frac{3}{4}T_{\rm{ss}}\approx T_{\rm{gbm,0}}+172\,{\rm{s}}\,. \tag{54}\] Remarkably, at this moment, the main burst episode starts in the Fermi/GBM light curve. We therefore adopt this point as the jet activation moment, and one should set \(T_{\rm{ss}}\approx 72\,{\rm{s}}\).
The blast wave at this moment has a radius of \[R_{\rm{ss}}\approx 2cT_{\rm{ss}}\Gamma_{0}^{2}\approx 4\times 10^{17}\bigg{(} \frac{\Gamma_{0}}{300}\bigg{)}^{2}\bigg{(}\frac{T_{\rm{ss}}}{72\,{\rm{s}}} \bigg{)}\,{\rm{cm}}\,. \tag{55}\] On the other hand, the transition implies that the ISM mass is \[M_{\rm{ss}}=\frac{4\pi}{3}R_{\rm{ss}}^{3}m_{p}n\approx\frac{E_{0}}{\Gamma_{0}^ {2}c^{2}}\,. \tag{56}\] Solving these two equations we obtain an almost parameter-independent estimate for the initial Lorentz factor \[\Gamma_{0}\approx 600\bigg{(}\frac{E_{0}}{10^{55}\,{\rm{erg}}}\bigg{)}^{ \nicefrac{{1}}{{8}}}\Big{(}\frac{n}{10^{-3}\,{\rm{cm}}^{-3}}\Big{)}^{-\nicefrac{{1}}{{8}}} \bigg{(}\frac{T_{\rm{ss}}}{72\,{\rm{s}}}\bigg{)}^{-\nicefrac{{3}}{{8}}}\,, \tag{57}\] where we use a density normalized to a value typical of the shocked stellar wind, Eq. (25). What dependence of the shock luminosity on \((t-T_{*})\) should one expect during the coasting phase? Theory predicts that the propagation of a jet with a constant Lorentz factor results in a "\(\propto t^{2}\)" increase of the flux. This dependence, however, is for the detection time measured with respect to the trigger time. Thus, during the coasting phase theory predicts a flux dependence of \[\begin{split} F_{\rm{coasting}}&\propto(t-T_{*}+54\,{\rm{s}})^{2}\\ &\propto\left[\bigg{(}\frac{t-T_{*}}{1\,{\rm{s}}}\bigg{)}^{2}+108 \frac{(t-T_{*})}{1\,{\rm{s}}}+3\times 10^{3}\right].\end{split} \tag{58}\] LHAASO reported an index of 1.8 relative to the reference time for the time interval between 4 and 18 s after the reference time. In Fig. 4 we plot on a log scale the dependence given by Eq. (58), and compare it to \((t-T_{*})^{1.8}\) and \((t-T_{*})^{2}\). It can be seen from this figure that one expects an almost constant flux level during the coasting phase if plotted with respect to \((t-T_{*})\), which significantly disagrees with the LHAASO data. Thus, if the GRB jet indeed interacts with a homogeneous circumburst medium, then both the rapid increase of the TeV flux within the first 4 s after \(T_{*}\) (i.e., interval 0) and the slower flux increase during the next \(\sim 15\) s (i.e., interval "\(a\)") must be caused by processes internal to the jet. For example, the following processes may result in such rapid flux growth: gamma-gamma absorption; activation of the acceleration process; or development of the target photon field. Since the attenuation as well as the acceleration activation have a very strong impact on the flux level, the relatively smooth flux increase between \(T_{*}+4\,{\rm{s}}\) and \(T_{*}+18\,{\rm{s}}\) appears to be more naturally explained by the development of the target, and the more abrupt initial increase, \(t<T_{*}+4\,{\rm{s}}\), by the decrease of gamma-gamma absorption or by the activation of the acceleration mechanism.

Figure 3: Relation between the key time-scales for a GRB jet interacting with the progenitor wind and with a homogeneous medium, shown together with the time-scales revealed with LHAASO for GRB221009A.

Assuming that the development of the photon target explains the growth of the VHE emission before \(t-T_{*}\approx 18\) s (during interval "\(a\)"), using Eq. (36) for \(t_{\rm{ph}}\approx 70\) s,
one can obtain an estimate for the required magnetic field \[\begin{split}B^{\prime}&\approx 0.2\bigg{(}\frac{\hbar\omega}{1\,\mathrm{TeV}}\bigg{)}^{ \nicefrac{{1}}{{3}}}\bigg{(}\frac{E_{0}}{10^{55}\,\mathrm{erg}}\bigg{)}^{-\nicefrac{{ 1}}{{8}}}\times\\ &\bigg{(}\frac{n}{10^{-3}\,\mathrm{cm}^{-3}}\bigg{)}^{\nicefrac{{1}}{ {8}}}\bigg{(}\frac{T_{\mathrm{ss}}}{72\,\mathrm{s}}\bigg{)}^{\nicefrac{{3}}{{8 }}}\,\mathrm{G}\,.\end{split} \tag{59}\] This result only weakly depends on all uncertain parameters. For the typically adopted parameter values, this magnetic field strength corresponds to a magnetization of \(\sim 3\times 10^{-3}\). We also note that the development of the IC target should be accompanied by a hardening of the IC spectrum (see Khangulyan et al., 2023, and the discussion in Sec. 3.2), which appears consistent with LHAASO observations (see Fig. 3 of LHAASO Collaboration et al., 2023). The gamma-gamma optical depth depends on the emitter size, its bulk Lorentz factor, and the density of the target photons. Since we have constrained both the Lorentz factor and the shock radius, we can estimate the luminosity of the target photon field required to lead to considerable attenuation during interval "0" and compare it to the available observational data. According to the results obtained with the Konus-Wind instrument (Frederiks et al., 2023), the energy flux carried by \(20\,\mathrm{keV}\)-\(10\,\mathrm{MeV}\) photons was \(\approx(1.62\pm 0.09)\times 10^{-2}\) erg s\({}^{-1}\,\mathrm{cm}^{-2}\) between \(T_{\mathrm{gbm},0}+225\,\mathrm{s}\) and \(T_{\mathrm{gbm},0}+233\,\mathrm{s}\). This energy flux corresponds to \(\approx 10^{54}\) erg s\({}^{-1}\) for \(z=0.151\). The part suitable for the attenuation of TeV photons is likely \(\sim 10^{52}\) erg s\({}^{-1}\). Since this value seems to be (marginally) comparable to the estimate given by Eq. (39), one cannot rule out that the change of gamma-gamma absorption is reflected in the light curve. Using the Lorentz factor given by Eq. (57), Eq. (55) implies that the radius at which the coasting phase ends is \(\lesssim 1\) pc. This distance is likely smaller than the inner size of the stellar bubble, thus the jet-homogeneous medium regime may be realized only in the case of a very weak wind of the GRB progenitor. More specifically, this scenario requires a stellar bubble with an outer radius of \(\approx 10\,\mathrm{pc}\), which in turn requires a stellar wind kinetic luminosity of \(L_{\mathrm{w}}\sim 10^{33}\,\)erg s\({}^{-1}\), see Eq. (22). If the GRB jet interacts with a homogeneous medium, then, using the estimate given by Eq. (31), the GRB forward shock should reach a distance of a few parsec at \(t-T_{*}\approx 670\,\mathrm{s}\). We do not expect any considerable change of the shock dynamics at this point, since the compressed ISM layer should be located at a few tens of parsec (unless the stellar wind kinetic luminosity is tiny, \(L_{\mathrm{w}}<10^{32}\,\)erg s\({}^{-1}\)). Thus, the change of the light curve power-law index at \(T_{*}+670\,\mathrm{s}\) (i.e., the transition to interval "\(e\)") is most likely caused by the jet break (see LHAASO Collaboration et al., 2023, for the implications of this scenario; note, however, the difference in the homogeneous medium density). The change of the temporal index of \(1.1^{+0.8}_{-0.3}\) is marginally consistent with the change of \(0.75\) predicted by theory (i.e., the temporal index is expected to change from \(-1.12\) to \(-1.87\approx-1.9\)). Thus, further detailed simulations are needed to test the feasibility of this scenario.
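The almost parameter-independent nature of the estimate in Eq. (57) is easy to verify numerically; the following sketch (our illustration, cgs units) solves Eqs. (55) and (56) directly:

```python
# Numeric sketch solving Eqs. (55)-(56) for the homogeneous-medium scenario
# (our cross-check of Eq. (57), cgs units).
import math

C, M_P = 2.998e10, 1.673e-24
E0, n, T_ss = 1e55, 1e-3, 72.0   # erg, cm^-3, s

# R_ss = 2 c T_ss Gamma0^2 combined with (4 pi/3) R_ss^3 m_p n = E0 / (Gamma0^2 c^2)
# gives Gamma0^8 = 3 E0 / (32 pi c^5 T_ss^3 m_p n):
Gamma0 = (3 * E0 / (32 * math.pi * C**5 * T_ss**3 * M_P * n)) ** (1 / 8)
R_ss = 2 * C * T_ss * Gamma0**2
print(f"Gamma0 ~ {Gamma0:.0f}, R_ss ~ {R_ss:.1e} cm")  # ~600 and ~1.6e18 cm (<1 pc)
```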
### Interaction with wind As shown in Section 5.1, the LHAASO light curve does not favor the scenario of interaction with a homogeneous circumburst medium. Instead, the interaction with the progenitor wind is found to provide an equally feasible scenario. In this case, the transition to the self-similar phase and the offset of the reference time differ by a factor of 2. Thus, the jet activation time should occur at (see Fig. 3) \[T_{0}\approx T_{*}-\frac{R_{\mathrm{w,ss}}}{4c\Gamma_{0}^{2}}\approx T_{ \mathrm{gbm},0}+208\,\mathrm{s}\,. \tag{60}\] In the GBM light curve this moment approximately corresponds to the onset of the main burst, which saturated the instrument. Thus, this can be considered indirect support for this scenario. The radius of the blast wave at this moment is \[R_{\mathrm{ss}}\approx 2cT_{\mathrm{ss}}\Gamma_{0}^{2}\approx 2\times 10^{17} \bigg{(}\frac{\Gamma_{0}}{300}\bigg{)}^{2}\bigg{(}\frac{T_{\mathrm{ss}}}{36 \,\mathrm{s}}\bigg{)}\,\mathrm{cm}\,. \tag{61}\]

Figure 4: Change of the GRB luminosity caused by the shock dynamics expected for the case of a GRB jet interacting with a homogeneous medium during the coasting phase, Eq. (46), shown as a function of \(t-T_{*}\) and labeled “theory”. The dependence is compared to the LHAASO interval “\(a\)” (temporal index 1.8) and to a curve that ignores the influence of the offset (temporal index 2).

The transition to the self-similar regime implies that the mass of the shocked wind is \[M_{\rm ss}=\frac{\dot{M}_{\rm w}R_{\rm ss}}{v_{\rm w}}\approx\frac{E_{0}}{\Gamma_ {0}^{2}c^{2}}\,. \tag{62}\] Solving these two equations for the initial Lorentz factor we obtain \[\begin{split}\Gamma_{0}&\approx 600\bigg{(}\frac{E_{0}}{10^{55}\,{\rm erg}}\bigg{)}^{\nicefrac{ {1}}{{4}}}\bigg{(}\frac{v_{\rm w}}{2\times 10^{3}\,{\rm km\,s^{-1}}}\bigg{)}^{ \nicefrac{{1}}{{4}}}\times\\ &\qquad\bigg{(}\frac{\dot{M}_{\rm w}}{10^{-7}{\rm M}_{\odot}\,{\rm yr }^{-1}}\bigg{)}^{-\nicefrac{{1}}{{4}}}\bigg{(}\frac{T_{\rm ss}}{36\,{\rm s}} \bigg{)}^{-\nicefrac{{1}}{{4}}}\,.\end{split} \tag{63}\] Substituting Eq. (63) into Eq. (61), one obtains the distance at which the interaction enters the self-similar regime. For the adopted parameter values, it should be a sub-parsec distance, i.e., well inside the hot stellar bubble. If the emission is generated at the jet interaction with the stellar wind, then the \((t-T_{*})^{1.8}\) part of the light curve (i.e., interval "\(a\)") should be caused by processes internal to the jet. Similar arguments as in the previously considered case also apply here, thus the processes related to the development of the target are the most feasible explanation for this phase. In particular, Eq. (36) can explain this growth phase if \[\begin{split}B^{\prime}&\approx 0.3\bigg{(}\frac{\hbar\omega}{1\,{\rm TeV}}\bigg{)}^{ \nicefrac{{1}}{{3}}}\bigg{(}\frac{E_{0}}{10^{55}\,{\rm erg}}\bigg{)}^{ -\nicefrac{{1}}{{4}}}\times\\ &\bigg{(}\frac{v_{\rm w}}{2\times 10^{3}\,{\rm km\,s^{-1}}} \bigg{)}^{-\nicefrac{{1}}{{4}}}\bigg{(}\frac{\dot{M}_{\rm w}}{10^{-7}{\rm M} _{\odot}\,{\rm yr}^{-1}}\bigg{)}^{\nicefrac{{1}}{{4}}}\bigg{(}\frac{T_{\rm ss }}{36\,{\rm s}}\bigg{)}^{\nicefrac{{1}}{{4}}}\,{\rm G}\,.\end{split} \tag{64}\] For the typical parameter values, this magnetic field strength corresponds to a magnetization of \(\sim 3\times 10^{-2}\).
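Analogously to the homogeneous case, Eqs. (61) and (62) can be combined and solved numerically; the sketch below (our illustration, cgs units) reproduces the estimate of Eq. (63):

```python
# Numeric sketch solving Eqs. (61)-(62) for the progenitor-wind scenario
# (our cross-check of Eq. (63), cgs units).
M_SUN, YR, C = 1.989e33, 3.156e7, 2.998e10

E0, T_ss = 1e55, 36.0             # erg, s
v_w = 2e8                         # cm/s
Mdot_w = 1e-7 * M_SUN / YR        # g/s

# R_ss = 2 c T_ss Gamma0^2 combined with Mdot_w R_ss / v_w = E0 / (Gamma0^2 c^2)
# gives Gamma0^4 = E0 v_w / (2 c^3 T_ss Mdot_w):
Gamma0 = (E0 * v_w / (2 * C**3 * T_ss * Mdot_w)) ** 0.25
R_ss = 2 * C * T_ss * Gamma0**2
print(f"Gamma0 ~ {Gamma0:.0f}, R_ss ~ {R_ss:.1e} cm")  # ~600 and a sub-parsec R_ss
```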
If GRB221009A is produced by a jet interacting with the stellar wind, then the initial rapid increase of the TeV flux (i.e., interval 0) can be attributed to the activation of the acceleration mechanism, e.g., to the magnetic field amplification, or to the impact of gamma-gamma absorption. We note that, because of the different relation between the trigger and reference times, in the progenitor wind scenario the shock is located closer to the explosion origin, and thus the impact of gamma-gamma absorption is stronger compared to the homogeneous circumburst medium case. However, this impact needs to be verified with more accurate calculations (to be presented elsewhere). If the GRB jet interacts with the progenitor wind, then the break at \(T_{*}+670\,{\rm s}\) (i.e., the transition to interval "\(e\)") can be caused by the jet break. The index change of \(1.1^{+0.8}_{-0.3}\) seems to be inconsistent with the change of 0.5 predicted by the theory for a jet break in the stellar wind. On the other hand, according to Eq. (10) the forward shock is expected to reach a distance of \(R_{\rm br}\approx 2\,{\rm pc}\) at \(T_{*}+670\,{\rm s}\). This length matches the inner size of the stellar bubble quite well. Thus it seems feasible that the transition to interval "\(e\)" is instead related to the blast wave interaction with the inner boundary of the stellar bubble. There are a few effects that need to be taken into account to verify this hypothesis. In the first place, once the jet reaches the stellar bubble it starts interacting with a homogeneous medium, so its dynamics changes (see Appendix A for details). Another point is related to the change of the reference time expected after the blast wave adjusts to the new propagation regime. As discussed in Sec. 2.3, the reference times for the self-similar expansion of the blast wave in the wind zone and in the bubble differ. Similarly to the case summarized in Fig. 4, this change of the reference time may cause a significant distortion of the light curve when it is plotted on a log scale with respect to a different reference time. In Fig. 5 we present some examples that illustrate this effect. In interval "\(e\)" LHAASO obtained a time dependence \((t-T_{*})^{-2.21}\), which is faster than the dependence predicted by theory for the jet break scenario, \((t-T_{*})^{-1.9}\). If, however, one assumes an additional offset of the reference time (which is justified by the analysis presented in Sec. 2.3), then the theoretical predictions match the observations better, see the case \((t-T_{*}-300\,{\rm s})^{-1.9}\) in Fig. 5. We note that this case corresponds to the jet break occurring in the shocked stellar wind. Finally, if the additional offset of the reference time is significant, then it causes a sharp transformation of the light curve (see the case \((t-T_{*}-600\,{\rm s})^{-1.1}\)), which could alleviate the need for a jet break. However, detailed light curve modeling is required to achieve any robust conclusion here. ## 6 Summary The detection of GRB221009A with LHAASO provides a unique data sample that allows a very insightful analysis of the processes occurring in the early afterglow phase. In this paper we have presented a qualitative study of the key features of the light curve obtained with LHAASO and have shown that this data set allows one to constrain the key parameters of the burst, in particular, the jet activation moment, its initial Lorentz factor, and the magnetic field strength.
## 6 Summary The detection of GRB221009A with LHAASO provides a unique data sample that allows a very insightful analysis of the processes occurring in the early afterglow phase. In this paper we have presented a qualitative study of the key features of the light curve obtained with LHAASO and have shown that this data set allows one to constrain the key parameters of the burst, in particular, the jet activation moment, its initial Lorentz factor, and magnetic field strength. The obtained parameter values of the initial Lorentz factor and magnetic field strength agree with the ones typically revealed by spectral modeling of GRBs, and the jet activation moment obtained solely based on properties of the VHE light curve matches a very special point in the GBM light curve. This remarkable match can be considered an indirect confirmation of the considered scenario, and as support for the GRB phenomenology in general. We find that whilst the LHAASO data do not exclude the homogeneous circumburst medium scenario, the progenitor wind scenario looks preferable as it appears to show excellent agreement with the expected size and structure of the stellar bubble. The apparent agreement of the power-law slopes characterizing the light curve obtained with LHAASO (i.e., a change of its power-law index from \(+1.8\) to \(-1.115\) to \(-2.21\)) with the predictions for a GRB jet interacting with a homogeneous circumburst medium was considered as a strong support for this scenario (LHAASO Collaboration et al., 2023). The analysis performed here, however, suggests that one needs to reconsider the validity of such an argument. Indeed, for an explosion into a homogeneous medium we expect a growth of the flux, \(\propto t^{2}\), during the coasting phase. However, this dependence appears as an almost constant flux if plotted on a log scale with respect to the reference time of the self-similar phase. Therefore, we cannot give preference to either one of the two standard scenarios, jet interaction with a homogeneous circumburst medium or jet interaction with the progenitor wind, solely based on interval “\(a\)” of the LHAASO light curve. We also note that, given the early nature of the afterglow detected with LHAASO, it is the hot stellar bubble, not the standard ISM, that should be considered as the homogeneous medium with which the jet interacts. **Homogeneous medium scenario.** If the homogeneous medium scenario is realized, then the light curve implies an initial Lorentz factor, \(\Gamma_{0}\approx 600\). Also, the emission should be generated at sub-parsec distances from the explosion origin, i.e., in a region expected, for the standard parameter values, to be well within the wind zone. Thus, the homogeneous medium scenario requires a weak progenitor wind. The initial rapid increase of the VHE flux (i.e., interval 0) can be explained either by a weakening of the gamma-gamma attenuation on the photons from the prompt emission, or by a delay due to the activation time of the acceleration process. The smoother increase between \(4\,\mathrm{s}<(t-T_{*})<18\,\mathrm{s}\) (i.e., interval “\(a\)”) can be explained by the development of the photon target for the SSC process, if the magnetic field in the production region is \(B^{\prime}\approx 0.2\,\mathrm{G}\), which corresponds to quite a small magnetization in the production region, \(\sim 3\times 10^{-3}\). Finally, the softening of the light curve at \(T_{*}+670\,\mathrm{s}\) (i.e., the transition to interval “\(e\)”) can be explained by a jet break, as suggested in the discovery paper (LHAASO Collaboration et al. 2023), or by the blast wave reaching the contact discontinuity, where the medium density undergoes an almost \(10^{3}\)-fold increase. The latter explanation, however, is found to require an unrealistic constraint on the wind kinetic luminosity. **Wind scenario.** On the other hand, the wind scenario implies a similar initial Lorentz factor, \(\Gamma_{0}\approx 600\).
The location of the interaction region, \(\sim 10^{17}\,\mathrm{cm}\), is smaller than the inner size of the stellar bubble for typical values of the parameters; thus, the wind scenario looks preferable from this perspective. Similar to the homogeneous circumburst medium case, the initial rapid growth (i.e., interval 0) can be caused by gamma-gamma absorption or by the activation of the acceleration process. The smoother increase seen in the light curve during interval “\(a\)” can be explained by the development of the SSC target, if the magnetic field is \(B^{\prime}\approx 0.3\,\mathrm{G}\), which translates to a \(\sim 3\times 10^{-2}\) magnetization of the forward shock downstream (in the homogeneous circumburst medium case, a similar magnetic field strength points to a lower value of the magnetization because of the expected higher density of the upstream medium). The break in the light curve seen at \(T_{*}+670\,\mathrm{s}\) (i.e., the transition to interval “\(e\)”) appears to be too strong to be consistent with a jet break expected in the wind scenario (an index change of \(1.1^{+0.8}_{-0.3}\) instead of the theory-predicted change of 0.5). This could be taken as an indication of a change of the jet dynamics caused by the interaction with the stellar bubble at \(R\approx 2\,\mathrm{pc}\). We note that this transition also causes an additional offset of the reference time, which alone can lead to an apparent break in the light curve. Figure 5: GRB luminosity measured with LHAASO during interval “\(e\)” compared to the theory-predicted temporal evolution after a jet break (temporal index \(-1.12-0.75\approx-1.9\)); to the evolution expected while the shock propagates in the bubble (with an additional offset); and to a jet break occurring in the bubble (temporal index \(-1.9\) and an additional offset). A summary of the physical processes responsible for the formation of the VHE light curve is shown in Fig. 6. In both scenarios we conclude that the growing part of the light curve is dominated by processes internal to the jet (including the gamma-gamma attenuation on photons from the prompt phase) and that the decaying parts are due to the jet dynamics, namely the jet propagation in the self-similar regime and the jet break. Although the homogeneous medium case cannot be excluded based on the qualitative analysis presented here, the progenitor wind scenario is favoured due to the inferred length scales naturally fitting the expected size of the wind zone and the stellar bubble. The Lorentz factor and the magnetic field, derived merely from the analysis of the light curve, significantly reduce the parameter space for modeling the time-dependent SED of the afterglow. The results of the modeling of the synchrotron-self-Compton SED, taking into account the internal absorption, will be published elsewhere. D.K. acknowledges the support of RSF grant No. 21-12-00416. A.T. acknowledges support from DESY (Zeuthen, Germany), a member of the Helmholtz Association HGF. ## Appendix A Shock wave transition from the wind zone to the stellar bubble When the shock wave reaches the inner boundary of the stellar bubble, one expects a change of the propagation regime, as the upstream medium density changes from a \(1/R^{2}\) to a \(1/R^{0}\) (i.e., constant) dependence. The density of the upstream homogeneous medium is only a factor of 4 larger than the stellar wind density at the wind termination shock.
This means that the dynamics of the shock close to the termination shock is considerably influenced by the mass accumulated in the wind zone; thus, one needs to account for this contribution. The density of the upstream gas is then \[n=\begin{cases}\frac{\dot{M}_{\rm w}}{4\pi R^{2}m_{p}v_{\rm w}}&\quad\text{for }\;R<R_{\rm bbl,r}\,,\\ n_{\rm bbl,r}&\quad\text{for }\;R>R_{\rm bbl,r}\,,\end{cases}\] (A1) where from the shock jump condition we have \[n_{\rm bbl,r}=\frac{\dot{M}_{\rm w}}{\pi R_{\rm bbl,r}^{2}m_{p}v_{\rm w}}\,.\] (A2) Therefore the mass of the shocked shell depends on \(R\) as \[M=\begin{cases}\frac{R\dot{M}_{\rm w}}{v_{\rm w}}&\quad\text{for }\;R<R_{\rm bbl,r}\,,\\ \frac{R_{\rm bbl,r}\dot{M}_{\rm w}}{v_{\rm w}}+\frac{4\pi}{3}n_{\rm bbl,r}m_{p}\Big{(}R^{3}-R_{\rm bbl,r}^{3}\Big{)}&\quad\text{for }\;R\geq R_{\rm bbl,r}\,.\end{cases}\] (A3) Using the standard relation between the shell mass and bulk Lorentz factor, one obtains \[\Gamma^{2}=\frac{E_{0}}{c^{2}}\times\begin{cases}\frac{v_{\rm w}}{R\dot{M}_{\rm w}}&\quad\text{for }\;R<R_{\rm bbl,r}\,,\\ \Big{[}\frac{R_{\rm bbl,r}\dot{M}_{\rm w}}{v_{\rm w}}+\frac{4\pi}{3}n_{\rm bbl,r}m_{p}\Big{(}R^{3}-R_{\rm bbl,r}^{3}\Big{)}\Big{]}^{-1}&\quad\text{for }\;R\geq R_{\rm bbl,r}\,.\end{cases}\] (A4) Figure 6: Different phases of the early afterglow shown with labels indicating the key physical processes responsible for the light curve evolution. The relation between the blast wave radius and the delay can be obtained with simple integrations: \[t=\begin{cases}\frac{R}{2c\Gamma_{0}^{2}}&\text{for}\quad R\leq R_{\text{w,ss}}\,,\\ \frac{R_{\text{w,ss}}}{2c\Gamma_{0}^{2}}+\frac{c\dot{M}_{\text{w}}\big{(}R^{2}-R_{\text{w,ss}}^{2}\big{)}}{4E_{0}v_{\text{w}}}&\text{for}\;\;R_{\text{w,ss}}<R\leq R_{\text{bbl,r}}\,,\\ t(R_{\text{bbl,r}})+\frac{c\dot{M}_{\text{w}}R_{\text{bbl,r}}(R-R_{\text{bbl,r}})}{2E_{0}v_{\text{w}}}+\frac{\pi}{6}\frac{n_{\text{bbl,r}}m_{p}c\big{(}R^{4}-R_{\text{bbl,r}}^{4}\big{)}}{E_{0}}-\frac{4\pi}{6}\frac{n_{\text{bbl,r}}m_{p}c(R-R_{\text{bbl,r}})R_{\text{bbl,r}}^{3}}{E_{0}}&\text{for}\quad R>R_{\text{bbl,r}}\,.\end{cases}\] (A5) However, for the purpose of interpreting observations the inverse dependence, i.e. \(R(t)\), is required. The first two cases from the above equations are trivial: \[R=\begin{cases}2c\Gamma_{0}^{2}t&\text{for}\quad t\leq\frac{R_{\text{w,ss}}}{2c\Gamma_{0}^{2}}\,,\\ \sqrt{\frac{4E_{0}v_{\text{w}}(t-T_{*})}{c\dot{M}_{\text{w}}}}&\text{for}\;\;\frac{R_{\text{w,ss}}}{2c\Gamma_{0}^{2}}<t\leq t(R_{\text{bbl,r}})\,,\\ \tilde{R}(t)&\text{for}\quad t>t(R_{\text{bbl,r}})\,,\end{cases}\] (A6) and the last one, \(t>t(R_{\text{bbl,r}})\), requires some simple algebra. Substituting the bubble density into the third equation of Eq. (A5) one obtains: \[y^{4}-y=x\,,\] (A7) where \[\begin{split} y&=\frac{R}{R_{\text{bbl,r}}}\,,\\ x&=\frac{6E_{0}v_{\text{w}}}{\dot{M}_{\text{w}}cR_{\text{bbl,r}}^{2}}(t-t(R_{\text{bbl,r}}))\,.\end{split}\] (A8) Equation (A7) is a polynomial equation of the \(4^{\text{th}}\) degree, and thus it allows an analytical solution.
The physical root of the equation can be selected by the condition \(\left.y\right|_{x=0}=1\). Introducing the auxiliary quantities \[W=\left(\frac{\sqrt{256\,x^{3}+27}}{2\cdot 3^{\nicefrac{{3}}{{2}}}}+\frac{1}{2}\right)^{\nicefrac{{1}}{{3}}}\,,\qquad u=W-\frac{4x}{3W}\,,\] this root can be written as \[y=\frac{1}{2}\left(\sqrt{u}+\sqrt{\frac{2}{\sqrt{u}}-u}\right)\,.\] (A9) The asymptotic behavior of this expression is \[y=\begin{cases}1+\frac{x}{3}-\frac{2x^{2}}{9}+\frac{20x^{3}}{81}&\text{for}\;\;x\ll 1\,,\\ x^{\nicefrac{{1}}{{4}}}+\frac{1}{4}x^{-\nicefrac{{1}}{{2}}}&\text{for}\;\;x\gg 1\,.\end{cases}\] (A10)
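As a quick sanity check, the closed-form root above can be compared with a direct numerical solution of Eq. (A7); the sketch below also makes it easy to verify the limits in Eq. (A10):

```python
import numpy as np

# Check of the physical root of y^4 - y = x (Eq. A7) using the Cardano form
# of Eq. (A9): W = (1/2 + sqrt(256 x^3 + 27)/(2*3^{3/2}))^{1/3},
# u = W - 4x/(3W), and y = (sqrt(u) + sqrt(2/sqrt(u) - u))/2, with y(0) = 1.
def y_closed_form(x):
    W = (0.5 + np.sqrt(256 * x**3 + 27) / (2 * 3**1.5)) ** (1 / 3)
    u = W - 4 * x / (3 * W)
    return 0.5 * (np.sqrt(u) + np.sqrt(2 / np.sqrt(u) - u))

def y_numeric(x):
    roots = np.roots([1.0, 0.0, 0.0, -1.0, -x])      # y^4 - y - x = 0
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return real[real > 0].max()                      # physical branch

for x in [1e-3, 0.1, 1.0, 10.0, 1e4]:
    print(x, y_closed_form(x), y_numeric(x))
# The limits reproduce Eq. (A10): y -> 1 + x/3 for x << 1 and
# y -> x^{1/4} + x^{-1/2}/4 for x >> 1.
```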
2310.16172
Astrophysical Parameter Inference on Accreting White Dwarf Binaries using Gravitational Waves
Accreting binary white dwarf systems are among the sources expected to emanate gravitational waves that the Laser Interferometer Space Antenna (LISA) will detect. We investigate how accurately the binary parameters may be measured from LISA observations. We complement previous studies by performing our parameter estimation on binaries containing a low-mass donor with a thick, hydrogen-rich envelope. The evolution is followed from the early, pre-period minimum stage, in which the donor is non-degenerate, to a later, post-period minimum stage with a largely degenerate donor. We present expressions for the gravitational wave amplitude, frequency, and frequency derivative in terms of white dwarf parameters (masses, donor radius, etc.), where binary evolution is driven by gravitational wave radiation and accretion torques, and the donor radius and logarithmic change in radius ($\eta_{\rm d}$) due to mass loss are treated as model parameters. We then perform a Fisher analysis to reveal the accuracy of parameter measurements, using models from Modules for Experiments in Stellar Astrophysics (MESA) to estimate realistic fiducial values at which we evaluate the measurement errors. We find that the donor radius can be measured relatively well with LISA observations alone, while we can further measure the individual masses if we have an independent measurement of the luminosity distance from electromagnetic observations. When applied to the parameters of the recently-discovered white dwarf binary ZTF J0127+5258, our Fisher analysis suggests that we will be able to constrain the system's individual masses and donor radius using LISA's observations, given ZTF's measurement of the luminosity distance.
Sophia Yi, Shu Yan Lau, Kent Yagi, Phil Arras
2023-10-24T20:36:52Z
http://arxiv.org/abs/2310.16172v2
# Astrophysical Parameter Inference on Accreting White Dwarf Binaries using Gravitational Waves ###### Abstract Accreting binary white dwarf systems are among the sources expected to emanate gravitational waves that the Laser Interferometer Space Antenna (LISA) will detect. We investigate how accurately the binary parameters may be measured from LISA observations. We complement previous studies by performing our parameter estimation on binaries containing a low-mass donor with a thick, hydrogen-rich envelope. The evolution is followed from the early, pre-period minimum stage, in which the donor is non-degenerate, to a later, post-period minimum stage with a largely degenerate donor. We present expressions for the gravitational wave amplitude, frequency, and frequency derivative in terms of white dwarf parameters (masses, donor radius, etc.), where binary evolution is driven by gravitational wave radiation and accretion torques, and the donor radius and logarithmic change in radius (\(\eta_{\rm d}\)) due to mass loss are treated as model parameters. We then perform a Fisher analysis to reveal the accuracy of parameter measurements, using models from Modules for Experiments in Stellar Astrophysics (MESA) to estimate realistic fiducial values at which we evaluate the measurement errors. We find that the donor radius can be measured relatively well with LISA observations alone, while we can further measure the individual masses if we have an independent measurement of the luminosity distance from electromagnetic observations. When applied to the parameters of the recently-discovered white dwarf binary ZTF J0127+5258, our Fisher analysis suggests that we will be able to constrain the system's individual masses and donor radius using LISA's observations, given ZTF's measurement of the luminosity distance. keywords: accretion, accretion discs - gravitational waves - binaries: close - white dwarfs ## 1 Introduction The first direct detection of gravitational waves (GWs) in 2015 came from the merger of a binary black hole (Abbott et al., 2016). Since then, the LIGO/Virgo Collaborations have additionally detected GW signals from numerous other binary black hole mergers, as well as several binary neutron star and neutron star-black hole mergers (Abbott et al., 2017, 2019, 2021). While LIGO and other ground-based detectors are able to detect GWs with frequencies from about 15 Hz to several kHz (Abbott et al., 2019), the Laser Interferometer Space Antenna (LISA) is a space-based GW detector expected to launch in the mid-2030s with the ability to detect GWs in the frequency range of \(\sim 10^{-4}\) to \(10^{-1}\)Hz (Amaro-Seoane et al., 2017). Among the astrophysical sources anticipated to emit GWs within this range are binary white dwarfs (WDs). In fact, for a 4-year observation period, some \(\sim 10^{4}\) double white dwarfs (DWDs) are expected to be resolvable with LISA (Lamberts et al., 2019). For DWD systems emitting GWs at relatively high frequency, we anticipate being able to extract significant information from not only the GW frequency but also the GW frequency "chirp," i.e., change in frequency over time (\(\dot{f}\)) (Shah et al., 2012). Prospects of measuring astrophysical parameters of _detached_ binary WDs, in particular their individual masses, are studied in Wolz et al. (2021).
The authors employ universal relations between the tidal deformability and moment of inertia, and between the moment of inertia and WD mass, to express the finite-size effects entering in the gravitational waveform in terms of the individual masses. By conducting a Fisher analysis on this waveform expressed in terms of the masses, the authors show that LISA will be able to measure individual masses of DWDs for sufficiently small binary separation and large stellar masses. Accreting DWD systems with close separations can also be strong GW sources (Nelemans et al., 2004). Some of these systems have been observed as cataclysmic variables and are identified as LISA verification sources (Stroeer & Vecchio, 2006; Kupfer et al., 2018). A few examples are V407 Vul (Cropper et al., 1998), HM Cancri (Israel, G. L. et al., 2002; Ramsay et al., 2002), and ES Cet (Warner & Woudt, 2002). The population of these systems depends on the stability of the accretion. The influence of tidal synchronization on the formation of AM Canum Venaticorum (AM CVn) type binaries, a type of DWD system involving accretion of hydrogen-poor gas, has been studied in Marsh et al. (2004); Gokhale et al. (2007); Sepinsky & Kalogera (2014); Kremer et al. (2015). They consider the stability criteria of the accretion, either through a disk or direct impact, taking into account the tidal synchronization torque. Population simulations of such accreting LISA sources have been performed in Kremer et al. (2017); Biscoveanu et al. (2022). In particular, the work by Kremer et al. (2017) demonstrates that \(\sim 10^{3}\) DWDs with negative chirp due to accretion may be observed by LISA. In Biscoveanu et al. (2022), they employ a similar population study to further show that LISA is able to constrain the tidal synchronization timescale (\(\tau_{0}\)) through its influence on the population distribution in the \(f\)-\(\dot{f}\) parameter space. In these studies, they only consider cold degenerate WDs. On the other hand, accreting systems with an extremely low mass WD donor that has a thick hydrogen envelope are studied in Kaplan et al. (2012). These hydrogen-rich donors are what we would expect to see in an early, pre-period minimum stage of binary evolution. Kaplan et al. (2012) highlights the importance of understanding the relative composition of hydrogen and helium in these WDs in order to infer the stability and behavior of the binary. These systems can be candidates for the observed inspiraling cataclysmic variables (e.g., HM Cancri) and may evolve into AM CVn binaries. In this paper, we investigate the possibility of directly measuring astrophysical parameters of _accreting_ DWDs given LISA's measurements of the amplitude (\(A\)), frequency (\(f\)), and frequency derivative (\(\dot{f}\)) of GWs emanated by the DWDs. We additionally build on the work of Kaplan et al. (2012) by connecting the non-degenerate regime, in which donor WDs have a lingering hydrogen envelope, with the later, degenerate regime in which the zero-temperature mass-radius relation is valid. We study the evolution of accreting DWDs through the transition between these two regimes. Knowledge of the accretion physics for such DWDs allows us to parameterize their gravitational waveforms in terms of the individual masses and other parameters of interest. We then perform a Fisher analysis on this waveform to determine how well we can constrain the masses and other parameters given LISA's detections of GWs from accreting DWDs.
We find that with an independent measurement of the luminosity distance of our DWD systems from electromagnetic observations, we are likely to be able to measure the individual masses, donor radius, and exponent of the mass-radius relation given LISA's measurements of the GW amplitude, frequency, and frequency derivative. With LISA observations alone, we lose our ability to constrain the individual masses, but we are still able to measure the other two parameters. The rest of the paper is organized as follows. In Sec. 2, we introduce the parameterized gravitational waveform. In Sec. 3, we discuss how the mass-radius relations of our WDs differ in the degenerate versus non-degenerate regimes, introducing models of donors in the non-degenerate regime that we generate with a stellar evolutionary code. We also discuss the dynamical stability of accreting DWDs. Section 4 illustrates the detectability of our DWD systems based on the relative magnitude of these systems' GW strain versus LISA's noise curve. Finally, our parameter estimation technique and results are given in Sec. 5, followed by discussion and conclusions in Sec. 6. The geometric units of \(c=G=1\) are used in all of our equations, with the physical dimensions being recoverable through the conversion \(1M_{\odot}=1.5\mathrm{km}=4.9\times 10^{-6}\mathrm{s}\). ## 2 Gravitational Waveform The sky-averaged gravitational waveform, \(h(t)\), for a DWD with donor mass \(m_{\mathrm{d}}\) and accretor mass \(m_{\mathrm{a}}\) is given by \[h(t)=A\cos\phi(t). \tag{1}\] In this expression, \(A\) is the amplitude, given by \[A=\frac{8\mathcal{M}}{5D}(\pi\mathcal{M}f)^{2/3}\,, \tag{2}\] where \(D\) is the luminosity distance and \(\mathcal{M}\) is the chirp mass, \[\mathcal{M}=\frac{(m_{\mathrm{d}}m_{\mathrm{a}})^{3/5}}{(m_{\mathrm{d}}+m_{\mathrm{a}})^{1/5}}. \tag{3}\] Assuming a fairly slowly changing GW frequency, so that \(\ddot{f}\) and higher derivatives are negligible, the phase \(\phi(t)\) is given by (Shah & Nelemans, 2014) \[\phi(t)=\phi_{0}+2\pi f_{0}\delta t+\pi\dot{f}_{0}\delta t^{2}, \tag{4}\] where the subscript \(0\) indicates the quantity measured at the initial time of observation, \(t_{0}\), and \(\delta t=t-t_{0}\). Examining Eqs. (2) and (4), it is evident that in order to write the waveform in terms of our parameters of interest, we must express \(f\) and \(\dot{f}\) in terms of these parameters. We assume that the semi-major axis, \(a\), adjusts itself during accretion such that the donor radius \(r_{\mathrm{d}}\) always equals the Roche lobe radius, \(r_{\mathrm{L}}a\), which we approximate with the fitting formula by Eggleton (1983): \[r_{\mathrm{L}}=\frac{0.49q^{2/3}}{0.6q^{2/3}+\ln(1+q^{1/3})},\quad q=\frac{m_{\mathrm{d}}}{m_{\mathrm{a}}}. \tag{5}\] Taking the derivative of \(r_{\mathrm{d}}=r_{\mathrm{L}}a\) leads us to a relation between the mass loss rate and the orbital separation: \[\frac{\dot{a}}{a}=\frac{\dot{m}_{\mathrm{d}}}{m_{\mathrm{d}}}(\eta_{\mathrm{d}}-\eta_{\mathrm{L}}), \tag{6}\] where \(\eta_{\mathrm{L}}\) is the ratio between \(\dot{r}_{\mathrm{L}}/r_{\mathrm{L}}\) and \(\dot{m}_{\mathrm{d}}/m_{\mathrm{d}}\), given by \[\eta_{\mathrm{L}} =\frac{d\ln r_{\mathrm{L}}}{d\ln m_{\mathrm{d}}} \tag{7}\] \[=\left[q(1-F)+1\right]\frac{2(1+q^{1/3})\ln(1+q^{1/3})-q^{1/3}}{3(1+q^{1/3})\left[0.6q^{2/3}+\ln(1+q^{1/3})\right]},\] and \(\eta_{\mathrm{d}}\) is the logarithmic change in radius due to mass loss, \[\eta_{\mathrm{d}}=\frac{d\ln r_{\mathrm{d}}}{d\ln m_{\mathrm{d}}}.
\tag{8}\] If \(\eta_{\mathrm{d}}>0\), the donor shrinks as it loses mass; if \(\eta_{\mathrm{d}}<0\), as is the case for a degenerate donor, the WD becomes larger in response to mass loss. As in Kaplan et al. (2012), we have introduced the mass-loss fraction, \(F\), defined such that \(\dot{m}_{\mathrm{a}}=-(1-F)\dot{m}_{\mathrm{d}}\), to indicate whether the mass transfer is conservative or not. When \(F=0\), all mass lost by the donor is gained by the accretor, and there is no overall loss of mass from the binary; \(F=1\) indicates that the accreted material is lost by the binary due to stellar winds, classical novae, etc. We then consider the angular momentum conservation: \[\dot{J}=\dot{J}_{\mathrm{gr}}+\dot{J}_{\mathrm{acc}}, \tag{9}\] where \(J\) is the orbital angular momentum and \(\dot{J}_{\mathrm{gr}}\) is the angular momentum carried away by gravitational radiation, \[\frac{\dot{J}_{\mathrm{gr}}}{J}=-\frac{32}{5}\frac{Mm_{\mathrm{d}}m_{\mathrm{a}}}{a^{4}}, \tag{10}\] with \(M=m_{\mathrm{a}}+m_{\mathrm{d}}\). The "accretion torque", \(\dot{J}_{\mathrm{acc}}\), is given by \[\frac{\dot{J}_{\mathrm{acc}}}{J}=\frac{\dot{m}_{\mathrm{d}}}{m_{\mathrm{d}}}\sqrt{r_{\mathrm{h}}\left(1+q\right)}, \tag{11}\] where \(r_{\mathrm{h}}\) is the effective radius (in units of \(a\)) of material orbiting the accreting companion that carries the same amount of angular momentum as is lost due to the impact. This accretion torque quantifies the angular momentum lost by the binary when accreted material impacts the companion star directly, rather than forming an accretion disk around the companion. We use a fitting formula for \(r_{\rm h}\) that depends only on \(q\) (Verbunt & Rappaport, 1988) in the direct impact scenario, and set \(r_{\rm h}=0\) for the disk accretion case. To determine whether we have disk accretion or direct impact, we use Eq. (6) of Nelemans et al. (2001) for the definition of the minimum radius, \[\begin{split}\frac{r_{\rm min}}{a}&\approx 0.04948-0.03815\,\left(\log_{10}q\right)\\ &+0.04752\,\left(\log_{10}q\right)^{2}-0.006973\,\left(\log_{10}q\right)^{3},\end{split} \tag{12}\] and assume disk accretion for \(r_{\rm a}<r_{\rm min}\) and direct impact for \(r_{\rm a}>r_{\rm min}\). For the results shown in Sec. 5, the orbit was always wide enough for a disk to form, allowing us to neglect the torque term. This is because the lingering hydrogen envelopes in our models of donor WDs cause \(r_{\rm d}\) (and therefore \(r_{\rm min}\)) to be relatively large, i.e., larger than \(r_{\rm a}\). Kepler's third law is used to relate \(f\), \(M\), \(r_{\rm d}\), and \(r_{\rm L}\): \[f=\frac{1}{\pi}\sqrt{\frac{M}{\left(r_{\rm d}/r_{\rm L}\right)^{3}}}\,. \tag{13}\] We find that in the degenerate regime, \(f\) calculated according to Eq. (13) depends almost entirely on \(m_{\rm d}\), varying very little with different \(m_{\rm a}\). This result agrees well with the findings of Breivik et al. (2018) in their analysis of DWDs with degenerate donors (see App. A). Combining Eqs. (6), (9), (10), (11), and (13), we now have a full expression for \(\dot{f}\): \[\begin{split}\frac{\dot{f}}{f}&=-\frac{16}{5}\frac{Mm_{\rm d}m_{\rm a}}{a^{4}}\times\\ &\frac{\frac{Fm_{\rm d}}{M}-3(\eta_{\rm d}-\eta_{\rm L})}{1+q(F-1)-\frac{Fm_{\rm d}}{2M}-r_{\rm h}^{1/2}\left(1+q\right)^{1/2}+\frac{\eta_{\rm d}-\eta_{\rm L}}{2}},\end{split} \tag{14}\] i.e., \(\dot{f}\) is a function of \(m_{\rm d}\), \(m_{\rm a}\), \(r_{\rm d}\) (through \(f\)), \(\eta_{\rm d}\), and \(F\)1.
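To make the mapping from physical parameters to observables concrete, the sketch below evaluates Eqs. (5), (7), (13), and (14) numerically in geometric units for the disk-accretion case (\(r_{\rm h}=0\)). The fiducial inputs (masses, donor radius, \(\eta_{\rm d}\)) are illustrative values of our own choosing, not fitted ones:

```python
import numpy as np

# Numerical sketch of Eqs. (5), (7), (13), (14): GW frequency and chirp of an
# accreting DWD in geometric units (G = c = 1), disk-accretion case (r_h = 0).
MSUN_S = 4.925e-6               # solar mass in seconds
RSUN_S = 2.32                   # solar radius in seconds (~6.96e10 cm / c)

def roche_rL(q):
    """Eggleton (1983) Roche-lobe radius in units of a, Eq. (5)."""
    return 0.49 * q**(2/3) / (0.6 * q**(2/3) + np.log(1 + q**(1/3)))

def eta_L(q, F):
    """Logarithmic Roche-lobe response, Eq. (7) as written in the text."""
    num = 2 * (1 + q**(1/3)) * np.log(1 + q**(1/3)) - q**(1/3)
    den = 3 * (1 + q**(1/3)) * (0.6 * q**(2/3) + np.log(1 + q**(1/3)))
    return (q * (1 - F) + 1) * num / den

def f_and_fdot(m_d, m_a, r_d, eta_d, F=0.0):
    """GW frequency (Eq. 13) and chirp (Eq. 14) with r_h = 0 (disk case)."""
    q, M = m_d / m_a, m_d + m_a
    a = r_d / roche_rL(q)                 # donor fills its Roche lobe
    f = np.sqrt(M / a**3) / np.pi
    dEta = eta_d - eta_L(q, F)
    num = F * m_d / M - 3 * dEta
    den = 1 + q * (F - 1) - F * m_d / (2 * M) + dEta / 2
    fdot = f * (-16 / 5) * M * m_d * m_a / a**4 * num / den
    return f, fdot

# Illustrative fiducials: 0.15 + 0.80 Msun binary, 0.05 Rsun donor, eta_d = 1.
f, fdot = f_and_fdot(0.15 * MSUN_S, 0.80 * MSUN_S,
                     0.05 * RSUN_S, eta_d=1.0, F=0.0)
print(f"f = {f * 1e3:.2f} mHz, fdot = {fdot:.2e} Hz/s")   # positive pre-minimum
```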
Notice that one can take the limit of \(\eta_{\rm d}-\eta_{\rm L}\to\infty\) in Eq. (14) to find \(\dot{a}/a\) and \(\dot{f}/f\) without mass accretion. This is because, from Eq. (6), \(\dot{m}_{\rm d}\) has to go to \(0\) when we set \(\eta_{\rm d}-\eta_{\rm L}\to\infty\) while keeping \(\dot{a}/a\) finite. Footnote 1: Again, we exclude the term \(-r_{\rm h}^{1/2}(1+q)^{1/2}\) whenever the orbit is wide enough for an accretion disk to form. The difference from the expressions used in previous work, such as Biscoveanu et al. (2022) (other than the tidal coupling that we do not consider here), is that we (i) introduce the mass-loss fraction \(F\) and (ii) further assume \(r_{\rm d}=r_{\rm L}a\) at any time so that Eq. (6) holds. With Eqs. (2), (4), (13), and (14), we have a gravitational waveform in terms of the six physical parameters \(\phi_{0},m_{\rm d},m_{\rm a},r_{\rm d},\eta_{\rm d}\) and \(D\). We use this waveform in our Fisher analysis to determine the measurability of our parameters of interest. We note that despite there being only four raw model parameters in the waveform, \(\theta^{i}=(\phi_{0},A,f,\dot{f})\), we can break some of the degeneracy between our six physical parameters by imposing priors on the individual masses (see Sec. 5.1). ## 3 Astrophysical properties of accreting double white dwarfs We now use Modules for Experiments in Stellar Astrophysics (MESA; Paxton et al., 2011, 2013, 2015, 2018, 2019) to study the mass-radius relation and \(\eta_{\rm d}\) for WDs and consider the dynamical stability of mass transfer in accreting DWDs. ### Mass-radius Relations The way in which the donor WD responds to mass loss depends significantly on the composition of the donor. For fully degenerate WDs, we can use Eggleton's analytic formula to obtain \(r_{\rm d}\) in terms of \(m_{\rm d}\) (Verbunt & Rappaport, 1988). In this cold temperature regime, \(r_{\rm d}\) goes roughly as \(m_{\rm d}^{-1/3}\), so \(\eta_{\rm d}\sim-1/3\) for a range of \(m_{\rm d}\) values. If the donor is not fully degenerate, \(r_{\rm d}\) does not vary with \(m_{\rm d}\) in such a simple manner. To reach this conclusion, we used MESA to model mass loss from dozens of WDs containing a range of core and envelope masses. The radius and \(\eta_{\rm d}\) of one model are shown in Fig. 1. To construct this model, we used MESA to evolve an \(M=1.5M_{\odot}\) pre-main sequence star to the red giant branch, and stopped the evolution when the helium core had mass \(0.153M_{\odot}\). The hydrogen-rich envelope was then rapidly removed until the envelope was reduced to \(0.006M_{\odot}\). In Fig. 1, \(|\eta_{\rm d}|\) begins at this point of the simulation; the initial positive value of \(\eta_{\rm d}\) reflects the lingering hydrogen envelope surrounding the donor WD. We then simulate mass loss from the donor, causing the WD to become increasingly degenerate as hydrogen is transferred away, resulting in a decreasing \(\eta_{\rm d}\) function. Eventually, \(\eta_{\rm d}\) passes through zero (seen in the cusp around \(0.052M_{\odot}\) of stripped mass). After this point, the donor increases in size as it continues losing mass, causing the orbital separation of the binary to increase2. Footnote 2: The MESA model halted after \(\sim 0.07M_{\odot}\) due to significant numerical noise, likely due to insufficient resolution near the outer boundary. The plot in Fig. 1 shows one of the longest-lasting models we were able to obtain.
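Given a tabulated mass–radius track like the MESA model of Fig. 1, \(\eta_{\rm d}\) of Eq. (8) can be estimated by finite differences. The sketch below uses made-up placeholder numbers (not actual MESA output) purely to illustrate the sign change:

```python
import numpy as np

# Sketch of evaluating eta_d = dln r_d / dln m_d (Eq. 8) from a tabulated
# mass-radius track. The arrays below are placeholder values chosen only to
# mimic the qualitative shape of Fig. 1; they are not MESA output.
m = np.array([0.159, 0.150, 0.140, 0.130, 0.120, 0.110, 0.100])  # Msun
r = np.array([0.060, 0.052, 0.046, 0.042, 0.040, 0.040, 0.042])  # Rsun

eta_d = np.gradient(np.log(r), np.log(m))   # handles non-uniform spacing
for mi, ei in zip(m, eta_d):
    print(f"m_d = {mi:.3f} Msun : eta_d = {ei:+.2f}")
# eta_d starts positive (radius shrinks with mass loss; hydrogen-rich donor)
# and crosses zero as the donor becomes degenerate, as in Fig. 1 (right).
```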
In the sections that follow, we will use the MESA model described here to obtain realistic fiducial values at which to evaluate the error on the astrophysical parameters, \(r_{\rm d}\) and \(\eta_{\rm d}\). In so doing, we account for the fact that the accreting DWDs that LISA observes may have low-mass donors with lingering hydrogen envelopes, causing the radius (and \(\eta_{\rm d}\)) to be larger than what a fully degenerate WD would have. In other words, using the MESA model for fiducial values of \(r_{\rm d}\) and \(\eta_{\rm d}\) allows us to apply our parameter inference to DWDs in a near-period minimum stage of evolution, when the GW strain is likely to be highest (as we will show in Sec. 4). We note that based on Eq. (6), with \(\dot{m}_{\rm d}<0\) (donor losing mass), the orbital separation will eventually start to grow (\(\dot{a}>0\)) as \(\eta_{\rm d}\) decreases below \(\eta_{\rm L}\). In a similar manner, using Eqs. (6) and (14), we see that \(\dot{f}\) will also switch from positive to negative as the magnitude of \(\eta_{\rm d}\) decreases in comparison with \(\eta_{\rm L}\). Figure 2 shows this change of \(\dot{f}\) for a binary with a donor described by the MESA model in Fig. 1; time evolves from right to left on this plot. All \(\dot{f}\) values on the right of the dotted white line (period minimum, where \(\dot{f}=0\)) are positive, and all values to the left are negative. In this plot, we see a discontinuity in the region between \(m_{\rm a}=0.6M_{\odot}\) and \(m_{\rm a}=0.7M_{\odot}\). This is due to our choice of a discontinuous transition of \(F\) from a region of parameter space in which \(F\) equals \(0\) (lower portion of the plot) to a region in which \(F=1\) (upper portion). We will see shortly that our choice of \(F\) depends on the magnitude of the accretion rate. ### Dynamical Stability for the DWD systems The mass transfer process for the DWDs considered here is expected to be unstable for certain mass ratios. Such unstable mass transfer causes dynamical instability of the binary and ultimately results in short-lived DWDs that we do not expect to observe with LISA. We follow Rappaport et al. (1982) in taking mass transfer to be stable if we have a self-consistent solution in Eqs. (6) and (14) assuming \(\dot{m}_{\rm d}<0\). Revisiting Eqs. (6) and (14), we arrive at the following criterion for the dynamical stability of the binary: \[1+q(F-1)-\frac{Fm_{\rm d}}{2M}-r_{\rm h}^{1/2}\left(1+q\right)^{1/2}+\frac{\eta_{\rm d}-\eta_{\rm L}}{2}>0. \tag{15}\] From the above expression, it is evident that dynamical stability is dependent on the value of the mass-loss fraction \(F\), which is determined by whether the accreted material can be burned stably. We take the criterion for stable hydrogen burning from Kaplan et al. (2012) and adopt \[F=\begin{cases}1&\left(|\dot{m}_{\rm d}|<\dot{m}_{\rm c}\right),\\ 0&\left(|\dot{m}_{\rm d}|>\dot{m}_{\rm c}\right),\end{cases} \tag{16}\] where the critical mass loss rate is given by Kaplan et al. (2012), \[\dot{m}_{\rm c}=10^{-7}\left(\frac{m_{\rm a}}{M_{\odot}}-0.5357\right)M_{\odot}\,{\rm yr}^{-1}\,, \tag{17}\] which takes into account the reduced metallicity of the accreting WD. As mentioned previously, for the results shown in Sec. 5, the orbit was always wide enough for an accretion disk to form. We could then exclude the accretion torque term (\(-r_{\rm h}^{1/2}(1+q)^{1/2}\)) in Eq. (15), leading to greater dynamical stability (i.e., more regions in which the left-hand side is positive).
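The stability logic of Eqs. (15)–(17) can be condensed into a few lines, sketched below for the disk-accretion case (\(r_{\rm h}=0\)); it reuses the `eta_L` helper from the earlier snippet, and the input mass-loss rate is an assumed illustrative value:

```python
# Sketch of the stability logic of Eqs. (15)-(17), disk-accretion case
# (r_h = 0). eta_L(q, F) is the helper defined in the previous snippet;
# only mass ratios enter Eq. (15), so masses can be given in Msun.
def mdot_crit(m_a_msun):
    """Critical rate for stable hydrogen burning, Eq. (17), in Msun/yr."""
    return 1e-7 * (m_a_msun - 0.5357)

def mass_loss_fraction(mdot_d_msun_yr, m_a_msun):
    """Eq. (16): fully non-conservative below the stable-burning rate."""
    return 1.0 if abs(mdot_d_msun_yr) < mdot_crit(m_a_msun) else 0.0

def is_dynamically_stable(m_d, m_a, eta_d, F):
    """Left-hand side of Eq. (15) > 0, with the torque term set to zero."""
    q, M = m_d / m_a, m_d + m_a
    lhs = 1 + q * (F - 1) - F * m_d / (2 * M) + (eta_d - eta_L(q, F)) / 2
    return lhs > 0

F = mass_loss_fraction(mdot_d_msun_yr=-5e-9, m_a_msun=0.80)  # |mdot| < mdot_c
print("F =", F, "stable:", is_dynamically_stable(0.15, 0.80, 1.0, F))
```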
As mentioned in Kaplan et al. (2012), the hydrogen-rich nature of the donor, implying a larger positive \(\eta_{\rm d}\), also leads to more dynamical stability. Lastly, we note that Kaplan et al. (2012) additionally account for the possibility of unstable helium burning at higher mass transfer rates, in which case we could again have \(F>0\). It would be interesting to implement a more careful analysis of the different scenarios in which the binaries discussed here might undergo nonconservative mass transfer. ## 4 GW strain vs. LISA's noise curve The left panel of Fig. 3 presents the GW strain compared to LISA's noise curve over a range of GW frequencies. The dashed curves model the GW strain during an earlier stage of DWD evolution (near-period minimum), when the donor has some amount of lingering hydrogen. We construct these curves using Eqs. (2) and (13), along with the mass-radius relation from the MESA model shown in Fig. 1. The solid lines show the GW strain at a much later stage of evolution (post-period minimum), when the donor is fully degenerate and \(f\) is steadily decreasing toward the bottom left-hand corner of the plot. These solid curves are constructed using Eggleton's cold-temperature radius-mass formula (Verbunt & Rappaport, 1988). We see that in both the early (dashed) and late (solid) stages of evolution of these binaries, the GW strain (plotted as \(A\times{\rm T_{obs}}^{1/2}\), where \({\rm T_{obs}}\) is the observation time that we take to be 4 years) is up to one order of magnitude higher than LISA's noise curve (\(S_{n}(f)\); Robson et al. (2019)). The right panel of Fig. 3 shows the signal-to-noise ratio (SNR) at a luminosity distance of 8kpc. Figure 1: _Left_: Plot illustrating how the donor's mass-radius relation differs significantly between degenerate and non-degenerate regimes. As the donor loses mass, the degenerate radius increases steadily (\(r_{\rm d}\sim m_{\rm d}^{-1/3}\)), whereas the non-degenerate radius (calculated with MESA) first decreases before increasing as the donor becomes increasingly degenerate. Here, the MESA model was computed with a donor having a predominantly helium core of \(0.153M_{\odot}\) and initial hydrogen envelope of \(0.006M_{\odot}\). _Right_: \(|\eta_{\rm d}|\) vs. stripped mass for the MESA model of a donor. Numerical noise halted the model after about \(0.07M_{\odot}\) of mass had been stripped from the donor. Figure 2: The rate of change of GW frequency, \(\dot{f}\), for a DWD containing a donor modeled by MESA (mass-radius relation plotted in Fig. 1). Going from right to left on the plot, \(\dot{f}\) goes from strictly positive to zero at the dotted white line, to strictly negative. If we follow Kremer et al. (2017) in taking SNR=5 as the minimum SNR for detectability, Fig. 3 confirms that LISA should be able to detect the DWDs discussed here. We note that the GW strain is higher for the non-degenerate case because at the same luminosity distance, a binary with a non-degenerate donor must have a larger donor mass, and therefore chirp mass, to emanate GWs at the same frequency as a comparable binary with a degenerate donor. The larger chirp mass at identical \(D\) and \(f\) leads to a larger \(A\) (see Eq. (2)). Finally, we note that our calculations for degenerate DWDs in the left panel of Fig. 3 agree well with the SNR results in Fig. 6 of Kremer et al. (2017).
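For a monochromatic signal \(h=A\cos\phi\), the inner product used below in Eq. (20) gives \((h|h)\approx A^{2}T_{\rm obs}/S_{n}(f_{0})\), i.e., \({\rm SNR}=A\sqrt{T_{\rm obs}/S_{n}}\). The sketch below uses a ballpark noise level near a few mHz rather than the full Robson et al. (2019) curve, so the number is only indicative:

```python
import numpy as np

# Rough SNR estimate for a monochromatic source: SNR = A * sqrt(T_obs / S_n).
# S_N_MHZ is only an assumed ballpark value for LISA's noise PSD near
# f ~ 2-3 mHz (the paper uses the full Robson et al. 2019 curve).
S_N_MHZ = 4e-40               # Hz^-1, assumed noise PSD near a few mHz
T_OBS = 4 * 3.156e7           # s, 4-year observation

A = 2e-23                     # illustrative strain amplitude at D ~ 8 kpc
snr = A * np.sqrt(T_OBS / S_N_MHZ)
print(f"SNR ~ {snr:.1f}")     # of order 5-10, consistent with Fig. 3
```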
Like Kremer et al. (2017), at \(D\)=8kpc and for the mass ranges we consider (\(m_{\rm d}\simeq 0.09-0.15M_{\odot},m_{\rm a}\simeq 0.54-1.00M_{\odot}\)), we also have a majority of parameter space with SNR between 5 and 10, as well as smaller regions with SNR \(<5\) and \(\geq 10\). ## 5 Astrophysical parameter inference Let us now move on to carrying out our parameter estimation for the accreting DWDs with LISA. We first explain our methodology and next present our findings. ### 5.1 Fisher Method Given our gravitational waveform derived in Sec. 2, we can estimate the statistical error on parameters due to the detector noise using a Fisher information matrix (FIM) (Cutler, 1998; Shah et al., 2012; Shah and Nelemans, 2014). This method of parameter estimation assumes stationary and Gaussian detector noise. The FIM is defined as \[\Gamma_{ij}=\left(\frac{\partial h}{\partial\theta^{i}}\Big{|}\frac{\partial h}{\partial\theta^{j}}\right)\;, \tag{18}\] where the partial derivatives of the waveform \(h\) are taken with respect to the parameters of interest described in the previous section, \[\theta^{i}=(\phi_{0},m_{\rm d},m_{\rm a},r_{\rm d},\eta_{\rm d},D). \tag{19}\] The inner product in Eq. (18) is given by \[(a|b)=4\int_{0}^{\infty}\frac{\tilde{a}^{*}(f)\tilde{b}(f)}{S_{n}(f)}df\approx\frac{2}{S_{n}(f_{0})}\int_{0}^{T}a(t)b(t)dt\,, \tag{20}\] with spectral noise density \(S_{n}\) and observation time \(T\). Tildes indicate Fourier components, and the asterisk denotes the complex conjugate of \(\tilde{a}(f)\). We take LISA's \(S_{n}\) from Robson et al. (2019). The monochromatic nature of DWD signals is assumed in our approximation, \(S_{n}(f)\approx S_{n}(f_{0})\), and we use Parseval's theorem to convert the inner product defined in the frequency domain to an integral in the time domain. By inverting the FIM defined in Eq. (18), we obtain the 1-\(\sigma\) uncertainty on each of the parameters: \[\Delta\theta^{i}=\sqrt{(\Gamma^{-1})_{ii}}. \tag{21}\] We further impose Gaussian priors on \(m_{\rm d}\) and \(m_{\rm a}\), with the priors \(\sigma_{\theta^{i}}\) defined such that (Poisson and Will, 1995; Cutler and Flanagan, 1994; Carson and Yagi, 2020) \[\Delta\theta^{i}=\sqrt{(\tilde{\Gamma}^{-1})_{ii}}\;,\quad\tilde{\Gamma}_{ij}=\Gamma_{ij}+\frac{1}{\sigma_{\theta^{i}}^{2}}\delta_{ij}. \tag{22}\] As previous studies have found that shell flashes occur in the hydrogen envelope of donors with \(m_{\rm d}\gtrsim 0.2M_{\odot}\) (Althaus et al., 2001; Panei et al., 2007), we set the prior on the donor to \(\sigma_{m_{\rm d}}=0.2M_{\odot}\). Requiring the accretor WD to have a larger mass than the donor WD, we set the prior on the accretor to \(\sigma_{m_{\rm a}}=0.8M_{\odot}\), as WDs with masses much higher than \(\sim 1.0M_{\odot}\) are less common. For fiducial values, we take \(\phi_{0}=3.666\) rad and \(D=8\)kpc unless otherwise stated, and vary \((m_{\rm d},m_{\rm a})\). The MESA model in Fig. 1 is used for the fiducial values of \(r_{\rm d}(m_{\rm d})\) and \(\eta_{\rm d}(m_{\rm d})\). Our results are shown for an observation time of \(T_{\rm obs}=4\) years. ### 5.2 Results: Gravitational-wave Observations Alone We begin by presenting results with LISA observations alone. Our parameter set is \[\theta^{i}=(\phi_{0},m_{\rm d},m_{\rm a},r_{\rm d},\eta_{\rm d},D)\;. \tag{23}\] The errors on the parameters \(\eta_{\rm d},r_{\rm d}\), and \(D\) are given in Fig. 4.
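A minimal numerical version of this Fisher procedure is sketched below. For simplicity it works with a rescaled, dimensionless parameter set \((\phi_{0},\ln A,2\pi f_{0}T,\pi\dot{f}_{0}T^{2})\) rather than the physical parameters of Eq. (19), and the noise level and fiducial values are assumed, illustrative numbers:

```python
import numpy as np

# Minimal Fisher-matrix sketch following Eqs. (18)-(22). To keep the matrix
# well conditioned, dimensionless parameters are used: x = (phi0, ln A,
# 2*pi*f0*T, pi*fdot0*T^2); errors on f0 and fdot0 follow by dividing the
# last two entries by 2*pi*T and pi*T^2, respectively.
S_N = 4e-40                           # assumed noise PSD, Hz^-1
T = 4 * 3.156e7                       # 4 yr, s
u = np.linspace(0.0, 1.0, 2_000_000)  # t/T

A0, f0, fdot0 = 2e-23, 2e-3, 3e-18    # illustrative fiducial values
x0 = np.array([0.5, np.log(A0), 2 * np.pi * f0 * T, np.pi * fdot0 * T**2])

def waveform(x):
    phi0, lnA, y, z = x
    return np.exp(lnA) * np.cos(phi0 + y * u + z * u**2)

def inner(a, b):                      # Eq. (20), monochromatic limit
    return (2.0 * T / S_N) * np.mean(a * b)

step = np.array([1e-4, 1e-4, 1e-3, 1e-3])   # small phase perturbations
derivs = []
for i in range(4):
    dx = np.zeros(4); dx[i] = step[i]
    derivs.append((waveform(x0 + dx) - waveform(x0 - dx)) / (2 * step[i]))

gamma = np.array([[inner(a, b) for b in derivs] for a in derivs])  # Eq. (18)
# Gaussian priors (Eq. 22) would enter here as gamma[i, i] += 1/sigma_i**2.
err = np.sqrt(np.diag(np.linalg.inv(gamma)))                       # Eq. (21)
print("d(phi0), d(lnA), d(f0) [Hz], d(fdot0) [Hz/s]:",
      err[0], err[1], err[2] / (2 * np.pi * T), err[3] / (np.pi * T**2))
```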
Unfortunately, in this case, we find that our Fisher analysis merely returns the priors we impose on the masses, i.e., we gain no additional constraints on the individual masses of the DWDs. Although we are unable to constrain the individual masses with LISA observations alone, there are large regions of parameter space in which the fractional error on \(r_{\rm d}\) is smaller than the measurability threshold, \(\Delta r_{\rm d}/r_{\rm d}=1\). Figure 3: _Left_: GW strain (\(A\times(T_{\rm obs})^{1/2}\); red, green, blue) and LISA's noise curve (black) vs. GW frequency for degenerate (solid) and non-degenerate (dashed) DWD systems for a variety of initial accretor masses at a luminosity distance of 8kpc. Arrows show the direction of evolution. The frequency ranges correspond to donor mass ranges of \(0.040-0.100M_{\odot}\) and \(0.085-0.155M_{\odot}\) for the solid and dashed lines, respectively. _Right_: The signal-to-noise ratio (SNR) computed for a DWD system with a donor modeled by MESA. The black contour corresponds to SNR=5, which we take as the detection threshold following, e.g., Kremer et al. (2017). Based on these plots, we expect GWs from the DWDs we study to be detectable at distances around 8kpc. The same cannot be said for \(D\) or \(\eta_{\rm d}\); our Fisher analysis suggests that LISA cannot determine these parameters for the binaries under consideration. In the plot of \(\Delta\eta_{\rm d}/\eta_{\rm d}\), we see the same discontinuity due to switching \(F\) between 0 and 1 that we saw in Fig. 2. The fractional error on \(\eta_{\rm d}\) is very large on the left side of the plot, which is partially due to the smallness of the parameter itself (see right panel of Fig. 1). In particular, there is a peak in the fractional error near \(m_{\rm d}=0.105M_{\odot}\), corresponding to where \(\eta_{\rm d}\) crosses zero, causing \(\Delta\eta_{\rm d}/\eta_{\rm d}\) to be very large. However, even all the way to the right of the plot, where \(\eta_{\rm d}>1\), the fractional error is generally greater than one, suggesting that we will be unable to constrain \(\eta_{\rm d}\) from LISA's observations. Let us comment on how the measurement errors on \(r_{\rm d}\), \(\eta_{\rm d}\), and \(D\) scale with \(f\), \(\dot{f}\), and \(A\). We derive the scaling by studying the measurement errors without correlations between parameters3. First, we find that the fractional error on \(r_{\rm d}\) scales inversely with the signal-to-noise ratio (SNR) times \((df/dr_{\rm d})\times r_{\rm d}\). This is intuitive; we see from Eq. (13) that \(f\) depends significantly on \(r_{\rm d}\) through the orbital separation, and the error should of course decrease with a larger SNR. The extra factor of \(r_{\rm d}\) accounts for the fact that we compare the derivative against our plots of fractional error (i.e., \(\Delta r_{\rm d}\) times a factor of \(1/r_{\rm d}\)). In a similar manner, we find that the error on \(\eta_{\rm d}\) scales inversely with SNR \(\times(d\dot{f}/d\eta_{\rm d})\times\eta_{\rm d}\), which is sensible, as \(\eta_{\rm d}\) only appears in \(\dot{f}\) and not in \(A\) or \(f\). Finally, we find that the error on \(D\) scales inversely with SNR \(\times(dA/dD)\times D\). This is also as we expect; the only place the luminosity distance appears in our parameterized gravitational waveform is through the amplitude (Eq. (2)). Figure 4: Error on \(\eta_{\rm d}\) and fractional error on \(r_{\rm d}\) and \(D\) for non-degenerate DWD systems, calculated via our Fisher analysis for LISA observations alone. Since inverting the FIM merely returns the priors on the individual masses, giving no new constraints on the mass values, we do not show plots for these parameters. Fiducial values for \(\eta_{\rm d}\) and \(r_{\rm d}\) were obtained from the MESA model shown in Fig. 1. LISA can detect systems to the right of the black contour corresponding to the detection threshold of SNR=5 (see the right panel of Fig. 3). For the range of parameters shown here, the orbit was wide enough for an accretion disk to form, causing the accretion torque term \(-r_{\rm h}^{1/2}(1+q)^{1/2}\) to be zero in Eqs. (14) and (15). We note that the clear
We note that the clear Figure 4: Error on \(\eta_{\rm d}\) and fractional error on \(r_{\rm d}\) and \(D\) for non-degenerate DWD systems, calculated via our Fisher analysis for LISA observations alone. Since inverting the FIM merely returns the priors on the individual masses, giving no new constraints on the mass values, we do not show plots for these parameters. Fiducial values for \(\eta_{\rm d}\) and \(r_{\rm d}\) were obtained from the MESA model shown in Fig. 1. LISA can detect systems to the right of the black contour corresponding to the detection threshold of SNR=5 (see the right panel of Fig. 3). For the range of parameters shown here, the orbit was wide enough for an accretion disk to form, causing the accretion torque term \(-r_{\rm h}^{1/2}(1+q)^{1/2}\) to be zero in Eqs. (14) and (15). dependence of \(\eta_{\rm d}\), \(r_{\rm d}\), and \(D\) on \(\dot{f}\), \(f\), and \(A\), respectively, explains the appearance of the discontinuity in the plot of \(\eta_{\rm d}\) alone: the mass loss fraction, \(F\), only enters the waveform through \(\dot{f}\), so the change between \(F=0\) and \(F=1\) does not alter error calculations on \(r_{\rm d}\) and \(D\). For more plots and discussion on the scaling of parameter error with various derivatives of the waveform, see App. B. To summarize, in the absence of an electromagnetic counterpart to LISA's measurements of accreting DWDs, our Fisher analysis suggests that we will only likely be able to constrain \(r_{\rm d}\) out of the six parameters appearing in our gravitational waveform. ### Results: Gravitational-wave Observations with Electromagnetic Counterparts Let us now consider the case where we have electromagnetic counterparts. A recent paper has shown that at least \(\sim 60\) DWDs with helium-rich donors are expected to be observable by both LISA and Gaia (Breivik et al., 2018). For these DWD systems, we can obtain an independent measurement of the luminosity distance \(D\) from Gaia. This reduces the number of unknown parameters by one, leaving us with the parameter set \[\theta^{\rm i}=(\phi_{0},m_{\rm d},m_{\rm a},r_{\rm d},\eta_{\rm d}). \tag{25}\] Figure 5 shows the measurement uncertainties calculated for the parameters \(m_{\rm d}\), \(m_{\rm a}\), \(\eta_{\rm d}\), and \(r_{\rm d}\), as determined via Fisher analysis excluding \(D\) as a parameter. We see that if we have an independent measurement of \(D\), we can anticipate being able to constrain \(m_{\rm d}\) and \(m_{\rm a}\) (if \(m_{\rm a}\) is sufficiently large for the latter), which we were unable to do without the complementary measurement of \(D\). Although the measurability of \(\eta_{\rm d}\) does not change significantly, the measurability of \(r_{\rm d}\) is considerably enhanced when we perform the Fisher analysis on only five parameters. Once again, we find that the error (without correlations) on \(r_{\rm d}\) and \(\eta_{\rm d}\) scales with SNR\(\propto(df/dr_{\rm d})\times r_{\rm d}\) and SNR\(\propto(df/d\eta_{\rm d})\times\eta_{\rm d}\), respectively. The errors on \(m_{\rm d}\) and \(m_{\rm a}\) do not follow such a simple scaling because of the priors that we impose on these parameters. Instead, we find that the error on \(m_{\rm a}\) is mainly dominated by the prior, with a slight improvement that comes from the amplitude. On the other hand, the error on \(m_{\rm d}\) is determined both from the amplitude and phase. For the results shown in both Secs. 
5.2 and 5.3, we note that the fractional errors on \(m_{\rm d}\), \(m_{\rm a}\), and \(r_{\rm d}\) do not change significantly when we use the MESA model versus Eggleton's cold-temperature mass-radius relation for fiducial values of \(r_{\rm d}\) and \(\eta_{\rm d}\).4 On the other hand, \(\Delta\eta_{\rm d}/\eta_{\rm d}\) decreases significantly when we use Eggleton's mass-radius relation instead of a MESA model for fiducial values. This is because \(d\dot{f}/d\eta_{\rm d}\) scales with \(r_{\rm d}^{-11/2}\) (see Eqs. (13) and (14), with \(r_{\rm d}=r_{\rm L}a\)). Evaluating this derivative at the smaller fiducial radius values given by the cold-temperature mass-radius relation (see Fig. 1) causes the Fisher matrix component for \(\eta_{\rm d}\), which is determined by \(d\dot{f}/d\eta_{\rm d}\), to be larger, leading to a smaller error (from inverting the Fisher matrix). Footnote 4: We would use Eggleton's mass-radius relation to perform parameter estimation on DWDs in a much later stage of evolution. We note that LISA is less likely to be able to observe DWDs in this late stage, due to the significantly lower GW strain there (see Fig. 3). ### Application to ZTF J0127+5258 The discovery of ZTF J0127+5258 was very recently reported (Burdge et al., 2023). This binary system, which has an orbital period of 13.7 minutes, is the first accreting verification DWD system for LISA with a loud enough SNR and a luminous donor. The binary is estimated to be at a distance of \(3.5^{+1.7}_{-1.5}\)kpc with a donor mass of either \(0.19\pm 0.03M_{\odot}\) or \(0.31\pm 0.11M_{\odot}\) and accretor mass of either \(0.75\pm 0.06M_{\odot}\) or \(0.87\pm 0.11M_{\odot}\), depending on the mass transfer rate. Moreover, ZTF J0127+5258 is believed to be a DWD system in the pre-period minimum stage, which would confirm the presence of the DWDs we study within the 8kpc distance we have been considering. We now show the results of our Fisher analysis method when we apply it to the astrophysical parameters of ZTF J0127+5258. Using the central values of measurements for the masses mentioned in the previous paragraph as fiducial values, along with fiducial \(\eta_{\rm d}=1\)5 and \(D=3.5\)kpc, our Fisher analysis returns the measurement uncertainties compiled in Table 1. Since we have a measurement of \(D\) from ZTF, we perform a Fisher analysis with just the five parameters (\(\phi_{0},m_{\rm d},m_{\rm a},r_{\rm d},\eta_{\rm d}\)). The resultant calculations of statistical error due to detector noise are relatively small, meaning our constraints on these parameters from GW measurements are likely to be quite strong. We note that with LISA, we can estimate the mass transfer rate from observations. However, the errors on \(\dot{m}_{\rm d}\) from LISA's observations are significant; if we propagate the errors due to \(m_{\rm d}\), \(m_{\rm a}\), \(\eta_{\rm d}\), and \(r_{\rm d}\) as listed in Table 1, the propagated error on \(\dot{m}_{\rm d}\) overlaps with the standard deviation in mass transfer rate given by Burdge et al. (2023) (\(\log(\dot{M}/(M_{\odot}\,\mathrm{yr}^{-1}))=0.5\)). Moreover, the errors we calculate for the individual parameters \(m_{\rm d}\), \(m_{\rm a}\), and \(r_{\rm d}\) overlap with the errors from electromagnetic observations, which also suggests that we will not be able to use LISA's measurements to identify which of the mass transfer priors reported in Burdge et al. (2023) is more accurate.
Footnote 5: We choose some small, positive numbers for fiducial \(\eta_{\rm d}\) to reflect lingering hydrogen on the donor of ZTF J0127+5258. While \(\Delta\eta_{\rm d}\) depends significantly on what we choose for \(F\) and fiducial \(\eta_{\rm d}\), the other parameters' errors are agnostic to what we use for these values. ## 6 Conclusions We parameterize the GWs that we expect LISA to detect from accreting DWD systems in terms of the parameters \(\theta^{\rm i}=(\phi_{0},m_{\rm d},m_{\rm a},\eta_{\rm d},r_{\rm d},D)\). We perform a Fisher analysis on the parameterized waveform, imposing Gaussian priors on the individual masses based on the properties of the DWDs we expect to be generating the GWs. We find from our Fisher analysis that if we can obtain simultaneous, independent measurements of \(D\) from a separate detector like Gaia, then we are likely to be able to constrain not only the individual masses, \(m_{\rm d}\) and \(m_{\rm a}\), but also \(r_{\rm d}\). \begin{table} \begin{tabular}{c c c} \hline \hline & \(\mathcal{N}(-8.5,0.5)\) & \(\mathcal{N}(-7.3,0.5)\) \\ \hline \(\Delta m_{\rm d}/m_{\rm d}\) & \(0.6578\) & \(0.4921\) \\ \(\Delta m_{\rm a}/m_{\rm a}\) & \(0.8346\) & \(0.5956\) \\ \(\Delta\eta_{\rm d}/\eta_{\rm d}\) & \(2.6341\) & \(0.7218\) \\ \(\Delta r_{\rm d}/r_{\rm d}\) & \(0.2262\) & \(0.1811\) \\ \hline \hline \end{tabular} \end{table} Table 1: Measurement uncertainties for astrophysical parameters of ZTF J0127+5258 calculated via our Fisher analysis. The two columns correspond to two sets of fiducial parameter values obtained from different priors of the mass transfer rate, assuming a normal distribution. The priors are denoted by \(\mathcal{N}(a,b)\), with \(a\) being the center value and \(b\) the standard deviation in units of \(M_{\odot}\mathrm{yr}^{-1}\). See Burdge et al. (2023) for details. However, if we use only LISA, lacking an independent measurement of \(D\), then although our Fisher analysis still reveals reasonable measurability of \(r_{\rm d}\), we lose our ability to constrain the individual masses. Finally, our parameter inference results suggest that we will be able to constrain astrophysical parameters of ZTF J0127+5258 from LISA's observations of the binary. Although we found fairly pessimistic results in terms of being able to constrain \(\eta_{\rm d}\) itself, we note that our results might nevertheless be useful in distinguishing between the two scenarios of a hydrogen-rich donor (\(\eta_{\rm d}\geq 1\)) and a cold, degenerate donor (\(\eta_{\rm d}\approx-1/3\)). To see this, we note that the errors obtained by Fisher analysis are interpreted as the standard deviation, \(\sigma\), of a normal distribution centered at the best-fit parameter. Given a fiducial \(\eta_{\rm d,fid}\), we can therefore use our results to investigate the probability that \(\eta_{\rm d}\) is in some range \([\eta_{\rm d,min},\eta_{\rm d,max}]\), i.e., \[\int_{\eta_{\rm d,min}}^{\eta_{\rm d,max}}P(\eta_{\rm d})d\eta_{\rm d}, \tag{26}\] where \(P(\eta_{\rm d})=\mathcal{N}(\eta_{\rm d,fid},\sigma)\). In particular, for a given \(\eta_{\rm d,fid}\), we can integrate the distribution from \(-\infty\) to \(-1/3\) to reveal the probability that the donor WD is in the late (T=0) stage of evolution with \(\eta_{\rm d}<-1/3\). Repeating the above prescription with \(r_{\rm d}\) instead of \(\eta_{\rm d}\) would similarly allow us to distinguish between a finite temperature WD (with a larger radius) and a cold WD (with a smaller radius).
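A worked example of Eq. (26), assuming illustrative values for the fiducial \(\eta_{\rm d}\) and its Fisher error:

```python
from scipy.stats import norm

# Worked example of Eq. (26): probability that eta_d < -1/3 (cold, degenerate
# donor) when the Fisher error is read as the sigma of a Gaussian centered on
# the fiducial value. eta_fid and sigma below are illustrative inputs.
eta_fid, sigma = 1.0, 0.7     # hydrogen-rich fiducial and assumed Fisher error
p_degenerate = norm.cdf(-1.0 / 3.0, loc=eta_fid, scale=sigma)
print(f"P(eta_d < -1/3) = {p_degenerate:.3f}")   # ~0.028 for these numbers
```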
Collectively, the distributions for \(\eta_{\rm d}\) and \(r_{\rm d}\) obtained from our Fisher analysis could lend insight into what stage of evolution the DWDs are in when we observe them with LISA. We leave this for a future extension of this study. There are a few additional avenues for future work to improve the current analysis. First, we have only used one evolutionary model of a hydrogen-rich donor to estimate errors on the physical parameters of all DWDs with such donors that LISA will observe. To test the robustness of this analysis, one should study how much \(\eta_{\rm d}\) can vary as a function of \(m_{\rm d}\) between different hydrogen-rich donors, and see whether this variability affects our parameter inference. It would also be interesting to confirm and generalize our findings using a full Bayesian Markov-chain Monte-Carlo analysis (Cornish & Littenberg, 2007). One can also include the effect of the tidal synchronization torque, as done in Biscoveanu et al. (2022); Kremer et al. (2015, 2017). Figure 5: _Top row:_ Fractional error on the individual masses for non-degenerate DWD systems as determined via Fisher analysis using LISA with electromagnetic counterparts. _Bottom row, left to right:_ Error on \(\eta_{\rm d}\) and fractional error on \(r_{\rm d}\) given by the same Fisher analysis. Fiducial values for \(\eta_{\rm d}\) and \(r_{\rm d}\) were obtained from the MESA mass-radius relation given in Fig. 1. As in Fig. 4, the accretion torque term was zero in these plots. ## Acknowledgements We thank Emanuele Berti for bringing the discovery of ZTF J0127+5258 to our attention. We all acknowledge support from NASA Grant No. 80NSSC20K0523. S. Y. would also like to acknowledge the UVA Harrison Undergraduate Research Award and the Virginia Space Grant Consortium (VSGC) Undergraduate Research Scholarship Program.
2304.14822
Superconductivity from Orbital-Selective Electron-Phonon Coupling in $A\mathrm{V}_3\mathrm{Sb}_5$
Recent experiments have shown that the phase diagrams of the kagome superconductors $A\mathrm{V}_3\mathrm{Sb}_5$ are strongly impacted by changes in the $c$-axis lattice parameter. Here, we show that $c$-axis deformations impact primarily the Sb apical bonds and thus the overlap between their $p_z$ orbitals. Changes in the latter, in turn, substantially affect low-energy electronic states with significant Sb character, most notably the central electron pocket and the van Hove singularities located above the Fermi level. Based on the orbital-selective character of $c$-axis strain, we argue that these electronic states experience a non-negligible attractive electron-phonon pairing interaction mediated by fluctuations in the apical Sb bonds. We thus propose a multi-band model for superconductivity in $A\mathrm{V}_3\mathrm{Sb}_5$ that includes both the Sb pocket and the V-derived van Hove singularities. Upon comparing the theoretical phase diagram with the experimentally observed vanishing of the $T_c$ dome across a Lifshitz transition of the Sb pocket, we propose that either an $s^{+-}$ or an $s^{++}$ state is realized in $A\mathrm{V}_3\mathrm{Sb}_5$.
Ethan T. Ritz, Henrik S. Røising, Morten H. Christensen, Turan Birol, Brian M. Andersen, Rafael M. Fernandes
2023-04-28T13:04:18Z
http://arxiv.org/abs/2304.14822v2
# Superconductivity from Orbital-Selective Electron-Phonon Coupling in \(A\)V\({}_{3}\)Sb\({}_{5}\)

###### Abstract

Recent experiments have shown that the phase diagrams of the kagome superconductors \(A\)V\({}_{3}\)Sb\({}_{5}\) are strongly impacted by changes in the \(c\)-axis lattice parameter. Here, we show that \(c\)-axis deformations impact primarily the Sb apical bonds and thus the overlap between their \(p_{z}\) orbitals. Changes in the latter, in turn, substantially affect low-energy electronic states with significant Sb character, most notably the central electron pocket and the van Hove singularities located above the Fermi level. Based on the orbital-selective character of \(c\)-axis strain, we argue that these electronic states experience a non-negligible attractive electron-phonon pairing interaction mediated by fluctuations in the apical Sb bonds. We thus propose a multi-band model for superconductivity in \(A\)V\({}_{3}\)Sb\({}_{5}\) that includes both the Sb pocket and the V-derived van Hove singularities. Upon comparing the theoretical phase diagram with the experimentally observed vanishing of the \(T_{c}\) dome across a Lifshitz transition of the Sb pocket, we propose that either an \(s^{+-}\) or an \(s^{++}\) state is realized in \(A\)V\({}_{3}\)Sb\({}_{5}\).

+ Footnote †: These authors contributed equally to this work.

The discovery of superconductivity (SC) in the family of kagome metals \(A\)V\({}_{3}\)Sb\({}_{5}\) (\(A\): K, Rb, Cs) has sparked significant interest, since the interference between different electronic hopping paths in the kagome lattice endows the electronic structure with flat bands, van Hove singularities (vHs), and Dirac points. These features have the potential to promote collective electronic behaviors characteristic of materials with strong electronic correlations or non-trivial band topology [1; 2; 3; 4]. Indeed, upon a cursory examination, the phase diagrams of \(A\)V\({}_{3}\)Sb\({}_{5}\) resemble those of Cu- and Fe-based superconductors, in that SC appears in close proximity to another electronic order, in this case a charge-density wave (CDW) phase, which has been intensely scrutinized both theoretically [5; 6; 7; 8; 9; 10; 11] and experimentally [12; 13; 14; 15; 16]. While the three-dimensional nature of the CDW wave-vector is well established [17; 18; 19; 20], there remains considerable debate as to whether it also breaks time-reversal and rotational symmetries [21; 22; 23; 24; 25; 26; 27; 28]. Studies of the SC properties of \(A\)V\({}_{3}\)Sb\({}_{5}\) have lagged the investigations of the CDW phase, partly because the latter onsets at much higher temperatures (\(T_{\rm CDW}\sim 100\) K) than the former (\(T_{c}\sim 1\) K). There have been reports of both nodeless [29; 30; 31; 32; 33; 34; 35] and nodal [36; 37; 15] gap structures, as well as conflicting accounts of whether the electron-phonon coupling can explain the SC instability [38; 39; 40; 41]. Proposals have been put forward in favor of both conventional and unconventional pairing, most of which focus on the electronic states derived from the V \(d\)-orbitals [5; 6; 7; 42; 43; 44; 45; 46; 47], which give rise to saddle points in the band structure near the M point.
However, recent experimental studies of the doping-temperature and pressure-temperature phase diagrams of CsV\({}_{3}\)Sb\({}_{5}\) have shown that the end of the SC dome in either case coincides with the disappearance of an electron pocket at the \(\Gamma\) point derived from the Sb \(p\)-orbitals, highlighting their relevance for the onset of pairing [48; 49]. In this paper, we combine density functional theory (DFT) calculations and low-energy modeling to show that the Sb degrees of freedom play an essential role for the superconductivity of \(A\)V\({}_{3}\)Sb\({}_{5}\). Our starting point is the empirical observation made in Ref. [50] that the phase diagrams of CsV\({}_{3}\)Sb\({}_{5}\) under pressure and under uniaxial in-plane stress fall essentially on top of each other when \(T_{\rm CDW}\) and \(T_{c}\) are expressed as a function of the \(c\)-axis expansion/contraction. Moreover, thermal expansion measurements report a \(c\)-axis response much more pronounced than the \(a\)-axis response at both \(T_{c}\) and \(T_{\rm CDW}\)[51]. These results are a strong indication that the \(c\)-axis lattice parameter is a key control parameter for both types of electronic order. From DFT, we find that the states primarily affected by changes in the \(c\)-axis are those that have a significant contribution from the apical Sb orbitals: the vHs above the Fermi level at the M point and the electron band centered at the \(\Gamma\) point, whose bottom shifts by hundreds of meV for strain values of a few percent. In contrast, the energies of the vHs below the Fermi level remain nearly unchanged. Such an "orbital-selective" modification of the electronic spectrum provides a mechanism by which the \(c\)-axis lattice parameter can impact the CDW and SC transitions, as empirically seen in Ref. [50]. Based on these results, we construct a low-energy model for the SC state of \(A\)V\({}_{3}\)Sb\({}_{5}\) consisting of a central Sb-dominated electron pocket and a large V-dominated Fermi surface associated with the vHs. By studying the evolution of \(T_{c}\) as the Sb band rises above the Fermi level, we find that only \(s^{++}\) and \(s^{+-}\) states are compatible with the observation of a vanishing \(T_{c}\) as the Sb pocket undergoes a Lifshitz transition, in agreement with experiments [48; 49]. We start by employing DFT to elucidate the impact of \(c\)-axis distortions on the band structure of \(A\)V\({}_{3}\)Sb\({}_{5}\), focusing on \(A=\) Cs for concreteness. For details, see the Supplementary Material (SM). Above \(T_{\rm CDW}\) and at ambient pressure, CsV\({}_{3}\)Sb\({}_{5}\) adopts the \(P6/mmm\) (\(\#191\)) space group, with Cs occupying the \(1a\) Wyckoff site, V the \(3g\) site, and Sb the \(4h\) (apical) and \(1b\) (planar) sites, as illustrated in Figs. 1(a)-(b). The V atoms form a kagome sublattice, whereas the planar (apical) Sb atoms form a hexagonal (honeycomb) sublattice. Besides the lattice parameters, the reduced \(z\) coordinate of the apical Sb atoms is the only free structural parameter. Interestingly, \(z\) increases significantly upon compression of the \(c\)-axis, in a way that approximately preserves the Sb-V bond distances between the apical Sb and the kagome layer, while shortening the Sb-Sb bond distances between apical Sb in adjacent unit cells, as shown in Fig. 1(c).
To address whether this displacement pattern is capable of affecting the low-energy electronic states, we first calculate via DFT the atomically-resolved band structure near the Fermi energy in the undistorted phase [Fig. 1(d)]. In agreement with previous works [48, 49, 52], we find dominant spectral-weight contributions from both types of Sb atoms (planar and apical) to the \(\Gamma\)-point electron band, as well as a significant contribution from the apical Sb to the V-dominated saddle points located above the Fermi level at the M point. It is this hybridization between apical Sb orbitals and V orbitals that endows the corresponding vHs with a significant \(k_{z}\) dispersion, to the point that they even cross the Fermi level along the M-L line. In Fig. 1(e), we show how the low-energy band structure is modified by both compressive (negative) and tensile (positive) \(c\)-axis strain (see also Ref. [53]). We include large absolute values of strain to highlight the effect. Note that all internal lattice parameters are relaxed for a given \(c\)-axis distortion, while keeping the in-plane lattice parameter fixed. The bands that are most affected are those exhibiting a sizable contribution from the apical Sb atoms, such as the vHs located above the Fermi level. Since the CDW is associated with the condensation of phonon modes at the M and L points [9, 17], this provides a possible mechanism by which \(c\)-axis strain can impact the CDW phase. Besides these saddle points, the bottom of the electron pocket at the \(\Gamma\) point moves substantially with \(c\)-axis changes, with shifts of 100 meV for strains of about 1% (see also Fig. S1 in the SM). In contrast, the energies of the M-point vHs located below the Fermi level barely change, reflecting their dominant V character. The large shifts of the bottom of the \(\Gamma\)-point electron band can be attributed to the out-of-phase overlap between the \(p_{z}\) orbitals of apical Sb atoms of neighboring unit cells. This overlap generates a bonding and an anti-bonding state, the latter of which gives rise to the \(\Gamma\)-point electron band. Upon compression of the \(c\)-axis, the orbital overlap increases and, consequently, the energy of the anti-bonding state increases, leading to the observed shift in the bottom of the electron band. The electronic properties of \(A\)V\({}_{3}\)Sb\({}_{5}\) should be impacted not only by static strain, but also by thermal fluctuations associated with the atomic displacement pattern promoted by the \(c\)-axis strain. These fluctuations are expected to be strongly coupled to the electronic states with significant Sb character. Because the displacement pattern associated with the Sb-Sb bonds does not break crystal symmetries, it cannot be decomposed in terms of a single phonon mode. Instead, there are two different phonon modes that modify the bond lengths along the \(c\)-axis without modifying other features in the crystal structure: a longitudinal acoustic phonon mode with out-of-plane dispersion and a \(\Gamma\)-point optical phonon mode that transforms as the \(A_{1g}\) irreducible representation of the point group. Note that the \(A_{1g}\) displacements, represented in Fig. 1(b), also involve changes in the Sb-V bond distances. As shown in Fig. 1(c), the strong \(c\)-axis strain dependence of the displacement associated with this \(A_{1g}\) mode resembles that displayed by the Sb-Sb bond distance. This result confirms that the \(A_{1g}\) phonon mode leads to Sb-Sb bond fluctuations.

Figure 1: (a)-(b) Schematic illustrations of two neighboring \(A\)V\({}_{3}\)Sb\({}_{5}\) unit cells and of the displacement pattern of the apical Sb promoted by the A\({}_{1g}\) phonon mode, respectively. We choose a sign convention such that the displacement of apical Sb towards the kagome plane corresponds to positive A\({}_{1g}\). (c) Changes in the distances between the Sb apical atom and its nearest-neighbor (green triangles) and the V atoms (red circles) as a function of \(c\)-axis strain for CsV\({}_{3}\)Sb\({}_{5}\). The purple squares give the atomic displacements from the equilibrium structure corresponding to a frozen excitation of the optical A\({}_{1g}\) phonon mode. (d) Low-energy band structure of CsV\({}_{3}\)Sb\({}_{5}\). The thickness of the bands is proportional to the total projection onto the \(p_{z}\) orbitals of the Sb atoms, whereas their color is proportional to the projection onto the planar Sb (blue) and apical Sb (red). The \(\Gamma\) band has contributions from both Sb sites. (e) Modifications in the low-energy band structure as a function of \(c\)-axis strain. The momentum L is located above M in the hexagonal Brillouin zone; the other momenta are defined in Fig. 2(a).

The coupling between the electronic states with significant Sb spectral weight and these phonon modes should lead to a non-negligible attractive pairing interaction. To assess its impact, we construct a low-energy model for the SC phase considering a simplified Fermi surface that consists of a small Sb electron-pocket at the \(\Gamma\) point and a large hexagonal-like Fermi surface originating from one of the V vHs [43; 47; 48; 52]. The V band is modeled in terms of a single orbital on the sites of the kagome lattice whereas the Sb band is parameterized as a nearly isotropic dispersion, \(\xi_{\Gamma}(\mathbf{k})=\tilde{t}f_{\Gamma}(\mathbf{k})-\mu_{\Gamma}\); details are given in the SM. The parameter \(\mu_{\Gamma}\) defines the energy of the bottom of the electron band, and is set to \(\mu_{\Gamma 0}=-3.65\tilde{t}\) for the undistorted compound based on comparison with ARPES measurements [54]. Upon decreasing \(\mu_{\Gamma}\), which mimics the effect of hole doping, a Lifshitz transition occurs at \(\mu_{c}\equiv-4\tilde{t}\), where the \(\Gamma\)-point Fermi pocket disappears. For simplicity, we thus define \(\delta\mu\equiv(\mu_{\Gamma}-\mu_{c})/\mu_{c}\). To derive the SC gap equations, we generalize a patch approach commonly employed to describe systems with vHs near the Fermi level [55; 56; 57; 58; 59; 60]. Because of the logarithmic enhancement of the density of states (DOS) at the M-point vHs, it is sufficient to consider only the pairing interactions involving states on the three Fermi surface patches centered at each of the three M points. Symmetry restricts these interactions to two different types: intra-patch \(g_{\rm MM}/N_{\rm M}\) and inter-patch \(g_{\rm M\bar{M}}/N_{\rm M}\), where \(N_{\rm M}\) is the DOS of the M-point patches. We approximate the small \(\Gamma\) pocket by a fourth patch subjected to an intra-patch pairing interaction \(g_{\Gamma\Gamma}/N_{\Gamma}\) and an inter-patch interaction \(g_{\Gamma\rm M}/\sqrt{N_{\rm M}N_{\Gamma}}\) with the M-point patches. Based on our results above, we assume an attractive interaction \(g_{\Gamma\Gamma}<0\) arising from the electron-phonon coupling involving the apical Sb degrees of freedom.
Note that this parametrization of the pairing interaction in terms of the DOS of each patch is not valid close to the Lifshitz transition; we will return to this point later. The resulting four-patch model is schematically shown in Fig. 2(a). Denoting \(\vec{\Delta}\equiv(\Delta_{\rm M_{1}},\Delta_{\rm M_{2}},\Delta_{\rm M_{3}},\Delta_{\Gamma})^{\rm T}\) for the gap functions on the four patches, the corresponding linearized gap equations can be written in matrix form as \(\chi_{\rm pp}\vec{\Delta}=\vec{\Delta}\), with \[\chi_{\rm pp}=-V_{\rm A}\begin{bmatrix}g_{\rm MM}&g_{\rm M\bar{M}}&g_{\rm M\bar{M}}&\eta g_{\Gamma\rm M}\\ g_{\rm M\bar{M}}&g_{\rm MM}&g_{\rm M\bar{M}}&\eta g_{\Gamma\rm M}\\ g_{\rm M\bar{M}}&g_{\rm M\bar{M}}&g_{\rm MM}&\eta g_{\Gamma\rm M}\\ \eta^{-1}g_{\Gamma\rm M}&\eta^{-1}g_{\Gamma\rm M}&\eta^{-1}g_{\Gamma\rm M}&g_{\Gamma\Gamma}\end{bmatrix}, \tag{1}\] where \(V_{\rm A}\equiv\int_{-\Lambda}^{\Lambda}\mathrm{d}\varepsilon\ \tanh(\beta\varepsilon/2)/(2\varepsilon)\approx\ln\left(2e^{\gamma}\beta\Lambda/\pi\right)\) is the particle-particle bubble and \(\eta\equiv\sqrt{N_{\Gamma}/N_{\rm M}}\) is the ratio between the DOS. Here, \(\Lambda\) is the cutoff for the pairing interaction, as shown in Fig. 2(b), and \(\gamma\approx 0.577\) is Euler's constant. \(T_{c}\) is found by imposing that the largest eigenvalue of \(\chi_{\rm pp}\) is 1. The two leading eigenvalues are \[\lambda_{E_{2g}} =\left(g_{\rm M\bar{M}}-g_{\rm MM}\right)\ln\left(2e^{\gamma}\beta\Lambda/\pi\right), \tag{2}\] \[\lambda_{A_{1g}} =\frac{1}{2}\left(\tilde{g}-2g_{\rm M\bar{M}}-g_{\rm MM}-g_{\Gamma\Gamma}\right)\ln\left(2e^{\gamma}\beta\Lambda/\pi\right), \tag{3}\] where \(\tilde{g}\equiv\sqrt{12g_{\Gamma\rm M}^{2}+\left(g_{\Gamma\Gamma}-2g_{\rm M\bar{M}}-g_{\rm MM}\right)^{2}}\). Analysis of the eigenvectors of \(\lambda_{E_{2g}}\), which is doubly-degenerate, shows that they describe \(d_{x^{2}-y^{2}}\)-wave and \(d_{xy}\)-wave SC states, which transform as the 2D irreducible representation \(E_{2g}\) of the point group \(D_{6h}\) (see SM). As discussed elsewhere [55], going beyond the linearized gap equation reveals that the linear combination \(d_{x^{2}-y^{2}}\pm id_{xy}\) minimizes the free energy, leading to a time-reversal symmetry-breaking SC phase. The second eigenvalue \(\lambda_{A_{1g}}\) corresponds to a pairing state that transforms as the trivial representation \(A_{1g}\), corresponding to two isotropic gaps \(\Delta_{\Gamma}\) and \(\Delta_{\rm M}\). While the symmetry of this state is \(s\)-wave, there are two qualitatively different possible gap configurations depending on the signs of \(\Delta_{\Gamma}\) and \(\Delta_{\rm M}\): an \(s^{++}\) state, in the case of equal signs, or an \(s^{+-}\) state, in the case of opposite signs - similar to that realized in the Fe-based superconductors [61]. Only positive eigenvalues correspond to attractive pairing channels. For the \(E_{2g}\) channel \((d+id)\), \(\lambda_{E_{2g}}>0\) requires a strong enough inter-M-patch repulsion to overcome the intra-M-patch repulsion, \(g_{\rm M\bar{M}}>g_{\rm MM}>0\), as found in renormalization group studies of the three-patch model [55]. As for the \(A_{1g}\) channel (\(s^{++}\) or \(s^{+-}\)), \(\lambda_{A_{1g}}>0\) requires either a strong inter-patch interaction \(g_{\Gamma\rm M}\), which can be repulsive or attractive, or a strong intra-\(\Gamma\)-patch attraction \(g_{\Gamma\Gamma}<0\).
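As a minimal numerical illustration of this eigenvalue problem (not the calculation actually used in the paper, whose Lifshitz-transition modifications are in the SM), the sketch below builds the coupling matrix of Eq. (1) and solves the \(T_{c}\) condition, using the orange-symbol parameter set quoted in the caption of Fig. 2; the DOS ratio \(\eta=1\) and the cutoff \(\Lambda\) are assumptions made for the example.

```python
import numpy as np

# Orange-symbol couplings from the caption of Fig. 2(c):
# (g_MMbar, g_GM, g_GG, g_MM) = (0.1, 0.015, -0.15, 0)
g_MMbar, g_GM, g_GG, g_MM = 0.1, 0.015, -0.15, 0.0
eta = 1.0      # sqrt(N_Gamma / N_M); an assumption for this sketch
Lambda = 1.0   # pairing cutoff, in arbitrary energy units (assumed)

# Coupling matrix G of Eq. (1), so that chi_pp = -V_A * G
G = np.array([
    [g_MM,       g_MMbar,    g_MMbar,    eta * g_GM],
    [g_MMbar,    g_MM,       g_MMbar,    eta * g_GM],
    [g_MMbar,    g_MMbar,    g_MM,       eta * g_GM],
    [g_GM / eta, g_GM / eta, g_GM / eta, g_GG],
])

# Leading eigenvalue of -G; only lam > 0 gives an attractive channel
lam = np.max(np.linalg.eigvals(-G).real)

# Tc from V_A(Tc) = ln(2 e^gamma Lambda / (pi Tc)) = 1 / lam
gamma_E = 0.5772156649
Tc = (2.0 * np.exp(gamma_E) * Lambda / np.pi) * np.exp(-1.0 / lam)
print(f"lambda_max = {lam:.4f}, Tc/Lambda = {Tc:.3e}")
```

With these values the leading channel is the \(A_{1g}\) one, \(\lambda_{A_{1g}}=\tfrac{1}{2}(\tilde{g}-2g_{\rm M\bar{M}}-g_{\rm MM}-g_{\Gamma\Gamma})\approx 0.15\), matching Eq. (3).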
Note that the sign of \(g_{\Gamma\rm M}\) does not impact the eigenvalue \(\lambda_{A_{1g}}\), but only whether the eigenvector corresponds to the \(s^{++}\) (\(g_{\Gamma\rm M}<0\)) or the \(s^{+-}\) (\(g_{\Gamma\rm M}>0\)) state (see SM). Figure 2(c) shows the SC phase diagram in the \(\{g_{\rm M\bar{M}},g_{\Gamma\Gamma},|g_{\Gamma\rm M}|\}\) parameter space. As anticipated, the \(d+id\) state is only stabilized by a dominant repulsive interaction \(g_{\rm M\bar{M}}>0\), whereas attractive interactions of any kind favor an \(s\)-wave state. An increase in the magnitude of the inter-patch interaction \(g_{\Gamma\rm M}\), be it attractive or repulsive, further expands the regime where the \(s\)-wave state is realized. While this plot is obtained for \(g_{\rm MM}=0\), the main effect of a non-zero \(g_{\rm MM}\) is in the case where it is repulsive, as it suppresses the regime in which SC is stabilized (see SM). To elucidate which of these SC regimes are consistent with the experimental observation of a suppression of \(T_{c}\) across the Lifshitz transition [48; 49], we compute the evolution of \(T_{c}\) as \(\mu_{\Gamma}\) approaches the critical value \(\mu_{c}\) for which the electron-band bottom crosses the Fermi level (see inset of Fig. 2(d)). Near the Lifshitz transition, where \(|\mu_{\Gamma}-\mu_{c}|\ll\Lambda\), the gap equations (1) have to be modified, as it is not justified to remove the DOS from the integrand of the particle-particle bubble [62; 63]. The modified \(\chi_{\rm pp}\) is shown in the SM. By numerically computing its eigenvalues, we obtain \(T_{c}(\mu_{\Gamma})\) for the various regimes in Fig. 2(c). Because the \(d+id\) state is insensitive to the \(\Gamma\) pocket, its \(T_{c}\) does not change across the Lifshitz transition. Meanwhile, the behavior of \(T_{c}\) of the \(s\)-wave state depends on the nature of the dominant pairing interaction. If the \(s\)-wave state is driven by large attractive interactions involving the M patches only, \(g_{\rm MM},\,g_{\rm M\bar{M}}<0\), \(T_{c}\) is not significantly changed at \(\mu_{c}\). On the other hand, for dominant intra-\(\Gamma\)-patch attraction \(g_{\Gamma\Gamma}<0\) or dominant inter-patch \(g_{\Gamma\rm M}\) of either sign, \(T_{c}\) is strongly suppressed across the Lifshitz transition. This is shown in Fig. 2(d) for the parameter values corresponding to the orange symbol (dominant \(g_{\Gamma\Gamma}\)) and the purple symbol (dominant \(g_{\Gamma\rm M}\)) in Fig. 2(c). Additional \(T_{c}(\mu_{\Gamma})\) plots for other parameter values are shown in the SM. A large attractive intra-pocket pairing interaction \(g_{\Gamma\Gamma}\) could be mediated by the Sb-Sb bond fluctuations discussed above. On the other hand, CDW fluctuations with wave-vector M could boost \(g_{\Gamma\rm M}\), rendering it repulsive (attractive) if the CDW breaks (preserves) time-reversal symmetry. However, these CDW fluctuations should also enhance \(g_{\rm M\bar{M}}\), since the M patches are connected by the same wave-vector. Because the latter couples states with similar orbital compositions (V-V orbitals), whereas \(g_{\Gamma\rm M}\) couples states with different orbital compositions (Sb-V orbitals), the CDW boost of \(g_{\rm M\bar{M}}\) is expected to be larger, particularly if the CDW is enhanced by the vHs. Interestingly, the Sb-Sb bond fluctuations could switch this hierarchy if the relevant vHs is one of those located above the Fermi level. Indeed, as shown in Figs.
1(d)-(e), those vHs have a sizable Sb orbital weight, and as such should be impacted by the phonon modes associated with Sb-Sb bond displacements. We now discuss the experimental implications of our results. All three states obtained in our model, \(d+id\), \(s^{+-}\), and \(s^{++}\), are fully gapped, which makes it challenging to distinguish between them empirically. The suppression of \(T_{c}\) across the Sb electron-pocket Lifshitz transition shown in Fig. 2(d) is qualitatively consistent with the experimental results of Refs. [48; 49], suggesting that either an \(s^{+-}\) or an \(s^{++}\) state is realized, at least in the region of the phase diagram where the CDW is absent. These \(s\)-wave states are also compatible with the robustness of \(T_{c}\) against impurities reported in Ref. [32] for CsV\({}_{3}\)Sb\({}_{5}\) and with the observed multi-gap structure of the SC state seen in the parent compounds, where SC coexists with CDW. If one of these gaps is small, it may reconcile reports favoring both a nodeless and a nodal pairing state [29; 30; 31; 32; 33; 34]. Alternatively, coexistence of an \(A_{1g}\) SC state with CDW may lead to nodes in the reconstructed Fermi surface [64]. As for unconventional SC, even if the \(d+id\) state is subleading with respect to the \(s^{+-}/s^{++}\) channels, interesting mixed states can emerge when the ground states are close in energy. These include not only an \(s+d\) state that has two-fold anisotropy, but also an \(s+\mathrm{e}^{\mathrm{i}\theta}(d+id)\) state that breaks time-reversal symmetry, as discussed in Refs. [58; 65]. In summary, we showed that changes in the \(c\)-axis lattice parameter of \(A\)V\({}_{3}\)Sb\({}_{5}\) lead to significant changes in the electronic dispersion promoted by the apical Sb \(p_{z}\) orbitals. Not only the energies and the \(k_{z}\)-dispersion of the vHs located above the Fermi energy are modified, but also the bottom of the \(\Gamma\)-point electron band shifts strongly with \(c\)-axis strain. We proposed that fluctuations of the Sb-Sb bonds promote a non-negligible electron-phonon pairing interaction for states with sizable Sb orbital character, which includes both the central electron pocket as well as the saddle points located above the Fermi level. The resulting \(s^{+-}\) and \(s^{++}\) states are consistent with several experimental observations, including the full suppression of \(T_{c}\) across the Lifshitz transition involving the Sb pocket.

Figure 2: (a) Pairing interactions of the four-patch model involving fermions at the M\({}_{i}\) and \(\Gamma\) points. The Fermi surface is shown in gray. (b) Tight-binding dispersions, highlighting the Lifshitz transition of the \(\Gamma\)-point electron pocket as a function of the parameter \(\delta\mu\equiv(\mu_{\Gamma}-\mu_{c})/\mu_{c}\); \(\Lambda\) is the pairing interaction cutoff. (c) SC phase diagram of the four-patch model (away from the Lifshitz transition) as a function of the interactions shown in panel (a); \(s\)-wave corresponds to either \(s^{++}\) or \(s^{+-}\) states depending on the sign of \(g_{\Gamma\rm M}\). (d) \(T_{c}\) as a function of the parameter \(\delta\mu\) that tunes the \(\Gamma\)-pocket across the Lifshitz transition at \(\delta\mu=0\), as shown in the insets. The interaction parameters, marked by the orange and purple symbols shown in (c), are \((g_{\rm M\bar{M}},g_{\Gamma\rm M},g_{\Gamma\Gamma},g_{\rm MM})=(0.1,0.015,-0.15,0)\) for the orange lines and \((0.07,0.1,-0.03,0)\) for the purple lines. \(T_{c0}\) is the SC transition temperature for the first set of parameters (orange) at \(\mu_{\Gamma 0}=-3.65\tilde{t}\).

We thank N. Ni, Z. Wang, and S. Wilson for fruitful discussions. ETR and TB were supported by the NSF CAREER grant DMR-2046020. HSR was supported by research Grant No. 40509 from VILLUM FONDEN. MHC has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101024210. RMF was supported by the Air Force Office of Scientific Research under Award No. FA9550-21-1-0423.
2306.02839
Measuring Galaxy Asymmetries in 3D
One of the commonly used non-parametric morphometric statistics for galaxy profiles and images is the asymmetry statistic. With an eye to current and upcoming large neutral hydrogen (HI) surveys, we develop a 3D version of the asymmetry statistic that can be applied to datacubes. This statistic is more resilient to variations due to the observed geometry than 1D asymmetry measures, and can be successfully applied to lower spatial resolutions (3-4 beams across the galaxy major axis) than the 2D statistic. We have also modified the asymmetry definition from an `absolute difference' version to a `squared difference' version that removes much of the bias due to noise contributions for low signal-to-noise observations. Using a suite of mock asymmetric cubes we show that the background-corrected, squared difference 3D asymmetry statistic can be applied to many marginally resolved galaxies in large wide-area HI surveys such as WALLABY on the Australian SKA Pathfinder (ASKAP).
N. Deg, M. Perron-Cormier, K. Spekkens, M. Glowacki, S. -L. Blyth, N. Hank
2023-06-05T12:44:09Z
http://arxiv.org/abs/2306.02839v1
# Measuring Galaxy Asymmetries in 3D ###### Abstract One of the commonly used non-parametric morphometric statistics for galaxy profiles and images is the asymmetry statistic. With an eye to current and upcoming large neutral hydrogen (H i) surveys, we develop a 3D version of the asymmetry statistic that can be applied to datacubes. This statistic is more resilient to variations due to the observed geometry than 1D asymmetry measures, and can be successfully applied to lower spatial resolutions (3-4 beams across the galaxy major axis) than the 2D statistic. We have also modified the asymmetry definition from an 'absolute difference' version to a'squared difference' version that removes much of the bias due to noise contributions for low signal-to-noise observations. Using a suite of mock asymmetric cubes we show that the background-corrected, squared difference 3D asymmetry statistic can be applied to many marginally resolved galaxies in large wide-area H i surveys such as WALLABY on the Australian SKA Pathfinder (ASKAP). keywords: galaxies: general - radio lines: galaxies - software: data analysis ## 1 Introduction One of the first ideas explored in extragalactic astronomy was how to classify galaxies based on their morphology. The most well-known are the Hubble schema (Hubble, 1926) and the extended de Vaucouleurs system (de Vaucouleurs, 1959), which classify galaxies into a few major classes; spirals, ellipticals and lenticulars, and irregulars. The Hubble sequence separates these classifications into early-types (ellipticals and lenticulars) and late-types (spirals, barred or not, and irregulars) based on ideas of galaxy evolution at that time. While this early association has been shown to be broadly incorrect, the connection between visual appearance/classification and galaxy formation/evolution has continued to the present. For instance, the morphology of a galaxy has been found to correlate with the gas content (Roberts and Haynes, 1994), star formation rate (Wuyts et al., 2011; Leslie et al., 2020), star formation efficiency (Saintonge et al., 2012; Ellison et al., 2018), metallicity (Ellison et al., 2008), and more. As observations have become more sensitive and data volumes have expanded, many new galaxies have been discovered that defy simple classification into early and late types, leading to the number of irregulars increasing exponentially. One approach, which has been used with great success, is to add new galaxy classes (see Buta (2013) for a review). Regardless of the classification scheme devised, increased data volumes have made the visual classification of all observed galaxies very difficult. At this point, no single person can visually classify all the galaxies detected in a single large survey. One possible solution to this problem is the use of 'citizen science' like the Galaxy Zoo project (Willett et al., 2013, 2017). Alternatively, approaches involving machine learning have also shown a great deal of promise (for some examples see Huertas-Company et al., 2015; Barchi et al., 2020; Walmsley et al., 2022 and references therein). A different approach is to use quantitative non-parametric measurements to quantify galaxy morphologies. One of the most successful approaches is the use of the CAS parameters (Concentration, Asymmetry, and Smoothness) pioneered by Conselice et al. (2000) and Conselice (2003). These are often coupled with the Gini and \(M_{20}\) parameters of Lotz et al. (2004). 
The calculation of these measures can be automated rather simply, allowing them to be applied to large surveys with relative ease. For example, the CAS parameters have been used to distinguish between early and late type galaxies (Conselice, 2003; Lotz et al., 2004). Rodriguez-Gomez et al. (2019) compared mock images from the Illustris TNG simulation (Marinacci et al., 2018; Naiman et al., 2018; Nelson et al., 2018; Springel et al., 2018) to those of the Pan-STARRS survey (Chambers et al., 2016) and found that the overall morphologies of the simulated galaxies match observations, but there are some disagreements in the morphology-color and morphology-size relations. Pearson et al. (2019) used these statistics to train machine learning algorithms to identify mergers and examine the effect of merging on the star formation rate. Pearson et al. (2022) applied this technique to the HSC-NEP survey (Hyper Suprime-Cam North Ecliptic Pole; Goto et al., 2017; Qi et al., 2021) to generate a merger catalogue for that field. Additionally, Bellhouse et al. (2022) explored the use of these parameters to analyze ram-pressure-stripping and post-starburst galaxies, as well as to investigate the connection between AGN activity, star formation, and the disturbances in the galaxies (Zhao et al., 2022). The success of the asymmetry statistic at quantifying optical galaxy morphologies suggests that it might be equally successful at quantifying the morphology of the H i content of galaxies. H i generally extends further from the galaxy centre than the stellar disk (Koribalski et al., 2018), making it more susceptible to disturbances such as interactions and mergers (Bok et al., 2019; Deg et al., 2020), ram pressure stripping (Gunn and Gott, 1972), tidal effects (Haynes et al., 1984), accretion (Sancisi et al., 2008), and outflows (Fraternali, 2017). However, measuring the asymmetry statistic using H i imaging has proven to be quite difficult. Large samples of resolved H i images are scarce, and the dynamic range of these moment maps can be orders of magnitude lower than that of optical images. Moreover, the available H i moment maps tend to have lower signal-to-noise, \(S/N\), and angular resolution than optical imaging. However, surveys like MIGHTEE-HI (the H i emission project for the MeerKAT International GHz Tiered Extragalactic Exploration, Jarvis et al., 2016; Maddox et al., 2021), WALLABY (the Widefield ASKAP L-band Legacy All-sky Blind surveY, Koribalski et al., 2020), and the WSRT-APERTIF imaging survey (the Westerbork Synthesis Radio Telescope - APERture Tile In Focus; Adams et al., 2022) will change this as they detect orders of magnitude more galaxies that are spatially resolved in H i. One of the first attempts at measuring the asymmetry statistic, along with a set of other morphometric measures, in H i data was by Holwerda et al. (2011). They examined THINGS observations (The H i Nearby Galaxy Survey; Walter et al., 2008) and found that the asymmetry was particularly sensitive to disturbances in H i disks, and carried out follow-up studies focussing on other surveys to further explore this phenomenon (Holwerda et al., 2011, 2014). More recently, Reynolds et al. (2021) applied the 2D asymmetry measurement to a sample of WALLABY pilot observations. They identified a number of particularly asymmetric galaxies and examined them to determine if their disturbances were due to ram pressure stripping.
As discussed above, one of the issues with calculating the asymmetry of H i moment maps is the lower \(S/N\) and angular resolution of radio observations compared to optical imaging. In this regime, applying the correct background subtraction becomes increasingly important (Giese et al., 2016; Reynolds et al., 2020; Thorp et al., 2021). To understand these issues, Giese et al. (2016) examined a suite of mock moment 0 images. They found that the low \(S/N\) of typical H i observations introduced a bias in the asymmetry calculation and that the traditional background subtraction from Conselice et al. (2000) and Conselice (2003) overcorrected the results. As such, they developed an empirical background correction for the asymmetry. Similarly, analyzing mock IFU observations from the Illustris (Vogelsberger et al., 2014; Genel et al., 2014; Sijacki et al., 2015) and Illustris TNG simulations, Thorp et al. (2021) developed an alternate background correction for stellar mass maps. Bilimogga et al. (2022) used the mock H i images and profiles constructed from the EAGLE simulations (Schaye et al., 2015; Crain et al., 2015) to investigate the effect of noise on the measured asymmetry and found that relatively high \(S/N\) and resolution were required for robust measurements (when the measurement has not been corrected for the noise). Another issue with calculating the asymmetry of H i moment maps is the relative paucity of data. However, there are orders of magnitude more H i velocity profiles than there are images of H i disks due to large single dish surveys like HIPASS (HI Parkes All Sky Survey, Barnes et al., 2001) and ALFALFA (Arecibo Legacy FAST ALFA, Giovanelli et al., 2005; Haynes et al., 2018). As such, quantifying the asymmetry of 1D velocity profiles has proven to be quite fruitful. One of the first profile asymmetry statistics to be adopted is the 'lopsidedness' measure proposed by Peterson & Shostak (1974), which compares the ratio of the flux on the approaching and receding sides of a profile. This statistic has been used in a variety of different studies that highlight the many drivers of H i asymmetry. For instance, Espada et al. (2011) found that a significant number of isolated galaxies are asymmetric, while Bok et al. (2019) found close pairs tend to be more asymmetric than isolated galaxies. This is in contrast to Zuo et al. (2022) who found that massive merger galaxies have similar levels of asymmetry as their non-merger sample. Thus, while mergers may drive asymmetry, there must be both a mass dependence and other drivers of asymmetry. Watts et al. (2020) utilized lopsidedness to analyze the xGASS survey (the extended GALEX Arecibo SDSS Survey, Catinella et al., 2018). They found that when they properly accounted for the effect of noise, 37% of the galaxies detected were asymmetric. Additionally, they found that satellite galaxies tended to be more asymmetric than central galaxies, indicating that environmental processes are a key driver of asymmetry. These results were followed up by Watts et al. (2020) who explored the lopsidedness of velocity profiles constructed from the Illustris TNG simulation. They confirmed that, in the simulation, the satellite galaxy population tends to be more asymmetric than central galaxies. While the excess asymmetry is driven by ram-pressure stripping in the satellites, the general drivers of asymmetry affect both populations of galaxies. More recently, Watts et al. 
(2021) examined the lopsidedness of velocity profiles from the ALFALFA survey and the xGASS survey. They found that asymmetric galaxies tend to be more gas-poor than symmetric galaxies with similar stellar masses. This is only a small sampling of the increasingly large efforts aimed at using 1D profile asymmetries to characterize galaxies. Recently Deg et al. (2020) and Reynolds et al. (2020) developed a new 'channel-by-channel' 1D asymmetry statistic for velocity profiles that is analogous to the standard 2D asymmetry statistic. While the lopsidedness/flux ratio measure is an integral quantity, this new measure is sensitive to more local disturbances. Moreover, its similarity to the 2D asymmetry statistic allows for easier comparison of 1D and 2D measurements from simulations and observations. Deg et al. (2020) found that this statistic provided better agreement with visual classifications of asymmetric profiles than the lopsidedness statistic. Reynolds et al. (2020) applied this statistic, along with other asymmetry measures, to a sample of galaxies from the LVHIS (Local Volume H i Survey, Koribalski et al., 2018), VIVA (VLA Imaging of Virgo Spirals in Atomic Gas, Chung et al., 2009), and HALOGAS (The Westerbork Hydrogen Accretion in LOcal GAlaxieS, Heald et al., 2011) surveys, and found that the measured asymmetry does depend on the environment, but did not find a strong trend with H i mass. More recently Glowacki et al. (2022) explored the use of this statistic in the SIMBA cosmological simulation (Davé et al., 2019) and found the H i mass has the strongest correlation with the profile asymmetry. When the H i mass is controlled, highly asymmetric galaxies were found to be more gas poor and have larger specific star formation rates than their symmetric counterparts. The 2D and 1D 'channel-by-channel' asymmetry statistics are quite similar to each other and have been used to analyze H i observations from a variety of different surveys as well as simulations. However, modern H i surveys are generally interferometric in nature, and the most common data product is a 3D datacube that contains both spatial and spectral information simultaneously. Given this, it is worthwhile to extend the asymmetry statistic to 3D data cubelets (a data cube containing only a single galaxy detection). Cubelets contain both morphological and kinematic information, which allows for a larger variety of disturbances to be detected in a single measurement than either a 2D moment 0 map or a 1D velocity profile. Moreover, the noise of a cubelet tends to be uniform, which is simpler to account for than the non-uniform noise structure of its derived data products. In addition, 1D and 2D asymmetries tend to be unreliable in the low resolution, low \(S/N\) regime that comprises most detections from wide-field H i surveys (e.g. Giese et al., 2016; Reynolds et al., 2020; Bilimogga et al., 2022). As we will show in this paper, moving to 3D allows the asymmetry statistic to be applied to lower resolution and \(S/N\) detections. In this paper we introduce the 3D asymmetry statistic. In addition, we show that a switch from an 'absolute difference' asymmetry to a 'squared difference' asymmetry allows the effect of noise on a measurement to be quantified and accounted for in a significantly more rigorous fashion. In Section 2 we derive 3D asymmetry measures using both absolute differences and our preferred squared difference formalism.
Section 3 explores how the 1D, 2D, and 3D statistics depend on the observed geometry of a galaxy as well as the resolution of an observation. Section 4 shows the effect of noise on asymmetry measures, while Section 5 applies the 3D asymmetry statistic to a mock WALLABY-like sample. Finally Section 6 provides a discussion of these measures and our conclusions. ## 2 Asymmetry statistics The 2D asymmetry statistic was initially designed to be applied to optical images (Schade et al., 1995; Conselice et al., 2000; Conselice, 2003) but it can be applied to any two dimensional density or flux map; it has also been modified for 1D velocity profile analysis (Deg et al., 2020; Reynolds et al., 2020). This section will first review both the 2D and 1D asymmetry definitions. It will then describe the 3D asymmetry using the standard 'absolute difference' definition, as well as a new'squared difference' definition that can account for noise in a more rigorous fashion. ### 2d The 2D asymmetry has been defined in a few different ways. The most common is from Conselice et al. (2000): \[A_{2D,abs}=\frac{\sum_{j,k}|f_{j,k}-f_{-j,-k}|}{\sum_{j,k}|f_{j,k}+f_{-j,-k}| }\, \tag{1}\] where \((j,k)\) are the pixel indices relative to a center of rotation and \(f_{j,k}\) is the flux of the pixel \((j,k)\), and the summation is done over all pixels in a masked region of an image. Effectively, Eq. 1 computes a pixel-by-pixel normalized difference between an image and that same image rotated by \(180^{\circ}\). An alternate definition used in Conselice (2003) and more recently in Abruzzo et al. (2018) drops the absolute sign in the denominator: \[A_{2D,abs}=\frac{\sum_{j,k}|f_{j,k}-f_{-j,-k}|}{\sum_{j,k}(f_{j,k}+f_{-j,-k})}\, \tag{2}\] The advantage of Eq. 2 is that it allows for a slightly simpler calculation of the effect of noise (although it is still difficult as discussed in Sec. 2.3 and Sec. 4). For the remainder of this paper we will utilize Eq. 2 rather than Eq. 1 when computing 'absolute difference' asymmetries. The 2D asymmetry was originally designed for optical observations where using a pixel as the rotation point is reasonable (see the discussion on centering in Conselice et al., 2000). However, it can be generalized to use arbitrary coordinates provided that some method of interpolation is applied to the image. In Conselice et al. (2000) (and many other implementations), the center point is found by minimizing the asymmetry. This is a relatively straightforward approach, but the point that minimizes the asymmetry does not necessarily correspond to a physically meaningful location like the dynamical center of the galaxy, which can be estimated from the 3D datasets that are the focus of this work (e.g. Deg et al., 2022; Westmeier et al., 2022). ### 1D and 3D Asymmetries Eq. 1 can be generalized as \[A_{abs}=\frac{P_{abs}}{Q_{abs}}\, \tag{3}\] where \[P_{abs}=\sum_{i}^{N}|f_{i}-f_{-i}|\ \, \tag{4}\] \[Q_{abs}=\sum_{i}^{N}\left(f_{i}+f_{-i}\right). \tag{5}\] In this notation, the 2D sum over \((j,k)\) pixels in Eq. 1 is replaced with a generalized sum over all \(N\)_pixel pairs_ across one or more dimensions, with \(i\) representing the pixel index; in 2D, \(f_{i}=f_{j,k}\) and \(f_{-i}=f_{-j,-k}\). The idea of pair indexing rather than pixel indexing reveals the 1D asymmetry statistic clearly. Rather than using pixel indices as in Eq. 1, the profile asymmetry of Deg et al. (2020) and Reynolds et al. (2020) simply uses pairs of velocity channels. 
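To make the pair-sum formalism of Eqs. 3-5 concrete, the sketch below evaluates the absolute-difference asymmetry of Eq. 2 for a 2D map, rotating by \(180^{\circ}\) about the grid centre by flipping both axes (summing over every pixel counts each pair twice, but this cancels in the ratio). This is a minimal illustration written for this discussion, not the 3DACS implementation described later.

```python
import numpy as np

def asymmetry_2d_abs(image, mask=None):
    """Absolute-difference 2D asymmetry (Eq. 2), rotating the image
    180 degrees about the grid centre by reversing both axes."""
    rotated = image[::-1, ::-1]
    if mask is not None:
        # Symmetrize the boolean mask so that the mask itself
        # does not contribute to the measured asymmetry
        m = mask & mask[::-1, ::-1]
        image, rotated = image * m, rotated * m
    num = np.abs(image - rotated).sum()
    den = (image + rotated).sum()
    return num / den

# Toy example: a lopsided 2D Gaussian blob (m=1 distortion)
y, x = np.mgrid[-32:32, -32:32]
blob = np.exp(-(x**2 + y**2) / 100.0) * (1.0 + 0.3 * x / 32.0)
print(f"A_2D,abs = {asymmetry_2d_abs(blob):.3f}")
```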
The only difference between \(A_{2D,abs}\) and \(A_{1D,abs}\) mathematically is the mapping of a flux pair, \(i\), to a pair of velocity channels rather than to a pair of pixels. Instead of pairing pixels around a particular rotation point, the 1D asymmetry pairs channels matched across some reference velocity. Given this notation, moving to 3D is relatively straightforward. Rather than a pixel (2D) or velocity (1D), the rotation point is some cell inside the 3D cubelet and the \(i\)'th flux pair maps to two locations that are \(180^{\circ}\) apart spatially and equally distant from the reference velocity. Nonetheless, working in 3D introduces some additional complications. In particular, the construction of a mask is significantly more complex. Related to this issue is the symmetry of the mask itself. In 3D, masks are usually constructed using complex algorithms that rely upon \(S/N\) levels. For instance, SoFiA-2 (Westmeier et al., 2021) is a commonly used tool for detecting extragalactic H i sources and it constructs masks containing each source. These masks are rarely symmetric, which poses a problem for asymmetry calculations as an asymmetric mask itself will affect the value of the asymmetry measured (see Sec. 4.1 for an example). For this reason, we recommend that any 3D mask be symmetrized about the chosen rotation point. This is trivial when using a specific rotation point. However, this symmetrization step will significantly slow down approaches that attempt to find the point that minimizes the asymmetry (e.g. Conselice et al., 2000; Deg et al., 2020), as the mask will need to be recalculated about each trial rotation point. During preliminary tests with our asymmetry implementation (see Sec. 6), we found that re-symmetrizing the mask when minimizing the asymmetry led to \(\approx 10\) times longer runtimes. Before moving to a discussion of the noise, it is worth noting one other advantage of the notation shown in Eqs. 3-5. A physically meaningful rotation point, like the dynamical center, does not need to lie at a particular pixel/channel/cell. Rather than considering integer pairs of pixels, it is possible to interpolate to arbitrary points within a pixel/channel/cell. Following this, the pairs are simply defined as \(x_{i}-x_{cent}=x_{cent}-x_{-i}\), \(y_{i}-y_{cent}=y_{cent}-y_{-i}\), and, as mentioned above, \(v_{i}-v_{sys}=v_{sys}-v_{-i}\) for the appropriate dimensions, where \((x_{cent},y_{cent},v_{sys})\) is the chosen rotation point.

### Dealing with noise

The entire discussion thus far has defined asymmetries in 1, 2, and 3 dimensions for noiseless data. In addition to introducing uncertainty (see Sec. 4.1), noise also causes a bias in the measured asymmetry. When the data are noisy, the observed flux of some channel/pixel/cell can be written as \(F_{i}=f_{i}+g_{i}\), where \(f_{i}\) is the signal as above, and \(g_{i}\) is the contribution of the noise to that pixel. In this case the measured asymmetry, \(C_{m}\), becomes \[C_{m}=\frac{P_{m}}{Q_{m}}\, \tag{6}\] where \[P_{m}=\sum_{i}^{N}\left|f_{i}-f_{-i}+g_{i}-g_{-i}\right|\, \tag{7}\] and \[Q_{m}=\sum_{i}^{N}(f_{i}+f_{-i}+g_{i}+g_{-i}). \tag{8}\] If the noise is uniform, then \(g_{i}\) can be treated as a random draw from a distribution with a mean of zero. For a sufficiently large number of pairs and low levels of noise (regardless of the precise noise distribution), Eq. 8 reduces to Eq. 5. Unfortunately the effect of the noise cannot be easily separated out in Eq. 7 due to the non-linearity of the absolute value function. The typical approach to account for this is to approximate the noise-corrected asymmetry as \[A_{m}\approx C_{m}-\frac{B_{abs}}{Q_{m}}\, \tag{9}\] where \[B_{abs}=\sum_{i}^{N}\left|g_{i}-g_{-i}\right|. \tag{10}\] It is reasonable to adopt a Gaussian noise distribution for well-calibrated interferometric radio observations, so \(g_{i}\) is a random draw from a Gaussian with a mean of zero and a standard deviation of \(\sigma\). For Gaussian noise, \(B_{abs}\) can be simplified to \[B_{abs}\approx\frac{2\sigma N}{\sqrt{\pi}}. \tag{11}\] As pointed out in Giese et al. (2016) and Thorp et al. (2021), this approach, while quite successful at high \(S/N\), results in an over-subtraction for lower \(S/N\) observations. Thorp et al. (2021) noted that, due to the rules of modular subtraction \(|R|-|S|\leq|R-S|\), this over-subtraction is expected, but it can be quite severe. There are numerous methods that have been developed to deal with this bias or to determine the \(S/N\) at which this bias becomes important (Giese et al., 2016; Watts et al., 2020; Reynolds et al., 2020; Thorp et al., 2021). An alternate approach is to redefine asymmetry using the _squared difference_ of flux pairs rather than the _absolute difference_ in Eq. 3. This is very similar to the 'rms' asymmetry introduced in Conselice et al. (2000). In the absence of noise we can rewrite the asymmetry equation as \[A_{sq}^{2}=\frac{P_{sq}}{Q_{sq}}\, \tag{12}\] where \[P_{sq}=\sum_{i}^{N}\left(f_{i}-f_{-i}\right)^{2} \tag{13}\] \[Q_{sq}=\sum_{i}^{N}\left(f_{i}+f_{-i}\right)^{2} \tag{14}\] The adjustment of the asymmetry equation denominator to \(Q_{sq}\) is necessary to keep \(A_{sq}\) unitless and independent of the number of pairs. For noisy data, \(P_{sq,m}\) becomes \[P_{sq,m}=\sum_{i}^{N}\left(f_{i}-f_{-i}+g_{i}-g_{-i}\right)^{2}\, \tag{15}\] and the measured denominator, \(Q_{sq,m}\), becomes \[Q_{sq,m}=\sum_{i}^{N}\left(f_{i}+f_{-i}+g_{i}+g_{-i}\right)^{2}. \tag{16}\] Expanding and rearranging \(P_{sq,m}\) slightly gives \[P_{sq,m}=\left[\sum_{i}^{N}\left(f_{i}-f_{-i}\right)^{2}\right]+\left[\sum_{i}^{N}\left(g_{i}-g_{-i}\right)^{2}\right]\\ +2\left[\sum_{i}^{N}\left(f_{i}-f_{-i}\right)\left(g_{i}-g_{-i}\right)\right]. \tag{17}\] The first term in the equation above is simply \(P_{sq}\), while the third goes to zero for sufficiently large \(N\) and small \(\sigma\). The second term, \(B_{sq}\), is the contribution of the noise to \(P_{sq,m}\) and can be approximated as \[B_{sq}=\left\langle\sum_{i}^{N}\left(g_{i}-g_{-i}\right)^{2}\right\rangle\approx 2N\sigma^{2}\, \tag{18}\] for Gaussian noise and sufficiently large N. In a similar manner, \(Q_{sq,m}\) can be expanded and rearranged. Since \[\left\langle\sum_{i}^{N}\left(g_{i}-g_{-i}\right)^{2}\right\rangle=\left\langle\sum_{i}^{N}\left(g_{i}+g_{-i}\right)^{2}\right\rangle\, \tag{19}\] for Gaussian noise and large N, we find that \[A_{sq}=\left(\frac{P_{sq,m}-B_{sq}}{Q_{sq,m}-B_{sq}}\right)^{1/2}. \tag{20}\] The \(B_{sq}\) terms should be understood as the _systematic contribution_ of the noise to the asymmetry measurement, and subtracting them from the numerator and denominator removes this bias. This is not the same as the _random uncertainty_ of the asymmetry measurement, which is also produced by noise. We discuss random uncertainties in Sec. 4.1. It is again worth noting here that our 'squared difference' method is similar to the 'rms' method of Conselice et al. (2000).
In that work, they found an improved correlation with galaxy color. However, in the low \(S/N\) regime of many H i cubelets, we find the cleaner background correction of the squared difference method to be a great advantage. We compare the performance of 'absolute difference' and'squared difference' 3D asymmetries in Section 4. ## 3 3D asymmetries and noiseless data As an observed quantity, the asymmetry statistic is subject to a host of observational effects/biases, many of which are caused by the observed geometry and resolution (Giese et al., 2016; Deg et al., 2020). These effects are present regardless of the 'intrinsic' asymmetry or the observed noise. As such, it is important to understand how the 3D asymmetry statistic depends on the asymmetry viewing angle (relative to the galaxy orientation), the disk inclination, and the resolution of the observation in the absence of noise. We explore this performance here, and then add noise in Section 4. In this section, we compare the 3D asymmetry to the 2D and 1D asymmetry using mock cubelets with a variety of different geometries and resolutions. The mock cubes are generated using a modified version of the MCGSuite code1(Lewis, 2019, Spekkens et al. in prep), which generates realistic mock H i cubelets of flat axisymmetric H i disks using empirical scaling relations. The key parameters for MCGSuite are the H i mass, \(M_{\rm H\textsc{i}}\), (which generates the rotation curve and surface density profile) and the diameter, \(D_{\rm H\textsc{i}}\), measured in angular resolution elements which we henceforth call 'beams'. The diameter is defined as twice the radius in the plane of the disk, \(R_{\rm H\textsc{i}}\), where the unconvolved surface density equals 1 M\({}_{\odot}\) pc\({}^{-2}\). \(D_{\rm H\textsc{i}}\) is defined in beams as MCGSuite calculates the distance to the object such that \(2R_{\rm H\textsc{i}}\) in kpc subtends an angle equal to the target size in beams. In addition, the observed inclination and position angle of the disk, \(i\), and \(\phi\) respectively, are MCGSuite input parameters. Footnote 1: [https://github.com/CIRADA-Tools/MCGSuite](https://github.com/CIRADA-Tools/MCGSuite) We have modified MCGSuite to include an arbitrary number of Fourier moments in the gaseous surface density. By using the first moment, we are able to generate asymmetric H i cubes for testing purposes. The strength of this moment is characterized by the \(A_{1}\) Fourier coefficient and can be oriented at an arbitrary phase angle, \(\Phi\), measured relative to the major axis of the galaxy. Explicitly, the galaxy plane surface density is set to \[\Sigma(R,\theta)=\Sigma(R)\left(1+A_{1}\cos(\theta+\Phi)\right)\, \tag{21}\] where \(\Sigma(R)\) is the axially symmetric surface density at the cylindrical radius \(R\), and \(\theta\) is the cylindrical angle in the galaxy plane measured from the approaching side of the major axis. As illustrated in Figure 1 which shows a pair of example cubelets, \(\Phi=0^{\circ}\) corresponds to an asymmetry about the minor axis, while \(\Phi=90^{\circ}\) is an asymmetry about the major axis. For this section, all mock cubelets are noiseless and have \(\phi=0^{\circ}\), as the disk position angle does not affect the calculated asymmetry. Figure 1 shows two example cubelets generated by our modified version of MCGSuite. 
Both cubes are built with the same underlying model (\(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(D_{\rm H\textsc{i}}=5\) beams, \(i=45^{\circ}\), and \(\phi=0^{\circ}\)) and Fourier moment (\(A_{1}=0.8\)). The only difference between the cubes is \(\Phi\), the orientation of the asymmetry with respect to the observer's line-of-sight. The 3D visualizations shown in the left-hand column are generated using the SlicerAstro2 software package (Punzo et al., 2016, 2017). These two examples are clearly unrealistic in terms of their level of asymmetry. However, they show how the orientation of an asymmetric feature in the disk surface density affects the observed cubelet, moment map, and velocity profile. In particular, the \(\Phi=90^{\circ}\) model has a completely symmetric velocity profile, which is consistent with findings of Deg et al. (2020). Throughout this paper we use a newly developed code called 3DACS3 (3D Asymmetries for data CubeS) to calculate asymmetries. A brief description of this publicly available code is given in the appendix. For this section we only utilize the squared difference asymmetry. To keep the notation clean, we simply use \(A_{3D}\), \(A_{2D}\), and \(A_{1D}\) in what follows to represent the 3D, 2D, and 1D'squared difference' asymmetry respectively, computed using Eq. 20. Furthermore, all analysis here uses the dynamical center of the cube as the rotation point. Calculating the asymmetry at the dynamical center allows for the strongest linking between the measured asymmetry to the structure and disruption of a galaxy. Footnote 3: [https://github.com/NateDeg/3DACS](https://github.com/NateDeg/3DACS) ### Asymmetry Viewing Angle As seen in Fig. 1, the orientation \(\Phi\) of an asymmetric feature with respect to the line-of-sight can strongly affect the observed morphology. Unlike other potential observational biases, like inclination and resolution, the viewing angle of an asymmetry is effectively unknown for any galaxy. While one can select a sample of galaxies based on inclination, resolution, \(S/N\), and other effects to avoid potential biases, this is impossible for the viewing angle. Any survey will contain galaxies with a range of viewing angles for the intrinsically asymmetric features. Figure 2 explores the dependence on viewing angle in greater detail. It shows two suites of noiseless cubes, one with \(A_{1}=0.8\) (solid lines) and one with \(A_{1}=0.2\) (dashed lines), where \(\Phi\) is varied between \(0^{\circ}\) and \(180^{\circ}\). The base model for each suite has a size, \(D_{\rm H_{1}}\), of 8 beams across and \(M_{HI}=10^{9.5}\) M\({}_{\odot}\). The mock galaxies are all observed at an inclination of \(i=50^{\circ}\). The upper panel of Fig. 2 shows the calculated asymmetry for each cube, while the bottom panel shows the asymmetry scaled to the maximum asymmetry \(A_{3Dmax}\) for that particular suite. It is clear that 1D asymmetry is particularly susceptible to the viewing angle, with \(A_{1D}=0\) for \(\Phi=90^{\circ}\), which is consistent with the results of Deg et al. (2020). The 2D asymmetry displays a greater resilience, remaining nearly constant regardless of the viewing angle. This can be understood when comparing panels in the middle column of Fig. 1: the change in the viewing angle at this inclination does move flux around in the moment map, but it does not affect how asymmetric the image appears. By contrast, the 3D asymmetry's susceptibility to viewing angle effects lies between these two extremes. 
### Asymmetry Viewing Angle

As seen in Fig. 1, the orientation \(\Phi\) of an asymmetric feature with respect to the line-of-sight can strongly affect the observed morphology. Unlike other potential observational biases, like inclination and resolution, the viewing angle of an asymmetry is effectively unknown for any galaxy. While one can select a sample of galaxies based on inclination, resolution, \(S/N\), and other effects to avoid potential biases, this is impossible for the viewing angle. Any survey will contain galaxies with a range of viewing angles for the intrinsically asymmetric features. Figure 2 explores the dependence on viewing angle in greater detail. It shows two suites of noiseless cubes, one with \(A_{1}=0.8\) (solid lines) and one with \(A_{1}=0.2\) (dashed lines), where \(\Phi\) is varied between \(0^{\circ}\) and \(180^{\circ}\). The base model for each suite has a size, \(D_{\rm H\textsc{i}}\), of 8 beams across and \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\). The mock galaxies are all observed at an inclination of \(i=50^{\circ}\). The upper panel of Fig. 2 shows the calculated asymmetry for each cube, while the bottom panel shows the asymmetry scaled to the maximum asymmetry \(A_{3Dmax}\) for that particular suite. It is clear that the 1D asymmetry is particularly susceptible to the viewing angle, with \(A_{1D}=0\) for \(\Phi=90^{\circ}\), which is consistent with the results of Deg et al. (2020). The 2D asymmetry displays a greater resilience, remaining nearly constant regardless of the viewing angle. This can be understood when comparing panels in the middle column of Fig. 1: the change in the viewing angle at this inclination does move flux around in the moment map, but it does not affect how asymmetric the image appears. By contrast, the 3D asymmetry's susceptibility to viewing angle effects lies between these two extremes. The variation in the 3D asymmetry signal is due to its additional dependence on the velocity structure. The bottom panel of Fig. 2, which shows the viewing-angle effects scaled by the peak 3D asymmetry, illustrates that once scaled, these effects are the same across different input asymmetry amplitudes. The invariance of the scaled asymmetry across different input amplitudes is maintained for inclination and resolution (when dealing with noiseless data). As such, we have chosen to use unrealistically asymmetric galaxies (\(A_{1}=0.8\)) in the noiseless data tests that follow in order to emphasize the observational effects on the asymmetry measurements.

### Inclination

Inclination must also affect the measured asymmetry of an object. Figure 3 shows how the asymmetry varies as a function of disk inclination. In this case, the two models shown both have \(A_{1}=0.8\), \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), and an asymmetry viewing angle of \(\Phi=45^{\circ}\), but one model is moderately resolved with \(D_{\rm H\textsc{i}}=5\) beams across while the other is only marginally resolved with \(D_{\rm H\textsc{i}}=2\) beams across. This figure shows that a face-on galaxy will have a greatly diminished 1D asymmetry, whereas an edge-on galaxy will have a reduced 2D asymmetry. Extreme inclinations also lower the 3D asymmetry, but it always includes the signal from the 2D or 1D asymmetry, making it more resilient to inclination effects than either statistic alone. Fig. 3 also illustrates how at lower spatial resolutions the 3D asymmetry more closely resembles the 1D asymmetry. Additionally, at low inclinations where the 1D asymmetry goes to zero, the 3D asymmetry reduces to the 2D asymmetry. Another important takeaway from Fig. 3 is that the relative shapes of the inclination trends are not constant with resolution. This differs from the response of the asymmetry to different Fourier strengths seen in Fig. 2. Therefore, understanding the effect of the resolution on the measured asymmetry is critically important.

### Resolution

To understand the effect of spatial resolution on the measured asymmetry, an additional two suites of model cubes were generated with varying \(D_{\rm H\textsc{i}}\), with \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\) and \(i=50^{\circ}\). One set of models has \(\Phi=45^{\circ}\), while the other has \(\Phi=70^{\circ}\). Figure 4 shows the dependence of asymmetry in these two suites as a function of the spatial resolution. The 1D asymmetry, which is purely spectral, does not depend on the spatial resolution, but it does depend on the viewing angle. By contrast, the 2D asymmetry plunges at lower resolutions, and should converge to zero when the object is effectively unresolved. The 3D asymmetry, though affected by resolution, remains more resilient to the spatial resolution than the 2D asymmetry. When the model becomes unresolved, the 3D asymmetry should converge to the 1D asymmetry. Taking the effects of viewing angle, inclination, and particularly resolution together, it is clear that in the marginally-to-moderately resolved regime, the 3D asymmetry is more representative of the intrinsic asymmetry than either the 1D or 2D asymmetries. The 3D asymmetry is less susceptible to viewing angle variations than the 1D asymmetry, is more resilient to inclination effects than either the 1D or 2D measures, and can be used at lower resolutions than the 2D asymmetry.
A key point to note is that, while the observed geometry/resolution affects all asymmetry measurements, an important use of any asymmetry statistic is separating surveys of galaxies into undisturbed and disturbed populations. While none of the statistics are constant with geometry/resolution, the 3D asymmetry shows a maximum variation of 0.25 in Fig. 4, and an average variation across Figs. 2-4 of \(\approx 0.05\), both of which are lower than the variations seen for the 2D and 1D statistics. This lower variation implies that a disturbed galaxy is less likely to be classified as undisturbed due to the orientation of the galaxy with respect to the observer. While this section has only used the 'squared difference' asymmetry statistic, when this analysis is repeated with the traditional 'absolute difference' asymmetry the results are the same.

## 4 Uncertainties and noise

### Uncertainties

In addition to biasing the asymmetry measure itself (see Section 2.3), noise also adds a random uncertainty to the asymmetry that must be quantified. This uncertainty arises in three distinct ways: through the formal uncertainties from the noise, by causing variations in the mask, and through uncertainties in the precise rotation point. By definition, the formal uncertainty \(\sigma_{A_{sq}}\) is given by: \[\sigma_{A_{sq}}^{2}=0.5A_{sq}^{2}\left(\frac{\sigma_{P_{c}}^{2}}{P_{c}^{2}}+\frac{\sigma_{Q_{c}}^{2}}{Q_{c}^{2}}\right)\, \tag{22}\] where \(P_{c}=P_{sq,m}-B_{sq}\) and \(Q_{c}=Q_{sq,m}-B_{sq}\). Calculating this uncertainty is not trivial due to the signal-noise cross terms in \(P_{sq,m}\) and \(Q_{sq,m}\) seen in Eq. 17. While the expectation value of those terms is zero, they nonetheless introduce uncertainty. It is possible to write an expression for \(\sigma_{A_{sq}}\) starting from Eq. 22 and applying a number of approximations that simplify it to some degree. However, in practice, this uncertainty is small relative to the systematic uncertainties. For example, in our tests we found that the formal uncertainty computed from Eq. 22 is rarely larger than \(\sigma_{A_{sq}}=0.02\). By comparison, the uncertainty associated with the unknown viewing angle shown in Figure 2 is on the order of 0.1. We elaborate on the magnitude of the different sources of uncertainty on \(A_{sq}\) below.

Beyond the formal uncertainty, noise can generate variations in the masks/segmentation maps that are used in the asymmetry calculation. The construction of such masks is not trivial, and variations in the mask due to noise may affect the asymmetry in non-obvious ways. In order to explore the effect of mask variations on the asymmetry measurement only, we added Gaussian noise with \(\sigma=1.6\) mJy per \(30^{\prime\prime}\) beam (as expected for the WALLABY H i survey; Koribalski et al. 2020) to the cubelet shown in the upper row of Figure 1 (\(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(D_{\rm H\textsc{i}}=5\) beams, \(i=45^{\circ}\), \(\phi=0^{\circ}\), \(A_{1}=0.8\)). Figure 5 shows the measured asymmetry in 1D, 2D, and 3D for masks constructed using a fraction of the total flux of the noiseless cube (top panel), and masks constructed using SoFiA-2 with different source-finding thresholds, where the total cube flux included by the mask increases towards low values of _scfind.threshold_ (bottom panel). In both panels the dashed lines show the effect of using these unmodified and asymmetric masks on the measured asymmetry. The variations in the dashed lines show that, when using asymmetric masks, the precise size and shape of the mask can change the measured asymmetry by tens of percent for relatively modest changes in the source-finding parameters. An approach to mitigate this effect is to _symmetrize_ the mask within which the asymmetry is calculated. This is done by adjusting the input mask such that pairs of cells/pixels/channels across the rotation point are either both included in or both excluded from the mask. The solid lines of Figure 5 show that the variations in the measured asymmetry are much smaller when using these symmetric masks. There are some variations when the source-finding threshold digs deeper into the noise (_scfind.threshold_\(<3\)): the resulting large masks include a great deal of additional flux from noise peaks, and are unlikely to be used for real observations. Given that the asymmetry variations from symmetric masks with _scfind.threshold_\(>3\) are \(\lesssim 0.02\) (solid lines in Fig. 5), we utilize symmetric masks throughout this work. However, it should be noted that it is not always possible to construct a symmetric mask, and in such cases it will be necessary to consider how to estimate the asymmetry uncertainty due to potential mask variations. For instance, when minimizing the asymmetry for a 2D image or 1D profile calculated from a 3D datacube, symmetrizing the mask is impossible because the image and profile are generated using a 3D mask.
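The symmetrization step is simple to implement for a mask with an integer voxel rotation point; a minimal sketch (the handling of fractional centres is an implementation detail not specified here):

```python
import numpy as np

def symmetrize_mask(mask, center):
    """Symmetrize a boolean 3D mask about `center` = (z, y, x).

    A voxel is kept only if its 180-degree partner across the rotation
    point is also in the mask, so every voxel pair is either both
    included or both excluded.
    """
    reflected = mask[::-1, ::-1, ::-1]
    shift = [2 * c - (s - 1) for c, s in zip(center, mask.shape)]
    reflected = np.roll(reflected, shift, axis=(0, 1, 2))
    return mask & reflected
```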
Figure 1: An example of two strongly asymmetric mock cubes generated by our modified version of MCGSuite with different viewing angles, \(\Phi\), to the asymmetric feature. Both rows show a model galaxy with \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(D_{\rm H\textsc{i}}=5\) beams across the major axis, \(i=45^{\circ}\), \(\phi=0^{\circ}\), and \(A_{1}=0.8\). The left-hand panels show 3D projections of the cubelet taken from SlicerAstro that have been oriented to roughly match the moment 0 maps in the middle panels. The axes shown in the left panels as \(E,N,Z\) correspond to RA, DEC, and \(V_{\rm los}\). The colours show surfaces of constant flux in the 3D cubelets.

Figure 2: The dependence of the asymmetry measurement on the viewing angle, \(\Phi\), of the asymmetric feature. When \(\Phi=0^{\circ}\) or \(180^{\circ}\) the asymmetry appears across the minor axis (see the top row of Figure 1), while when \(\Phi=90^{\circ}\) the asymmetry appears across the major axis. The top panel shows the measured asymmetry, while the bottom panel shows the asymmetry scaled to the maximum 3D asymmetry for that particular suite, \(A_{3Dmax}\), for the strongly asymmetric (\(A_{1}=0.8\), solid lines) and weakly asymmetric (\(A_{1}=0.2\), dashed lines) suites. Note that in the bottom panel the dashed lines and solid lines are superimposed. All models in this plot have \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(D_{\rm H\textsc{i}}=8\) beams, and \(i=50^{\circ}\).

Figure 3: The dependence of the asymmetry on the disk inclination for moderately-resolved models (\(D_{\rm H\textsc{i}}=5\) beams, solid lines) and marginally-resolved models (\(D_{\rm H\textsc{i}}=2\) beams, dashed lines). Only one line is seen for the 1D asymmetry, as the velocity profile does not depend on the spatial resolution. All models in this plot have \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(\Phi=45^{\circ}\), and \(A_{1}=0.8\).

Figure 4: The dependence of the asymmetry on the spatial resolution at two different viewing angles (solid and dashed lines). All models in this plot have \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(i=50^{\circ}\), and \(A_{1}=0.8\).

Yet another way that noise may affect the asymmetry is by introducing uncertainties in the rotation point. Noise can cause uncertainties in the measurement of the dynamical center (or other interesting pivot points), which should propagate to an uncertainty in the measured asymmetry. The simplest way to propagate this uncertainty is to calculate the asymmetry within the range of allowable points given the uncertainty and use the extrema to determine the asymmetry uncertainty. In 3D, for uncorrelated uncertainties, this involves calculating the asymmetry at an additional 26 points; these are the \(3\times 3\times 3-1\) points (excluding the center point itself) defined by \((x\pm\delta x,y\pm\delta y,v\pm\delta v)\).
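This 26-point prescription translates directly into code; a minimal sketch (assuming an `asymmetry(cube, center)` callable like the one sketched earlier, and integer-valued offsets):

```python
import itertools

def rotation_point_uncertainty(cube, center, deltas, asymmetry):
    """Half the range of the asymmetry over the 3 x 3 x 3 - 1 = 26
    points (x +/- dx, y +/- dy, v +/- dv) around `center`."""
    values = []
    for offsets in itertools.product((-1, 0, 1), repeat=3):
        if offsets == (0, 0, 0):
            continue  # skip the central point itself
        shifted = [c + o * d for c, o, d in zip(center, offsets, deltas)]
        values.append(asymmetry(cube, shifted))
    return 0.5 * (max(values) - min(values))
```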
To give an idea of the scale of these variations, we assumed an uncertainty of \(\pm 0.25\) beams and \(\pm 1\) channel (typical for H i datacubes from widefield surveys, e.g. Deg et al., 2022) for the center of the mock cube used in the mask tests shown in Fig. 5. Setting the uncertainty as half the range between the minimum and maximum asymmetries gives \(\sigma_{A,{\rm center}}=0.1\). While this example is informative, the precise size of this variation will strongly depend on how precisely the rotation point is known. For instance, if the optical center of brightness is used, the uncertainty on the rotation point will likely be much lower. Table 1 lists the three different sources of uncertainty for the mock cube in Fig. 5 as well as the background contribution to the asymmetry. It also includes the actual measured values for \(P_{sq,m}\) and \(Q_{sq,m}\), as \(B_{sq,3D}\) appears in both the numerator and denominator of the asymmetry calculation. In this example, the dominant uncertainty is the uncertainty on the rotation point, which we have assumed to be \(\pm 0.25\) beams and \(\pm 1\) channel. It is worth noting that the formal uncertainty (Eq. 22) is considerably smaller than the variations due to the viewing angle seen in Figure 2. As discussed in Sec. 3, the viewing angle is an uncontrollable and unknowable parameter from an observational point of view. Thus, when observing a population, the systematic uncertainty in the measurements from such observational biases will likely dominate over other sources of uncertainty.

\begin{table} \begin{tabular}{l c} \hline Measurement & Value \\ \hline \(A_{3D}\) & 0.63 \\ \(P_{sq,m}\) & 0.54 \\ \(Q_{sq,m}\) & 1.19 \\ \(B_{sq,3D}\) & 0.11 \\ \(\sigma_{A_{sq}}\) & \(\sim\) 0.02 \\ Symmetric Mask Uncertainty & 0.02 \\ Rotation Point Uncertainty & 0.08 \\ \hline \end{tabular} \end{table}
Table 1: Sources of uncertainty for the noisy cube used to generate Fig. 5: noise of 1.6 mJy per 30′′ beam, and \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(D_{\rm H\textsc{i}}=5\) beams, \(i=45^{\circ}\), \(A_{1}=0.8\). An uncertainty in the rotation point of \(\pm 0.25\) beams and \(\pm 1\) channel was assumed.
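Eq. 22 itself propagates directly into code. A minimal sketch using the measured values from Table 1 (the per-term noise estimates \(\sigma_{P_c}\) and \(\sigma_{Q_c}\) below are placeholders, since their derivation via Eq. 17 is not reproduced here):

```python
import math

def formal_uncertainty(A_sq, P_sq_m, Q_sq_m, B_sq, sigma_P, sigma_Q):
    """Formal uncertainty on the squared-difference asymmetry (Eq. 22),
    with P_c = P_sq_m - B_sq and Q_c = Q_sq_m - B_sq."""
    P_c = P_sq_m - B_sq
    Q_c = Q_sq_m - B_sq
    variance = 0.5 * A_sq**2 * ((sigma_P / P_c) ** 2 + (sigma_Q / Q_c) ** 2)
    return math.sqrt(variance)

# Table 1 values: A_3D = 0.63, P_sq_m = 0.54, Q_sq_m = 1.19, B_sq = 0.11;
# sigma_P and sigma_Q are illustrative placeholders.
print(formal_uncertainty(0.63, 0.54, 1.19, 0.11, sigma_P=0.02, sigma_Q=0.04))
```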
### Squared Difference vs Absolute Asymmetry

The effect of noise on asymmetry measurements has been investigated in a large number of works. As noted in Conselice et al. (2000) and Conselice (2003), noise will always increase the measured value of the asymmetry. To account for this when using the 'absolute difference' asymmetry, a background measurement is made and subtracted from the measured value as shown in Eq. 9. However, as noted in Giese et al. (2016) and Thorp et al. (2021), this subtraction is an approximation and will overcorrect the asymmetry down to zero in the low \(S/N\) regime. In Sec. 2.3, we introduced the 'squared difference' asymmetry that should have a more robust background subtraction. In order to compare the effect of the noise contribution to the asymmetry when 'absolute differences' and when 'squared differences' are used, we built a suite of 1000 cubelets with increasing levels of noise, \(\sigma\), ranging from 0.01 mJy to 5 mJy per 30'' beam. Each cube has \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(i=50^{\circ}\), and \(\Phi=45^{\circ}\). Half the cubes have \(A_{1}=0.8\) while the other half have \(A_{1}=0.2\). Once generated, SoFiA-2 was used to make a mask for each cube, within which the asymmetry was then calculated about the centre of the axisymmetric component of the mock H i disk. The upper panels of Figure 6 show the background-corrected asymmetry for each statistic as a function of the input noise expressed in M\({}_{\odot}\) pc\({}^{-2}\) in a 30'' beam over a spectral range of 16 km s\({}^{-1}\) (\(\sim 4\) WALLABY spectral channels), while the lower panels show the difference between the measured asymmetry and the asymmetry calculated from a noiseless cube, \(A_{3D,n}\). For the upper panels, we show an average asymmetry and a width of one standard deviation at each noise value, calculated using a Gaussian kernel with a width that is inversely proportional to the density of points (a sketch of this smoothing follows below).

Figure 5: The noise corrected asymmetry measured for a cube with an input noise of 1.6 mJy per 30′′ beam and \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(D_{\rm H\textsc{i}}=5\) beams, \(i=45^{\circ}\), \(A_{1}=0.8\). The top panel shows the asymmetry using masks constructed from the underlying noiseless cube, where the x-axis is the fraction of the total noiseless flux included in the mask. The bottom panel uses masks generated by SoFiA-2 with different values of the SoFiA-2 parameter _scfind.threshold_, either as is (asymmetric mask, dashed lines) or symmetrized as described in the text (symmetric mask, solid lines). In both panels the total flux included in the mask increases towards the right.

Figure 6 shows the general bias of the 'absolute difference' asymmetry quite clearly. There is a constant decrease in the asymmetry, with a slope that is roughly independent of the initial Fourier moment strength (as seen in the lower left panel). There is a turnover as the noise corrected 'absolute difference' asymmetry reaches zero, as seen in the red lines. By \(\sigma=0.25\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-2}\), the measured asymmetry has decreased by 0.15. By contrast, the 'squared difference' asymmetry remains at a constant value until \(\sigma\approx 0.2-0.3\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-2}\). Both the start of and the rate of the decrease depend somewhat on the initial strength of the Fourier moment. Interestingly, the spread of measured asymmetry values is roughly constant for the 'absolute difference' statistic, while it increases with noise for the 'squared difference' statistic. In the region of the plot where the 'squared difference' asymmetry has low bias, the standard deviation is below 0.01.
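The adaptive-kernel smoothing used for those curves is only described qualitatively, so the inverse-density bandwidth rule in this sketch is an assumption:

```python
import numpy as np

def kernel_mean_std(x, y, grid, base_width=0.05):
    """Gaussian-kernel running mean and standard deviation of y(x).

    At each grid point the kernel width is scaled inversely with the
    local density of x-values (assumed form of the rule in the text).
    """
    means, stds = [], []
    for g in grid:
        density = np.mean(np.abs(x - g) < base_width) + 1e-12
        width = base_width / density
        w = np.exp(-0.5 * ((x - g) / width) ** 2)
        w /= w.sum()
        mu = np.sum(w * y)
        means.append(mu)
        stds.append(np.sqrt(np.sum(w * (y - mu) ** 2)))
    return np.array(means), np.array(stds)
```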
The effect of the noise on the asymmetry measurement is also related to the spatial resolution. To demonstrate this, Figure 7 shows the background corrected asymmetry (top row), the difference between the corrected and noiseless asymmetries (second row), the measured spread (third row), and the relative uncertainty (bottom row) for a suite of cubes with different resolutions and input levels of noise. Each cube has \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(i=50^{\circ}\), \(\Phi=45^{\circ}\), and an input Fourier strength of \(A_{1}=0.4\). In both the 'absolute difference' and 'squared difference' asymmetry, there is a clear dependence on the spatial resolution. Better-resolved objects are both less biased by the noise than more poorly-resolved objects, and have a lower uncertainty in their measured asymmetry. Objects with \(D_{\rm H\textsc{i}}\geq 8\) beams have only a small bias even at large levels of noise. These results are qualitatively similar to the resolution tests presented for noiseless cubelets in Fig. 4. As in Fig. 6, Fig. 7 shows that the 'absolute difference' asymmetry has a relatively constant level of spread, regardless of the noise and object size, while the 'squared difference' depends on both the noise and resolution. This uncertainty is due to the formal uncertainty discussed in Sec. 4.1 and does not include contributions due to uncertainties in the rotation point. Below a noise limit of \(0.25\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-2}\) and for \(D_{\rm H\textsc{i}}>5\) beams, the 'squared difference' asymmetry has a lower spread than the 'absolute difference'. For many science cases, the key quantity is the relative spread, which is shown in the bottom row of Figure 7. Here we see that the relative uncertainty of both methods is similar, although the region where the relative uncertainty is minimized is larger for the 'squared difference' method. The similarity between the two is due to the offsetting behaviours of the bias and uncertainties for each method. Knowing the relative spread can help to plan out where measuring a particular asymmetry statistic is viable at both an individual and at a population level. It is useful to consider the mock cubelet noise in the context of current widefield surveys. To that end, Fig. 7 shows the expected noise for WALLABY (1.6 mJy per 30\({}^{\prime\prime}\) beam, Koribalski et al., 2020) in \(\mathrm{M}_{\odot}\,\mathrm{pc}^{-2}\) over a spectral range of \(16\,\mathrm{km}\,\mathrm{s}^{-1}\) (\(=4\) WALLABY channels) as a vertical red line. At this noise level, the 'absolute difference' asymmetry shows a strong bias at all sizes. However, the 'squared difference' asymmetry shows little to no bias for all detections with \(D_{\rm H\textsc{i}}\gtrsim 6\) beams. This suggests that, while the 'squared difference' statistic can be used for many of the resolved WALLABY detections, the absolute difference method cannot. Altogether, Figs. 6 and 7 show that the 'squared difference' asymmetry is superior to the 'absolute difference' asymmetry in the presence of noise. As such, we recommend that any study of asymmetry adopt squared differences.

## 5 Potential for WALLABY-like observations

There are a variety of new telescopes undertaking cutting-edge widefield H i surveys. For example, WALLABY on the Australian Square Kilometre Array Pathfinder (ASKAP, Hotan et al., 2021) telescope will observe the H i content of galaxies over most of the Southern Sky. The majority of detections in such surveys are marginally resolved and at low \(S/N\). For instance, Fig. 1 of Deg et al.
(2022) shows that most of the detections in the WALLABY Pilot Data Release 1 (PDR1, Westmeier et al., 2022) have \(\log(S/N)_{\mathrm{int}}\leq 2\) and ell_maj \(\leq 5\) beams, where ell_maj is an estimate of the detection size based on the source finding (for PDR1, \(D_{\rm H\textsc{i}}\approx 2\,\)ell_maj, Deg et al., 2022) and \(S/N_{\mathrm{int}}\) is the integrated \(S/N\) in the mask. Given the performance of the squared difference 3D asymmetry statistic, it is natural to explore whether it can be applied to WALLABY and other similar surveys. While there are strong hints that this is possible from Fig. 7, those maps are made using a single galaxy model observed at different noise levels, whereas a real survey will make many different detections with a common level of noise. To that end, we generated a population of 500 mock H i cubes with random geometries, sizes, and asymmetry Fourier moments. Each cube is generated with the nominal WALLABY observing parameters of 1.6 mJy/beam noise, a 30\({}^{\prime\prime}\) beam, 6\({}^{\prime\prime}\) pixels, and 4 km s\({}^{-1}\) channels (Koribalski et al., 2020). The mock galaxies have \(8\leq\log(M_{\rm H\textsc{i}}/M_{\odot})\leq 10.5\), \(0^{\circ}\leq i\leq 90^{\circ}\), \(0^{\circ}\leq\phi\leq 360^{\circ}\), and \(1\leq D_{\rm H\textsc{i}}\leq 40\) beams. The mass for each galaxy is drawn from a logarithmic distribution, while the other parameters are drawn from linear distributions. These selections and distributions are meant to roughly approximate the observations of WALLABY PDR1 galaxies, except for the asymmetry levels, but they are not precise matches. For the asymmetries, the models have \(0\leq A_{1}\leq 0.6\) and \(0^{\circ}\leq\Phi\leq 180^{\circ}\). The upper limit on the \(A_{1}\) range reflects the fact that no real galaxies should have a larger Fourier \(A_{1}\) moment. The final population, while comprised of galaxies with a different observed size and asymmetry distribution, is still similar enough to WALLABY to draw a few conclusions.
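A sketch of drawing such a population (variable names and the seed are mine; the mass is log-uniform and everything else uniform, per the text):

```python
import numpy as np

rng = np.random.default_rng(42)  # seed is an arbitrary choice
n = 500

population = {
    "logM_HI": rng.uniform(8.0, 10.5, n),    # log10(M_HI/M_sun): log-uniform mass
    "incl_deg": rng.uniform(0.0, 90.0, n),   # inclination i
    "pa_deg": rng.uniform(0.0, 360.0, n),    # position angle phi
    "D_HI_beams": rng.uniform(1.0, 40.0, n), # diameter in beams
    "A1": rng.uniform(0.0, 0.6, n),          # Fourier moment strength
    "Phi_deg": rng.uniform(0.0, 180.0, n),   # asymmetry phase angle
}
```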
Figure 6: Behaviour of the asymmetry statistic as a function of cubelet noise for different input Fourier moments in the range \(0.2\leq A_{1}\leq 0.8\), when asymmetries are calculated using 'absolute difference' and 'squared difference' methods. In the top row, the solid lines show the average value calculated using a Gaussian kernel, and the shaded regions show the standard deviation of points about the mean. Negative absolute asymmetry values obtained from over-subtraction are treated as 0. The bottom row shows the difference between the measured asymmetry, \(A_{3D}\), and the asymmetry from the noiseless cubes, \(A_{3D,n}\). The dashed black line at 0 highlights what is expected when the background corrected asymmetry matches the noiseless asymmetry. All models used to generate these curves have \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(D_{\rm H\textsc{i}}=8\) beams, \(i=50^{\circ}\), and \(\Phi=45^{\circ}\).

We run SoFiA-2 on each cube using the parameters listed for the Hydra Team Release 2 sources in Table 2 of Westmeier et al. (2022). This generates similar masks to the WALLABY PDR1 observations. At low resolutions SoFiA-2 can fail to find the galaxy, or generate a mask that is not quite appropriate. Since we know the center of the galaxy, we remove all galaxies where the SoFiA-2 center differs from the true center by \(\geq 15\%\) of the size of the object as estimated by SoFiA-2. This is a rough filter, and a number of galaxies with poorly constructed masks still fall into the sample. As noted earlier, ell_maj \(\sim D_{\rm H\textsc{i}}/2\) when using WALLABY-like parameters (Deg et al., 2022). Figure 8 shows the size, the background corrected squared difference asymmetry, and the difference between that measurement and what would be measured for a noiseless cube using the same model and mask. At lower resolutions (ell_maj \(\leq 3\) beams), there is still a significant population of galaxies where the mask is poorly constructed. Above this size, most of the galaxies have well-recovered asymmetries. This is broadly consistent with the results seen in Figure 7. In that figure, the mock galaxies with \(D_{\rm H\textsc{i}}<6\) beams (which is equivalent to ell_maj \(=3\) beams) show a significant bias, while those above that limit show very little bias. However, there are a few larger objects with lower measured asymmetries where the background subtraction has undercorrected the results by \(0.03-0.07\). The increased asymmetry of these objects is likely due to poorly constructed masks. In WALLABY, and other wide-area untargeted surveys, greater care will be taken with detecting sources and constructing the masks than we utilized for this toy problem. As such, it is likely that the measured asymmetry can still be used with a great degree of accuracy for the entire population of marginally resolved detections. But, if we are to be cautious, Fig. 8 suggests that the asymmetry is accurately measured for all our mock galaxies with ell_maj \(\geq 3\) beams. It is worth noting that this is larger than the limit of \(D_{\rm H\textsc{i}}\geq 3-4\) beams seen in Figs. 4 and 7. This is due to the low \(S/N\) of the WALLABY-like population used in Fig. 8. Nonetheless, even when restricted to ell_maj \(\geq 3\) beams, the 3D squared difference asymmetry can be applied to the majority of the marginally resolved WALLABY PDR1 detections without worry of noise biasing the results of the analysis, and with little scatter from what one would expect from a noiseless measurement.

## 6 Discussion and Conclusions

In this work we have introduced a methodology for calculating the 3D asymmetry of a cubelet containing a single galaxy. While this method has been designed for H i datacubes, it should be applicable to any spectral line cube, whether from an IFU or using some other line/feature. The 3D asymmetry is less affected by the viewing angle of the asymmetric feature than the 1D measurement. It is also superior to both the 1D and 2D measures with respect to the inclination of the galaxy. The 3D asymmetry can also be used at lower spatial resolutions than the 2D measurement. This result is of particular importance as there are usually an order of magnitude more galaxies that are marginally resolved than are well resolved in widefield, untargeted H i surveys (Koribalski et al., 2020). The application of asymmetries to large surveys is the key use of this statistic. On an individual basis, the various geometric and resolution biases tend to drive asymmetries down. Therefore, while a low asymmetry measurement does not guarantee that a galaxy is truly symmetric, when the background correction is properly applied a high asymmetry measurement does guarantee that the galaxy is indeed asymmetric. But, for larger surveys, the asymmetry statistic can be used to select interesting galaxies as well as to probe for differences in various populations (e.g.
mergers versus non-mergers, or groups/clusters versus field galaxies). In addition to introducing the 3D asymmetry, we have also developed the 'squared difference' asymmetry. This asymmetry formulation allows for a more straightforward calculation of the contribution of noise to the measured asymmetry than the standard 'absolute difference' asymmetry. Unlike the absolute asymmetry, the background corrected squared difference asymmetry remains unbiased down to very low \(S/N\). This removes the need for some of the noise corrections developed in Giese et al. (2016) and Thorp et al. (2021) for low \(S/N\) observations. Based on these results, we expect that the 3D asymmetry for WALLABY detections with \(D_{\rm H\textsc{i}}\geq 3\) beams that have reliable masks can be calculated reliably. This opens up many exciting avenues for future explorations, including the effects of environment on asymmetry, the connection between asymmetries and physical processes, and the use of asymmetries as a diagnostic for kinematic modelling.

## Acknowledgement

The authors thank the anonymous referee for their excellent suggestions. KS acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC). MG is supported by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (DP210102103).

Figure 7: Intensity maps portraying the effects of resolution and noise on the absolute difference asymmetry and squared difference asymmetry measurements for a galaxy. The units of \(D_{\rm H\textsc{i}}\) are beams. The upper panels show the average background corrected 3D asymmetry, while the second row shows the difference between this background corrected asymmetry and the asymmetry of an equivalent noiseless cubelet. The third row shows the spread in asymmetries in each bin, which is calculated as the standard deviation of all points in each cell in the \(\sigma\)-\(D_{\rm H\textsc{i}}\) space. The bottom row panels show the relative spread in the asymmetry measurement. The vertical line shows the noise for a WALLABY-like population (see Sec. 5 for a discussion of this population). All models used to generate this map have \(M_{\rm H\textsc{i}}=10^{9.5}\) M\({}_{\odot}\), \(i=50^{\circ}\), \(\Phi=45^{\circ}\), and \(A_{1}=0.4\).

## Data availability

All mock cubes and analysis codes are available upon request to the corresponding author. Additionally, the modified version of MCGSuite is available upon request.
2302.09243
A Federated Approach for Hate Speech Detection
Hate speech detection has been the subject of high research attention, due to the scale of content created on social media. In spite of the attention and the sensitive nature of the task, privacy preservation in hate speech detection has remained under-studied. The majority of research has focused on centralised machine learning infrastructures which risk leaking data. In this paper, we show that using federated machine learning can help address the privacy concerns that are inherent to hate speech detection while obtaining up to 6.81% improvement in terms of F1-score.
Jay Gala, Deep Gandhi, Jash Mehta, Zeerak Talat
2023-02-18T06:08:04Z
http://arxiv.org/abs/2302.09243v1
# A Federated Approach for Hate Speech Detection

###### Abstract

Hate speech detection has been the subject of high research attention, due to the scale of content created on social media. In spite of the attention and the sensitive nature of the task, privacy preservation in hate speech detection has remained under-studied. The majority of research has focused on centralised machine learning infrastructures which risk leaking data. In this paper, we show that using federated machine learning can help address the privacy concerns that are inherent to hate speech detection while obtaining up to \(6.81\%\) improvement in terms of F1-score.

## 1 Introduction

Content moderation is a topic that intersects with multiple fundamental rights, e.g., freedom of expression and the right to privacy, and interest groups, e.g., scholars, legislators, civil society, and commercial entities (Kaye, 2019). The availability of public datasets has been crucial to the development of computational methods for hate speech detection. However, public data contains risks for those whose content is available. On the other hand, privately held data, e.g., data held by corporate entities, holds risks for those who are reporting content. Such risks may be actualised through information leaks in models (Hitaj et al., 2017) or the transmission of data (Shokri and Shmatikov, 2015), and can impact people's safety and livelihood. In this work, we apply Federated Learning (FL, McMahan et al., 2017) to address the lack of privacy in hate speech detection. FL is a privacy-preserving training paradigm for machine learning that jointly optimises for user privacy and model performance. We posit that privacy is necessary for users whose content is flagged and users who are flagging content alike. We thus operationalise privacy, in the context of hate speech detection and federated learning, to mean privacy in terms of the content of reported content, and the report itself. FL is an apt training paradigm for tasks in which training data is highly sensitive, as FL is designed to mitigate risks of information leaks while also dealing with a high number of end-users, information loss, and label imbalances (Lin et al., 2022; Priyanshu and Naidu, 2021; Gandhi et al., 2022). We apply the FL algorithms FedProx (Li et al., 2020) and Adaptive Federated Optimization (FedOpt; Reddi et al., 2021) to \(5\) machine learning algorithms. We evaluate our approach on \(8\) previously published datasets for hate speech detection. While using FL often implies a trade-off between privacy and performance, we obtain performance improvements of up to \(6.81\%\) in F1-score. We find that models trained using FL outperform centralised models across multiple tests (e.g., derogatory language, spelling variation, and pronoun reference) in HateCheck (Rottger et al., 2021).1

Figure 1: Federated Learning: A centralised model is hosted on a server and is distributed to client devices; these compute weight updates and transmit the updates for aggregation into the centralised model. The centralised model is then redistributed to client devices.

## 2 Prior Work

Although the areas of hate speech detection and FL have each been subject to extensive research, the study of their intersection remains in its infancy.
**Federated Learning.** Federated Learning is a privacy-preserving machine learning paradigm that aims to reduce privacy risks by decentralising data processing onto client devices (i.e., personal devices), thereby foregoing the need for transmitting "raw" user data, and thus minimising risks of personal data leaks caused by transmission of data.2 In FL, the machine learning model is located in two places: on a centralised server, and on client devices, which hold instances of the model distributed from the centralised model. Client devices use the model to compute model updates. The model updates are then transmitted to the server and aggregated by the centralised model, which is redistributed to the client devices. However, not all transmitted weight updates are aggregated into the model. FL operates with a notion of data loss in its design, which is emulated by selecting a fraction of clients whose updates are aggregated. Thus, the FL paradigm uses less data to train a model.

Footnote 2: See Gitelman (2013) for a discussion on ‘raw’ data.

In our experiments, we apply two FL algorithms: FedProx (Li et al., 2020) and FedOpt (Reddi et al., 2021). FedProx introduces a proximal term to the Federated Averaging algorithm (FedAvg; McMahan et al., 2017). FedAvg averages the weights computed on participating client devices in a round. FedProx introduces a proximal term that functions as a regulariser on the weight updates transmitted by participating clients, which penalises local weight updates that diverge from the global model. The FedAvg algorithm can thus be understood as a special case of FedProx with the proximal term set to \(0.0\). FedOpt (Reddi et al., 2021) extends the adaptive optimisation strategies from centralised optimisation (e.g., Adam (Kingma and Ba, 2015) and Adagrad (Duchi et al., 2010)) to explicitly account for client and server optimisation. FedOpt handles server optimisation distinctly from client optimisation, by introducing a state to the server-side optimisation routine. This distinct handling of server-side optimisation enables more accurate and heterogeneity-aware FL models, which can speed up convergence.
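To make the proximal term concrete, the following is a minimal sketch of a FedProx-style local update (plain NumPy, with logistic regression standing in for the local model; \(\mu\), the learning rate, and the epoch count are illustrative). Setting \(\mu=0\) recovers FedAvg's plain local step:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fedprox_local_update(w_global, X, y, mu=0.1, lr=0.1, epochs=5):
    """Local client update minimising loss(w) + (mu/2)*||w - w_global||^2.

    The proximal term penalises local weights that diverge from the
    global model; mu = 0 reduces this to the FedAvg local step.
    """
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)  # logistic-loss gradient
        grad += mu * (w - w_global)                 # proximal-term gradient
        w -= lr * grad
    return w
```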
FL has been applied to a number of tasks, including emoji prediction (Ramaswamy et al., 2019; Gandhi et al., 2022), next-word prediction for mobile keyboards (Yang et al., 2018), pre-training and fine-tuning large language models (Liu and Miller, 2020), medical named entity recognition (Ge et al., 2020), and text classification (Lin et al., 2022). For instance, Lin et al. (2022) used FL to fine-tune a DistilBERT model to perform classification on the 20NewsGroup dataset (Lang, 1995) using three different FL algorithms (FedAvg (McMahan et al., 2017), FedProx (Li et al., 2020), and FedOpt (Reddi et al., 2021)) under non-IID partitioning. In a closely related study, Basu et al. (2021) apply FL, using the FedAvg algorithm, to fine-tune large language models to detect depression and sexual harassment from small Twitter data samples. They find that large language models such as BERT and RoBERTa outperform distilled language models such as DistilBERT. Our work extends on Basu et al. (2021) by introducing additional FL algorithms and extending to a multi-class setting for hate speech detection. Thus, our work extends on prior work by i) applying FL to the task of multi-class hate speech detection, a task which has proven difficult in part due to the complex nature of pragmatics (Rottger et al., 2021) and hate mongers seeking to evade content moderation infrastructures (Crawford and Gillespie, 2016); ii) using the FedProx and FedOpt algorithms rather than the FedAvg algorithm, thereby reducing model vulnerability to divergent weight updates; and iii) providing an in-depth analysis of federated model performances.

**Hate Speech Detection.** Prior work on hate speech detection has primarily focused on privacy-agnostic machine learning paradigms, using centralised models for classification. Such work has investigated a number of machine learning models (e.g., SVMs (Karan and Snajder, 2018), CNNs (Park and Fung, 2017), and fine-tuned language models (Swamy et al., 2019)) and the development of resources (e.g., Talat and Hovy, 2016). Recently, Fortuna et al. (2021) proposed a standardisation of classes across 9 publicly available datasets and studied the generalisation capabilities of BERT, fastText, and SVM models. In their work they found limited success in inter-dataset generalisation. Our work thus extends on the task of hate speech detection by introducing privacy-preserving methods to multi-class hate speech detection. In doing so, the privacy of those who flag content and those whose content is flagged remains intact.

## 3 Data

We combine our dataset using the standardisation schema proposed by Fortuna et al. (2021).

**Comb.** We reuse \(8\) of the \(9\) datasets used by Fortuna et al. (2021) to form Comb.3 Comb then consists of the datasets proposed by Talat and Hovy (2016); Davidson et al. (2017); Fersini et al. (2018); de Gibert et al. (2018); Swamy et al. (2019); Basile et al. (2019); Zampieri et al. (2019) and the Kaggle toxic comment challenge.4 We perform a stratified split of all training data into training (70%), validation (10%), and test (20%) sets.5

Footnote 3: The dataset proposed by Founta et al. (2018) is not included as it was not available to us.

Footnote 4: [https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge)

Footnote 5: We do not use the test data provided with some datasets to ensure uniformity, as test sets are not provided with all datasets.

**Data Cleaning.** We address issues of extreme class imbalance in Comb by removing the "abusive" category, as it only contains \(2\) documents. Following an in-depth analysis of the Kaggle dataset, we find that the maximum length of tokens in the dataset is \(4950\) while the median length of tokens in Comb is \(26\). Moreover, we find that the longest \(1\%\) of documents in the Kaggle dataset do not contain unique tokens. Removing the longest \(1\%\) of comments reduces the maximal document length to \(727\) tokens (see Appendix B.3 for further detail). Following our data cleaning processes, Comb comes to consist of \(293,300\) documents (see Table 1 for an overview of changes).
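The 70/10/20 stratified split can be reproduced with scikit-learn; a minimal sketch (variable names and the random seed are assumptions):

```python
from sklearn.model_selection import train_test_split

# texts: list of documents; labels: their standardised Comb classes.
# Carve off the 20% test set first, then 10% of the total for
# validation (0.10 / 0.80 = 0.125 of the remainder), stratifying on
# the label at each step.
X_rest, X_test, y_rest, y_test = train_test_split(
    texts, labels, test_size=0.20, stratify=labels, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.125, stratify=y_rest, random_state=0)
```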
## 4 Experiments

We experiment with \(5\) machine learning models in their centralised and federated settings: Logistic Regression, Bi-LSTMs (Hochreiter and Schmidhuber, 1997), FNet (Lee-Thorp et al., 2022), DistilBERT (Sanh et al., 2019), and RoBERTa (Liu et al., 2019). We measure their performance using weighted F1 scores. The centralised models form our baselines, while the federated models form our experimental models. For the Logistic Regression and Bi-LSTMs, we perform word-level tokenisation using SpaCy (Honnibal and Montani, 2017). For the FNet, DistilBERT, and RoBERTa, we use the tokenisers provided with each model.6

Footnote 6: Please refer to Appendix A for further experiments and analyses on the Vidgen et al. (2021) dataset.

### Federated Training

FL is a machine learning training paradigm that distributes training onto client devices. All client devices are split into overlapping subsets and the training data is partitioned and uniformly distributed to client devices. A random client subset is selected for training in each round, and their locally computed weights are aggregated on the server. We train our models for \(300\) rounds with \(1\), \(5\), or \(20\) epochs per round, and set the client fraction to \(10\%\), \(30\%\), or \(50\%\), randomly sampled from \(100\) client devices. We perform hyper-parameter tuning for the client learning rate, server-side learning rate, and proximal term (see Appendix B.1). In our work, we conceptualise client devices as users who witness and report hate speech. We simulate the client devices and ensure that data is independently and identically distributed (I.I.D.) on client devices.7 We use the FedProx and FedOpt algorithms to aggregate client updates on the server. FedProx introduces a regularisation constant, the proximal term, to address issues of divergence in weights and statistical heterogeneity in FedAvg. FedOpt seeks to create more robust models by introducing a separate optimiser for the server-side model to account for data heterogeneity.

Footnote 7: We use an I.I.D. setting for data as \(40\%\) of all social media users and \(64\%\) of those under 30 in the USA have experienced online harassment (Pew Research Center, 2021). I.e., while hate speech is not frequent, it is often experienced by users.
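A minimal simulation of the training loop just described — sample a client fraction each round, run local epochs, and aggregate on the server. The data-size-weighted average below is FedAvg-style aggregation shown for brevity (the FedProx/FedOpt server steps would replace the final line); it reuses the `fedprox_local_update` sketched earlier:

```python
import numpy as np

def federated_training(clients, w_init, local_update, rounds=300,
                       client_frac=0.1, seed=0):
    """clients: list of (X, y) partitions, one per simulated device."""
    rng = np.random.default_rng(seed)
    w = w_init.copy()
    n_sampled = max(1, int(client_frac * len(clients)))
    for _ in range(rounds):
        chosen = rng.choice(len(clients), size=n_sampled, replace=False)
        updates = [(local_update(w, *clients[i]), len(clients[i][1]))
                   for i in chosen]
        total = sum(n for _, n in updates)
        # Data-size-weighted average of the sampled clients' weights.
        w = sum(w_i * (n / total) for w_i, n in updates)
    return w
```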
\begin{table} \begin{tabular}{c|c c c} **Category** & **Merged count** & **Comb** & **Change** \\ \hline aggression & \(6,950\) & \(6,950\) & - \\ aggressive hate speech & \(1,561\) & \(1,561\) & - \\ covert aggression & \(4,242\) & \(4,242\) & - \\ _hate speech_ & \(13,222\) & \(13,205\) & \(-0.13\%\) \\ _insult_ & \(7,879\) & \(7,779\) & \(-1.27\%\) \\ misogyny sexism & \(5,000\) & \(5,000\) & - \\ _none_ & \(189,869\) & \(188,550\) & \(-0.69\%\) \\ offensive & \(19,192\) & \(19,192\) & - \\ overt aggression & \(2,710\) & \(2,710\) & - \\ racism & \(1,978\) & \(1,978\) & - \\ _severely toxic_ & \(1,597\) & \(1,527\) & \(-4.38\%\) \\ _threat_ & \(480\) & \(470\) & \(-2.08\%\) \\ _toxicity_ & \(40,316\) & \(40,134\) & \(-0.45\%\) \\ \end{tabular} \end{table}
Table 1: Label counts of the raw datasets and Comb.

## 5 Analysis

Considering the baseline models in Table 4, we see that the Logistic Regression tends to under-perform, while the RoBERTa model posts the best performances. Although FL-based models often outperform our baselines, we note that when FL models are trained with lower client fractions and epochs, they tend to be outperformed by the baselines. Models trained using FedProx outperform the centralised baselines (see Table 2).8 For instance, we see large improvements for FNet and Logistic Regression (\(4.5\) and \(6.8\) points in terms of F1-score, respectively). Comparing the performances of models trained using FedOpt (Table 3) with those trained using FedProx, we observe that the former (in particular FNet and RoBERTa) tend to outperform the latter for lower client fractions and epochs. In general, we find that the best FL models outperform their centralised counterparts (see Tables 2 and 3). In fact, the best performing RoBERTa, DistilBERT, and FNet models trained using FL algorithms outperform their centralised baselines, with FNet obtaining a \(3\)-\(4\) point improvement over centralised models in terms of F1-score.9

Footnote 8: For Tables 2 and 3, \(c\) refers to the client fraction used and \(e\) refers to the number of epochs on client devices.

Footnote 9: See Section 5.1 for an analysis using HateCheck (Rottger et al., 2021).

While FL often indicates a trade-off between privacy and performance, we find that the best FL models outperform the centralised baselines. We believe that the improved performance stems from the dataset being split into smaller segments, in congruence with findings from prior work. For instance, Nobata et al. (2016) show that splitting data into smaller temporal segments helped improve classification performance overall. We believe that a similar effect may be evident with FL models that, by design, split data into small segments and disregard a fraction of the clients. Further, it may be the case that some data within hate speech datasets hinders generalisation. Only using subsets of the data for training may therefore aid generalisation.

### HateCheck Evaluation

This section extends the experiments to qualitatively evaluate the effectiveness of federated and centralised models along different axes of hate speech using HateCheck (Rottger et al., 2021). HateCheck is a suite of functional tests for hate speech detection models. HateCheck provides an in-depth examination of model performances across different potential challenges for machine learning models trained for hate speech detection. The HateCheck (Rottger et al., 2021) dataset consists of \(29\) tests, \(18\) of which test for distinct expressions of hate while the remaining \(11\) test for non-hateful expressions. The dataset contains \(3,728\) labelled samples, \(69\%\) of which are labelled as 'Hate' while the remaining \(31\%\) are labelled as 'Not Hate'.

\begin{table} \begin{tabular}{l c c c|c} & \multicolumn{3}{c|}{**Centralised**} & \multicolumn{1}{c}{**Federated**} \\ & Precision & Recall & F1 & F1 \\ \hline LogReg & \(69.11\) & \(57.45\) & \(62.20\) & \(69.09\) \\ Bi-LSTM & \(71.43\) & \(66.64\) & \(67.90\) & \(69.15\) \\ FNet & \(71.35\) & \(64.73\) & \(66.58\) & \(71.15\) \\ DistilBERT & \(73.99\) & \(69.01\) & \(69.39\) & \(72.34\) \\ RoBERTa & \(\mathbf{75.45}\) & \(\mathbf{70.58}\) & \(\mathbf{71.03}\) & \(\mathbf{72.61}\) \\ \end{tabular} \end{table}
Table 4: Results for the centralised and best performing FL models. The FL models have been chosen across FedProx and FedOpt based on F1 scores.
\begin{table} \begin{tabular}{c c|c c c|c c c|c c c|c c c|c c c} & & \multicolumn{3}{c|}{**Logistic Regression**} & \multicolumn{3}{c|}{**Bi-LSTM**} & \multicolumn{3}{c|}{**FNet**} & \multicolumn{3}{c|}{**DistilBERT**} & \multicolumn{3}{c}{**RoBERTa**} \\ & & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 \\ \hline \multirow{3}{*}{**c = 10\%**} & **e\(\mathbf{=1}\)** & 70.22 & 53.15 & 58.47 & 74.04 & 58.19 & 61.28 & 72.61 & 59.20 & 62.20 & 73.98 & 60.75 & 63.79 & 74.76 & 64.43 & 66.16 \\ & **e\(\mathbf{=5}\)** & 70.83 & 63.31 & 66.35 & 70.84 & 66.51 & 67.72 & 73.52 & 68.33 & **70.42** & 74.54 & 69.46 & 70.85 & 74.59 & 69.68 & 71.48 \\ & **e\(\mathbf{=20}\)** & 70.18 & 67.41 & 68.67 & 69.17 & 69.25 & 69.10 & 73.10 & 68.02 & 69.73 & 73.28 & 71.06 & 71.94 & 73.11 & **71.48** & **72.07** \\ \hline \multirow{3}{*}{**c = 30\%**} & **e\(\mathbf{=1}\)** & **71.23** & 53.50 & 58.89 & 71.15 & 58.82 & 61.12 & 73.62 & 61.13 & 63.97 & 74.84 & 64.03 & 66.14 & 75.02 & 64.33 & 66.41 \\ & **e\(\mathbf{=5}\)** & 70.82 & 64.44 & 67.01 & 70.65 & 65.90 & 67.27 & 73.35 & 68.30 & 70.36 & 74.82 & 69.04 & 70.68 & 74.41 & 69.98 & 71.81 \\ & **e\(\mathbf{=20}\)** & 70.30 & **68.13** & **69.09** & 63.94 & **69.26** & **69.15** & 72.55 & 68.03 & 69.74 & 73.33 & 71.39 & 72.15 & 73.65 & 70.86 & 71.96 \\ \hline \multirow{3}{*}{**c = 50\%**} & **e\(\mathbf{=1}\)** & 71.11 & 53.12 & 58.58 & **71.59** & 58.71 & 61.13 & **77.33** & **63.19** & 65.61 & **74.88** & 63.58 & 65.85 & 74.42 & 63.57 & 65.87 \\ & **e\(\mathbf{=5}\)** & 70.89 & 64.26 & 66.80 & 70.70 & 66.16 & 67.54 & 72.90 & 68.27 & 70.18 & 74.44 & 69.68 & 70.88 & **74.90** & 69.46 & 70.86 \\ & **e\(\mathbf{=20}\)** & 70.28 & 68.00 & 69.01 & 69.25 & 68.84 & 68.20 & 72.90 & **68.42** & 70.16 & 73.71 & **71.51** & **72.34** & 73.53 & 71.18 & 72.01 \\ \end{tabular} \end{table}
Table 2: Results of FedProx experiments on Comb.

\begin{table} \begin{tabular}{c c|c c c|c c c|c c c|c c c|c c c} & & \multicolumn{3}{c|}{**Logistic Regression**} & \multicolumn{3}{c|}{**Bi-LSTM**} & \multicolumn{3}{c|}{**FNet**} & \multicolumn{3}{c|}{**DistilBERT**} & \multicolumn{3}{c}{**RoBERTa**} \\ & & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 \\ \hline \multirow{3}{*}{**c = 10\%**} & **e\(\mathbf{=1}\)** & 68.29 & 52.48 & 58.46 & **71.80** & 58.34 & 61.70 & 72.64 & 59.78 & 62.49 & 74.51 & 63.87 & 65.03 & 72.02 & 64.21 & 64.57 \\ & **e\(\mathbf{=5}\)** & 68.29 & 59.38 & 63.14 & 70.63 & 63.73 & 66.10 & 72.39 & 69.11 & 70.51 & 74.33 & 69.61 & 70.48 & **75.55** & 69.44 & 70.34 \\ & **e\(\mathbf{=20}\)** & **68.30** & 59.65 & 63.23 & 69.74 & 63.27 & 67.27 & 71.87 & **70.69** & **71.15** & 72.21 & **71.34** & **71.766** & 73.17 & **72.20** & **72.61** \\ \hline \multirow{2}{*}{**c = 30\%**} & **e\(\mathbf{=5}\)** & 66.5 & 60.31 & 63.10 & 69.48 & 62.63 & 65.57 & 72.01 & 69.14 & 70.30 & 72.82 & 69.59 & 70.75 & 74.38 & 71.54 & 71.69 \\ & **e\(\mathbf{=20}\)** & 67.18 & 62.50 & **64.60** & 69.74 & 65.69 & 67.49 & 71.91 & 70.02 & 70.79 & 71.55 \\ \end{tabular} \end{table}
Table 3: Results of FedOpt experiments on Comb.

We evaluate all the models that have been trained for this manuscript, including the model examined in Appendix A. We evaluate our trained models on HateCheck's binary form by mapping all positive classes to "hate" and the negative class to "not-hate".10

Footnote 10: The Comb dataset uses ‘none’ as its negative class, the Binary Dataset (Vidgen et al., 2021) has ‘Not-hate’ as its non-hateful label, and the Multi-class Dataset (Vidgen et al., 2021) has ‘None’ as its non-hateful label.
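The binary mapping is a one-liner; a sketch assuming the standardised Comb class names listed in Table 1:

```python
# All positive (hateful/abusive/toxic) Comb classes map to "hate";
# only the negative class "none" maps to "not-hate".
def to_binary(label: str) -> str:
    return "not-hate" if label == "none" else "hate"

# e.g. mapping a model's multi-class predictions for HateCheck scoring:
binary_preds = [to_binary(p) for p in ["racism", "none", "toxicity"]]
# -> ["hate", "not-hate", "hate"]
```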
Conducting the HateCheck functional tests for the models trained on the Comb dataset, we see (please refer to Table 5) that the federated learning models perform on par with or better than the centralised models on a macro scale. The federated Bi-LSTM and FNet models yield strong improvements of 3-5%. On the other hand, there is a slight performance dip (0.5-1%) for the federated DistilBERT and RoBERTa models. Moreover, through a fine-grained analysis of model performance, we observe that all the models (centralised and federated) achieve acceptable performance on the different tests for derogatory language, pronoun reference, phrasing, spelling variation, and threatening language. However, all models perform poorly on the tests for counter speech, indicating that while the models learn to recognise some forms of hate, they cannot accurately recognise responses to it. Furthermore, we see that RoBERTa performs slightly better than all the other model variants on non-hate group identity and abuse against non-protected targets. RoBERTa and DistilBERT achieve the best performances for slurs. Overall, we find that RoBERTa and DistilBERT consistently perform well across many of the functional tests, which might be due to their having been pre-trained on large amounts of language data. However, the pre-training also induces certain biases which limit the models' performance on profanity. The Bi-LSTMs outperform all the models on non-hateful profanity but simultaneously under-perform on hateful profanity.

## 6 Conclusion

Private and sensitive data can risk being exposed when developing and deploying models for hate speech detection. We therefore examine the use of Federated Learning, a privacy-preserving machine learning paradigm, for the task of hate speech detection. We find that using Federated Learning improves on the performance levels achieved using centralised models, thus affording both privacy and performance. In future work, we intend to examine interpretability and explainability for federated learning to gain a better understanding of the causes of such performance increases.
Table 5: Per-functional-test HateCheck results (pass rates for each of the 29 functional tests) for the centralised and federated variants of the Logistic Regression, Bi-LSTM, FNet, DistilBERT, and RoBERTa models.
## Limitations

While Federated Learning introduces increased privacy in the process of hate speech detection, a real-time system may be vulnerable to attacks that can lead to privacy leakages. For instance, the weights being transferred from the clients to the server may reveal information about the local dataset to an adversary (Bhowmick et al., 2018; Melis et al., 2019). However unintended these leakages may be, they still pose a significant threat and might limit the privacy claim. The Federated Learning models trained in our work rely on \(8\) of the \(9\) datasets used by Fortuna et al. (2021), as we could not gain access to the final dataset. We do not test the biases introduced in Federated Learning models upon combining and normalising these datasets under the schema proposed by Fortuna et al. (2021, 2020). Additionally, the dataset division for the simulation is done under the assumption of I.I.D. conditions, which might not always hold in real-world scenarios.

## Ethical Considerations

Although our methods for hate speech detection provide increased privacy to downstream users of content moderation technologies, i.e. users of online platforms, there are significant risks. First, our proposed technology has dual-use implications, as it can also be applied maliciously, for instance to limit the speech of specific groups. Second, while this work uses publicly available datasets, there is an inherent tension between the public availability of data and privacy risks. Finally, although all model updates occur on local client devices, federated learning is not a silver bullet which addresses issues of systemic violence in content moderation (Thylstrup and Talat, 2020), or issues of privacy. Rather, federated learning can provide an avenue for engaging in meaningful conversations with people about their experiences and needs for content moderation and privacy.
2302.06170
Restoring the saturation response of a PMT using pulse-shape and artificial-neural-networks
The linear response of a photomultiplier tube (PMT) is a required property for photon counting and reconstruction of the neutrino energy. The linearity valid region and the saturation response of PMT were investigated using a linear-alkyl-benzene (LAB)-based liquid scintillator. A correlation was observed between the two different saturation responses, with pulse-shape distortion and pulse-area decrease. The observed pulse-shape provides useful information for the estimation of the linearity region relative to the pulse-area. This correlation-based diagnosis allows an $in$-$situ$ estimation of the linearity range, which was previously challenging. The measured correlation between the two saturation responses was employed to train an artificial-neural-network (ANN) to predict the decrease in pulse-area from the observed pulse-shape. The ANN-predicted pulse-area decrease enables the prediction of the ideal number of photoelectrons irrelevant to the saturation behavior. This pulse-shape-based machine learning technique offers a novel method for restoring the saturation response of PMTs.
Hyun-Gi Lee, Jungsic Park
2023-02-13T08:08:37Z
http://arxiv.org/abs/2302.06170v3
# Restoring the saturation response of a PMT using pulse-shape and artificial-neural-networks

###### Abstract

The linear response of a photomultiplier tube (PMT) is a required property for photon counting and reconstruction of the neutrino energy. The linearity valid region and the saturation response of PMT were investigated using a linear-alkyl-benzene (LAB)-based liquid scintillator. A correlation was observed between the two different saturation responses, with pulse-shape distortion and pulse-area decrease. The observed pulse-shape provides useful information for the estimation of the linearity region relative to the pulse-area. This correlation-based diagnosis allows an _in-situ_ estimation of the linearity range, which was previously challenging. The measured correlation between the two saturation responses was employed to train an artificial-neural-network (ANN) to predict the decrease in pulse-area from the observed pulse-shape. The ANN-predicted pulse-area decrease enables the prediction of the ideal number of photoelectrons irrelevant to the saturation behavior. This pulse-shape-based machine learning technique offers a novel method for restoring the saturation response of PMTs.

1Center for Precision Neutrino Research, Department of Physics, Chonnam National University, Gwangju 61186, Korea 2Department of Physics, Kyungpook National University, Daegu 41566, Korea *E-mail: [email protected] 3Department of Physics & Astronomy, Seoul National University, Seoul 08826, Korea *E-mail: [email protected]

C44, C50, H20

## 1 Introduction

A photomultiplier tube (PMT) amplifies the number of photoelectrons (NPE) released from a photocathode at each dynode and discharges them as a pulse of current. The linearity of the gain between the photocathode and the amount of output signal is a required property for counting the NPE from the observed currents. It is known that saturation behavior appears when detecting events of high light intensity [1; 2] and that the validity of the linearity depends on the environment [3; 4]. Hamamatsu's 10-inch photocathode PMT R7081 is a photon sensor widely used in neutrino experiments [5; 6; 7; 8; 9]. Its linearity has been tested up to \(\sim 300\,(600)\) PE at a gain of \(1\times 10^{7}\,(5\times 10^{6})\) [7; 10], and the characteristics of the saturation behavior have been investigated [11].

Reconstruction of neutrino events is typically performed from the NPE observed by each PMT installed in the detector [12; 13; 14]. When it comes to reconstructing relatively high-energy particles, there are concerns regarding saturation behavior as the number of photoelectrons observed by a PMT (observed NPE) increases. The saturation behavior is a major limitation in the reconstruction of neutrinos produced by kaon decay-at-rest (236 MeV) in the JSNS\({}^{2}\) experiment [7; 15] or of through-going muons (\(\sim\) GeV) in the Double Chooz experiment [16]. In each case, a diagnosis of the linearity range or an understanding of the saturation response is required for the reconstruction of the events occurring in these energy regions. The observed NPE associated with the reconstructed particle energy also depends on the detector size and light-collection method. In very-short-baseline reactor \(\bar{\nu}_{e}\) (\(\sim\) few MeV) experiments, several PMTs were installed in each segmented cell with different baselines, and the observed NPE reached \(\sim 500\) PE/MeV [17].
These experiments test the existence of sterile neutrinos based on the comparison of the observed \(\bar{\nu}_{e}\) energy spectra of the segmented regions with different baselines [18; 19; 20]. Global analyses of available \(\bar{\nu}_{e}\) disappearance data assuming the existence of a sterile neutrino have indicated that the best-fit value of \((\sin^{2}\theta_{14},\,\Delta m^{2}_{41})\) is \((0.009,1.3)\) [21]. An identical response for each detector is required to prove the energy-dependent disappearance of the \(\bar{\nu}_{e}\) with such a small mixing angle [12; 22]. The saturation behavior of PMTs may spoil the identical energy scale response of each cell, which introduces ambiguity into the precision comparison of the prompt energy spectra or reduces the sensitivity of the sterile neutrino search [18; 19].

In this study, we investigate a possible diagnosis and restoration of the saturation response of R7081 up to \(\sim 4000\) PE using artificial-neural-networks (ANNs) and the observed pulse-shape. A correlation between the two different types of saturation responses, pulse-shape distortion and pulse-area decrease, was observed. The obtained correlation was employed to train the ANN to recover the reduced pulse-area from the observed pulse-shape, which contains _in-situ_ information on the pulse-area decrease. Section 2 describes the experimental setup for the saturation response measurements. Section 3 elucidates the measurement of gain and saturation response. Section 4 provides details on the structure and training of the ANN. Section 5 presents the training results and restoration of the saturation response from the saturated pulse-shape. Finally, Section 6 summarizes the study and highlights the major conclusions drawn from the obtained results.

Figure 1: Experimental setup for the PMT response measurement. (left): Test setup used for the linearity test of the 2-inch-PMT-A. The 2-inch-PMT-B receives scintillation light attenuated by a 1/4 neutral-density-filter and monitors the linearity of the 2-inch-PMT-A. (right): Experimental setup for the saturation response measurement of the 10-inch-PMT. The 2-inch-PMT-A (10-inch-PMT) observed the scintillation events without (including) the saturation.

## 2 Experimental setup

The saturation response of R7081 was investigated using the setups shown in Figure 1. Two different types of PMTs, a 10-inch-PMT (Hamamatsu R7081) and a 2-inch-PMT (Hamamatsu H7195), were used to detect the scintillation events. The 2-inch-PMT was designed to detect the scintillation events without the saturation, whereas the 10-inch-PMT detected the same scintillation events in coincidence, including the saturation response. The linearity (saturation response) of the 2-inch-PMT (10-inch-PMT) was measured using the configuration shown in Figure 1 left (right). A mu-metal was used to shield the 10-inch-PMT from the geomagnetic field to improve its collection efficiency [23]. A glass vial contained the liquid-scintillator (LS), which yielded the scintillation light from ionization. The \(\gamma\)-rays emitted from the radioactive sources of \({}^{137}\)Cs (0.66 MeV) and \({}^{35}\)Cl (\(\sim\) 8 MeV) transferred their energy to the electrons of the LS via Compton scattering and further ionized the scintillator. The emitted scintillation photons were reflected in the Teflon cylindrical tube, and the 10-inch or 2-inch-PMT detected these photons in coincidence, with different quantum and collection efficiencies.
The Teflon cylindrical tube was made of Polytetrafluoroethylene (PTFE) and had a length of 7.5 cm with an inner diameter of 2 inches. The gain of each PMT was adjusted to the typical gain provided by the manufacturer (\(\sim\) 3 \(\times\) 10\({}^{6}\) for H7195, \(\sim\) 1 \(\times\) 10\({}^{7}\) for R7081) [5], and the observed pulse-area was divided by the gain to determine the observed NPE.

An LS is a mixture of an organic solvent, a fluor, and a wavelength shifter. Linear-alkyl-benzene (LAB, C\({}_{\mathrm{n}}\)H\({}_{\mathrm{2n+1}}\)-C\({}_{\mathrm{6}}\)H\({}_{\mathrm{5}},\mathrm{n=10}\)-13) is widely used as a solvent for LS owing to its relatively high light yield and environmentally friendly characteristics [25, 26, 27, 28, 29, 30]. In this study, 2,5-diphenyloxazole (PPO, C\({}_{\mathrm{15}}\)H\({}_{\mathrm{11}}\)NO) and 1,4-bis(2-methylstyryl)benzene (bis-MSB, C\({}_{\mathrm{24}}\)H\({}_{\mathrm{22}}\)) were adopted as the fluor and the wavelength shifter, respectively. The LS was synthesized by dissolving 3 g/L of PPO and 30 mg/L of bis-MSB in LAB, which is an aromatic solvent. The primary decay time of the LS is about \(\sim\)4 ns [31].

Figure 2: Circuit diagram of R7081. The total resistance between the dynodes is 12.7 M\(\Omega\), and the distribution ratio of the dynode resistance is (16.8-0.6-3.4-5-3.33-1.67-1-1.2-1.5-2.2-3-2.4) [24].

As described in Section 1, the NPE released from the photocathode is multiplied at each dynode and discharged as current pulses. Figure 2 presents the relevant circuit diagram for the 10-inch-PMT, which contained an attenuator (Kaizuworks KN320), a flash analog-to-digital converter (FADC), and a high-voltage power supply (ORTEC 556). The discharged pulses were attenuated to adjust the dynamic range of the electronics and digitized by a Notice FADC400, which is a 10-bit, 400 MHz/s, \(\pm 1\mathrm{V}_{\mathrm{pp}}\) FADC [32].

## 3 Measurements

The gain of each PMT was measured using an attenuated laser light source with an external trigger. For obtaining the single photoelectron (SPE) charge distribution, a 440 nm laser light (OPG-NIM-440) was attenuated by a neutral-density-filter (0.01%). Figure 3 shows the observed charge distribution from the PMT with the light source. The inset indicates the exponential increase in the SPE gain as a function of the applied voltage. The adjusted gains are \(2.99\times 10^{6}\,(2\)-inch-A), \(2.78\times 10^{6}\,(2\)-inch-B), and \(1.02\times 10^{7}\,(10\)-inch) with applied voltages of 1600, 1600, and 1365 V, respectively.

As described in the previous section, the scintillation events are detected by the coincidence of two PMTs with different efficiencies. The relative difference between the photon detection efficiencies of the two PMTs shown in Figure 1 was obtained by comparing the observed Compton edges [33]. In Figure 4, the red data points illustrate the NPE observed by the two PMTs obtained using a \({}^{137}\mathrm{Cs}\,(0.66\,\mathrm{MeV})\)\(\gamma\)-ray source. A one-dimensional projection of the scatter plot is presented in the inset, and its distribution is fitted with an empirical Error + Exponential formula, \(y=\mathrm{P}_{0}\cdot\left[1-\mathrm{erf}\left((x-\mathrm{P}_{1})/\mathrm{P}_{2}\right)\right]+\mathrm{P}_{3}\cdot\exp\left(-\mathrm{P}_{4}\cdot x\right)\), to find the edge position \(\mathrm{P}_{1}\) [34, 35]. The blue-dashed line in Figure 4 indicates the linear model (Y \(\propto\) X), whose slope is the ratio of the observed edges between the two PMTs.
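For illustration, the edge-fitting procedure just described can be reproduced with a short least-squares fit. The following Python sketch uses `scipy.optimize.curve_fit` on synthetic data; the toy data, initial guesses, and variable names are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, p0, p1, p2, p3, p4):
    """Error + Exponential empirical model: the erf step locates the
    Compton edge at p1; the exponential describes the falling continuum."""
    return p0 * (1.0 - erf((x - p1) / p2)) + p3 * np.exp(-p4 * x)

# Hypothetical 1-D projection of the NPE scatter plot (counts vs. NPE bin).
npe_bins = np.linspace(0, 600, 120)
counts = edge_model(npe_bins, 200, 285, 25, 50, 0.01)
counts += np.random.default_rng(0).normal(0, 5, npe_bins.size)  # toy noise

# Initial guesses (illustrative); p1 is seeded near the visible edge.
p_init = [150, 250, 20, 30, 0.005]
popt, pcov = curve_fit(edge_model, npe_bins, counts, p0=p_init)

edge_position = popt[1]  # fitted Compton edge P1, in NPE
print(f"Fitted Compton edge P1 = {edge_position:.1f} NPE")
```

The ratio of two such fitted edges (e.g., 79.5/284.5 in Figure 4) would then serve as the efficiency correction coefficient described next.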
From the consistency between the data and the model in the upper figure, the ratio of the observed edges between the two PMTs is employed as an efficiency correction coefficient. By multiplying the ratio of the obtained edges with the observed NPE of the 2-inch-PMT-A, the ideal NPE of the 10-inch-PMT, irrelevant to the saturation behavior, is estimated.

Figure 3: Measurement of the gain of each PMT. The charge distributions of the 2-inch-A, 2-inch-B, and 10-inch-PMT are presented on the left, middle, and right, respectively. The insets display the exponential increase in the observed gain with the increasing applied voltage.

The responses of the PMTs at a higher scintillation light intensity were obtained using a \({}^{35}\)Cl (\(\sim 8\) MeV) \(\gamma\)-ray source [36]. In Figure 4, the black data points show a comparison between the observed NPE of the two PMTs at a larger NPE. Figure 4a compares the NPE observed by the 2-inch-PMT-A and 2-inch-PMT-B from the same scintillation events. To test the linearity of the 2-inch-PMT-A, a 1/4 neutral-density-filter was installed on the 2-inch-PMT-B. No saturation behavior is evident up to \(\sim 4000\) PE observed by the 2-inch-PMT-A. Figure 4b compares the response of the 2-inch and 10-inch-PMTs. As the NPE increases, the saturation behavior of the 10-inch-PMT becomes distinct.

In the operation of a PMT with a high-intensity light pulse, the electron trajectory between the dynodes is distorted by the space charge effect caused by the repulsive force of the electron cloud [1, 2], and the saturation behavior of the PMT appears as a distortion of the pulse-shape and a simultaneous decrease in the pulse-area [3, 11]. Both saturation responses were observed in the 10-inch-PMT and are presented in Figure 5. The left side of Figure 5 shows the accumulated pulse-shape of the scintillation events with respect to the observed NPE of each PMT. The measured pulse-shape contains a rise, a peak, and a tail [37]. Figure 5a shows that the distortion of the pulse-shape of the 10-inch-PMT appears as a decrease in the peak.

Figure 4: Obtained PMT response at the typical gain using the setup shown in Figure 1. The responses of the 2-inch-PMT-A and 2-inch-PMT-B (10-inch-PMT) are drawn in Figure 4a (b). The blue dashed-line indicates the linear model, and its slope is determined from the ratio of the observed Compton edges (79.5/284.5 (222.5/286.8) [NPE/NPE]).

The distortion starts around \(\sim 300\) PE. On the other hand, in Figure 5b, a similar pulse-shape is observed for the 2-inch-PMT irrespective of the observed NPE. Figure 5c presents the pulse-area decrease response of the 10-inch-PMT. A deviation from the ideal response of more than 1% appears at \(\sim 300\) PE, and this deviation increases as a function of the NPE of the 10-inch-PMT. The response is obtained by averaging the ratio between the ideal and observed NPE according to the NPE of the 10-inch-PMT. An increase in the pulse-shape distortion and the pulse-area decrease was observed as a function of the observed NPE of the 10-inch-PMT. Note that the pulse-shape distortion of the 10-inch-PMT is observed using the 10-inch-PMT information only, without comparing its response to that of the 2-inch-PMT. The correlation observed between the pulse-shape distortion and the pulse-area decrease suggests the possibility of diagnosing the linearity range or predicting the decrease in the pulse-area from the observed pulse-shape.
## 4 Training of artificial-neural-network (ANN)

ANNs are widely used machine learning algorithms that can extract correlated features from a higher-dimensional input. A typical ANN consists of repeated layers of perceptrons and a non-linear activation function. The output of the \(i\)-th perceptron in the \(N\)-th layer is fed forward to the \(j\)-th perceptron in the \((N+1)\)-th layer with a connection strength of \(w_{ij}\). Higher-dimension features are extracted as the layers are repeated, and the extracted features are fed forward to the next layer. The ANN has numerous free parameters for the unfixed \(w_{ij}\), and multiple sets of data-label pairs are required to train these parameters. Further details on ANNs are provided in Ref. [38].

Figure 5: Observed pulse-shape and pulse-area saturation response. (left): Observed pulse-shape distortion (linearity) of the 10-inch-PMT (2-inch-PMT-A). (right): Observed decrease in pulse-area shown by the 10-inch-PMT. The ideal NPE of the 10-inch-PMT, which is irrelevant to the saturation behavior, is obtained based on the NPE of the 2-inch-PMT-A. The responses were measured with respect to the observed NPE for each PMT.

The ANN was trained using the observed correlation between the pulse-shape distortion and the pulse-area decrease. Figure 6a shows the structure of the ANN used for predicting the pulse-area decrease from the observed pulse-shape. The deep-learning framework PyTorch [39] was adopted to construct the ANN model. The input is provided to 60 nodes, where each node represents a single 2.5 ns bin of the area-normalized pulse-shape. The area-normalized pulse-shape is then fed forward to the next layer through the exponential linear unit (ELU), which is a non-linear activation function [40]; this is repeated several times to extract the non-linear features corresponding to the pulse-area decrease. The extracted features are transferred to the linear layers to predict the decreased ratio of the observed pulse-area for restoration.

As demonstrated in Figure 5, the saturation response appears as a distortion of the pulse-shape and a simultaneous decrease in the pulse-area. The free parameters in the ANN are trained using a number of input-label pairs obtained from the scintillation events. The input is the pulse-shape distortion response, given by the area-normalized pulse-shape with 60 bins, while the label is the inverse of the pulse-area decrease response, defined by Ideal NPE/Observed NPE or Q/(Q \(-\)\(\Delta\)Q). Here, the Ideal NPE is denoted as Q, the ideal charge from the pulse-area irrelevant to the saturation response. On the other hand, the Observed NPE is denoted as Q\(-\)\(\Delta\)Q, where \(\Delta\)Q is the charge removed from the pulse-area by the saturation response. To train, validate, and test the ANN model, \(2.0\times 10^{5}\) input-label pairs with NPE larger than 300 were selected for the restoration. For each event, the normalized pulse-shape of the 10-inch-PMT was used as the input. The corresponding label was determined from the observed pulse-area decrease response in Figure 5c and the observed NPE of the 10-inch-PMT. The inverse of the pulse-area decrease ratio was adopted as the label (label = Ideal NPE/Observed NPE or Q/(Q \(-\)\(\Delta\)Q)) and used as the pulse-area restoration coefficient for the decreased pulse-area. The splitting ratio of the training, validation and test datasets was 2:1:2.
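Putting the above together, a minimal PyTorch sketch of such a network and training loop is shown below. The hidden-layer widths and the synthetic data are illustrative assumptions; the paper specifies only the 60-bin input, ELU activations, linear output layers, and (as described below) Adam with an MSE loss, an initial learning rate of \(5\times 10^{-4}\) decayed by \(0.975\) per epoch, and 130 training epochs.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class RestorationANN(nn.Module):
    """Maps a 60-bin area-normalized pulse-shape to the restoration
    coefficient Q/(Q - dQ). Hidden widths are assumptions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(60, 128), nn.ELU(),
            nn.Linear(128, 128), nn.ELU(),
            nn.Linear(128, 64), nn.ELU(),
            nn.Linear(64, 1),  # predicted restoration coefficient
        )

    def forward(self, pulse_shape):
        return self.net(pulse_shape)

# Toy stand-in for the 2.0e5 (pulse-shape, label) pairs described above.
shapes = torch.rand(1024, 60)
labels = 1.0 + 0.3 * torch.rand(1024, 1)  # coefficients >= 1 by construction
train_loader = DataLoader(TensorDataset(shapes, labels),
                          batch_size=64, shuffle=True)

model = RestorationANN()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.975)
loss_fn = nn.MSELoss()

for epoch in range(130):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # learning rate ~ 5e-4 * 0.975**epoch
```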
Since the correction should not be performed in the region where the saturation response is absent, events with NPE smaller than 300 were excluded from the restoration.

Figure 6: Structure and training of the ANN. (left): Schematic diagram of the ANN used to predict the pulse-area saturation from the observed pulse-shape. (right): Decrease of the MSE loss of the training and validation sets over the epochs. The decrease of the loss saturates at an epoch of 130.

Figure 6b depicts the training of the ANN over the epochs. The Adam optimizer [41] and the Mean Squared Error (MSE) loss [42] were utilized during the training process. The initial learning rate (\(5\times 10^{-4}\)) was gradually decreased at a rate of \(0.975^{\text{epoch}}\) by using a learning rate scheduler. During each epoch, the connection strengths \(w_{ij}\) between the perceptrons were optimized, leading to a decrease in the loss. The ANN was trained for a total of \(130\,\text{epochs}\).

## 5 Results

The trained ANN was tested using the test dataset. The restoration coefficient corresponding to the decreased pulse-area was predicted by the trained ANN from the observed pulse-shape. A clear correlation between the ANN prediction and the label is shown in Figure 7a. The inset depicts the ratio between the prediction and the label. Using the trained ANN, the pulse-shapes are classified according to the restoration coefficient, as presented in Figure 7b. The peak of the area-normalized pulse height decreases with the increasing restoration coefficient.

The ANN-predicted restoration coefficients obtained from the observed pulse-shapes were applied to the test dataset. Figure 8 compares the responses of the 10-inch-PMT with and without restoration. A distinct saturation behavior appears as the NPE increases without restoration. The ANN-predicted restoration coefficient from the observed pulse-shape is multiplied with the NPE observed by the 10-inch-PMT to restore the decreased pulse-area. The restored response is compared with a linear response model (\(\text{Y}=\text{AX}\)) with the ideal slope (\(\text{A}=1\)). The insets of Figure 8 provide a comparison of the 10-inch-PMT response with (black points) and without (red points) restoration. Without restoration, a deviation from the ideal response appears at NPE larger than 300. The biased response was restored by the ANN from the prediction obtained by the pulse-shape classification. Due to the finite classification power, the root-mean-square (RMS) of the ideal/observed NPE ratio increased after the restoration. The RMS difference between the red and black points decreased as the NPE increases.

Figure 7: Validation of the trained ANN using the test dataset. The trained ANN predicts the restoration coefficient for the decreased pulse-area from each observed pulse-shape. (left): Comparison of the labels and ANN predictions. (right): Pulse-shapes classified by the trained ANN for each prediction.

## 6 Summary

Neutrino events are typically reconstructed from the NPE observed by the PMTs mounted in the detector. Understanding the saturation behavior or a solid diagnosis of the linearity range of the PMT provides useful information for the correct reconstruction of the events. Previous studies primarily focused on the PMT saturation behavior based on the absolute pulse-height or absolute pulse-area.
In this study, the saturation behavior of a PMT was investigated by focusing on the correlation between the two saturation responses: distortion of the pulse-shape and decrease in the pulse-area. Comparing the observed pulse-shapes with the observed NPE provided useful information for an _in-situ_ diagnosis of the pulse-area decrease. The observed correlation between the two saturation responses was employed to train the ANN. The trained ANN predicted the decrease ratio of the pulse-area from the observed pulse-shape. The predicted restoration coefficient was applied to the observed NPE to restore the ideal case irrelevant to the pulse-area decrease. The restored linearity facilitates the correct reconstruction of events in the saturation region for PMT-based detectors.

This work was supported by grants from the National Research Foundation (NRF) of the Korean Government (2022R1A2C1006069, 2022R1A5A1030700, 2022R11A1A01064311, 2018R1D1A1B07045812). We are very grateful for the support provided by the Center for Precision Neutrino Research at the Chonnam National University.
2303.11725
Online Learning of Wheel Odometry Correction for Mobile Robots with Attention-based Neural Network
Modern robotic platforms need a reliable localization system to operate daily beside humans. Simple pose estimation algorithms based on filtered wheel and inertial odometry often fail in the presence of abrupt kinematic changes and wheel slips. Moreover, despite the recent success of visual odometry, service and assistive robotic tasks often present challenging environmental conditions where visual-based solutions fail due to poor lighting or repetitive feature patterns. In this work, we propose an innovative online learning approach for wheel odometry correction, paving the way for a robust multi-source localization system. An efficient attention-based neural network architecture has been studied to combine precise performances with real-time inference. The proposed solution shows remarkable results compared to a standard neural network and filter-based odometry correction algorithms. Nonetheless, the online learning paradigm avoids the time-consuming data collection procedure and can be adopted on a generic robotic platform on-the-fly.
Alessandro Navone, Mauro Martini, Simone Angarano, Marcello Chiaberge
2023-03-21T10:30:31Z
http://arxiv.org/abs/2303.11725v1
# Online Learning of Wheel Odometry Correction for Mobile Robots with Attention-based Neural Network

###### Abstract

Modern robotic platforms need a reliable localization system to operate daily beside humans. Simple pose estimation algorithms based on filtered wheel and inertial odometry often fail in the presence of abrupt kinematic changes and wheel slips. Moreover, despite the recent success of visual odometry, service and assistive robotic tasks often present challenging environmental conditions where visual-based solutions fail due to poor lighting or repetitive feature patterns. In this work, we propose an innovative online learning approach for wheel odometry correction, paving the way for a robust multi-source localization system. An efficient attention-based neural network architecture has been studied to combine precise performances with real-time inference. The proposed solution shows remarkable results compared to a standard neural network and filter-based odometry correction algorithms. Nonetheless, the online learning paradigm avoids the time-consuming data collection procedure and can be adopted on a generic robotic platform on-the-fly.

Mobile Robots Odometry Correction Deep Learning Robot Localization

## 1 Introduction

Wheel odometry (WO) and inertial odometry (IO) are the simplest forms of self-localization for wheeled mobile robots [1]. However, extended trajectories without re-localization, together with abrupt kinematic and ground changes, drastically reduce the reliability of wheel encoders as the unique odometric source. For this reason, visual odometry (VO) has recently emerged as a more general solution for robot localization [2], relying only on the visual features extracted from images. Nonetheless, service and assistive robotics platforms may often encounter working conditions that forbid the usage of visual data. Concrete scenarios are often related to the lack of light in indoor environments where GPS signals are denied, as occurs in tunnel exploration [3, 4] or in assistive nightly routines [5, 6, 7]. Repetitive feature patterns in the scene can also hinder the precision of VO algorithms, a condition that always exists while navigating through empty corridors [8] or row-based crops [9]. Therefore, an alternative or secondary localization system besides VO can provide a substantial advantage for the robustness of mobile robot navigation. Wheel-inertial odometry is still widely considered a simple but effective option for localization in naive indoor scenarios. However, improving its precision over time would extend its usage to more complex scenarios. Previous works tackle the problem with filters or simple neural networks, as discussed in Section 1.1. Learning-based solutions demonstrably mitigate the odometric error, at the cost of a time-consuming data collection and labeling process. Recently, online learning has emerged as a competitive paradigm to efficiently train neural networks on-the-fly, avoiding dataset collection [10]. In this context, this work aims at paving the way for a learning-based system directly integrated into the robot, enabling a seamless transition between multiple odometry sources to increase the reliability of mobile robot localization in disparate conditions. Figure 1 summarizes the proposed methodology schematically.

### Related Works

Several studies have explored using machine learning techniques to estimate wheel odometry (WO) in mobile robotics applications.
Approaches include different feed-forward neural networks (FFNN) [11], of which, in some cases, the output has been fused with other sensor data [12], and long short-term memory (LSTM) NNs, which have been applied to car datasets [13]. These approaches show a promising improvement in WO accuracy, which is crucial for mobile robotics applications. Many works have focused on using Inertial Measurement Unit (IMU) data in mobile robots or other applications, such as person tracking using IMU data from cell phones [14]. One system was improved by implementing zero-velocity detection with a Gated Recurrent Unit (GRU) neural network [15]. Another study used an Extended Kalman Filter (EKF) to estimate positions and velocities in real-time in a computationally lightweight manner [16]. Additionally, a custom deep Recurrent Neural Network (RNN) model, IONet, was used to estimate changes in position and orientation in independent time windows [17]. Some studies used a Kalman Filter (KF) to eliminate noise from the accelerometer, gyroscope, and magnetometer sensor signals and integrate the filtered signal to reconstruct the trajectory [18]. Another KF approach has been combined with a neural network to estimate the noise parameters of the filter [19]. Several neural network architectures have been proposed to predict or correct IO odometry over time. For example, a three-channel LSTM was fed with IMU measurements to output variations in position and orientation and tested on a vehicle dataset [20]. Another LSTM-based architecture mimics a kinematic model, predicting orientation and velocity given IMU input data. Studies have investigated the role of hyper-parameters in IO estimation [21].

Figure 1: Diagram of the proposed approach. Red blocks and arrows refer to the online training phase, blue ones to the model inference stage, and yellow ones to the odometric input data.

Sensor fusion of wheel encoder and IMU data is a common method for obtaining a robust solution. One approach involves fusing the data with a Kalman Filter, which can assign a weight to each input based on its accuracy [22]. A fully connected layer with a convolutional layer has been employed for estimating changes in position and orientation in a 2D space over time in an Ackermann vehicle, along with a data enhancement technique to improve learning efficiency [23]. Additionally, a GRU RNN-based method has been proposed to compensate for drift in mecanum wheel mobile robots, with an in-depth fine-tuning of hyper-parameters to improve performance [24].

### Contributions

In this work, we tackle the problem of improving wheel-inertial odometry by learning how to correct it online with an efficient artificial neural network. At this first stage, the study has been conceived to provide the robot with a more reliable, secondary odometric source in standard indoor environments where the working conditions for VO can temporarily vanish, as in the case of robots for domestic night surveillance or assistance. The main contributions of this work can be summarized as:

* A novel online learning approach for wheel-inertial odometry correction, which avoids complex trajectory data collection and can be directly included in a ROS 2 system;
* An efficient model architecture to preserve both easy online training and fast inference performance.

Nonetheless, a validation dataset of sensor data has been collected with the robot following different trajectories to conduct extensive experiments and comparisons with state-of-the-art offline methods.
## 2 Methodology

### Problem Formulation

The position of a robot at time \(t\), referred to the starting reference frame \(\textbf{R}_{0}\), can be calculated by accumulating its increments during time segments \(\delta t\). The time stamp \(n\) refers to the generic time instant \(t=n\delta t\). The state of the robot \(\textbf{x}_{n}\) is defined by the position and orientation of the robot as:

\[\textbf{x}_{n}=(x_{n},y_{n},\theta_{n})^{T}, \tag{1}\]

where \((x_{n},y_{n})\) is the robot's position in the 2D space and \(\theta_{n}\) is its heading angle. Given the state, it is possible to parametrize the roto-translation matrix \(\textbf{T}_{0}^{m}\) from the robot's frame \(\textbf{R}_{m}\) to the global frame \(\textbf{R}_{0}\). Its first two columns represent the axes of the robot frame, and the last one is its position with respect to the origin. The robot employed to develop this work is equipped with an IMU, which includes a gyroscope and an accelerometer, and two wheel encoders. Therefore, \(\textbf{u}_{n}\) is defined as the measurement array referred to instant \(n\), i.e.:

\[\textbf{u}_{n}=\left(v_{l},v_{r},\ddot{x},\ddot{y},\ddot{z},\dot{\theta}_{x},\dot{\theta}_{y},\dot{\theta}_{z}\right)^{T}, \tag{2}\]

where \((v_{l},v_{r})\) are the wheels' velocities, \((\ddot{x},\ddot{y},\ddot{z})\) are the linear accelerations and \((\dot{\theta}_{x},\dot{\theta}_{y},\dot{\theta}_{z})\) are the angular velocities.

Figure 2: Architecture of the proposed model. The batch dimension is omitted for better clarity.

The input \(\mathbf{U}_{n}\) to the proposed model consists of the concatenation of the last \(N\) samples of the measurements, \(\mathbf{U}_{n}=(\mathbf{u}_{(n)},\mathbf{u}_{(n-1)},\dots,\mathbf{u}_{(n-N)})^{T}\). At each time sample, the state is updated as a function of the measurements \(f(\mathbf{U}_{n})\): first, the change of the pose \(\delta\hat{\mathbf{x}}_{n}=f(\mathbf{U}_{n})\) of the robot is estimated, relative to the previous pose \(\hat{\mathbf{x}}_{n-1}\). Then, the updated state is calculated, given the transformation matrix obtained before, as:

\[\hat{\mathbf{x}}_{n}=\hat{\mathbf{x}}_{n-1}\boxplus f(\mathbf{U}_{n})=\mathbf{T}_{0(n-1)}^{m}\delta\hat{\mathbf{x}}_{n}, \tag{3}\]

where the operator \(\boxplus\) symbolizes the state update.

### Neural Network Architecture

As formalized in the previous section, the prediction of \(\hat{\mathbf{x}}_{n}\in\mathbb{R}^{3}\) from \(\mathbf{U}_{n}\in\mathbb{R}^{T\times C}\) is framed as a regression problem. The architecture we propose to solve this task is inspired by REMNet [25, 26], though it uses 2D convolutions instead of the original 1D convolutional blocks (Figure 2). This modification aims at exploiting temporal correlations without compressing the channel dimension throughout the backbone. In particular, we keep the channel dimension \(C\) separated from the filter dimension \(F\). In this way, the first convolutional step with kernel \((K,1)\) and \(F\) filters outputs a low-level feature map \(f_{1}\in\mathbb{R}^{T\times C\times F}\). Then, a stack of \(N\) Residual Reduction Modules (RRM) extracts high-level features while reducing the temporal dimension \(T\). Each RRM consists of a residual (\(Res\)) block followed by a reduction (\(Red\)) module:

\[RRM(x)=Red(Res(x)) \tag{4}\]

The \(Res\) block comprises a 2D convolution with kernel \(K\times 1\) followed by a Squeeze-and-Excitation (SE) block [27] on the residual branch.
The SE block applies attention to the channel dimension of the features with a scaling factor learned from the features themselves. First, the block applies average pooling to dimensions \(T\) and \(C\). Then, it reduces the channel dimensionality with a bottleneck dense layer of \(F/R\) units. Finally, another dense layer restores the original dimension and outputs the attention weights. After multiplying the attention mask with the features, the result is used as a residual and added to the input of the residual block. The \(Red\) block halves the temporal dimension by summing two parallel convolutional branches with a stride of 2. The layers have kernels \(K\times 1\) and \(1\times 1\), respectively, to extract features at different scales. After \(N\) RRM blocks, we obtain the feature tensor \(f\in\mathbb{R}^{T/2^{N}\times C\times F}\), which is flattened to predict the output through the last dense layer. We also include a dropout layer to discourage overfitting.

### Training Procedure

The goal of this work consists of learning the positioning error of the robot using wheel odometry. Nonetheless, it is important to remark that, nowadays, visual-inertial odometry (VIO) is a standard approach on robotic platforms. This work does not aim to propose a more precise localization system but to learn wheel-inertial odometry as a second reliable localization algorithm available whenever visual approaches fail. We exploit a basic VIO system on the robot only for the training process, since it enables a competitive online learning paradigm to train the model directly on the robot. Batch learning, the most used training paradigm, requires all the data to be available in advance. Since the data are collected over time, the proposed method consists in training the network continuously whenever a batch of \(N\) samples is available. This approach has been tested extensively in [28], demonstrating a negligible loss in accuracy compared to the batch-learning paradigm. The proposed model's training consists of two main steps, which are repeated as long as new data are available. First, a batch of \(N\) elements is collected, consisting of the network inputs \(\mathbf{U}_{n}\) and the expected outputs \(\delta\mathbf{x}\). Then, an update step is carried out using an SGD-based optimizer with a Mean Absolute Error loss function, which does not overemphasize outliers or excessive noise in the training data.

## 3 Tests and Results

In this section, the proposed approach is tested through extensive experimental evaluations. The model presented in Section 2.2 has been trained with an incremental learning method and a classical batch training approach. Results obtained with a simple FFNN model and a standard localization solution based on an EKF are also discussed in the comparison. To this end, both training processes have been accomplished on the same dataset, and all the tests have been executed on the same test set.

### Experimental Setting

The dataset used for the experiments was collected in a generic indoor environment. The employed robotic platform was a Clearpath Jackal1, a skid-steering four-wheeled robot designed for indoor and outdoor applications. All the code was developed in a ROS 2 framework and tested on Ubuntu 20.04 LTS using the ROS 2 Foxy distro.
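As a concrete illustration of this online procedure, below is a minimal PyTorch-style sketch of one incremental update step, assuming a buffer fed by the ROS 2 sensor topics; the stand-in model, function names, and data are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

BATCH_SIZE = 32          # mini-batch size used in the paper
LEARNING_RATE = 7e-5     # online-learning rate reported in Section 3.1

model = nn.Sequential(   # stand-in for the attention-based network
    nn.Linear(10 * 8, 64), nn.ELU(), nn.Linear(64, 3)
)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_fn = nn.L1Loss()    # Mean Absolute Error, as in the paper

buffer_x, buffer_y = [], []

def on_new_sample(u_window, delta_pose):
    """Called for each new (U_n, delta_x) pair from the sensor stream.
    u_window: (T=10, C=8) measurement window; delta_pose: (3,) VIO target."""
    buffer_x.append(torch.as_tensor(u_window, dtype=torch.float32).flatten())
    buffer_y.append(torch.as_tensor(delta_pose, dtype=torch.float32))
    if len(buffer_x) == BATCH_SIZE:      # train as soon as a batch is full
        x = torch.stack(buffer_x)
        y = torch.stack(buffer_y)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        buffer_x.clear()
        buffer_y.clear()

# Example: feed one synthetic sample (in practice, from ROS 2 topics).
on_new_sample(torch.rand(10, 8), torch.rand(3))
```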
Footnote 1: [https://clearpathrobotics.com/jackal-small-unmanned-ground-vehicle/](https://clearpathrobotics.com/jackal-small-unmanned-ground-vehicle/)

Since an indoor environment was considered, the linear velocity of the robot was limited to \(0.4m/s\) and its angular velocity to \(1rad/s\). The data from the embedded IMU and wheel encoders were used as inputs to the model. Under these assumptions, we used the robot pose provided by an Intel Realsense T265 tracking camera as ground truth. As the testing environment is a single room, the precision of the tracking camera is guaranteed to provide a drift of less than \(1\%\) in a closed loop path2. All the data have been sampled at \(1/\delta t=25Hz\).

Footnote 2: [https://www.intelrealsense.com/tracking-camera-t265/](https://www.intelrealsense.com/tracking-camera-t265/)

The data were collected by teleoperating the robot around the room and recording the sensor measurements. For the training dataset, the robot has been moved along random trajectories. For the test dataset, critical situations in which the skid-steer robot's odometry is known to lose the most accuracy were reproduced, such as tight curves, hard brakings, strong accelerations, and turns around itself. The obtained training dataset consists of 156579 samples; 80% have been used for training and 20% for validation and hyperparameter tuning. The test dataset consists of 61456 samples.

Figure 4: Absolute error of position and orientation of different methods during the test performed on a subset of infinite-shaped trajectories. The considered subset is the same as Figure 3.

Figure 3: Infinite-shaped trajectories estimated by different methods. The data are collected during a total navigation time of about \(60s\).

The model hyperparameters have been tuned by performing a grid search using a batch learning process, considering a trade-off between accuracy and efficiency. In the identified model, we adopted \(F=64\) filters, \(N=2\) reduction modules, and a ratio factor \(R=4\). Kernel size \(K=3\) is used for all the convolutional layers, including the backbone. The input dimensions were fixed to \(T=10\) and \(C=8\). The former corresponds to the number of temporal steps, where a higher value has been observed to be superfluous, while a lower value leads to performance degradation. The latter value, \(C\), corresponds to the number of input features, i.e., sensor measurements, as described in Section 2. We adopted Adam [29] as the optimizer for the training. The exponential decay rate for the first-moment estimates is fixed to \(\beta_{1}=0.9\), and the decay rate for the second-moment estimates is fixed to \(\beta_{2}=0.999\). The epsilon factor for numerical stability is fixed to \(\epsilon=10^{-8}\). The optimal learning rate \(\eta\) was experimentally determined as \(1\times 10^{-4}\) for batch learning. Conversely, the incremental learning process showed that a value of \(\eta=7\times 10^{-5}\) avoided overfitting, since the data were not shuffled. In both learning processes, a batch size of \(B=32\) was used.

### Evaluation Metrics

To evaluate the performance of the proposed model, two different metrics were used [30]:

* _Mean Absolute Trajectory Error (m-ATE)_, which averages the magnitude of the error evaluated between the estimated position and orientation of the robot and its ground truth pose in the same frame. Sometimes, it can lack generalization due to possible error compensations along the trajectory.
* _Segment Error (SE)_, which averages the errors along all the possible segments of a given length \(s\), considering multiple starting points. It is considerably less sensitive to local degradations or compensations than the previous metric.

### Quantitative Results

The proposed method was tested by training the neural network from scratch using the stream of sensor data in real-time, provided by the ROS 2 topics. The data were first collected in mini-batches of 32 elements. After completion, backpropagation is performed on the model to update all the weights. The data stream is recorded to provide the training dataset described in Section 3.1, which was later used to evaluate the other methods. The results of the method are compared to different state-of-the-art solutions, which are i) the same network trained with traditional batch learning, ii) a feedforward neural network, as in [23], and iii) an Extended Kalman Filter based method, which can be considered one of the most common wheel-inertial odometry estimators. All the models were evaluated offline using a test set composed of 19 sequences of various lengths, between \(60s\) and \(280s\), which aim to recreate different critical situations for wheel-inertial odometry. In particular, the sequences can be separated into three main trajectory types:

* _Type A_ comprises round trajectories, which do not allow fortunate error compensation over time. Therefore, they may lead to fast degradation of the estimated pose, and especially of the orientation.
* _Type B_ comprises an infinite-shaped trajectory. This test allows partial error compensations, but possible unbalanced orientation predictions may lead to fast degradation of the position accuracy. A partial sequence of type B trajectories is shown in Figure 3.
* _Type C_ comprises irregular trajectories, including hard brakings and accelerations, and aims to test the different methods' overall performance.

Figure 5: Histograms of the SE error in position and orientation in section B of the test set.

Table 1 presents the numeric results of the different tests, considering the proposed model (Online Learning) and the selected benchmarks. All the learning-based approaches show a significant error reduction compared to the EKF results, which can be considered a baseline for improvement. Considering both the neural network architectures trained offline, the proposed convolutional one achieves an average improvement of 73.5% on the position \(m-ATE_{(x,y)}\) and 85.3% on the orientation \(m-ATE_{\theta}\). In comparison, the FFNN model achieves 49.0% and 48.4%, respectively. The Segment Error improves in both cases: the proposed model improves by 42.6% on the position \(SE_{(x,y)}\) and 75.8% on the orientation \(SE_{\theta}\). The FFNN architecture improves by 39.3% and 60.3%, respectively. Compared with the EKF baseline, the online learning model shows almost the same improvement as batch learning. The improvement on the m-ATE equals 60.4% on the position and 79.2% on the orientation. The Segment Error also appears to be lower, showing an improvement of 19.1% on position and 60.3% on orientation. The observed difference between the two training paradigms is an acceptable trade-off between the slight loss of accuracy of the online training compared to the batch training and the possibility of training the model without a pre-collected dataset. Figure 5 reports the histograms of the distribution of the Segment Errors, in position and orientation, respectively, for test scenario B.
It emerges that learning-based methods achieve, on average, a smaller error than the EKF method. Figure 4 shows the error trend over time related to the trajectory of Figure 3. It is evident that the batch-trained and online-trained models perform similarly to the other methods. Training on an external PC with 32-GB RAM and a \(12^{th}\)-generation Intel Core i7 @ 4.7 GHz took an average time of 25 ms per batch, considering 100 measurements.

## 4 Conclusions

This paper introduces an online learning approach and an efficient neural network architecture for wheel-inertial odometry estimation in mobile robots from raw sensor data. The online training paradigm does not need a pre-collected dataset and allows fine-tuning the performance of the model over time, adapting to environmental changes. Moreover, the proposed model's reduced dimension allows training and fast inference on a low-resource robotic platform on-the-fly. Future works may include developing a collaborative system based on integrating multiple odometry sources with a seamless transition to constantly guarantee accurate localization data to the robot.

## 5 Acknowledgement

This work has been developed with the contribution of the Politecnico di Torino Interdepartmental Centre for Service Robotics PIC4SeR3.

Footnote 3: www.pic4ser.polito.it
2307.12903
Towards Bridging the FL Performance-Explainability Trade-Off: A Trustworthy 6G RAN Slicing Use-Case
In the context of sixth-generation (6G) networks, where diverse network slices coexist, the adoption of AI-driven zero-touch management and orchestration (MANO) becomes crucial. However, ensuring the trustworthiness of AI black-boxes in real deployments is challenging. Explainable AI (XAI) tools can play a vital role in establishing transparency among the stakeholders in the slicing ecosystem. But there is a trade-off between AI performance and explainability, posing a dilemma for trustworthy 6G network slicing because the stakeholders require both highly performing AI models for efficient resource allocation and explainable decision-making to ensure fairness, accountability, and compliance. To balance this trade-off and inspired by the closed loop automation and XAI methodologies, this paper presents a novel explanation-guided in-hoc federated learning (FL) approach where a constrained resource allocation model and an explainer exchange -- in a closed loop (CL) fashion -- soft attributions of the features as well as inference predictions to achieve a transparent 6G network slicing resource management in a RAN-Edge setup under non-independent identically distributed (non-IID) datasets. In particular, we quantitatively validate the faithfulness of the explanations via the so-called attribution-based confidence metric that is included as a constraint to guide the overall training process in the run-time FL optimization task. In this respect, Integrated-Gradient (IG) as well as Input $\times$ Gradient and SHAP are used to generate the attributions for our proposed in-hoc scheme, wherefore simulation results under different methods confirm its success in tackling the performance-explainability trade-off and its superiority over the unconstrained Integrated-Gradient post-hoc FL baseline.
Swastika Roy, Hatim Chergui, Christos Verikoukis
2023-07-24T15:51:06Z
http://arxiv.org/abs/2307.12903v2
# Towards Bridging the FL Performance-Explainability Trade-Off: A Trustworthy 6G RAN Slicing Use-Case

###### Abstract

In the context of sixth-generation (6G) networks, where diverse network slices coexist, the adoption of AI-driven zero-touch management and orchestration (MANO) becomes crucial. However, ensuring the trustworthiness of AI black-boxes in real deployments is challenging. Explainable AI (XAI) tools can play a vital role in establishing transparency among the stakeholders in the slicing ecosystem. But there is a trade-off between AI performance and explainability, posing a dilemma for trustworthy 6G network slicing because the stakeholders require both highly performing AI models for efficient resource allocation and explainable decision-making to ensure fairness, accountability, and compliance. To balance this trade-off and inspired by the closed loop automation and XAI methodologies, this paper presents a novel explanation-guided _in-hoc_ federated learning (FL) approach where a constrained resource allocation model and an _explainer_ exchange--in a closed loop (CL) fashion--soft attributions of the features as well as inference predictions to achieve a transparent 6G network slicing resource management in a RAN-Edge setup under non-independent identically distributed (non-IID) datasets. In particular, we quantitatively validate the faithfulness of the explanations via the so-called attribution-based _confidence metric_ that is included as a constraint to guide the overall training process in the run-time FL optimization task. In this respect, Integrated-Gradient (IG) as well as Input \(\times\) Gradient and SHAP are used to generate the attributions for our proposed in-hoc scheme, wherefore simulation results under different methods confirm its success in tackling the performance-explainability trade-off and its superiority over the unconstrained Integrated-Gradient _post-hoc_ FL baseline.

6G, closed-loop, federated learning, game theory, in-hoc, post-hoc, proxy-Lagrangian, resource allocation, XAI, ZSM

## I Introduction

6G networks aim to support numerous simultaneous and diverse slices for various vertical use cases, resulting in increased complexity. Extensive research efforts have been dedicated to AI-based autonomous management and orchestration within the zero-touch network and service management (ZSM) framework, standardized by the ETSI, which aims to effectively handle the end-to-end network and services to accommodate the various services offered by 6G networks [1, 2]. On the other hand, FL is a decentralized approach to machine learning that allows for the training of models on distributed data without the need to transfer the data to a central server. The ZSM framework can incorporate FL as a key component for managing zero-touch distributed network slices while preserving the confidentiality of sensitive data [3]. Furthermore, for the widespread adoption of AI in telecommunication networks, addressing the lack of transparency, trust, and explainability in black-box AI models becomes crucial [4]. The deployment of network automation requires a thorough understanding of AI models' decision-making and behavior. This drives the search for AI approaches that offer understandable and explainable outcomes. In this regard, explainable AI (XAI) methods have emerged as a means to scrutinize the decisions made by black-box AI models.
These methods build white-box models and generate feature contributions to explain AI decisions, promoting fairness, accuracy, and transparency. The ability to provide confidence and trust in AI models is vital for businesses and organizations deploying AI-enabled systems [5, 6]. Nonetheless, there exists a clear trade-off between performance and explainability, which is an ongoing challenge in the field of machine learning. Simpler models like linear regression or decision trees are more interpretable but may have limited predictive performance with complex data. On the other hand, complex models like deep neural networks can achieve higher accuracy but lack interpretability [5]. Additionally, minimizing the model loss conflicts with its explainability. In real-world deployments, it is essential to tackle the trade-off between performance and explainability towards ensuring trustworthy network orchestration. To achieve these goals, we anticipate that _in-hoc_ XAI approaches, which leverage the explanations during the model's training and optimization, are a path to achieving that balance. We therefore introduce an explanation-guided _in-hoc_ FL strategy for achieving transparent zero-touch service management of 6G network slices at the RAN-Edge, specifically under non-independent identically distributed (non-IID) data.

### _Related Work_

This section discusses the state-of-the-art works on explainable AI (XAI) in the telecommunications domain. AI transparency is now more critical than ever with the advent of 6G networks, especially for zero-touch network management [5, 7]. The "human-centric" character of 6G highlights how important it is for XAI to win over humans and keep them in the loop [7]. The authors in [8] specify the challenges of developing XAI methods and show causal inference in 6G technology. The importance of explainability in 5G for essential services, device-to-device architectural security, and causal inference in 6G technology has also been emphasized in [9, 10]. Moreover, the significance of explainability in upcoming 6G networks has been acknowledged by researchers [11], with applications to resolving handover and resource allocation issues. A study [12] compares XAI techniques and suggests that SHAP is the best for identifying the cause of SLA violations. However, the study lacks clear explanations for the discrepancies among the results, casting doubt on the model's reliability. In [13], a human-understandable architecture is proposed for a network automation task. However, no quantitative metrics are used to evaluate the fidelity of the explanations. To resolve this gap, in [14], an extensive range of XAI metrics is introduced for qualitative assessment, but no integration-based XAI methods are considered, and there is also no comparative analysis of different XAI techniques. In [5], it is emphasized that the interpretability and transparency of AI/ML models are crucial for the full automation of a ZSM framework in 6G networks. Recently, [15] proposes a neuro-symbolic XAI twin framework to enhance reasoning and trustworthiness in zero-touch IoE service management. However, they neither incorporate an explanation-guided AI approach nor address distributed learning with XAI in the ZSM framework as we do. Additionally, in [16], the concept of Federated Learning (FL) of XAI is introduced, focusing on its importance in 5G/6G networks. However, their work lacks quantitative validation of explanation faithfulness.
In [17], the authors utilize the SHAP method from XAI to apply a trust-aware deep reinforcement learning-based device selection technique. This technique is employed in an FL training process for an autonomous driving scenario. On the other hand, several AI-based research works [18, 19, 20] have focused on solving resource allocation issues in 6G network slices. However, most of these works have not addressed the challenge of providing explainability in their proposed solutions. While some research areas like NLP and Healthcare have started recognizing the trade-off between model performance and explainability [21], the field of telecommunication is still in its early stages concerning this concept. Fortunately, the authors of [5] have emphasized the importance of _in-model_ explainable AI (XAI) methods for the beyond 5G/6G field. They discuss the challenges and significance of achieving a better trade-off between explainability and AI model performance. They advocate for developing inherently explainable models and establishing quantifiable metrics to evaluate the effectiveness of XAI. _In-model_ explainability aims to create transparent models from the start, reducing reliance on post hoc XAI techniques. This approach allows stakeholders to understand the decision-making process behind any decision while balancing performance and explainability. In addition to the aforementioned research areas, a new research approach called Explanation Guided Learning (EGL) has emerged, which aims to exploit the explanations of machine learning models during training to guide the model towards fulfilling both trustworthiness and performance. Specifically, EGL incorporates additional supervision signals or XAI-driven prior knowledge into the model's reasoning process, which helps improve the confidence of the model's predictions [22] and might improve the balance between explainability and AI model performance.

### _Contributions_

The main contributions of this paper are as follows:

* We introduce a novel explanation-guided _in-hoc_ federated learning approach, where a constrained slice-level CPU resource allocation model and an _explainer_ exchange--in a closed loop way--explanations in the form of soft feature attributions as well as predictions to achieve a transparent and explainability-aware 6G network orchestration in a non-IID setup,
* We adopt the integrated gradients (IG) XAI method to generate explanations in terms of feature attributions, and map them to a soft probability space,
* These soft attributions are then used to quantitatively evaluate the model's _confidence metric_, which is included as a constraint in the FL optimization task.
* We formulate the corresponding XAI-constrained optimization problem under the _proxy-Lagrangian_ framework and solve it via a non-zero sum two-player game strategy, while comparing with a vanilla unconstrained post-hoc IG FL baseline.
* We present a comparative analysis of additional XAI methods that are used to generate attributions for the _in-hoc_ scheme and assess their confidence metrics via distribution plots.
* We showcase the impact of network conditions (channel quality indicator (CQI), OTT traffics and MIMO full-rank usage) on the output CPU allocation.

### _Notations_

We summarize the notations used throughout the paper in Table I.

## II System Model and Problem Statement

### _System Model_

As depicted in Fig.
a 6G RAN-Edge topology under a per-slice central unit (CU)/distributed unit (DU) functional split is considered, wherein DUs are co-located with the transmission/reception point (TRP), while each CU \(k\) is a virtual network function (VNF) running on top of commodity hardware in the Edge domain and implements a closed loop (CL) \(k\left(k=1,\dots,K\right)\) consisting of key performance indicator (KPI) collection as well as AI-enabled slice resource allocation functions. This CL concept adheres to the ETSI ZSM framework [2], which considers autonomous feedback loops between data monitoring, AI-driven data analytics, and decision-making functions to achieve particular network management tasks. In this regard, the architecture entails synthesizing data of several over-the-top (OTT) applications, which serves to build local datasets for each slice \(n\left(n=1,\dots,N\right)\), i.e., \(\mathcal{D}_{k,n}=\left\{\mathbf{x}_{k,n}^{\left(i\right)},y_{k,n}^{\left(i\right)}\right\}_{i=1}^{D_{k,n}}\), where \(\mathbf{x}_{k,n}^{\left(i\right)}\) stands for the input features vector, which includes OTT traffic, channel quality indicator (CQI) and multiple-input multiple-output (MIMO) full-rank usage, while \(y_{k,n}^{\left(i\right)}\) represents the supervised output, which is the CPU load as shown in Table II. The considered slices include several OTTs each, namely,

* **eMBB:** Netflix, Youtube and Facebook Video,
* **Social Media:** Facebook, Whatsapp and Instagram,
* **Browsing:** Apple, HTTP and QUIC,

where the corresponding accumulated datasets are non-IID due to the different traffic patterns induced by the heterogeneous users' distribution and channel conditions. Note that the intuitive link between the input features and the output resides in the fact that when the radio conditions are enhanced, the transmission and computing queues are relaxed, delivering thereby a higher number of packets, which also results in increasing the CPU load. Next, to enhance model accuracy, preserve data privacy and reduce transport overhead, the architecture adopts a federated learning (FL) approach, where local closed loops (CLs) participate in training using their respective synthesized datasets. The FL process is guided by XAI, ensuring transparency and interpretability during model training. The knowledge gained from each local CL is then aggregated through an E2E slice-level server, collectively improving the global AI model's explainability and performance.

### _Problem Statement_

Unlike post-hoc XAI strategies, the aim of this paper is to design an explanation-guided _in-hoc_ FL-based resource allocation scheme for transparent zero-touch 6G network slicing at the RAN-Edge domain while balancing the trade-off between performance and explainability. To achieve this, the proposed approach incorporates the XAI-based _confidence metric_ as a constraint in a closed-loop manner during the model learning process. This allows for iterative optimization and adjustment of the model based on the insights gained from XAI considerations. Indeed, this approach is needed to strengthen the intelligence entities of our ZSM-based slice-level resource management [15].

Fig. 1: Decentralized closed loops (CLs) architecture

## III Proposed Approach

Towards bridging the AI performance-explainability trade-off for trustworthy RAN slicing resource allocation, we detail in this section the proposed architecture shown in Fig. 2.
### _EGL-Driven Trustworthy Resource Allocation_

Inspired by the CL and EGL principles, we propose an explanation-aided _in-hoc_ federated learning architecture where the local learning is performed iteratively with run-time explanation. The overall working principle of the proposed model is shown step by step in Fig. 2. For each local epoch, the Learner module feeds the posterior symbolic model graph to the Tester block, which yields the test features and the corresponding predictions \(z_{k,n}^{(i)}\) to the Explainer, as shown in steps 1 to 3. The latter first generates the feature attributions using one of the feature attribution XAI methods. It then converts these attributions to a soft probability distribution, as indicated in step 4, which is translated afterward into a confidence metric by the Confidence Mapper and fed back to the Learner to include it in the local constrained optimization, as pointed out in stages 5 and 6. Indeed, for each local CL \((k,n)\), the predicted amount of resources \(\hat{y}_{k,n}^{(i)},\,(i=1,\ldots,D_{k,n})\), should minimize the main loss function with respect to the ground truth \(y_{k,n}^{(i)}\), guided by the insights gained from XAI. Hence, as depicted in steps 1 to 7, the optimized local weights at round \(t\), \(\mathbf{W}_{k,n}^{(t)}\), are sent to the server, which generates a global FL model for slice \(n\) as,

\[\mathbf{W}_{n}^{(t+1)}=\sum_{k=1}^{K}\frac{D_{k,n}}{D_{n}}\mathbf{W}_{k,n}^{(t)}, \tag{1}\]

where \(D_{n}=\sum_{k=1}^{K}D_{k,n}\) is the total number of data samples of all datasets related to slice \(n\). The server then broadcasts the global model to all \(K\) CLs, which use it to start the next round of local optimization. Specifically, each CL leverages a two-player game strategy to jointly optimize over the objective and the original constraints as well as their smoothed surrogates, as detailed in the sequel.

Figure 2: _in-hoc_ FL for transparent resource allocation.

### _Model Testing and Explanation_

In this subsection, the operational details of the Model Tester and Explainer blocks of Fig. 2 are discussed. As depicted in stage 2 of Fig. 2, upon the reception of the updated model graph, the Tester uses a batch drawn from the local dataset to reconstruct the test predictions \(\mathbf{z}_{k,n}^{(i)}\). The graph, the test dataset and the predictions are all fed to the Explainer at stage 3 to generate the attributions, i.e., a quantified impact of each single feature on the predicted output. Let \(\mathbf{a}_{k,n}^{(i)}\in\mathbb{R}^{Q}\) denote the attribution vector of sample \(i\), which can be generated by any attribution-based XAI method. In our solution, for estimating the attributions of the considered features, we mainly leverage the low-complexity Integrated Gradients (IG) scheme [23], which is based on the gradient variation when sampling the neighborhood of a feature. At stage 4, the Explainer then calculates what we call _soft attributions_ by mapping the attributions into a probability space as follows,

\[\pi_{k,n}^{(i,j)}=\frac{\exp\Bigl{\{}\Bigl{|}\alpha_{k,n}^{(i,j)}\Bigr{|}\Bigr{\}}}{\sum_{q=1}^{Q}\exp\Bigl{\{}\Bigl{|}\alpha_{k,n}^{(i,q)}\Bigr{|}\Bigr{\}}},\,j=1,\ldots,Q, \tag{2}\]

where \(\alpha_{k,n}^{(i,j)}=a_{k,n}^{(i,j)}/x_{k,n}^{(i,j)}\) stands for the weighted attribution, since the sensitivity of the model's output with respect to an input feature in the neighborhood of the input is approximately given by the ratio of the attribution for that input feature to the value of that feature [24].
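To make the mapping of equation (2) concrete, the following is a minimal Python sketch of the soft-attribution computation. It assumes the integrated-gradients attributions are already available as a NumPy array; the small constant `eps`, the random toy data and the feature count are our own illustrative choices, not part of the actual pipeline.

```python
import numpy as np

def soft_attributions(attributions, features, eps=1e-12):
    """Map attributions to the soft probability space of equation (2):
    a per-sample softmax over |alpha|, with alpha = a / x the weighted
    attribution. `eps` (an added safeguard) avoids division by zero."""
    alpha = attributions / (features + eps)
    mag = np.abs(alpha)
    mag = mag - mag.max(axis=1, keepdims=True)   # softmax is shift-invariant
    expo = np.exp(mag)
    return expo / expo.sum(axis=1, keepdims=True)

# toy example: 4 samples, Q = 3 features (OTT traffic, CQI, MIMO full-rank)
rng = np.random.default_rng(0)
x = rng.random((4, 3)) + 0.1        # strictly positive feature values
a = rng.normal(size=(4, 3))         # stand-in for IG attributions
pi = soft_attributions(a, x)
print(pi.sum(axis=1))               # each row sums to 1
```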
### _Confidence Mapping_

To characterize the trustworthiness of the local model, we invoke the explanation-based confidence metric \(C_{k,n}\)[24]. Its rationale lies in the fact that slightly shifting the value of high-magnitude features (in the sense of attributions) is an acceptable way to determine the model's conformance. Such an approach is vital because there will likely be no change in the SLA group (SLA violation or non-violation) if we sample over the low-attribution feature space. In this respect, the _Confidence Mapper_ at stage 5 of Fig. 2 starts by performing a feature mutation, where it selects from the dataset feature \(x^{(i,j)}_{k,n}\) with probability \(\pi^{(i,j)}_{k,n}\), and changes it to the baseline value, i.e., zero,

\[\hat{x}^{(i,j)}_{k,n}=x^{(i,j)}_{k,n}\times(1-p),\ p\sim\mathcal{B}\left(1,\pi^{(i,j)}_{k,n}\right), \tag{3}\]

to force the change of the model's prediction into the opposite category. In classification tasks, the categories (or subsets) are merely the classes. In contrast, we cast CPU resource allocation as a regression problem since it is well-suited for predicting continuous variables, allowing the FL model to estimate the CPU load as a numerical value. In this case, the subsets are defined according to an SLA threshold, i.e.,

\[\mathcal{D}_{k,n}=\mathcal{U}_{k,n}\cup\bar{\mathcal{U}}_{k,n}, \tag{4}\]

where \(\mathcal{U}_{k,n}\) contains the samples whose prediction fulfills the SLA, i.e., their CPU load lies in an interval \([\alpha_{n},\beta_{n}]\). The aforementioned transformation leads to a mutated dataset \(\{\hat{x}^{(i,j)}_{k,n}\}\). The Confidence Mapper then reports the fraction of samples in the neighborhood for which the decision of the model,

\[\hat{z}^{(i)}_{k,n}=\mathcal{M}_{k,n}(\mathbf{W}^{(t)}_{k,n},\hat{\mathbf{x}}^{(i)}_{k,n}), \tag{5}\]

does not move to the other set, that is, conforms to the original decision, as the conservatively estimated confidence measure [24], i.e.,

\[C_{k,n}=\frac{1}{u_{k,n}}\sum_{i=1}^{u_{k,n}}\max\Bigl{\{}\mathds{1}_{\mathbb{R}^{-}}\left(\alpha_{n}-\hat{z}^{(i)}_{k,n}\right),\mathds{1}_{\mathbb{R}^{-}}\left(\hat{z}^{(i)}_{k,n}-\beta_{n}\right)\Bigr{\}}, \tag{6}\]

where \(\hat{z}^{(i)}_{k,n}\) are the predictions after mutation, while \(u_{k,n}\) stands for the size of the considered original category \(\mathcal{U}_{k,n}\).
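A minimal sketch of the mutation and confidence computation of equations (3)-(6) is given below, assuming a generic regression callable in place of the local model \(\mathcal{M}_{k,n}\); the toy linear model, data and SLA bounds are illustrative placeholders only.

```python
import numpy as np

rng = np.random.default_rng(1)

def confidence_metric(model, x, pi, alpha_n, beta_n):
    """Equations (3)-(6): zero each feature with probability pi[i, j],
    re-predict, and report the fraction of originally SLA-compliant
    samples whose mutated prediction stays inside [alpha_n, beta_n]."""
    z = model(x)
    in_sla = (z >= alpha_n) & (z <= beta_n)     # the subset U_{k,n}
    x_u, pi_u = x[in_sla], pi[in_sla]
    p = rng.binomial(1, pi_u)                   # p ~ B(1, pi), equation (3)
    z_hat = model(x_u * (1 - p))                # predictions after mutation
    conform = (z_hat >= alpha_n) & (z_hat <= beta_n)
    return conform.mean()                       # C_{k,n}, equation (6)

# toy stand-in for the local CPU-load regressor
model = lambda x: x @ np.array([0.5, 0.3, 0.2])
x = rng.random((200, 3))
pi = np.full_like(x, 1.0 / 3.0)
print(confidence_metric(model, x, pi, alpha_n=0.2, beta_n=0.8))
```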
### _Explainability-Aware Resource Allocation_

We cast the slice-level resource allocation as a confidence-constrained regression task. For this purpose, we consider the datasets corresponding to the different slices as summarized in Table II and Section II, where resources at the CU level are dynamically allocated to slices according to their traffic patterns and radio conditions (CQI and MIMO full-rank usage). In this respect, we formulate the constrained optimization using the proxy-Lagrangian framework and solve it via a non-zero-sum two-player game strategy. Specifically, the confidence metric should be higher than a predefined threshold \(\nu_{n}\). This translates into solving a statistically constrained local resource allocation problem, as shown at stage 6 of Fig. 2, i.e.,

\[\min_{\mathbf{W}^{(t)}_{k,n}}\frac{1}{D_{k,n}}\sum_{i=1}^{D_{k,n}}\ell\left(y^{(i)}_{k,n},\hat{y}^{(i)}_{k,n}\left(\mathbf{W}^{(t)}_{k,n},\mathbf{x}_{k,n}\right)\right), \tag{7a}\]
\[\text{s.t.}\quad C_{k,n}\geq\nu_{n}, \tag{7b}\]

which is solved by invoking the so-called _proxy-Lagrangian_ framework [25]. This consists of first considering two Lagrangians as follows:

\[\mathcal{L}_{\mathbf{W}^{(t)}_{k,n}}=\frac{1}{D_{k,n}}\sum_{i=1}^{D_{k,n}}\ell\left(y^{(i)}_{k,n},\hat{y}^{(i)}_{k,n}\left(\mathbf{W}^{(t)}_{k,n},\mathbf{x}_{k,n}\right)\right)+\lambda_{1}\Psi_{1}\left(\mathbf{W}^{(t)}_{k,n}\right), \tag{8a}\]
\[\mathcal{L}_{\lambda}=\lambda_{1}\Phi_{1}\left(\mathbf{W}^{(t)}_{k,n}\right), \tag{8b}\]

where \(\Phi_{1}\) and \(\Psi_{1}\) represent the original constraint and its smooth surrogate, respectively. Specifically, the indicator terms in (7b) are replaced with logistic functions and the soft maximum is used as a surrogate of the maximum as,

\[\Psi_{1}\left(\mathbf{W}^{(t)}_{k,n}\right)=\nu_{n}-\frac{1}{u_{k,n}}\sum_{i=1}^{u_{k,n}}\log\Bigl{\{}\exp\Bigl{[}S_{\mu}\left(\hat{z}^{(i)}_{k,n}-\alpha_{n}\right)\Bigr{]}+\exp\Bigl{[}S_{\mu}\left(\beta_{n}-\hat{z}^{(i)}_{k,n}\right)\Bigr{]}\Bigr{\}}\leq 0, \tag{9}\]

where \(S_{\mu}\) stands for the logistic function with steepness parameter \(\mu\), i.e.,

\[S_{\mu}\left(\theta\right)=\frac{1}{1+e^{-\mu\theta}}. \tag{10}\]

This optimization task turns out to be a non-zero-sum two-player game in which the \(\mathbf{W}^{(t)}_{k,n}\)-player aims at minimizing \(\mathcal{L}_{\mathbf{W}^{(t)}_{k,n}}\), while the \(\lambda\)-player wishes to maximize \(\mathcal{L}_{\lambda}\)[26, Lemma 8]. While optimizing the first Lagrangian w.r.t. \(\mathbf{W}_{k,n}\) requires differentiating the constraint surrogate \(\Psi_{1}(\mathbf{W}^{(t)}_{k,n})\), to differentiate the second Lagrangian w.r.t. \(\lambda\) we only need to evaluate \(\Phi_{1}\left(\mathbf{W}^{(t)}_{k,n}\right)\). Hence, a surrogate is only necessary for the \(\mathbf{W}_{k,n}\)-player; the \(\lambda\)-player can continue using the original constraint function. The local optimization task can be written as,

\[\min_{\mathbf{W}_{k,n}\in\Delta}\ \max_{\lambda,\ \|\lambda\|\leq R_{\lambda}}\ \mathcal{L}_{\mathbf{W}^{(t)}_{k,n}}, \tag{11a}\]
\[\max_{\lambda,\,\left\|\lambda\right\|\leq R_{\lambda}}\ \min_{\mathbf{W}_{k,n}\in\Delta}\ \mathcal{L}_{\lambda}, \tag{11b}\]

where, thanks to the Lagrange multipliers, the \(\lambda\)-player chooses how much to weigh the proxy constraint function, but does so in such a way as to satisfy the original constraint, ending up at a nearly-optimal, nearly-feasible solution [27].
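For illustration, the smoothed surrogate of equations (9) and (10) can be evaluated as follows; the steepness \(\mu=10\), the SLA bounds and the mutated predictions are placeholder values for this sketch, which transcribes the printed form of equation (9) directly.

```python
import numpy as np

def S_mu(theta, mu=10.0):
    """Logistic function of equation (10) with steepness mu."""
    return 1.0 / (1.0 + np.exp(-mu * theta))

def psi_1(z_hat, alpha_n, beta_n, nu_n, mu=10.0):
    """Surrogate Psi_1 of equation (9): indicators are replaced by
    logistics and the soft maximum (log-sum-exp) replaces the max, so
    the confidence constraint becomes differentiable through z_hat."""
    soft_max = np.log(np.exp(S_mu(z_hat - alpha_n, mu))
                      + np.exp(S_mu(beta_n - z_hat, mu)))
    return nu_n - soft_max.mean()   # constraint satisfied when <= 0

# mutated predictions, mostly inside the SLA interval [0.2, 0.8]
z_hat = np.array([0.25, 0.50, 0.74, 0.90, 0.10])
print(psi_1(z_hat, alpha_n=0.2, beta_n=0.8, nu_n=0.85))
```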
These steps are all summarized in Algorithm 1.

```
Input: \(K\), \(m\), \(\eta_{\lambda}\), \(T\), \(L\)  # see Table III
Server initializes \(\mathbf{W}_{n}^{(0)}\) and broadcasts it to the CLs
for \(t=0,\ldots,T-1\) do
  parallel for \(k=1,\ldots,K\) do
    Initialize \(M=\) num_constraints and \(\mathbf{W}_{k,n,0}=\mathbf{W}_{n}^{(t)}\)
    Initialize \(\mathbf{A}^{(0)}\in\mathbb{R}^{(M+1)\times(M+1)}\) with \(\mathbf{A}_{m^{\prime},m}^{(0)}=1/(M+1)\)
    for \(l=0,\ldots,L-1\) do
      Receive the graph \(\mathcal{M}_{k,n}\) from the local model
      # Test the local model and calculate the attributions
      \(a_{k,n}^{(i,j)}=\texttt{Int\_Gradient}\left(\mathcal{M}_{k,n}\left(\mathbf{W}_{k,n,l},\mathbf{x}_{k,n}\right)\right)\)
      # Generate soft attributions (equation 2)
      \(\pi_{k,n}^{(i,j)}=\exp\{|\alpha_{k,n}^{(i,j)}|\}/\sum_{q=1}^{Q}\exp\{|\alpha_{k,n}^{(i,q)}|\}\), \(j=1,\ldots,Q\)
      # Mutate the test dataset: zero \(x_{k,n}^{(i,j)}\) with probability \(\pi_{k,n}^{(i,j)}\) (equation 3)
      # Calculate the confidence metric (equation 6)
      \(C_{k,n}=\frac{1}{u_{k,n}}\sum_{i=1}^{u_{k,n}}\max\{\mathds{1}_{\mathbb{R}^{-}}(\alpha_{n}-\hat{z}_{k,n}^{(i)}),\mathds{1}_{\mathbb{R}^{-}}(\hat{z}_{k,n}^{(i)}-\beta_{n})\}\)
      Let \(\lambda^{(l)}\) be the top eigenvector of \(\mathbf{A}^{(l)}\)
      # Solve problem (7) via oracle optimization
      Let \(\hat{\mathbf{W}}_{k,n,l}=\mathcal{O}_{\delta}\left(\mathcal{L}_{\mathbf{W}_{k,n,l}}(\cdot,\lambda^{(l)})\right)\)
      Let \(\Delta_{\lambda}^{(l)}\) be a gradient of \(\mathcal{L}_{\lambda}(\hat{\mathbf{W}}_{k,n,l},\lambda^{(l)})\) w.r.t. \(\lambda\)
      # Exponentiated gradient ascent
      Update \(\tilde{\mathbf{A}}^{(l+1)}=\mathbf{A}^{(l)}\odot\exp\{\eta_{\lambda}\Delta_{\lambda}^{(l)}(\lambda^{(l)})\}\)
      # Column-wise normalization
      \(\mathbf{A}_{m}^{(l+1)}=\tilde{\mathbf{A}}_{m}^{(l+1)}/\|\tilde{\mathbf{A}}_{m}^{(l+1)}\|_{1}\), \(m=1,\ldots,M+1\)
    end for
    return \(\hat{\mathbf{W}}_{k,n}^{(t)}=\frac{1}{L}\sum_{l=0}^{L-1}\hat{\mathbf{W}}_{k,n,l}\)
    Each local CL \((k,n)\) sends \(\hat{\mathbf{W}}_{k,n}^{(t)}\) to the server
  end parallel for
  return \(\mathbf{W}_{n}^{(t+1)}=\sum_{k=1}^{K}\frac{D_{k,n}}{D_{n}}\hat{\mathbf{W}}_{k,n}^{(t)}\)
end for
```
**Algorithm 1** _In-hoc_ Explainable Federated Learning

At line 1 of Algorithm 1, we declare the baseline parameters. Then, at line 2, the aggregation server initializes the weights and broadcasts them to all participating CLs. The remaining lines describe the execution of the FL optimization task, as mentioned earlier, at each FL round.
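The server-side step of Algorithm 1, i.e., the data-size-weighted aggregation of equation (1), reduces to a few lines of Python; this sketch assumes the local weights are NumPy arrays of identical shape, and the dataset sizes are illustrative.

```python
import numpy as np

def server_aggregate(local_weights, local_sizes):
    """Server step of Algorithm 1 / equation (1):
    W_n^(t+1) = sum_k (D_{k,n} / D_n) * W_hat_{k,n}^(t)."""
    sizes = np.asarray(local_sizes, dtype=float)
    coeffs = sizes / sizes.sum()            # D_{k,n} / D_n
    return sum(c * w for c, w in zip(coeffs, local_weights))

# K = 3 closed loops with unequal (non-IID) dataset sizes
rng = np.random.default_rng(2)
local_w = [rng.normal(size=4) for _ in range(3)]
print(server_aggregate(local_w, local_sizes=[120, 80, 200]))
```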
## IV Results

In this section, we evaluate the proposed _in-hoc_ FL framework by first justifying the use of feature attributions as a pillar to build the explainability-aware constrained resource allocation model. We then present the FL convergence, confidence score and performance metrics and showcase their underlying trade-offs. We afterward study the correlation between feature attributions and draw some important conclusions. Finally, we present a time complexity analysis to assess the computational efficiency of the proposed framework. To implement the model Tester and Explainer, we invoke the DeepExplain framework, which includes state-of-the-art gradient and perturbation-based attribution methods [28]. It provides an attribution score based on the feature's contribution to the model's output, which we integrate with our proposed constrained resource allocation FL framework in a closed-loop iterative way.

### _Parameter Settings and Baseline_

Here, we define the parameters to tackle the FL optimization task. The details of the considered slices and associated KPIs are mentioned in Section II. We use the vectors \(\alpha\) and \(\beta\) for the CPU resource bounds corresponding to the different slices and \(\nu\) for the explainability confidence metric thresholds. The parameter settings are presented in Table III. Recently, there has been growing interest in combining FL and XAI approaches. In this work, the considered baseline is the state-of-the-art unconstrained FL with post-hoc explanation [29]. The authors of [29] investigate the feature importance challenge in vertical FL scenarios and propose a method, the Sharp Federated algorithm, which utilizes Shapley values to determine feature importance in a post-hoc way.

### _Bridging the Performance-Explainability Trade-off_

To study the performance-explainability trade-off of the proposed strategy, we plot both the convergence curves and the confidence metric vs. the FL rounds in Fig. 3 and Fig. 4, respectively, considering our _in-hoc_ FL scheme and the unconstrained post-hoc IG baseline. The plots show that the confidence metric of the proposed in-hoc approach for the different slices remains above 85\(\%\), while presenting a similar convergence trend as the post-hoc FL baseline. In contrast, the latter fails to ensure the confidence of the model, since its confidence metric decreases as convergence is gradually approached. This behavior conveys that the in-hoc strategy addresses the trade-off successfully and guarantees explainability and trust in the training phase. For completeness of the analysis, the Explainer block in Fig. 2 is also implemented using both the perturbation-based XAI method SHAP and the gradient-based method Input\(\times\)Gradient [30], where Fig. 4-(b) confirms that our proposed _in-hoc_ FL algorithm preserves the same behavior during the testing phase. Moreover, the confidence score remains almost the same even when the attribution scores are generated by various XAI methods, which makes our proposed algorithm more reliable. Overall, based on the results presented in Fig. 3 and Fig. 4, the constrained _in-hoc_ FL ensures a trade-off between convergence and confidence, i.e., while the model loss decreases within an allowable confidence threshold, the model's confidence increases and significantly outperforms the post-hoc baseline. This indicates that the model becomes more confident in its predictions as it converges. This is achieved thanks to the explanation-guided _in-hoc_ constrained FL optimization. On the contrary, in the state-of-the-art post-hoc scenario, the model confidence degrades as it starts to converge.

### _Explaining the Impact of Network Parameters on Slice Resource Allocation_

In this subsection, we first examine the post-hoc feature attribution plots of the eMBB slice, which are generated via SHAP as illustrated in Fig. 5. The analysis reveals that CQI has the highest impact on the CPU allocation, as it is widely concentrated towards positive values. This concentration indicates that better channel quality leads to lower CPU load, which can be interpreted by the reduction of retransmissions and queuing time. Consequently, the model can anticipate efficient data transmission, increased network capacity, and potential resource optimization. Additionally, the concentration of OTT traffic per TRP around the lowest positive values highlights varying traffic levels that impact CPU load. Lower levels of OTT traffic typically require less processing resources, leading to decreased CPU load.
The FL model can also leverage this information to optimize resource allocation and guide network planning and scaling efforts. Furthermore, the negative concentration of MIMO full rank suggests signal degradation and interference, which could potentially lead to higher CPU load predictions. However, in our specific case, where the value is very small, the impact of MIMO full rank on CPU load can be considered negligible. Nonetheless, this analysis still allows the FL model to identify and address potential issues related to signal degradation and interference, providing valuable insights for network optimization in other scenarios where MIMO full rank plays a more significant role. By considering these features, the FL model can make accurate predictions and enable resource optimization, leading to improved network performance and enhanced management of CPU load in network slices in a transparent and explainable way.

Fig. 4: Confidence metric using different XAI methods.

### _Time complexity_

Attention has also been paid to the training time, with future actual deployment of our solution in mind. The efficiency of the solution's training process directly impacts its real-world usability and effectiveness. Faster training times enable the system to respond promptly to dynamic changes, ensuring real-world efficiency and cost-effectiveness. Additionally, efficient training times facilitate scalability, allowing the system to handle increasing data volumes and user demands without compromising performance. For evaluating the time complexity of the proposed solution, where the _in-hoc_ FL explainer can be implemented with various feature attribution generators, we use IG as the primary method. Fig. 6 shows the convergence time of the browsing slice under our _in-hoc_ FL strategy with SHAP, IG and Input \(\times\) Grad as attribution score generators, as well as the baseline unconstrained post-hoc method. Since Fig. 3 indicates that the browsing slice with _in-hoc_ FL starts to converge at around round 13, the computation time has been calculated until that convergence round. It can be observed from Fig. 6 that _in-hoc_ FL with IG takes less time than the other XAI methods and is nearly as fast as the _post-hoc_ baseline, while presenting the highest confidence. This finding suggests that utilizing IG as the primary feature attribution method in the _in-hoc_ FL explainer enables faster convergence without compromising the confidence of the predictions. This is a significant advantage for real-world deployment, as it indicates efficient allocation of computational resources while maintaining the trustworthiness of the model.

## V Conclusion

In this paper, we have presented a novel explanation-guided _in-hoc_ federated learning approach to achieve transparent zero-touch service management of 6G network slices while bridging the trade-off between performance and explainability. We have considered the XAI-based confidence metric in a closed-loop way to solve the underlying joint optimization task using a proxy-Lagrangian two-player game strategy. In particular, we have used both integrated-gradients and perturbation-based XAI schemes to generate the attributions consumed by our _in-hoc_ FL, which present almost the same superior performance compared to an unconstrained _post-hoc_ FL baseline. We have also provided a post-hoc analysis of the impact of network parameters on the slice-level resource allocation, which points out that CQI is the key influencing feature.
Finally, the computational complexity (up to the convergence round) has been assessed, which demonstrates that our _in-hoc_ FL has lower complexity than the _post-hoc_ baseline while presenting clearly superior confidence.
2302.10712
AOTF based spectro-polarimeter for observing Earth as an Exoplanet
Earth is the only known habitable planet and it serves as a testbed to benchmark the observations of temperate and more Earth-like exoplanets. It is required to observe the disc-integrated signatures of Earth for a large range of phase angles, resembling the observations of an exoplanet. In this work, an AOTF (Acousto-Optic Tunable Filter) based experiment is designed to observe the spectro-polarimetric signatures of Earth. The results of spectroscopic and polarimetric laboratory calibration are presented here along with a brief overview of a possible instrument configuration. Based on the results of the spectro-polarimetric calibration, simulations are carried out to optimize the instrument design for the expected signal levels for various observing conditions. The usefulness of an AOTF based spectro-polarimeter is established from this study and it is found that, in the present configuration, the instrument can achieve a polarimetric accuracy of $<0.3$\% for linear polarization for an integration time of 100 ms or larger. The design configuration of the instrument and the planning of conducting such observations from Lunar orbit are discussed.
Bhavesh Jaiswal, Swapnil Singh, Anand Jain, K Sankarasubramanian, Anuj Nandi
2023-02-21T14:55:16Z
http://arxiv.org/abs/2302.10712v1
# An AOTF based spectro-polarimeter for observing Earth as an Exoplanet

###### Abstract

Earth is the only known habitable planet and it serves as a testbed to benchmark the observations of temperate and more Earth-like exoplanets. It is required to observe the disc-integrated signatures of Earth for a large range of phase angles, resembling the observations of an exoplanet. In this work, an AOTF (Acousto-Optic Tunable Filter) based experiment is designed to observe the spectro-polarimetric signatures of Earth. The results of spectroscopic and polarimetric laboratory calibration are presented here along with a brief overview of a possible instrument configuration. Based on the results of the spectro-polarimetric calibration, simulations are carried out to optimize the instrument design for the expected signal levels for various observing conditions. The usefulness of an AOTF based spectro-polarimeter is established from this study and it is found that, in the present configuration, the instrument can achieve a polarimetric accuracy of \(<0.3\)% for linear polarization for an integration time of 100 ms or larger. The design configuration of the instrument and the planning of conducting such observations from Lunar orbit are discussed.

Keywords: Acousto-Optic, Spectro-polarimetry, Planetary atmosphere, Exoplanet

*First author email, [email protected]

## 1 Introduction

More than 5000 exoplanets have been discovered so far and several have been characterized for their atmospheres.[1, 2, 3, 4, 5] The majority of these planets may not be habitable owing to their orbital configurations and the intrinsic properties of the planet. However, there are several known planets that orbit their stars in the classical habitable zone, where the equilibrium temperatures are cool enough for liquid water to exist. The search for such Earth-like planets is advancing fast with the help of some of the world's most powerful telescopes, both on the ground and in space. The present decade also holds the promise for testing some of the key technologies like stellar coronagraphs[6, 7] which are crucial for observing relatively cooler and much fainter habitable zone planets. These observations will allow the study of starlight reflected from the entire day side of the planet at various phase angles, as the planet revolves around the star. Before understanding the reflected light from other temperate planets, it is crucial to study and retrieve the information contained in the disc-integrated spectrum of Earth. Earth, being the only known planet to host water and life, serves as a unique testbed for such a study. So far, the disc-integrated observations of Earth have been limited to a few spacecraft observations which have flown at large inter-planetary distances like _Galileo_,[8]_EPOXI_,[9]_DSCOVR/EPIC_,[10] etc. These spacecraft observations are limited in their phase angle coverage and also lack polarimetric measurements. There have been ground-based observations of 'Earthshine' for the same objective in the visible and NIR bands.[11, 12, 13] Earthshine is the reflected light from the day side disc of the Earth which is again reflected by the night side of the Moon and captured by ground-based telescopes. The Earthshine measurements can cover a large range of phase angles (usually \(\sim 40^{\circ}\) to \(140^{\circ}\)) as the Moon revolves around the Earth, and can also be equipped with polarization measurements.
However, these observations suffer from depolarization due to the lunar surface [14, 15]. Satellite-based observations of polarization from clouds and aerosols for local regions of Earth have also been carried out from low Earth orbit satellites [16]. These observations demonstrate well the importance of polarization measurements in a few selected bands, but miss the global view of the planet and broad wavelength coverage. A comprehensive discussion on previous observations of 'Earth as an Exoplanet' and their results can be found in Ref. [17]. Further, there has been a lot of theoretical work towards predicting the spectro-polarimetric signatures of Earth-like exoplanets [18, 19, 20]. Several of these predictions lack confirmation from direct observations. Growing interest in future detections of habitable zone planets has led to several proposals for space-based observations of Earth as an exoplanet. For example, projects like _LOUPE_[21, 22] and _EarthShine_[23] have been proposed to observe Earth as an Exoplanet from the surface of the Moon. _LOUPE_ is proposed for spectropolarimetry of an unresolved Earth in the 400-800 nm band whereas _EarthShine_ will perform imaging and spectral measurements in 400 nm-12.5\(\upmu\)m. In the present work, we discuss the working of an Acousto-Optic Tunable Filter (AOTF) based spectro-polarimeter on a similar observational platform as the previously discussed proposals: a lunar orbiting platform. The AOTF is the main component of the proposed experiment and it works on the principle of Bragg diffraction using a birefringent crystal such as TeO\({}_{2}\), Quartz etc. It filters the incident light into two diffracted beams which are polarized in mutually perpendicular directions [24, 25, 26, 27]. AOTFs can be tuned for wavelength selection using an external Radio Frequency (RF). This ability of wavelength tuning together with polarization sensitivity makes them suitable for the application of spectro-polarimetry. Further, AOTFs are based on solid-state devices, which avoids the need for any moving parts. For use in space-based experiments, the AOTF based spectro-polarimeter can be made into a compact, lightweight instrument. Previous spectroscopic [28] as well as polarimetric applications [29, 30] of AOTFs are worth noting here.

In this paper, the methodology for conducting the spectro-polarimetric calibration experiments is briefly discussed in Section 2. The experimental setup is discussed in Section 3. The calibration and important results from the experiments are presented in Section 4. The NIR polarization signals of Earth and a possible instrument configuration, incorporating the instrument response obtained from the calibration results, are discussed in Section 5. The need for further characterization of the AOTF is highlighted in Section 6. The work is summarized with a conclusion in Section 7.

## 2 Characterizing the AOTF

The AOTF consists of an optical crystal, one face of which is bonded to an ultrasonic transducer. When an RF frequency is applied to the transducer, a sinusoidal perturbation of the refractive index is generated in the medium due to the photo-elastic effect. When the phase matching condition is satisfied, the input light beam is diffracted at a given angle [27, 31].
The interaction between the light beam and the acoustic wave produces two diffracted (+1 and -1) narrowband components, which are polarized in mutually perpendicular directions. There are two types of AOTFs depending on the propagation direction of the acoustic waves: collinear and non-collinear [27]. In this work, a non-collinear, dual beam AOTF having a TeO\({}_{2}\) crystal is used as the dispersive medium. The diffracted wavelength from the crystal was tuned in the \(1000-1700\) nm wavelength (\(\lambda\)) range. In the following two subsections, the methodology for the spectroscopic and polarimetric calibration of the AOTF is discussed.

### Spectroscopic Characterization

AOTFs have been well established as spectrometers and have been successfully flown in various space missions [28, 29, 32]. AOTF-based spectrometers, SPICAM and SPICAV, have been used extensively to study planetary atmospheres [33, 34, 35]. An AOTF-based spectrometer can be preferred over a traditional spectrometer for space applications due to the absence of any moving component, the compactness of the overall design and a moderate spectral resolution of a few nanometers. Though AOTF based spectrometers also have disadvantages in terms of smaller optical aperture area, large power requirements etc., the benefits offered in the present design in terms of polarimetric output and compactness of the instrument outweigh the limitations. The spectral performance of AOTFs is characterized by (i) the tuning frequency relation (\(\lambda\) vs. RF), (ii) the spectral bandpass (\(\triangle\lambda\)), (iii) the angular aperture and (iv) the diffraction efficiency [27, 31]. The tuning frequency relation determines the diffracted wavelength corresponding to the frequency applied via the transducer. It is influenced by the acoustic velocity inside the crystal, the birefringence of the crystal and the incident angle with respect to the optic axis. The spectral bandpass of the two output beams from the AOTF depends on the wavelength of the diffracted beams and the interaction length between the acoustic and light waves within the crystal. This spectral bandpass determines the spectral resolution of the AOTF and has a wavelength dependence [27].

### AOTF as a polarizer

Although AOTFs are known to be good polarizers [36, 25, 30], in order to characterize the AOTF it is assumed to be represented by a partial polarizer. A partial polarizer produces a partially polarized beam when unpolarized light is incident on it. This partial polarizer is mathematically represented with a 4\(\times\)4 Mueller matrix (see equation 1; Ref. [37]). This matrix describes the linear relationship between the polarization states of the light beam incident on a polarizing optical element and the emerging light beam after passing through the AOTF. The first term, \(M_{00}\), is the output intensity corresponding to completely unpolarized input light. According to Stokes-Mueller calculus, the light is represented by a 4\(\times\)1 Stokes vector [38]. Consider a partial polarizer where the two orthogonal components of the incident electric field vector are affected by two positive constant factors, \(k_{1}\) and \(k_{2}\), respectively.
The Mueller matrix for this partial linear polarizer is then defined as [37],

\[M=\begin{bmatrix}M_{00}&M_{01}&M_{02}&M_{03}\\ M_{10}&M_{11}&M_{12}&M_{13}\\ M_{20}&M_{21}&M_{22}&M_{23}\\ M_{30}&M_{31}&M_{32}&M_{33}\end{bmatrix}=\frac{\alpha}{2}\begin{bmatrix}1&\beta c_{2}&\beta s_{2}&0\\ \beta c_{2}&c_{2}^{2}+\gamma s_{2}^{2}&(1-\gamma)c_{2}s_{2}&0\\ \beta s_{2}&(1-\gamma)c_{2}s_{2}&s_{2}^{2}+\gamma c_{2}^{2}&0\\ 0&0&0&\gamma\end{bmatrix}, \tag{1}\]

where \(\alpha=k_{1}^{2}+k_{2}^{2}\), \(\beta=\frac{(k_{1}^{2}-k_{2}^{2})}{\alpha}\), \(\gamma=\frac{(2k_{1}k_{2})}{\alpha}\), \(c_{2}=\cos(2r)\), \(s_{2}=\sin(2r)\) and \(r\) is the angle from the reference direction measured with respect to the horizontal direction in a plane perpendicular to the direction of light propagation. In our case, \(r\) takes the values 0\({}^{\rm o}\) and 90\({}^{\rm o}\). Here, \(\alpha\) represents the transmission efficiency, \(\beta\) is a measure of the polarizing capability of the AOTF and \(\gamma\) is a measure of depolarization. A value of \(\alpha=\beta=1\) signifies that the polarizer is an ideal linear polarizer. As a response to an unpolarized input \([1,0,0,0]^{T}\), the output Stokes vector is given by \(p^{\prime}=\alpha/2[1,\beta c_{2},\beta s_{2},0]^{T}\). The combination of wavelength tuning and polarized output makes an AOTF-based instrument a good choice for spectro-polarimetric studies. The detailed theory and spectral characteristics, such as resolution, angular aperture etc., of a similar non-collinear, single beam AOTF have been studied in Ref. [31]. In the present work, the polarimetric calibration of the AOTF is emphasized along with the results from the dual beam spectral calibration.

## 3 Experimental Test Setup

In order to characterize the AOTF for its spectro-polarimetric properties, a dual beam AOTF with a 5mm\(\times\)5mm aperture was used. This AOTF is driven by an external RF driver and the input frequency and power can be controlled via a computer interface. A halogen lamp is used as a broadband light source. A grating-based monochromator is used to create a narrow wavelength output (much narrower than the spectral resolution of the AOTF). After the monochromator, an NIR linear polarizer, which has a high extinction ratio of 100,000:1, is used. After the polarizer, the light is collimated using a set of lenses and a pinhole. The diameter of the collimated beam is 3 mm and it fits well within the AOTF aperture. The exit beam from the AOTF is focused onto an Indium-Gallium-Arsenide (InGaAs) detector. The detector output is read via a computer interface. The entire experiment is conducted in a light-proof dark room to minimize the background contribution to the measurements. The schematic representation of the test setup is presented in Figure 1 and the major specifications of each of the components used are summarized in Table 1. The AOTF has two diffracted beams at the output: (i) the extraordinary (e) or horizontally polarized (H-pol) beam and (ii) the ordinary (o) or vertically polarized (V-pol) beam.
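As a numeric cross-check of the formalism of Section 2.2, the following sketch builds the partial-polarizer Mueller matrix of equation (1) and applies it to an unpolarized Stokes vector; the values of \(k_1\) and \(k_2\) are invented for illustration.

```python
import numpy as np

def partial_polarizer_mueller(k1, k2, r_deg):
    """Mueller matrix of a partial linear polarizer (equation 1)."""
    alpha = k1**2 + k2**2
    beta = (k1**2 - k2**2) / alpha
    gamma = 2.0 * k1 * k2 / alpha
    c2 = np.cos(2.0 * np.radians(r_deg))
    s2 = np.sin(2.0 * np.radians(r_deg))
    M = np.array([
        [1.0,       beta * c2,              beta * s2,              0.0],
        [beta * c2, c2**2 + gamma * s2**2,  (1 - gamma) * c2 * s2,  0.0],
        [beta * s2, (1 - gamma) * c2 * s2,  s2**2 + gamma * c2**2,  0.0],
        [0.0,       0.0,                    0.0,                    gamma],
    ])
    return 0.5 * alpha * M

# near-ideal horizontal polarizer (r = 0) acting on unpolarized light:
M = partial_polarizer_mueller(k1=0.9, k2=0.02, r_deg=0.0)
print(M @ np.array([1.0, 0.0, 0.0, 0.0]))   # -> alpha/2 * [1, beta, 0, 0]
```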
For each diffracted beam of the AOTF (H-pol as well as V-pol), the observations are conducted in the following sequence:

1. Set the desired wavelength in the monochromator.
2. The polarizer is kept in the vertical position and a scan of the AOTF is recorded.
3. The polarizer is kept in the horizontal position and a scan of the AOTF is recorded.

Steps (2) and (3) are then repeated for a 'without-AOTF' condition, where the AOTF is carefully taken out of the optical path and the detector is moved to the central location. This completes the experiment for one wavelength. These steps are repeated for each wavelength.

\begin{table} \begin{tabular}{l l l} \hline Monochromator & Resolution & \(\sim\) 0.3 nm \\ \hline Polarizer & Extinction Ratio & 100,000:1 \\ \hline Lens L1 & Focal length & 45 mm \\ Lens L2 & Focal length & 19 mm \\ Lens L3 & Focal length & 19 mm \\ \hline Pinhole & Diameter & 250 μm \\ \hline AOTF & Type & Non-collinear, dual beam \\ & Crystal & TeO\({}_{2}\) crystal \\ & Aperture & 5mm\(\times\)5mm \\ \hline Detector & Type & InGaAs \\ & Quantum Efficiency & 0.7 \\ & Gain & \(10^{8}\)\(\Omega\) \\ & Responsivity & 0.98 A/W \\ & Operating wavelength range & 0.9-1.7 μm \\ \hline \end{tabular} \end{table} Table 1: Major specifications of the elements used in the lab setup.

Figure 1: Schematic diagram of the laboratory setup. A broadband light source (halogen lamp) is used along with a grating-based monochromator to produce a monochromatic input beam. Light enters the AOTF after passing through a linear polarizer and a collimator arrangement. The output light from the AOTF consists of H-pol (horizontally polarized e-beam) and V-pol (vertically polarized o-beam) components in two directions. Light is then focused onto an InGaAs detector.

The same experimental test setup as shown in Figure 1 is used for spectroscopic calibration. For spectroscopic calibration, the polarizer is removed from the optical chain and the spectral response of the AOTF is observed by tuning the input RF.

## 4 Calibration Results

### Spectroscopic Calibration

Spectroscopic calibration of the AOTF mainly involves the study of the AOTF transfer function and the frequency tuning relation. In the present design involving two beams of the AOTF, the spectroscopic calibration is done for both beams using the lab setup mentioned in Figure 1 (after removing the polarizer from the path). For the spectroscopic calibration, the intensity profile of the halogen source for a narrow wavelength band (which was selected using the monochromator) was scanned as a function of the RF frequency. The AOTF diffracts light over a narrow wavelength band according to its spectral resolution at that particular wavelength. The shape of the transfer function as well as its peak amplitude (diffraction efficiency \(\alpha\)) can be different for the two beams. In Figure 2 (left), the transfer functions for the 'e' and 'o' beams are shown for 1450 nm wavelength. Apart from the clear difference in the peak amplitude, there are also minor differences in the shape of the two transfer functions. These differences will ultimately tend to alter the overall transmission of the filter. The measurements of the AOTF transfer function were carried out for different wavelengths ranging from 1050 nm to 1600 nm in steps of 50 nm. The intensity profile vs wavelength is shown as the colourmap in Figure 2 (right). The transfer functions for various input wavelengths are shown along the z-axis in Figure 2 and it is observed that the resolution increases with wavelength.
The measured intensity profile of the AOTF is similar to the theoretical AOTF transfer function (i.e. \(sinc^{2}\)) except for an asymmetry in the side lobes that could be due to crosstalk behaviour or could result from inhomogeneous birefringence of the transducer waveguide [27, 31, 34]. Ref. [39] suggests that this asymmetry could arise when the AOTF crystal is inhomogeneous and light does not travel as a planar wave. They use the sum of an odd number of \(sinc^{2}\) functions (five) to model the AOTF transfer function. The sum of seven or more \(sinc^{2}\) functions was not used due to the limited wavelength range of the scans [39], and a large number of free parameters could result in overfitting. For our datasets, we modelled the transfer functions using two models: (I) the sum of three \(sinc^{2}\) functions and (II) the sum of five \(sinc^{2}\) functions. A comparison at 1250 nm is shown in Figure 9 of Appendix A. We see that Model (II) fits the secondary and tertiary lobes more accurately when compared to Model (I). Hence, for all further analysis, we use Model (II). To estimate the resolution of the AOTF at various wavelengths, the Model (II) function was fitted to the AOTF transfer profiles. It is observed that the resolution of the AOTF ranges from \(1.9-4.1\) nm in the wavelength range of \(1.0-1.7\)\(\mu\)m. The variation of resolution with wavelength for both beams of the AOTF is not identical, as seen in Figure 3 (left). This difference between the two beams is a consequence of the difference in the geometry of the acousto-optical interaction for these two beams [30].

Figure 2: Laboratory measurements of the AOTF transfer function (\(sinc^{2}\)). _(Left)_ Measured transfer function at 1450 nm wavelength for the two beams of the AOTF. _(Right)_ Spectral variation of the AOTF transfer function (\(sinc^{2}\)) for the e-beam. The x-axis represents the difference from the peak wavelength (\(\Delta\lambda_{c}\)) of the \(sinc^{2}\) function and the y-axis represents the wavelength of light input to the AOTF (as central wavelength \(\lambda_{c}\)). The projected image in the bottom shows the contours of the measured response. The behaviour of the FWHM and side lobes can be clearly seen across the wavelength.

#### 4.1.1 Frequency Tuning Relation (RF vs \(\lambda\))

In order to obtain the frequency tuning relation with wavelength, a Krypton line source spectrum was acquired with the AOTF. To obtain this spectrum, the monochromator in Figure 1 is replaced with the line source. The spectrum was acquired using a variable RF signal from 85 to 125 MHz and is shown in Figure 10 of Appendix B. The Krypton line source has many lines in the desired wavelength range. Ten Krypton emission lines that do not have any overlap with the neighbouring lines were selected. The frequencies at which these lines are obtained were then matched with the wavelengths of the emission lines to obtain the RF-\(\lambda\) calibration by fitting the relation between the tuning frequency and wavelength given in Ref. [30]. The calibration curves for both beams are shown in Figure 3 (right). It is observed that the curve for the o-beam closely traces that of the e-beam and the small difference between them could be attributed to the different geometry of the acousto-optical interaction [30].

Figure 3: _(Left)_ Variation of spectral resolution for both AOTF output beams in the range of 1.0 to 1.7 \(\mu\)m. The error bars are 3\(\sigma\). The observed variation is fit using the relation between the spectral bandpass and wavelength given in Ref. [31]. _(Right)_ RF-\(\lambda\) tuning relation for both e and o beams, represented by the dotted lines, obtained by fitting the theoretical tuning relation to the experimentally observed points marked by blue squares for the e-beam and red circles for the o-beam.
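Returning to the transfer-function modelling of Section 4.1, the sketch below fits the Model (II) sum of five \(sinc^{2}\) functions to a synthetic scan; the lobe amplitudes, positions and noise level are invented for illustration, and the fit is seeded at the generating parameters rather than at values from the real data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinc2(x, amp, x0, w):
    # np.sinc(t) = sin(pi t) / (pi t); w sets the lobe width in nm
    return amp * np.sinc((x - x0) / w) ** 2

def model_ii(x, *p):
    """Model (II): sum of five sinc^2 lobes, p = (amp, x0, w) per lobe."""
    return sum(sinc2(x, *p[3 * i:3 * i + 3]) for i in range(5))

rng = np.random.default_rng(3)
x = np.linspace(-15.0, 15.0, 301)                # delta lambda_c in nm
truth = [1.00, 0.0, 3.0,   0.08, -6.5, 3.0,  0.12, 6.5, 3.0,
         0.03, -12.5, 3.0, 0.04, 12.5, 3.0]      # asymmetric side lobes
y = model_ii(x, *truth) + 0.01 * rng.normal(size=x.size)

popt, _ = curve_fit(model_ii, x, y, p0=truth, maxfev=20000)
print(f"FWHM of the central lobe ~ {0.886 * abs(popt[2]):.2f} nm")
```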
### Polarimetric Calibration

The aim of polarimetric calibration is to obtain the Mueller matrix (equation 1) of the two beams of the AOTF across the entire wavelength range. In order to obtain \(k_{1}\) and \(k_{2}\), it is essential to make transmission measurements of the AOTF for the two beams. This is done by making measurements with and without the AOTF. Without the AOTF, signal (\(V\)) and background (\(V^{B}\)) measurements were taken for two positions of the polarizer: (i) \(V_{00}\) and \(V_{00}^{B}\) at 0\({}^{\rm o}\) (horizontal) and (ii) \(V_{90}\) and \(V_{90}^{B}\) at 90\({}^{\rm o}\) (vertical). With the AOTF, for one beam two datasets (\(v\) and \(v^{B}\)) were taken at both positions of the polarizer: (i) \(v_{00}\) and \(v_{00}^{B}\) at 0\({}^{\rm o}\) and (ii) \(v_{90}\) and \(v_{90}^{B}\) at 90\({}^{\rm o}\). The signal comprises the spectral scan of the central peak of the \(sinc^{2}\) transfer function. The background with the AOTF was measured by fixing the tuning frequency to a value away from the peak frequency. To remove the contribution of this background, it was subtracted from the signal measurements. Similar datasets were taken for the other beam of the AOTF as well.

\[V^{\prime}_{00}=V_{00}-V^{B}_{00};\ V^{\prime}_{90}=V_{90}-V^{B}_{90} \tag{2}\]

\[v^{\prime}_{00}=v_{00}-v^{B}_{00};\ v^{\prime}_{90}=v_{90}-v^{B}_{90} \tag{3}\]

Five sets of observations were combined to improve the signal to noise ratio and minimize the error. Together these datasets provide the values of \(k_{1}\) and \(k_{2}\) for each beam.

\[k_{1}^{2}=\frac{v^{\prime}_{00}}{V^{\prime}_{00}};\ k_{2}^{2}=\frac{v^{\prime}_{90}}{V^{\prime}_{90}} \tag{4}\]

It is to be noted here that for the H-pol beam, \(k_{1}\) is obtained by the ratio of the H-pol beam (with AOTF) to the horizontal beam (without AOTF), whereas \(k_{2}\) is obtained by the ratio of the H-pol beam (with AOTF) to the vertical beam (without AOTF). Hence, \(k_{2}\) is expected to be very close to zero; if \(k_{2}=0\), a perfect polarizer with \(\beta=1\) is obtained. That is why the values of \(k_{2}\) have to be obtained very carefully, as they are very close to the background levels. The measured values of \(k_{1}\) and \(k_{2}\) were then used to estimate the Mueller matrix elements (\(\alpha\), \(\beta\) and \(\gamma\)) for both beams at various wavelengths in the 1.0-1.7 \(\mu\)m range. The errors in \(\alpha\), \(\beta\) and \(\gamma\) have been obtained by propagating the measurement errors in the detector voltages. Their values with errors are listed in Appendix C.
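The reduction from background-subtracted voltages to Mueller parameters, following equations (2)-(4) and the definitions below equation (1), is sketched here; the example voltages are invented and are not the measured values.

```python
import numpy as np

def mueller_parameters(v00, v90, V00, V90):
    """Estimate (alpha, beta, gamma) for one AOTF beam from the
    background-subtracted voltages of equations (2)-(4)."""
    k1_sq = v00 / V00     # with-AOTF / without-AOTF, polarizer at 0 deg
    k2_sq = v90 / V90     # with-AOTF / without-AOTF, polarizer at 90 deg
    alpha = k1_sq + k2_sq
    beta = (k1_sq - k2_sq) / alpha
    gamma = 2.0 * np.sqrt(k1_sq * k2_sq) / alpha
    return alpha, beta, gamma

# illustrative H-pol numbers: strong transmission at 0 deg, near-zero at 90 deg
print(mueller_parameters(v00=0.62, v90=0.004, V00=1.0, V90=1.0))
```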
The variation of \(\alpha\) and \(\beta\) with wavelength is shown in Figure 4. The variation of \(\alpha\) with wavelength for both output beams, as seen from Figure 4, suggests that the transmission efficiency is higher for the e-beam when compared to the o-beam. It is also seen that the efficiency is low for lower wavelengths up to 1150 nm; it then increases and plateaus until 1400 nm and then decreases until 1700 nm. The low efficiency at lower and higher wavelengths is largely due to the variation of the acoustic power, which is incident on the crystal, with the sweep frequency. As the RF is swept from 75 MHz to 150 MHz, the variation seen in the input RF power is \(\sim 0.12\) W.

Figure 4: Estimated Mueller matrix parameters for various input wavelengths for both e and o beams, shown as blue squares and red circles, respectively. The wavelength dependence of the two major parameters \(\alpha\) and \(\beta\), which are used to describe the Mueller matrix of the two beams of the AOTF, is represented by solid and dashed lines. The 3\(\sigma\) error bars are within the marker sizes.

The polarimetric efficiency is represented by \(\beta\), where \(\beta=1\) for an ideal polarizer. From our measurements, \(\beta\gtrsim 0.986\) is obtained in the desired wavelength range for both beams, with the maximum value reaching \(\beta\sim 0.997\). \(\beta\) also carries information about the linear dichroism properties of the AOTF, which is a measure of the difference in the absorption of linearly polarized light beams with orthogonal planes of polarization. It is observed that the linear dichroism is highest for the wavelength range of 1150 to 1500 nm. These measurements were used to obtain the signal sensitivity in Section 5.4.

## 5 Instrument Response to the Signal

### Polarized Signal

Here, we consider a simplistic configuration of an AOTF based spectro-polarimeter with a specific focus on detecting the linearly polarized signal of Earth. The light scattered in planetary atmospheres can get linearly polarized due to Rayleigh or Mie scattering[40]. The reflected light from planets can get circularly polarized owing to either scattering from clouds[40] or a biological origin[41]. Presently, we ignore the circular polarization as it is usually much smaller than the linear polarization. The strength of the scattered linear polarization varies with the phase angle (\(\phi\)). The phase angle is defined as the star-planet-observer angle, as shown in Figure 5. The Mie scattering linear polarization starts to dominate over that of Rayleigh scattering beyond about 700 nm (for example, see figure 9 of Ref. [42]). The maximum polarization occurs at the 'rainbow angle' (the angle at which a rainbow is formed; for liquid water it is \(\sim 42^{\circ}\)). This happens due to the process of total internal reflection[19], which occurs when light comes out of the cloud drop after one reflection inside the cloud droplets, at the liquid-air boundary. The process of reflection leads to high polarization at these angles and this angle is also indicative of the refractive index of the cloud droplets.

### Possible Experiment Configuration

To observe Earth as a _disc-integrated_ planet, the angular extent of Earth needs to be sufficiently small, especially for polarimetric observations, because polarization is dependent on the phase angle. When observing Earth from a closer distance, the angular variation across the local regions of the Earth disc could be very large; for example, it can be about \(\sim 20^{\circ}\) from a geostationary orbit at \(\sim 36000\) km altitude. A large variation in the local phase angle can lead to dilution of the overall polarization in _disc-integrated_ observations. How much of this local variation can be tolerated can be estimated from the phase angle dependence of the cloud polarization, an example of which is shown in Figure 6. Looking at the sharp polarization feature at \(42^{\circ}\), it is clear that any disc angle larger than about \(\sim 10^{\circ}\) would lead to dilution of polarization at this phase angle.
Earth, subtending an angle of only \(\sim 2^{\circ}\) from the Moon, makes the Moon a preferred choice for such observations. An ideal experiment to observe Earth as an exoplanet would require a broad wavelength coverage (visible to IR) with the ability to perform spectroscopic and polarimetric measurements. Space-based experiments, however, are often severely constrained in terms of the mass and volume of the instrument, which also limits the instrument design in many ways. For this reason, we envisage a compact and lightweight AOTF based spectro-polarimeter. We chose the NIR wavelength band for this experiment as it offers several strong absorption bands[12] of H\({}_{2}\)O, CO\({}_{2}\), O\({}_{2}\) and CH\({}_{4}\) which could be of interest to habitability. Measuring the polarization within the absorption bands of different gas species can be of interest as it leads to higher polarization due to reduced multiple scattering [43, 44, 45]. Further, the NIR band has also been shown to be more sensitive to ocean glints than visible bands [20, 46], though it lacks sensitivity to Rayleigh scattering. Our choice of the NIR band leads us to choose an AOTF and detector which have better performance in this band. We configure the rest of the instrument design with the measured performance of the available detectors and AOTF, keeping in mind the compactness of the overall instrument design. The suggested experiment consists of a dual beam AOTF based spectro-polarimeter. It would consist of a pair of InGaAs detectors (one for each beam) onto which the disc of Earth will be focused. The plate scale of the optics can be designed to focus each beam onto a few pixels. Table 2 describes the major specifications of the instrument configuration for the experiment and Figure 5 shows the basic instrument configuration and the observation geometry. As the Moon orbits the Earth, it will be possible to observe different phases of Earth. Similar observations have been carried out using Earthshine measurements [47, 48, 46], but these measurements are susceptible to depolarization from the lunar surface and absorption in the Earth's atmosphere. Such issues can be eliminated with direct observations of Earth from lunar orbit.

\begin{table} \begin{tabular}{l l} \hline AOTF type & Non-collinear, dual beam \\ AOTF crystal & TeO\({}_{2}\) \\ \hline RF sweep range & 75-150 MHz \\ RF step size & 200 kHz \\ AOTF input RF power & 1-2 Watts \\ \hline Detector Type & InGaAs \\ Detector QE & 70\% \\ No. of pixels & 5 \\ \hline Input aperture & 2 mm \\ \hline Spectral range & \(1.0-1.7\) μm \\ Spectral resolution & 2-4 nm \\ \hline Integration time & 10 millisec to 1 sec \\ \hline FOV & 2\({}^{\mathrm{o}}\) \\ \hline Light polarization at output & Two orthogonal linear polarizations \\ \hline \end{tabular} \end{table} Table 2: Major specifications of the possible instrument configuration for an AOTF-based spectro-polarimetric experiment to observe Earth as an exoplanet.

### Radiative Transfer Calculations

The disc-integrated flux and polarization of Earth largely depend upon the extent of the cloud cover and surface features. The cloud cover of Earth is mainly responsible for the polarization features via Mie scattering from cloud droplets, whereas land areas usually reflect unpolarized light. A simplistic atmospheric and surface model is considered for the calculations. The model consists of a uniform surface layer covered with gas and clouds. The surface is assumed to be Lambertian, having a constant albedo of 0.3 [49]. The doubling-adding vector radiative transfer code PyMieDAP [50] is used for calculating the reflected flux and polarization from Earth's disc. It works by
calculating the full Stokes vector of the locally reflected light from the planet. Initially the planet is divided into a grid of \(15\times 15\) points, where the radiative transfer calculations are performed for each grid point. Next, for the disc-integrated simulations, the locally calculated Stokes vectors are integrated over the parts of the planet which are visible to the observer. The results presented here are for the disc-integrated simulations. The strength of polarization as well as the spectral features can depend upon various atmospheric features like cloud-top altitude, cloud droplet size, the extent of clouds etc. In this model, we consider standard atmospheric parameters (like the pressure and temperature profile) for Earth. The water clouds are kept at an altitude of \(\sim\)4 km with a cloud droplet size of 6 \(\upmu\)m. Patchy clouds[50] with 20% and 100% cloud cover are considered in the simulation to show the extent of polarization variation due to cloud cover (the average cloud cover on Earth is about 50-70%). The simulation results for flux and polarization are presented in Figure 6 for fixed continuum wavelengths of 1.0 \(\upmu\)m and 1.3 \(\upmu\)m. The reflected flux is seen to be decreasing, almost linearly, with phase angle (owing to less illumination) and also with decreasing cloud cover (owing to reduced net albedo). The polarization features, as mentioned earlier, show a typical Mie scattering profile where the signal peaks at about 42\({}^{\circ}\) for water clouds. Decreasing the cloud fraction uncovers the surface, which leads to more unpolarized signal being reflected and hence less overall polarization. The signals obtained from these simulations are used for further modelling of the instrument response in the next section.

Figure 6: Simulation of flux and polarization at 1.0 μm and 1.3 μm wavelength. The simulations are carried out at 100\(\%\) and 20\(\%\) cloud cover. (a) Variation of reflected intensity with phase angle. Total flux \(F\) is given by \(\pi\times\) flux for a unit solar flux incident on the planet[50]. (b) Variation of reflected polarization (\(-100Q/I\)) with phase angle. The polarization at 1.0 μm for 100\(\%\) cloud cover is recreated by simulations from figure 9 of Ref. [42].

### Signal Calculations

The disc-integrated signal from Earth at the lunar distance is calculated for each of the two polarized beams of the AOTF and is considered to fall on a total of 5 pixels. The signal calculations were performed using the parameters listed in Table 3.

\begin{table} \begin{tabular}{l l l l} \hline Parameter & Symbol & Value & Unit \\ \hline Solar flux on Earth (@ 1300 nm) & \(F\) & 0.3 & W/m\({}^{2}\)/nm \\ Telescope aperture diameter & \(a_{d}\) & 2 & mm \\ Telescope aperture area & \(a\) & \(3.141\times 10^{-6}\) & m\({}^{2}\) \\ Earth albedo & \(A\) & 0.3 & \\ Total no. of pixels on which Earth is focused & \(N\) & 5 & \\ Quantum Efficiency of the detector & \(QE\) & 0.7 & \\ Earth solid angle from Moon orbit & \(S\) & \(1\times 10^{-3}\) & steradian \\ Spectral Resolution & \(R\) & 3 & nm \\ Integration time & \(t\) & 0.01 to 1 & second \\ Energy of photon (@ 1300 nm) & \(E_{p}\) & \(1.53\times 10^{-19}\) & Joule \\ \hline \end{tabular} \end{table} Table 3: Input parameters for signal calculation.

Figure 5: Observation geometry and instrument configuration. (Left) The orbital configuration of the Moon and the Earth along with the instrument FOV. The relative orientation of the instrument reference plane with respect to the Sun-Earth-Moon plane is also shown. (Right) A functional diagram of the suggested instrument. The input aperture, FOV and instrument reference planes are marked for the instrument.

The total number of electrons generated per pixel in the detector for an integration time \(t\) is given by the following relation:

\[\mathrm{e}^{-}/\mathrm{pixel}=(F\times a\times A\times S\times QE\times R\times t)/(\pi\times N\times E_{p}) \tag{5}\]
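A direct transcription of equation (5), with the Table 3 entries as default arguments, gives a feel for the expected signal levels; this is a sketch only, and the printed numbers follow from the tabulated values rather than from the actual detector chain.

```python
import numpy as np

def electrons_per_pixel(F=0.3, a=3.141e-6, A=0.3, S=1e-3, QE=0.7,
                        R=3.0, t=0.1, N=5, E_p=1.53e-19):
    """Equation (5): detected electrons per pixel for integration time t
    (seconds), using the Table 3 parameters as defaults."""
    return (F * a * A * S * QE * R * t) / (np.pi * N * E_p)

for t in (0.01, 0.1, 1.0):
    print(f"t = {t:5.2f} s -> {electrons_per_pixel(t=t):.3e} e-/pixel")
```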
\begin{table} \begin{tabular}{l l l l} \hline Parameter & Symbol & Value & Unit \\ \hline Solar flux on Earth (@ 1300 nm) & \(F\) & 0.3 & W/m\({}^{2}\)/nm \\ Telescope aperture diameter & \(a_{d}\) & 2 & mm \\ Telescope aperture area & \(a\) & \(3.141\times 10^{-6}\) & m\({}^{2}\) \\ Earth albedo & \(A\) & 0.3 & \\ Total no. of pixels on which Earth is focused & \(N\) & 5 & \\ Quantum Efficiency of the detector & \(QE\) & 0.7 & \\ Earth solid angle from Moon orbit & \(S\) & \(1\times 10^{-3}\) & steradian \\ Spectral Resolution & \(R\) & 3 & nm \\ Integration time & \(t\) & 0.01 to 1 & second \\ Energy of photon (@ 1300 nm) & \(E_{p}\) & \(1.53\times 10^{-19}\) & Joule \\ \hline \end{tabular} \end{table} Table 3: Input parameters for signal calculation. Figure 6: Simulation of Flux and Polarization at 1.0 μm and 1.3 μm wavelength. The simulations are carried out at 100\(\%\) and 20\(\%\) cloud cover. (a) Variations of reflected intensity with phase angle. Total Flux \(F\) is given by \(\pi\times\) flux for a unit solar flux incident on the planet [50]. (b) Variations of reflected polarization (\(-100Q/I\)) with phase angle. The polarization at 1.0 μm for 100\(\%\) cloud cover is recreated by simulations from figure 9 of Ref. [42]. Stokes Q can, however, get converted to Stokes U if the measurements are performed in a plane different from the plane of scattering. If the instrument plane is rotated by an angle \(\theta\) with respect to the scattering plane (the Sun-Earth-Moon plane), as shown in Figure 5, then the Stokes vector gets multiplied by the rotation matrix as follows: \[M\times\begin{bmatrix}1&0&0&0\\ 0&\cos 2\theta&\sin 2\theta&0\\ 0&-\sin 2\theta&\cos 2\theta&0\\ 0&0&0&1\end{bmatrix}\times\begin{bmatrix}I\\ Q\\ U\\ V\end{bmatrix}=\begin{bmatrix}I_{1}\\ Q_{1}\\ U_{1}\\ V_{1}\end{bmatrix}. \tag{6}\] Here, I = e\({}^{-}\)/pixel (see equation 5), Q = degree of polarization \(\times\) I, and U and V = 0. \(M\) is the Mueller matrix of the AOTF. The detector detects only the first element of the Stokes vector, given by \(I_{1}\). It is noteworthy that the Mueller matrix \(M\) can be different for the two beams (as discussed in Section 2); the incident Stokes vector and the rotation matrix, however, remain unchanged. The intensity detected in the two detectors (\(I_{1}\) and \(I_{2}\)) is: \[I_{1}=\frac{\alpha_{1}}{2}I+\frac{\alpha_{1}}{2}\beta_{1}Q\cos 2\theta \tag{7}\] and \[I_{2}=\frac{\alpha_{2}}{2}I-\frac{\alpha_{2}}{2}\beta_{2}Q\cos 2\theta. \tag{8}\] Here, the values of \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{1}\) and \(\beta_{2}\) were obtained for the two beams from Figure 4. The observed intensity signal in the two detectors is also accompanied by noise: detector noise and photon noise. The detector noise (including dark noise and read-out circuit noise) is observed to be close to 5000 \(e^{-}/pixel\) for a large range of integration times. In the present electronics circuit, the noise is seen to be dominated by the read-out electronics noise. The photon noise is taken to be \(\sqrt{N}\), where \(N\) is the number of photons. The two noises are added in quadrature to obtain the total noise. The calculated signal-to-noise ratio for the two detectors is shown in Figure 7 for a range of integration times and plane rotation angles \(\theta\). Three different scenarios of polarization signal and phase angle are considered. The SNR mostly depends upon the phase angle of the Earth: smaller phase angles allow viewing a larger sun-lit portion of Earth, and vice versa. For a 0\({}^{\rm o}\) phase angle, the instrument views the full day-side Earth disc, whereas for a 150\({}^{\rm o}\) phase angle only a thin crescent of the day-side Earth is visible to the instrument. The design of the instrument needs a large range of detector integration times which can be tuned for different portions of the orbit in order to maintain the SNR. Note also that the difference in the SNR of the two detectors for the same integration time and plane rotation angle is due to the different values of \(\alpha\) for the two beams of the AOTF (see Figure 4). Figure 7: Signal to noise ratio (SNR) simulation for the two detectors (H and V) for various integration times and plane rotation angles. The calculations are shown for different values of incident polarization (P) and phase angles. The detector is saturated at an SNR of about 900.
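As a rough numerical cross-check of equations 5, 7 and 8 and of the noise model, the short sketch below evaluates the per-pixel signal and SNR using the Table 3 parameters. The \(\alpha\) and \(\beta\) values in it are illustrative placeholders standing in for the measured curves of Figure 4, not the actual instrument calibration.

```python
import numpy as np

# Table 3 parameters (@ 1300 nm)
F, a, A, S = 0.3, 3.141e-6, 0.3, 1e-3    # W/m^2/nm, m^2, albedo, sr
QE, R, N, E_p = 0.7, 3.0, 5, 1.53e-19    # QE, nm, pixels, J
read_noise = 5000.0                      # detector noise [e-/pixel]
a1, b1 = 0.30, 0.99                      # placeholder alpha/beta, beam 1
a2, b2 = 0.28, 0.99                      # placeholder alpha/beta, beam 2

def electrons_per_pixel(t):
    """Eq. 5: photo-electrons per pixel for integration time t [s]."""
    return F * a * A * S * QE * R * t / (np.pi * N * E_p)

def beam_intensities(t, dolp, theta_deg):
    """Eqs. 7-8: intensities seen by the two detectors (here Q = DOLP * I)."""
    I = electrons_per_pixel(t)
    Qc = dolp * I * np.cos(2 * np.radians(theta_deg))
    return 0.5 * a1 * (I + b1 * Qc), 0.5 * a2 * (I - b2 * Qc)

def snr(sig):
    """Photon noise sqrt(sig) and detector noise added in quadrature."""
    return sig / np.sqrt(sig + read_noise**2)

I1, I2 = beam_intensities(t=0.1, dolp=0.05, theta_deg=0.0)
print(f"SNR(H) = {snr(I1):.0f}, SNR(V) = {snr(I2):.0f}")
```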
From the observed values of \(I_{1}\) and \(I_{2}\) and by solving equations 7 and 8, the estimated Degree of Linear Polarization (DOLP) or \(Q/I\) of the incident radiation is given as \[\frac{Q}{I}=\frac{\alpha_{1}I_{2}-\alpha_{2}I_{1}}{\alpha_{1}\beta_{1}\cos 2 \theta I_{2}+\alpha_{2}\beta_{2}\cos 2\theta I_{1}}. \tag{9}\] The retrieved DOLP (from equation 9) depends on the measured values of \(I_{1}\) and \(I_{2}\), which carry errors due to detector noise as well as photon noise. The DOLP also depends on the values of \(\alpha\) and \(\beta\), which have errors due to measurement uncertainties. All these errors combined lead to a retrieved value which also has uncertainties. A synthetic retrieval of DOLP was performed considering all these sources of uncertainty for a large range of integration times. Three different values of DOLP are injected and the corresponding retrieved values for a range of integration times are shown in Figure 8. The retrieved values show a distribution centred about the injected value of DOLP. This distribution decreases in width with increasing integration time: large integration times lead to larger SNRs (as in Figure 7) and hence to better constraints on the retrieved values. The measured uncertainties of \(\alpha\) and \(\beta\) of the AOTF are small and have minimal effect on the retrieval; it was found that the detector noise is the dominant source of error. It was also seen that for the present configuration of the instrument, integration times of about 100 ms and larger can lead to an uncertainty of \(\sim\)0.3% in the retrieved DOLP. Figure 8: Synthetic retrieval of DOLP from the instrument. _(Left)_ Three different values of known DOLP are injected into the instrument model: 1%, 5% and 10%, and the retrieved DOLP values are plotted in blue, red and black colors respectively. _(Right)_ The effect of platform rotation on the retrieved DOLP for 1% (top panel) and 5% (bottom panel) injected DOLP. Three cases are considered for an ‘unknown’ offset of 0\({}^{\circ}\), 5\({}^{\circ}\) and 10\({}^{\circ}\) in the plane rotation angle. The retrieved DOLP also depends upon our knowledge of the angle \(\theta\) (via equation 9). Space-based observations can be prone to misalignments due to jitter and drift of satellites, causing an error in the knowledge of \(\theta\). This can underestimate or overestimate the DOLP by a factor of \(1/\cos 2\theta\) (see equation 9) and hence cause a non-linear error in the retrieved DOLP depending upon the angle \(\theta\). A synthetic retrieval exercise of DOLP is performed by deliberately adding an ‘unknown’ offset in the angle \(\theta\) for various integration times and a range of plane rotation angles (\(\theta\) = -45\({}^{\circ}\) to 45\({}^{\circ}\)). The retrieved DOLP values are shown in the right panel of Figure 8 for two different cases of injected DOLP (1% and 5%). We can see that an offset of 0\({}^{\circ}\) (no offset) allows us to retrieve the true value (injected value) over a large range of plane rotation angles. For an offset of 5\({}^{\circ}\) we start to see a constant shift in the retrieved values with respect to the plane rotation angle. This shift is larger for larger angle offsets and also for larger injected DOLP values; for a 10\({}^{\circ}\) offset the shifts are further amplified. From this analysis we find that the error (or shift) in the retrieved DOLP can be kept within 15% of the true value if the satellite platform is maintained within 10\({}^{\circ}\) about the scattering plane (\(\theta\) = 0\({}^{\circ}\)). It is noteworthy that even if the instrument reference plane is tilted at large angles (plane rotation angle), the retrieved polarization is very close to the incident polarization as long as the angles are known.
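The synthetic retrieval described above can be sketched as follows. The sketch reuses an Eq. 5 signal level, treats the injected DOLP as \(-Q/I\) (the sign convention of Figure 6) so that equations 7-9 round-trip consistently, and again uses placeholder \(\alpha\) and \(\beta\) values rather than the measured ones.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2, b1, b2 = 0.30, 0.28, 0.99, 0.99   # placeholder AOTF parameters
read_noise, I0 = 5000.0, 2.5e7            # e-/pixel; I0 from Eq. 5 at t ~ 0.1 s

def forward(dolp, theta):
    """Eqs. 7-8 with Q = -dolp * I0 (the -Q/I convention of Figure 6)."""
    Qc = -dolp * I0 * np.cos(2 * theta)
    return 0.5 * a1 * (I0 + b1 * Qc), 0.5 * a2 * (I0 - b2 * Qc)

def retrieve(I1, I2, theta):
    """Eq. 9: DOLP estimated from the two measured intensities."""
    c = np.cos(2 * theta)
    return (a1 * I2 - a2 * I1) / (a1 * b1 * c * I2 + a2 * b2 * c * I1)

theta, injected = np.radians(10.0), 0.05
est = []
for _ in range(500):
    I1, I2 = forward(injected, theta)
    I1 += rng.normal(0, np.sqrt(I1 + read_noise**2))   # photon + detector noise
    I2 += rng.normal(0, np.sqrt(I2 + read_noise**2))
    est.append(retrieve(I1, I2, theta))
print(f"retrieved DOLP: {np.mean(est):.4f} +/- {np.std(est):.4f}")

# an 'unknown' 5-degree offset in the assumed plane rotation angle biases
# the retrieval, as in the right panel of Figure 8:
I1, I2 = forward(injected, theta)
print(f"with 5 deg offset: {retrieve(I1, I2, theta + np.radians(5.0)):.4f}")
```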
## 6 Future Work In this work, the usefulness of an AOTF-based spectro-polarimeter instrument for observing Earth as an exoplanet has been established. More studies need to be carried out in the future for AOTF characterization, especially to understand temperature-related effects. The current experiment was carried out at room temperature, but the RF-\(\lambda\) tuning relation of an AOTF is known to shift with temperature and needs to be calibrated [34, 35]. Along with that, it may be required to maintain the operating temperature range of the AOTF in orbit, so that the calibration does not change significantly. The temperatures of the AOTF crystal as well as the RF power amplifier are both important for calibration purposes. The varying temperature of the RF power amplifier in space can lead to varying acoustic power, which in turn can affect the diffraction efficiency of the crystal [52]. The study of temperature-related effects is planned in the near future and will be taken up as a continuation of the current work. The polarimetric calibration, as discussed in this work, depends on various other factors such as the lenses, coatings, detector response, etc., and hence a final end-to-end calibration of such an instrument will be carried out with flight components before flight. For the calculations of the instrument response, the peak value of \(\alpha\) (as in Figure 2) is considered. Since the AOTF transfer functions of the two beams differ in FWHM (Figure 3) and in shape (Figure 2), the effective value of \(\alpha\) should account for these variations, which may alter the transmission of the beams. This can be done by normalizing the two values of \(\alpha\) with their areas under the curve. Although the instrument design uses two identical detectors for the two beams, one may need to correct for any difference in the response of the detectors in a similar manner. The response of each pixel in the detector will need a relative calibration by flat-fielding the detector.
A final calibration of the instrument with an unpolarized source will reveal any relative difference in the transmission of the two channels ('e' and 'o' beams) which may arise from unknown sources, such as detector response, transmissivity of lenses, etc. It may be required to study the stability of such calibration in orbit by regularly observing a known source. The above-mentioned methodology for retrieving polarization works well for the continuum part of the spectrum. For retrieving the polarization within the absorption bands, one may need to convolve the modelled spectrum with the AOTF transfer function. This may be required depending upon the mismatch in the 'shape' of the AOTF transfer function of the two beams, as the absorption lines in the spectrum are much narrower than the width of the transfer function. ## 7 Discussion and Conclusion Spectro-polarimetry of Earth will allow us to benchmark the spectral and polarimetric signatures of temperate exoplanets against Earth. A possible configuration of an AOTF-based spectro-polarimeter experiment is presented in this work. AOTFs have advantages over other wavelength-dispersive/polarization measurement systems: they are compact, devoid of any moving parts, and have the ability to carry out rapid scans with good spectral resolution (2-3 nm). Hence, AOTFs have been widely used in planetary studies and astronomy [53, 54, 55, 56]. Here, we have proposed an experiment which serves as a compact instrument for studying Earth as an exoplanet in reflected light over a broad spectral range in two orthogonal polarization directions. The experiment is designed for observations from lunar orbit, which allows capturing all the phase angles of Earth, mimicking future observations of directly imaged exoplanets which could also be sampled over a large range of phase angles. Considering the uncertainties in measurements, photon noise and detector noise, it is found that the instrument can achieve a polarimetric accuracy of better than 0.3% for integration times of 100 ms and larger. This is also consistent with previous AOTF-based polarimeter observations of Venus clouds [29], where the errors are consistently close to \(\sim\)0.1%. In this design, the major source of uncertainty originates from the detector noise. The values of \(\beta\) are very close to 1 with a measurement uncertainty of \(<0.1\)%, which shows that the AOTF itself can achieve much better accuracy if other noises (like photon noise and detector noise) are not significant. The instrument signal-to-noise ratio can be further improved by co-adding several frames of a single observation. With this methodology and the considerations outlined in the discussion, we propose this experiment as a piggy-back instrument on a future lunar mission to observe the spectro-polarimetric signatures of the disc-integrated Earth. ## Appendix A AOTF transfer function modelling A sum of \(sinc^{2}\) functions was used to model the AOTF transfer function, as suggested by [39]. Two models were used: (I) a sum of three \(sinc^{2}\) functions and (II) a sum of five \(sinc^{2}\) functions. From Figure 9, it is seen that for the secondary and tertiary lobes Model (II) shows an improvement over Model (I). Figure 9: AOTF transfer function at 1250 nm modelled using a \(sinc^{2}\) function (left), a sum of three \(sinc^{2}\) functions (middle) and a sum of five \(sinc^{2}\) functions (right). In each plot, the data is shown in green, the model is shown with dashed blue lines and the relative residuals ([Data\(-\)Model]/Data) are shown in the lower panel. The zero line in the bottom panels is marked using red solid lines.
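A minimal sketch of such a fit is shown below; a noisy synthetic transfer function stands in for the lab measurement, and `np.sinc` (the normalized sinc) is used for each lobe.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinc2_sum(wl, *p):
    """Sum-of-sinc^2 model; p holds (amplitude, centre [nm], width [nm])
    triplets -- 3 triplets for Model I, 5 for Model II."""
    y = np.zeros_like(wl)
    for A, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.sinc((wl - c) / w) ** 2
    return y

# synthetic stand-in for the measured transfer function near 1250 nm
rng = np.random.default_rng(2)
wl = np.linspace(1240.0, 1260.0, 400)
tf = sinc2_sum(wl, 1.0, 1250.0, 2.0, 0.08, 1246.5, 1.5, 0.08, 1253.5, 1.5)
tf += rng.normal(0.0, 0.003, wl.size)   # measurement noise

p0 = [1.0, 1250.0, 2.0, 0.1, 1246.0, 1.5, 0.1, 1254.0, 1.5]  # Model I guesses
popt, _ = curve_fit(sinc2_sum, wl, tf, p0=p0)
rel_res = (tf - sinc2_sum(wl, *popt)) / tf   # relative residuals, as in Figure 9
print(f"max |relative residual| = {np.abs(rel_res).max():.3f}")
```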
## Appendix B Krypton lamp spectrum A Krypton lamp was used to measure the spectrum for both the e- and o-beams in the lab. This spectrum was used to estimate the tuning relation for each of the diffracted beams of the AOTF. It is observed that the spectrum of the e-beam closely traces the spectrum of the o-beam, and there is a marginal difference in the tuning relations for the two beams. The measured lab spectra for the two beams are shown in Figure 10. ## Appendix C Estimated Mueller matrix parameters The Mueller matrix for each of the AOTF beams was estimated using the relation for a partial polarizer, as described in Section 2.2. The Mueller matrix elements are computed from three major parameters: \(\alpha\), \(\beta\) and \(\gamma\). In this work, the variation of these parameters with wavelength was studied (see Section 4.2). The values of these parameters with wavelength are listed in the following Table 4. ### Acknowledgments We acknowledge the encouragement and support of DD, PDMSA, URSC and Director, URSC regarding this work. We thank Dr T K Alex for his encouragement. We acknowledge the initial guidance and support of Anurag Tyagi and Dr Manju Sudhakar towards this work. BJ acknowledges the timely help and guidance of Loic Rossi for the use of the PyMieDAP package.
2303.08274
GeoSpark: Sparking up Point Cloud Segmentation with Geometry Clue
Current point cloud segmentation architectures suffer from limited long-range feature modeling, as they mostly rely on aggregating information within local neighborhoods. Furthermore, in order to learn point features at multiple scales, most methods utilize a data-agnostic sampling approach to decrease the number of points after each stage. Such sampling methods, however, often discard points for small objects in the early stages, leading to inadequate feature learning. We believe these issues can be mitigated by introducing explicit geometry clues as guidance. To this end, we propose GeoSpark, a Plug-in module that incorporates Geometry clues into the network to Spark up feature learning and downsampling. GeoSpark can be easily integrated into various backbones. For feature aggregation, it improves feature modeling by allowing the network to learn from both local points and neighboring geometry partitions, resulting in an enlarged, data-tailored receptive field. Additionally, GeoSpark utilizes geometry partition information to guide the downsampling process, where points with unique features are preserved while redundant points are fused, resulting in better preservation of key points throughout the network. We observed consistent improvements after adding GeoSpark to various backbones including PointNet++, KPConv, and PointTransformer. Notably, when integrated with Point Transformer, our GeoSpark module achieves a 74.7% mIoU on the ScanNetv2 dataset (4.1% improvement) and 71.5% mIoU on the S3DIS Area 5 dataset (1.1% improvement), ranking top on both benchmarks. Code and models will be made publicly available.
Zhening Huang, Xiaoyang Wu, Hengshuang Zhao, Lei Zhu, Shujun Wang, Georgios Hadjidemetriou, Ioannis Brilakis
2023-03-14T23:30:46Z
http://arxiv.org/abs/2303.08274v1
# GeoSpark: Sparking up Point Cloud Segmentation with Geometry Clue ###### Abstract Current point cloud segmentation architectures suffer from limited long-range feature modeling, as they mostly rely on aggregating information within local neighborhoods. Furthermore, in order to learn point features at multiple scales, most methods utilize a data-agnostic sampling approach to decrease the number of points after each stage. Such sampling methods, however, often discard points for small objects in the early stages, leading to inadequate feature learning. We believe these issues can be mitigated by introducing explicit geometry clues as guidance. To this end, we propose **GeoSpark**, a Plug-in module that incorporates **Geo**metry clues into the network to **Spark** up feature learning and downsampling. GeoSpark can be easily integrated into various backbones. For feature aggregation, it improves feature modeling by allowing the network to learn from both local points and neighboring geometry partitions, resulting in an enlarged, data-tailored receptive field. Additionally, GeoSpark utilizes geometry partition information to guide the downsampling process, where points with unique features are preserved while redundant points are fused, resulting in better preservation of key points throughout the network. We observed consistent improvements after adding GeoSpark to various backbones including PointNet++, KPConv, and PointTransformer. Notably, when integrated with Point Transformer, our GeoSpark module achieves a 74.7% mIoU on the ScanNetv2 dataset (4.1% improvement) and 71.5% mIoU on the S3DIS Area 5 dataset (1.1% improvement), ranking top on both benchmarks. Code and models will be made publicly available. ## 1 Introduction Since PointNet++ [31], mainstream point cloud segmentation methods [47, 19, 24, 2, 36] conduct feature aggregation within local neighborhoods, because learning global features for large-scale point clouds is infeasible. However, due to the redundancy in point clouds, local neighborhoods often contain a high percentage of similar points, limiting models' ability to learn from diverse contexts. For this reason, how to model long-range features in point clouds is a long-standing challenge in point cloud processing [20, 13, 30]. Moreover, hierarchical paradigms are predominantly utilized in point cloud understanding architectures, where points undergo multiple downsampling stages to learn features at various scales. These frameworks typically employ data-agnostic sampling methods, such as FPS [31], voxelization [11], or random sampling [13], which tend to dilute points for small objects. Figure 1: **Illustration of GeoSpark on feature aggregation and downsampling modules.** Current feature learning predominantly depends on local aggregation (shown in **a**). Introducing GeoSpark not only expands the receptive field but also concentrates on the relevant areas (shown in **b**). For instance, if the query point is located at the end of the bed, Geometry-Informed Aggregation will expand the receptive field to cover the entire area of the bed, as well as the bed mat on the floor. Moreover, **GeoSpark** also enhances sampling by using geometry clues as guidance. Compared to FPS (shown in **c**), Geometric Downsampling (shown in **d**) samples coarsely in areas of simple geometry and densely in areas with complex geometry, such as a dishwasher, a painting on the wall, and objects on the table. The joint use of these two modules yields impressive segmentation results.
This is due to the severe data imbalance issues present in point clouds. The reduction in the number of points during later stages restricts the model's ability to learn expressive features for small objects, resulting in unsatisfactory performance. We argue that these challenges can be mitigated by introducing explicit geometry clues into the framework and performing geometry-aware feature learning and downsampling. We define the geometry information as an initial geometry partition that can be obtained with conventional point cloud processing techniques [16, 22, 21, 12, 26], whereby redundant points can be clustered into simple shapes. To this end, we introduce a Plug-in module named **GeoSpark** that can be integrated with various point cloud segmentation backbones to spark up semantic segmentation results by enhancing feature aggregation and the downsampling process. Specifically, for feature aggregation, GeoSpark offers a _Geometry-Informed Aggregation_ (GIA) module which attends to both local neighbor points and geometry partition entities. Unlike methods that rely solely on local neighborhoods, our approach offers two compelling advantages. Firstly, it significantly enlarges the receptive field of the feature aggregation module without dramatically increasing the number of reference points. This is achieved by encoding geometry partitions into superpoints, which provide a compact yet information-rich format for the raw input. Secondly, the geometry partition entities are highly tailored to the input point cloud data. Consequently, the receptive field for each point is tailored to its specific geometry, enabling the model to learn from relevant regions rather than data-agnostic areas. This concept is akin to the methodology employed in deformable structures [49, 43, 37, 8], which learn offset values from the originally prescribed regions. Here, we achieve a similar effect without requiring an additional offset design. For the downsampling module, GeoSpark introduces _Geometric Downsampling_ (GD). Unlike conventional methods such as Farthest Point Sampling, Random Sampling, and Voxelization, which drop points without considering their importance, our approach uses the geometry partition as guidance and fuses redundant points into one while keeping points with unique features. In other words, downsampling becomes the process of reducing point redundancy, ensuring that important points are retained. This data-dependent downsampling approach has been shown to be very beneficial, especially for small objects. These two simple yet powerful module designs can be incorporated into various backbone models. We tested them on three backbone models, including PointNet++ [31], KPConv [36], and Point Transformer [10], and consistently observed improvements. In summary, our contribution is threefold. * We demonstrate that utilizing geometric partitions to enhance feature aggregation is an effective approach for performance gains. It not only expands the receptive field in a cost-efficient manner but also concentrates on relevant regions that are customized for the query point. * We enhance the downsampling module in the architecture by fusing redundant points and retaining points with unique features, resulting in better information retention throughout the network. * GeoSpark has proven to be universal and transferable to various baseline models.
Notably, when incorporating it with Point Transformer [48], it yielded a 4.1% improvement in the mIoU score on the ScanNetV2 [7] dataset, and achieved a score of 71.5% on the S3DIS [1] dataset, ranking high on both benchmarks. ## 2 Related work Point Cloud Segmentation.Deep learning methods for point cloud processing can be categorized into three types: projection-based methods [34, 41, 40, 3, 17], voxel-based methods [6, 11, 29], and point-based methods [4, 31, 47, 14, 19, 25, 2]. Recent work on point-based methods mostly adopts an encoder-decoder paradigm while applying various sub-sampling methods to expand the receptive field for point features. A variety of local aggregation modules have been developed, including convolution-based methods [36, 2], graph-based methods [23, 21], and attention-based methods [13, 48]. Meanwhile, different downsampling methods [5, 9, 45], upsampling methods [32, 38], and post-processing methods [15, 27] have been proposed to enhance the network's robustness. The inherent permutation invariance of attention operations makes them well-suited for point cloud learning, leading to the application of transformer operations to point clouds [48, 10, 44], following their success in 2D vision and NLP. However, global self-attention is impractical due to massive computational costs, leading recent work to utilize local-scale self-attention to learn point features. Point Transformer [48] proposed a vector self-attention over the k nearest neighbor points for feature aggregation and introduced the concept of a learnable positional embedding, which has proved to be very powerful across various tasks. Nevertheless, local attention has a limited receptive field and cannot explicitly model long-range dependencies. Thus, Stratified Transformer [20] was developed, adopting a shifted-window strategy to include long-range contexts. Similarly, Fast Point Transformer [29] utilized a voxel-hashing-based architecture to enhance computational efficiency. However, these data-agnostic methods only select receptive fields based on spatial information, which can fail to focus on relevant areas. Geometric Partition.Geometric partitioning is an oversegmentation technique for point clouds that groups points with similar geometric features. Over the years, several methods have been developed to learn hand-crafted features of point clouds and utilize clustering or graph partition methods to generate meaningful partitions [28, 12, 26]. However, the performance of these methods is limited by the hand-crafted features of point clouds. SPNet [16] introduced a superpoint center updating scheme that generates superpoints with the supervision of point cloud labels. Although very efficient, SPNet fails to produce intuitive partitions beyond grouping points with the same semantic label together. On the other hand, the supervised superpoint method uses an MLP to learn point features and combines points with graph-structured deep metric learning [23]. Geometric partition techniques thus offer impressive outcomes in finding geometric shapes in point clouds; despite this, there have been few attempts to utilize them in deep learning frameworks [23, 21]. In this regard, we propose GeoSpark, which leverages the geometry information stored in the partition to enhance feature aggregation and downsampling. ## 3 GeoSpark Our goal is to leverage the explicit geometry clues stored in the geometric partitions to improve the feature aggregation and downsampling process.
In this section, we will elaborate on how these are achieved in GeoSpark. We will begin by explaining how the geometric partition is generated and embedded, followed by the technical details of our _Geometry-Informed Aggregation_ (GIA) module as well as the _Geometric Downsampling_ (GD) module. ### Geometric Partition & Embedding The first step of the proposed GeoSpark network is to effectively learn the low-level features of point clouds and group similar points. This process significantly reduces the redundancy of point clouds and makes it possible to encode long-range context inexpensively in later steps. This module processes the entire input scene and produces an initial geometric partition, so it needs to be fast and effective. In our experiments, we tested different geometric partition methods, including VCCS [28], the Global Energy Model (GEM) [12], the Graph-Structured Method (GSM) [21] and SPNet [16]. GEM [12] offers the best trade-off between speed and performance, so we integrate it into our _Geometric Partition & Embedding_ module. Specifically, for each point, a set of geometric features such as linearity, planarity, scattering, and verticality \(f_{i}\in\mathbb{R}^{c}\) is computed, characterizing local features and shapes. The geometrically homogeneous partition is defined by the constant connected components and therefore as the solution to the following optimization problem [12]: \[\operatorname*{argmin}_{g\in\mathbb{R}^{c}}\sum_{i\in C}||g_{i}-f_{i}||^{2}+ \lambda\sum_{(i,j)\in E_{nn}}\omega_{i,j}[g_{i}\neq g_{j}] \tag{1}\] where \(\lambda\) is the _regularization strength_ that determines the coarseness of the partition, \(\omega_{i,j}\) is a weight matrix that is inversely proportional to the distance between points, and \([\cdot]\) is the Iverson bracket. We use the \(\ell_{0}\)-_cut pursuit_ algorithm [22] to quickly find approximate solutions to Eq. 1, as it is impractical to find exact solutions when the number of points is very large. Figure 2: Illustration of the _Geometric Partition & Embedding_ and _Geometry-Informed Aggregation_ modules. Geometric Partition & Embedding splits point clouds into simple geometric shapes, such as table surfaces, chair backs, table edges etc., which are then embedded into superpoints. The query point \(x_{i}\) conducts local aggregation with \(k_{local}\) local neighbour points \(x_{j}\), and then performs Geometry Partition aggregation with \(k_{global}\) superpoints \(\hat{x_{j}}\) to acquire global contexts. The _Geometry-Informed Aggregation_ module allows the model to learn both local fine details as well as expressive global contexts. Given a point cloud set \(X=(P,F)\), the geometric partition module splits the point cloud into point subsets \([\hat{X_{1}},\hat{X_{2}},\hat{X_{3}}\dots,\hat{X_{m}}]\), where each set contains a different number of points. To encode each partition into a superpoint, we adopt a lightweight MLP structure inspired by PointNet [4]. The input point features first go through MLP layers that gradually project them to a higher-dimensional space. Points in each partition are then fused into one superpoint by applying MaxPooling in feature space \(\hat{f_{i}}\) and AvgPooling in coordinate space \(\hat{p_{i}}\). We concatenate global information of each partition \(\hat{f_{i,g}}\), such as the _partition diameter_, into the features of the fused points, before applying further MLP layers to reduce the feature dimension to \(c\). Formally, \[\hat{F}_{i} =T_{2}\ (\text{MaxPool}\ \{T_{1}\ (\hat{f}_{i})\ |\ \hat{f}_{i} \in\hat{X}_{i}\}\ \oplus\ \hat{f_{i,g}}) \tag{2}\] \[\hat{P}_{i} =\text{AvgPool}\ \{\hat{p_{i}}\ |\ \hat{p_{i}}\in X_{i}\}\] where \(T_{1}:\mathbb{R}^{c}\rightarrow\mathbb{R}^{c_{l}}\) and \(T_{2}:\mathbb{R}^{c_{l}}\rightarrow\mathbb{R}^{c}\) are MLP layers that enlarge and condense the point feature dimensions.
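A minimal PyTorch sketch of this embedding (Eq. 2) is given below. The per-partition Python loop is for clarity only (a practical implementation would use scatter operations), the single "diameter" feature stands in for the global descriptor \(\hat{f}_{i,g}\), and every partition id is assumed to occur at least once.

```python
import torch
import torch.nn as nn

class SuperpointEmbedding(nn.Module):
    """Sketch of Eq. 2: lift point features with an MLP (T1), fuse each
    geometric partition by MaxPool (features) / AvgPool (coordinates),
    append a per-partition global descriptor, and project back with T2."""
    def __init__(self, c=32, c_l=128):
        super().__init__()
        self.t1 = nn.Sequential(nn.Linear(c, c_l), nn.ReLU(), nn.Linear(c_l, c_l))
        self.t2 = nn.Linear(c_l + 1, c)   # +1 for the partition diameter

    def forward(self, feats, coords, part_id):
        # feats: (N, c), coords: (N, 3), part_id: (N,) values in [0, M)
        sp_feats, sp_coords = [], []
        for m in range(int(part_id.max()) + 1):
            sel = part_id == m
            f = self.t1(feats[sel]).max(dim=0).values     # MaxPool in feature space
            p = coords[sel].mean(dim=0)                   # AvgPool in coordinate space
            diam = torch.cdist(coords[sel], coords[sel]).max().reshape(1)
            sp_feats.append(self.t2(torch.cat([f, diam])))
            sp_coords.append(p)
        return torch.stack(sp_feats), torch.stack(sp_coords)

emb = SuperpointEmbedding()
f_hat, p_hat = emb(torch.randn(1000, 32), torch.rand(1000, 3),
                   torch.randint(0, 40, (1000,)))   # 40 partitions
```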
### Geometry-Informed Aggregation The key idea of Geometry-Informed Aggregation (GIA) is to aggregate point features both from local neighbour points and from global geometric partition regions, encoded as superpoints. Therefore, the GIA module takes two sets of inputs: local points \(X\ =(p_{i},f_{i})\) and encoded superpoints \(\hat{X}=(\hat{p_{i}},\hat{f_{i}})\). Local point features \(f_{i}\) and superpoint features \(\hat{f_{i}}\) are first processed by separate MLP layers before being fed into the _GIA_ module. The following explains the proposed method using Point Transformer as the backbone; similar strategies can easily be incorporated into other backbones. Local Neighbor Aggregation.The local neighbor aggregation follows the original design of the backbone structure. Point Transformer [48] first uses three linear projections \(\phi,\psi,\alpha\) to obtain the _query_, _key_ and _value_ for local points. Following _vector self-attention_, a weight encoding function \(\omega\) is introduced to encode the subtraction relations between point _queries_ and _keys_, before applying the _softmax_ operation to form the attention map. A trainable parameterized position encoding, formed by an MLP encoding function \(\theta\), is added to both the attention map and the value vectors. Formally, for a query point \(x_{i}=(p_{i},f_{i})\) and reference points \(x_{j}=(p_{j},f_{j})\), the local attention map is formed as follows: \[\delta_{i,j} =\theta(p_{i}-p_{j}) \tag{3}\] \[w_{i,j} =\omega(\phi(f_{j})-\psi(f_{i})+\delta_{i,j})\] Geometric Partition Aggregation.To learn point features from superpoints, an attention map is built between local points and global superpoints. Specifically, the _key_ and _value_ are projected from the superpoint features \(\hat{f_{j}}\) with the linear operations \(\Psi\) and \(A\), respectively. Attention is formed similarly to _Local Attention_; however, an independent weight encoding function \(\Omega\) is used, and the positional embedding also employs a different MLP \(\Theta\) to encode the coordinate difference between the query points and the reference superpoints: \[\Delta_{i,j} =\Theta(p_{i}-\hat{p_{j}}) \tag{4}\] \[W_{i,j} =\Omega(\phi(f_{i})-\Psi(\hat{f_{j}})+\Delta_{i,j})\] In practice, _Local Neighbor Aggregation_ and _Geometric Partition Aggregation_ are performed simultaneously and merged to form _Geometry-Informed Aggregation_. Formally: \[f^{\prime}_{i,local} =\sum_{f_{j}\in K}\text{softmax}(w_{i,j})\odot(\alpha(f_{j})+ \delta_{i,j}) \tag{5}\] \[f^{\prime}_{i,global} =\sum_{\hat{f_{j}}\in\hat{K}}\text{softmax}(W_{i,j})\odot(A(\hat{f_{j }})+\Delta_{i,j})\] \[f^{{}^{\prime}}_{i} =\xi\left[f^{\prime}_{i,local}+f^{\prime}_{i,global}\right]\] where \(\odot\) represents the Hadamard product, \(K\) is the set of reference local points, taken as the \(k_{local}\) nearest neighbours, and \(\hat{K}\) is the set of reference superpoints, taken as the \(k_{global}\) nearest neighbours. After merging the outputs of _Local Neighbor Aggregation_ and _Geometric Partition Aggregation_, an MLP layer (\(\xi:\mathbb{R}^{c}\rightarrow\mathbb{R}^{c}\)) is employed to further glue the global and local features and form the final outputs. Notice that it is important to maintain a good global-local balance to ensure that both fine details and long-range dependencies are included; more details on this can be found in the ablation study. Figure 3: **The Paradigm of the GeoSpark Architecture.** Initially, the point input is fed into the ”Geometric Partition & Embedding” module to obtain geometric partitions and encode them into superpoints. These superpoints are then processed in the global branch to learn features at multiple scales before being added to the local branch as geometry guidance. In addition to the primary prediction loss, a superpoint loss is introduced to guide the training of the global branch. The aggregation module can be adapted to various backbones.
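To make the two aggregation branches concrete, the following is a minimal PyTorch sketch of Eqs. 3-5. The class name `VectorAttention` and the flat tensor layout are ours; per-stage MLPs, normalization, and other backbone details are omitted.

```python
import torch
import torch.nn as nn

class VectorAttention(nn.Module):
    """One branch of Eqs. 3-5: vector self-attention over k reference
    entities with a learned relative-position encoding. GIA runs this
    twice -- over k_local neighbour points and over k_global superpoints --
    and fuses the two outputs with an MLP (xi)."""
    def __init__(self, c=32):
        super().__init__()
        self.to_q = nn.Linear(c, c)
        self.to_k = nn.Linear(c, c)
        self.to_v = nn.Linear(c, c)
        self.pos = nn.Sequential(nn.Linear(3, c), nn.ReLU(), nn.Linear(c, c))
        self.w = nn.Sequential(nn.Linear(c, c), nn.ReLU(), nn.Linear(c, c))

    def forward(self, f_q, p_q, f_r, p_r, k):
        # f_q: (N, c) query features; f_r, p_r: (M, c), (M, 3) references
        idx = torch.cdist(p_q, p_r).topk(k, largest=False).indices   # (N, k)
        delta = self.pos(p_q[:, None] - p_r[idx])                    # position enc.
        attn = torch.softmax(
            self.w(self.to_q(f_q)[:, None] - self.to_k(f_r)[idx] + delta), dim=1)
        return (attn * (self.to_v(f_r)[idx] + delta)).sum(dim=1)     # (N, c)

c = 32
local_attn, global_attn = VectorAttention(c), VectorAttention(c)
xi = nn.Linear(c, c)
f, p = torch.randn(1000, c), torch.rand(1000, 3)      # local points
f_sp, p_sp = torch.randn(40, c), torch.rand(40, 3)    # superpoints
f_out = xi(local_attn(f, p, f, p, k=16) + global_attn(f, p, f_sp, p_sp, k=8))
```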
The features of the superpoints \(\hat{X}=(\hat{p_{i}},\hat{f_{i}})\) are learned in a separate global branch. In this branch, the superpoint features are projected to different dimensions at various stages to match the dimension space of the local branch. Since the number of superpoints is typically several orders of magnitude smaller than the number of local points, feature learning in the global branch is very fast. Loss function.A simple yet important superpoint loss is introduced to assist feature learning in the global branch. Specifically, given a local point set \(X\) and its labels \(E=\{e_{i}\in\mathbb{R}^{l}\mid i=1,\ldots,n\}\), where \(e_{i}\) is a one-hot vector, we generate a _soft pseudo label_ by calculating the label distribution in each geometric partition, \(W=\{w_{j}\in\mathbb{R}^{l}\mid j=1,\ldots,m\}\). We optimize the global branch by minimising the distance between the superpoint prediction \(u\) and the _soft pseudo label_ \(w\). Overall, our loss is constructed from the per-point prediction loss and the superpoint loss, while a parameter \(\beta\) is introduced to determine the weight of the superpoint loss. Given the per-point prediction \(e^{\prime}\), the loss function is \[L_{total}=\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_{loss}(e_{i},e^{\prime}_{i})+ \beta\frac{1}{m}\sum_{j=1}^{m}\mathcal{L}_{loss}(w_{j},u_{j}) \tag{6}\] where \(\mathcal{L}_{loss}\) can be any of various loss functions; cross-entropy is selected in our experiments. ### Geometric Downsampling One notable challenge in the downsampling process is the early loss of under-represented points, such as points for small objects, leading to insufficient feature learning and unsatisfactory prediction results. Data-agnostic sampling methods, such as random sampling and FPS, do not consider the uniqueness of points and are likely to cause this issue. To mitigate this problem, we develop the _Geometric Downsampling_ module with the help of geometric partitioning. The motivation is to preserve points with unique features and drop redundant points in the downsampling process. Given a point set \(X=\{(p_{i},f_{i})\mid i=1\ldots n\}\), instead of sampling from the whole point set, we conduct sampling from each geometric partition. In particular, we set a target diameter \(a\) for new points, and further split every partition that is larger than the predetermined size \(a\) with voxel grids to obtain subsets, e.g. \([\hat{X_{1}}]\) to \([\hat{X_{11}},\hat{X_{12}},\hat{X_{13}},\ldots]\). Once this is done, we fuse each fine partition with pooling operations, where _MaxPooling_ is applied in feature space and _AvgPooling_ is applied in coordinate space, as illustrated in Figure 4. Similar to Point Transformer, we apply MLP layers \(D:\mathbb{R}^{c}\rightarrow\mathbb{R}^{c}\) to the feature space before pooling. Mathematically: \[\begin{split} f_{2,i}=\text{MaxPool}\{Df_{1,j}\ |\ f_{1,j}\in X_{1,i}\} \\ p_{2,i}=\text{AvgPool}\{p_{1,j}\ |\ p_{1,j}\in X_{1,i}\}\end{split} \tag{7}\] Figure 4: **Geometric Downsampling.** The _Geometric Downsampling_ module fuses points in small partitions into superpoints and further splits oversized partitions with a grid to control the downsampling rate. This method drops redundant points and retains points with unique features.
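A minimal sketch of this downsampling step (Eq. 7) is given below. For brevity it re-grids every partition, whereas the paper re-splits only partitions larger than the cap \(a\); the function and argument names are ours.

```python
import torch

def geometric_downsample(feats, coords, part_id, cap, mlp):
    """Sketch of Eq. 7: re-split partitions on a voxel grid of pitch `cap`,
    then fuse each resulting sub-partition by MaxPool on MLP-lifted
    features and AvgPool on coordinates."""
    # hash (partition id, voxel cell) pairs into new sub-partition ids
    cell = torch.div(coords, cap, rounding_mode="floor").long()
    key = torch.stack([part_id.long(), cell[:, 0], cell[:, 1], cell[:, 2]], dim=1)
    _, sub = torch.unique(key, dim=0, return_inverse=True)   # (N,) sub ids
    h, M = mlp(feats), int(sub.max()) + 1
    new_f = torch.stack([h[sub == s].max(0).values for s in range(M)])
    new_p = torch.stack([coords[sub == s].mean(0) for s in range(M)])
    return new_f, new_p

f2, p2 = geometric_downsample(torch.randn(1000, 32), torch.rand(1000, 3),
                              torch.randint(0, 40, (1000,)),
                              cap=0.10, mlp=torch.nn.Linear(32, 32))
```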
## 4 Experiments We evaluated the effectiveness of GeoSpark by integrating it with three backbones: PointNet++ [31], KPConv [37], and Point Transformer [48]. We selected two competitive indoor datasets for testing: the Stanford 3D Indoor dataset (S3DIS) [1] and ScanNetV2 [7]. We compared the results of GeoSpark with the corresponding baselines and other state-of-the-art methods, as shown in Table 1 and Table 2. We also conducted comprehensive ablation studies to evaluate various design decisions, which involved analyzing individual modules (refer to Table 3), the local-global balance for GIA (refer to Table 5), sampling strategies (refer to Table 6), and the geometric partition size (refer to Table 4). ### Experiments Setting Network architecture.The network architecture using Point Transformer as the backbone is presented below. For further information on the settings used for other backbones, please refer to the Supplementary Materials. The regularization strength \(\lambda\) was set to 3 for S3DIS and 2 for ScanNetV2 to obtain expressive partitions. Within the global branch, the superpoints were down-sampled by a ratio of 1/2 after each stage, while the feature dimensions were chosen as [32, 64, 128, 256, 512] for stages 1-5. Regarding Geometric Downsampling, a size cap of [0.10, 0.20, 0.40, 0.80] m was set for S3DIS, and [0.25, 0.50, 0.75, 1.00] m for ScanNetV2, during stages 1-5. In the global branch, aggregation was performed only once at each stage. Meanwhile, in the local branch, the depths were set to [1, 2, 2, 6, 2] for stages 1-5, following the approach in [48]. The depth settings were the same for both the S3DIS and ScanNetV2 datasets. Implementation details.All experiments were conducted using two A100 GPUs. The following presents the details of the model using Point Transformer as the backbone. For S3DIS, the model was trained for 30,000 iterations with a batch size of 16. Following previous works [48, 20], the input sets were subsampled with a 4 cm grid. Area 5 was used for testing while the other areas were used for training. The loss weight \(\beta\) was set to 0.1, and the training was carried out using the _AdamW_ optimizer, with a learning rate of 0.004 and a weight decay of 0.02. As for ScanNetV2, the model was trained for 600 epochs with a batch size of 12. The input sets were subsampled with a 2 cm grid, and the _AdamW_ optimizer was also used, with a learning rate of 0.005 and a weight decay of 0.05. Further implementation details can be found in the Supplementary Materials. ### Comparisons against State-of-the-arts Quantitative comparisons.We noticed a consistent enhancement in performance across all three backbone structures by integrating the GeoSpark plug-in.
For the S3DIS dataset, the PointNet++ backbone exhibited a notable 5.3% improvement in the mIoU metric, while the KPConv backbone (rigid version) showed an increase of 0.4% in mIoU and 1.3% in mAcc as well. In addition, by adding GeoSpark to Point Transformer, we achieved an mIoU of 71.5%, which surpasses the baseline model by 1.1% and ranks high on the benchmark. On the ScanNetV2 validation set, integrating GeoSpark into the PointNet++ backbone brought a remarkable improvement of 10.0%, outperforming many other complex designs. This result further demonstrates that modeling long-range features is crucial for accurate large-scale scene understanding. The KPConv backbone demonstrated an increase of 2.5% in mIoU as well. By incorporating GeoSpark into Point Transformer, we achieved an mIoU score of 74.7%, a 4.1% improvement over the baseline model. This approach also surpassed other transformer-based networks, such as Fast Point Transformer and Stratified Transformer, indicating the usefulness of geometry guidance in the aggregation and sampling modules. Our method performed exceptionally well for smaller classes such as chairs, sofas, and boards, as shown in Table 1, highlighting the significance of geometry features in small-class segmentation. Further information regarding training time and the number of parameters can be found in the supplementary materials. \begin{table} \begin{tabular}{l|c|c c c c c c c c c c c c c} \hline Method & mAcc (\%) & mIoU (\%) & ceiling & floor & wall & beam & column & window & door & table & chair & sofa & bookcase & board & clutter \\ \hline PointNet[4] & 49.0 & 41.1 & 88.8 & 97.3 & 69.8 & 0.1 & 3.9 & 46.3 & 10.8 & 59.0 & 52.6 & 5.9 & 40.3 & 26.4 & 33.2 \\ SegCloud[35] & 57.4 & 48.9 & 90.1 & 96.1 & 69.9 & 0.0 & 18.4 & 38.4 & 23.1 & 70.4 & 75.9 & 40.9 & 58.4 & 13.0 & 41.6 \\ TangentConv[34] & 62.2 & 52.6 & 90.5 & 97.7 & 74.0 & 0.0 & 20.7 & 39.0 & 31.3 & 77.5 & 69.4 & 57.3 & 38.5 & 48.8 & 39.8 \\ PointCNN [24] & 63.9 & 57.3 & 92.3 & 98.2 & 79.4 & 0.0 & 17.6 & 22.8 & 62.1 & 74.4 & 80.6 & 31.7 & 66.7 & 62.1 & 56.7 \\ SPGraph [23] & 66.5 & 58.0 & 89.4 & 96.9 & 78.1 & 0.0 & 42.8 & 48.9 & 61.6 & 84.7 & 75.4 & 69.8 & 52.6 & 2.1 & 52.2 \\ PCCN [39] & 67.0 & 58.3 & 92.3 & 96.2 & 75.9 & 0.3 & 6.0 & **69.5** & 63.5 & 66.9 & 65.6 & 47.3 & 68.9 & 59.1 & 46.2 \\ PAT [46] & 70.8 & 60.1 & 93.0 & 98.5 & 72.3 & **1.0** & 41.5 & 85.1 & 38.2 & 57.7 & 83.6 & 48.1 & 67.0 & 61.3 & 33.6 \\ PointWeb[47] & 66.6 & 60.3 & 92.0 & 98.5 & 79.4 & 0.0 & 21.1 & 59.7 & 34.8 & 76.3 & 88.3 & 46.9 & 69.3 & 64.9 & 52.5 \\ HPEIN[18] & 68.3 & 61.9 & 91.5 & 98.2 & 81.4 & 0.0 & 23.3 & 65.3 & 40.0 & 75.5 & 87.7 & 85.5 & 67.8 & 65.6 & 49.4 \\ MinkowskiNet[6] & 71.7 & 65.4 & 91.8 & 98.7 & 86.2 & 0.0 & 34.1 & 48.9 & 62.4 & 81.6 & 89.8 & 47.2 & 74.9 & 74.4 & 58.6 \\ Stratified Transformer[20] & 78.1 & 72.0 & **96.2** & **98.7** & 85.6 & 0.0 & 46.1 & 60.0 & **76.8** & **92.6** & 84.5 & 77.8 & 75.2 & 78.1 & **64.0** \\ Fast Point Transformer[29] & 77.9 & 70.3 & 94.2 & 90.8 & 86.0 & 0.2 & **53.8** & 61.2 & 77.3 & 81.3 & 89.4 & 60.1 & 72.8 & 80.4 & 58.9 \\ CBL [33] & 90.6 & 69.4 & 93.9 & 98.4 & 84.2 & 0.0 & 37.0 & 57.7 & 71.9 & 91.7 & 81.8 & 77.8 & 75.6 & 69.1 & 62.9 \\ \hline PointNet++[31] & 64.1 & 55.8 & 88.5 & 94.8 & 74.9 & 0.0 & 25.2 & 46.7 & 39.5 & 79.8 & 69.9 & 63.4 & 51.4 & 45.5 & 45.8 \\ + GeoSpark & 69.4 (+5.3) & 61.1 (+5.3) & 90.5 & 97.8 & 80.3 & 0.0 & 23.3 & 52.7 & 59.8 & 87.6 & 76.6 & 69.6 & 48.9 & 54.0 & 52.5 \\ \hline KPConv[36] & 70.9 & 65.4 & 92.6 & 97.3 & 81.4 & 0.0 & 16.5 & 54.5 & 69.5 & 90.1 & 80.2 & 74.6 & 66.4 & 63.7 & 58.1 \\ + GeoSpark & 72.2 (+1.3) & 65.8 (+0.4) & 92.4 & 98.4 & 83.3 & 0.0 & 37.7 & 60.2 & 53.2 & 89.3 & 80.6 & 69.2 & 65.5 & 73.0 & 52.0 \\ \hline Point Transformer[48] & 76.5 & 70.4 & 94.0 & 98.5 & 86.3 & 0.0 & 38.0 & 63.4 & 74.3 & 89.1 & 82.4 & 74.3 & 80.2 & 76.0 & 59.3 \\ + GeoSpark & 77.7 (+1.2) & 71.5 (+1.1) & 93.7 & **98.7** & **86.6** & 0.2 & **48.5** & 60.9 & 66.4 & 83.0 & **92.7** & **82.7** & 76.2 & **81.1** & 59.3 \\ \hline \end{tabular} \end{table} Table 1: **Semantic segmentation results on S3DIS Area 5. Consistent improvements are observed after adding GeoSpark to the three backbone structures.**
\begin{table} \begin{tabular}{l|c|c} \hline **Method** & Input & **mIoU (\%)** \\ \hline PointNet++[31] & point & 53.5 \\ PointConv[42] & point & 61.0 \\ JointPointBased[15] & point & 69.2 \\ PointASNL[45] & point & 63.5 \\ KPConv[37] & point & 69.2 \\ SparseConvNet[11] & voxel & 69.3 \\ MinkowskiNet[6] & voxel & 72.2 \\ Fast Point Transformer[29] & point & 72.1 \\ Stratified Transformer[20] & point & 74.3 \\ \hline PointNet++[31] & point & 53.5 \\ + GeoSpark & point & 63.5 (+10.0) \\ \hline KPConv[37] & point & 68.6 \\ + GeoSpark & point & 71.1 (+2.5) \\ \hline Point Transformer[48] & point & 70.6 \\ + GeoSpark & point & **74.7** (+4.1) \\ \hline \end{tabular} \end{table} Table 2: **Semantic segmentation results on the ScanNetV2 validation set. Adding GeoSpark to the three backbone structures resulted in a noticeable improvement, and Point Transformer + GeoSpark achieved a higher mIoU than the other transformer-based architectures.** Visual results.Figure 5 presents the visual prediction outcomes of Point Transformer both with and without GeoSpark. Furthermore, Figure 6 compares our GD approach to other sampling techniques, including FPS and voxel-based sampling. ### Ablation Study A number of controlled experiments were carried out to evaluate the decisions made during the model design process. The ablations were tested on the ScanNetV2 dataset and employed Point Transformer as the backbone. Module ablation.For a fair comparison, Point Transformer is retrained with the same experiment settings, including feature dimensions, module depths, and data augmentation. By upgrading the attention module to _GIA_, a substantial improvement is observed: with the help of the pseudo-label loss, a 1.4% gain in mIoU is achieved. This significant improvement confirmed our hypothesis that local aggregation suffers from losing global context and that the _GIA_ module is capable of providing meaningful long-range contexts for better aggregation. Changing the sampling strategy also offered a boost in performance, improving the mIoU to 73.5%. The joining of both modules with the pseudo-label loss achieves a final result of 74.7%. Local-global balance.We then investigate the setting of the numbers of neighbour points \(k_{local}\) and \(k_{global}\), as shown in Table 5. The best results are yielded when the attention module learns from 16 local points and 8 superpoints for feature aggregation. Increasing the number of global points does not provide significant benefits due to the possible introduction of noise. Also, we notice that it is possible to reduce the number of local points, as in the (\(k_{local}\) = 10, \(k_{global}\) = 5) setting, which achieves similar results to the baseline.
**Geometric partition size.** We investigate different cap sizes for the geometric partition (Table 4). For S3DIS, 1.0 m is the optimal size for geometric partitions, while for ScanNetV2, 3.0 m yields the best results, since the scenes are slightly larger. We notice that it is essential to set the partition cap larger than the maximum receptive field at the end of the encoder, to provide extra global context while reducing the risk of introducing noise. ## 5 Conclusion Our work, GeoSpark, aims to utilize explicit geometry guidance to enhance feature learning and downsampling. It leverages easily obtained geometry partition information to guide the feature learning process, resulting in a significant boost to the baseline models. Querying global features from geometric partitions enlarges the receptive fields in a data-dependent way, which helps to capture more expressive features. The geometric partitions also guide the downsampling procedure to drop redundant points while keeping unique points. We hope that this work will draw more attention to integrating geometric partition clues into point cloud understanding, as redundancy is a problem unique to point clouds compared to 2D images. In the future, we plan to investigate integrating the proposed techniques into other 3D scene understanding tasks, such as object detection and instance segmentation. Additionally, we aim to test the proposed method on outdoor datasets, where long-range feature learning and point redundancy pose significant challenges. Figure 5: **Visual comparison of the Point Transformer with and without GeoSpark.** GeoSpark captures finer details and produces better predictions for certain objects such as chairs, tables, sofas, and cabinets. The main differences between the prediction results of the Point Transformer and the Point Transformer with GeoSpark are highlighted with red blocks. Figure 6: **Visualization of various sampling methods on the S3DIS dataset.** In comparison to FPS and voxel-based sampling, Geometric Downsampling preserves key points better, particularly in geometrically complex areas. This provides significant benefits, especially for small objects. The red block highlights the preservation of details for small objects. ## Appendix 1: Runtime & Parameters Analysis Table 7 presents the training time and the number of parameters on the S3DIS dataset, based on experiments conducted using the Point Transformer backbone, which yielded the top-ranking results. The inclusion of a global branch results in an increase in the number of parameters, but it has a manageable effect on training time, as shown in Table 7. This is because the global branch processes a point set that is typically 3-4 orders of magnitude smaller than the original one (approximately 700-1000 points per S3DIS scan and 900-1200 points per ScanNetV2 scan). The smaller size of the global point set allows faster processing by the network. Despite the fact that GeoSpark + Point Transformer slightly underperforms Stratified Transformer on the S3DIS dataset, it has fewer parameters and shorter training times, demonstrating the design's greater efficiency. The _geometry partition_ module can process 100k points within 2-3 seconds, and the entire S3DIS dataset can be processed in less than an hour on a modern eight-core CPU.
2303.18067
Rediscover Climate Change during Global Warming Slowdown via Wasserstein Stability Analysis
Climate change is one of the key topics in climate science. However, previous research has predominantly concentrated on changes in mean values, and little research examines changes in the Probability Distribution Function (PDF). In this study, a novel method called Wasserstein Stability Analysis (WSA) is developed to identify PDF changes, especially extreme-event shifts and variations in non-linear physical value constraints, in climate change. WSA is applied to the 21st-century warming slowdown period and is compared with traditional mean-value trend analysis. The result indicates that despite no significant trend, the central-eastern Pacific experienced a decline in hot extremes and an increase in cold extremes, indicating a La Nina-like temperature shift. Further analysis at two Arctic locations suggests sea ice severely restricts the hot extremes of surface air temperature. This impact is diminishing as sea ice melts. Overall, based on detecting PDF changes, WSA is a useful method for re-discovering climate change.
Zhiang Xie, Dongwei Chen, Puxi Li
2023-03-29T18:16:21Z
http://arxiv.org/abs/2303.18067v2
# Rediscover Climate Change during Global Warming Slowdown via Wasserstein Stability Analysis ###### Abstract Climate change is one of the key topics in climate science. However, previous research has predominantly concentrated on changes in mean values, and little research examines changes in the Probability Distribution Function (PDF). In this study, a novel method called Wasserstein Stability Analysis (WSA) is developed to identify PDF changes, especially extreme-event shifts and variations in non-linear physical value constraints, in climate change. WSA is applied to the 21st-century warming slowdown period and is compared with traditional mean-value trend analysis. The result indicates that despite no significant trend, the central-eastern Pacific experienced a decline in hot extremes and an increase in cold extremes, indicating a La Nina-like temperature shift. Further analysis at two Arctic locations suggests sea ice severely restricts the hot extremes of surface air temperature. This impact is diminishing as sea ice melts. Overall, based on detecting PDF changes, WSA is a useful method for re-discovering climate change. climate change; probability distribution function; extremes; Wasserstein distance; ## 1 Introduction How the climate system evolves with time is one of the essential topics in climate science. The Intergovernmental Panel on Climate Change Sixth Assessment Report (IPCC AR6) pointed out that the global mean surface air temperature (SAT) has followed a general warming trend since the 20th century, which is mainly attributed to human influence. Consequently, considerable work has been devoted to discussing the mean temperature warming trend and its spatial distribution (Delworth & Knutson, 2000; Delworth et al., 2015; Huang et al., 2017; Li et al., 2015). The mean value is one of the most significant statistical characteristics of a data set, as it represents the overall energy condition and even the low-frequency variability of the climate system. However, the human-induced mean-value warming of SAT may be partially countered by internal variability or external forcing, and thus the warming trend is not always strong, nor does it rise at the same pace. As a result, the mean value change, especially the trend based on the mean value, cannot give sufficient information about climate change. In the first decade of the 21st century, global warming decelerated significantly (Modak & Mauritsen, 2021; Lee et al., 2015; Delworth et al., 2015; England et al., 2014; Li et al., 2015). However, during this period, a series of climate change signals were still recorded, such as extreme events (Johnson et al., 2018) and a La-Nina-like Pacific temperature pattern (Kosaka & Xie, 2013). Unfortunately, it is difficult to detect these climate change signals by trend or mean value analysis, since the relevant signals during the warming slowdown period are quite weak. Recently, a new mathematical tool from optimal transport, called the Wasserstein distance (W-distance), sheds more light on this question. The W-distance \(\mathcal{W}_{p}(\mu,\nu)\) on \(\mathbb{R}^{d}\) is induced by the study of how to transport mass from a probability distribution \(\mu\) to another distribution \(\nu\) in the cheapest way, and is given by \[\mathcal{W}_{p}(\mu,\nu)=\left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d} \times\mathbb{R}^{d}}|x-y|^{p}\;\;d\pi(x,y)\right)^{\frac{1}{p}} \tag{1}\] where \(p\in[1,\infty)\) and \(\pi\in\Pi(\mu,\nu)\) means \(\pi\) is a joint distribution of \(\mu\) and \(\nu\).
W-distance satisfies the metric axioms and can quantify the distance and similarities between two probability distributions (Figalli & Glaudo, 2021). More details about the W-distance can be found in the supporting information materials. In this work, we use the earth mover's distance, i.e. \(p=1\). The Wasserstein distance has been applied in climate science, for example to model evaluation (Vissio et al., 2020), oceanographic data analysis (Hyun et al., 2022), and data assimilation (Tamang, 2022). However, it is rarely applied in the physical analysis of climate data. Based on the W-distance, we develop a novel method, named Wasserstein Stability Analysis (WSA), to discover PDF variability in climate change, which includes signals such as extremes and physical value constraints. Compared to mean value analysis, WSA can help researchers directly identify significant PDF variability and gives a better insight into climate change. The paper is organized as follows. In section 2, we describe the data set and the WSA algorithm; optimal mass transportation and the Wasserstein distance are also briefly introduced. In section 3, we apply WSA to the global warming slowdown period. Through the new method, we obtain a clear equatorial eastern Pacific signal that is highlighted in previous studies and an evident physical value constraint change in the Arctic that is not fully explored. Finally, in section 4, we draw a conclusion and discuss future work. ## 2 Data and Method ### Dataset The surface 2-meter air temperature data in this work are obtained from the ERA5 dataset (\(0.25^{o}\times 0.25^{o}\), single layer, half-day time interval) from 1998 to 2012 (Hersbach et al., 2020), with global spatial coverage. To better represent the large-scale signal and accelerate the data processing, we regrid the original data set into daily data with a horizontal resolution of \(2.5^{o}\times 2.5^{o}\) by spatial and temporal averaging. The sea ice data used in the study come from the Hadley Centre Sea Ice and Sea Surface Temperature monthly data set (HadISST), with 1\({}^{o}\)\(\times\) 1\({}^{o}\) horizontal resolution (Rayner et al., 2003). The time range is again from 1998 to 2012 and the spatial range is global. ### Wasserstein Stability Analysis Under the null hypothesis of most statistical analyses, we usually assume that the PDF of a variable is stable, i.e., that its samples follow the same distribution. Following this logic, we claim that a variable has an unstable PDF if the W-distance between its current state and a reference state falls outside a specific confidence interval, representing a significant change from one climate stage to another. Therefore, a significant PDF change signal is detected by testing the W-distance, and the climate change details can then be further explored. Hence, in this part, we develop a novel method, named Wasserstein Stability Analysis (WSA), to evaluate the magnitude of PDF change quantitatively, i.e., to detect unstable PDF change signals. The new method is divided into two main steps: the W-distance test and PDF analysis. For more information about the W-distance test, interested readers can refer to Algorithm 1 in the supporting information material. The details are as follows: (1). _W-distance test_. Given two samples \(X\) and \(Y\), both series are first normalized individually by the following scaling \[\frac{x-\mu}{\sigma} \tag{2}\] where \(\mu\) is the mean value and \(\sigma\) is the standard deviation. The Wasserstein distance \(\mathcal{W}_{1}(X,Y)\) between \(X\) and \(Y\) is then obtained from the normalized series. Note that normalization is necessary since geographic differences may lead to large variations in the W-distance. For example, the high surface temperature variation at high latitudes usually leads to a larger W-distance than its counterpart in tropical zones; nevertheless, this does not necessarily mean the PDF change at high latitudes is stronger than that at low latitudes. Our significance test is based on the Monte Carlo test (Xie et al., 2017). The null hypothesis \(H_{0}\) of WSA is that the PDF discrepancy between \(X\) and \(Y\) can be explained by white noise. We generate two new samples, \(X_{N}\) and \(Y_{N}\), by adding white noise to \(X\) and \(Y\), and then obtain a new W-distance \(\mathcal{W}_{1}(X_{N},Y_{N})\). This operation is performed 500 times, yielding a confidence interval (C.I.) of W-distances. Under the null hypothesis with confidence level \(1-\alpha\), \(\mathcal{W}_{1}(X,Y)\) must be indistinguishable from \(\mathcal{W}_{1}(X_{N},Y_{N})\); thus one would expect \(\mathcal{W}_{1}(X,Y)\) to lie within the confidence interval of \(\{\mathcal{W}_{1}(X_{N},Y_{N})\}_{N=1}^{500}\). If not, \(H_{0}\) is rejected at the confidence level \(1-\alpha\). The \(\alpha\) in this study has been set to 0.01, and we perform the W-distance test at all locations to obtain the spatial pattern. (2). _PDF analysis._ After detecting the zones where the W-distance is significant, we plot the two PDFs and their difference for each such zone, and then explore the climate change mechanism that makes the W-distance significant. The next section will use two examples to demonstrate how this works.
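A minimal sketch of step (1) is given below, using SciPy's one-dimensional `wasserstein_distance`. The white-noise amplitude is a free choice here, and whether noise is added before or after normalization is an assumption on our part; the exact scheme follows Algorithm 1 in the supporting information.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wsa_test(x, y, n_mc=500, alpha=0.01, noise_std=0.1, seed=0):
    """W-distance test: normalize both series (Eq. 2), compute W1, and
    compare it against a Monte Carlo confidence interval built from
    white-noise-perturbed copies of the two samples (the H0 ensemble)."""
    rng = np.random.default_rng(seed)
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    w_obs = wasserstein_distance(zx, zy)
    w_null = np.array([
        wasserstein_distance(zx + rng.normal(0, noise_std, zx.size),
                             zy + rng.normal(0, noise_std, zy.size))
        for _ in range(n_mc)])
    lo, hi = np.quantile(w_null, [alpha / 2, 1 - alpha / 2])
    reject = not (lo <= w_obs <= hi)   # outside the C.I. -> unstable PDF
    return w_obs, (lo, hi), reject

# e.g. daily SAT at one grid point, split into the two 8-year periods:
# w, ci, unstable = wsa_test(sat[:2920], sat[2920:])
```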
The Wasserstein distance \(\mathcal{W}_{1}(X,Y)\) between \(X\) and \(Y\) is then computed from the normalized series. Note that this normalization is necessary, since geographic differences may lead to a large variation in the W-distance. For example, the high surface temperature variability at high latitudes usually leads to a larger W-distance than its counterpart in tropical zones; nevertheless, it does not necessarily mean the PDF change at high latitudes is stronger than that at low latitudes.

Our significance test is based on a Monte Carlo test (Xie et al., 2017). The null hypothesis \(H_{0}\) of WSA is that the PDF discrepancy between \(X\) and \(Y\) can be explained by white noise. We generate two new samples, \(X_{N}\) and \(Y_{N}\), by adding white noise to \(X\) and \(Y\), and then obtain a new W-distance \(\mathcal{W}_{1}(X_{N},Y_{N})\). This operation is performed 500 times, yielding a confidence interval (C.I.) of W-distances. Under the null hypothesis with confidence level \(1-\alpha\), \(\mathcal{W}_{1}(X,Y)\) must be indistinguishable from \(\mathcal{W}_{1}(X_{N},Y_{N})\); thus one would expect \(\mathcal{W}_{1}(X,Y)\) to lie within the confidence interval of \(\{\mathcal{W}_{1}(X_{N},Y_{N})\}_{N=1}^{500}\). If not, \(H_{0}\) is rejected with confidence level \(1-\alpha\). In this study \(\alpha\) is set to 0.01, and we perform the W-distance test at all locations to obtain the spatial pattern.

(2). _PDF analysis._ After detecting the zones where the W-distance is significant, we plot the two PDFs and the corresponding change for each significant change zone, and then explore the climate change mechanism that makes the W-distance significant. The next section uses two examples to demonstrate how this works.

## 3 Results

During the 1998-2012 period, global mean surface temperature experienced a slower increase, which is referred to as the "global warming hiatus" in early studies (Kosaka & Xie, 2013). Recent research indicated that there was no true "hiatus" during that time: global surface temperatures were still rising, albeit more slowly (Simmons et al., 2017). This phenomenon suggests that physical processes other than the anthropogenic warming trend play a role in climate change, and the detection of PDF change may provide new information for this period. Therefore, by comparing the performance of WSA with traditional trend analysis, we can evaluate the similarities and differences between these two approaches, which could advance our understanding of climate science and inform future research in this area.

Figure 1a depicts the Wasserstein distance between SAT PDFs during 1998-2005 and 2005-2012. Two main large-scale significant W-distance signals appear in Figure 1a: the central eastern Pacific and the Barents-Kara Sea. The central eastern Pacific has the greatest W-distance, with values ranging from 0.08 to 0.24; nevertheless, there is no discernible trend there (Figure 1b). Another noticeable W-distance signal can be seen over the Barents-Kara Sea, which is located inside a significant warming trend zone in the Arctic (Figure 1b). The following analysis will mainly focus on the signals over the central eastern Pacific and the Barents-Kara Sea to highlight the information that the W-distance can provide about climate change. In practice, we found that, due to the presence of sea ice, the surface temperature distribution shows a large discontinuity in the Arctic region.
As a result, the areal average in this region may smear out information from individual grid points. To avoid this, we selected three representative sites for further PDF analysis:

1. Site A (90\({}^{\circ}\)W, 0): a site with a significant W-distance signal but without a significant linear trend;
2. Sites B (66\({}^{\circ}\)E, 79\({}^{\circ}\)N) and C (131\({}^{\circ}\)E, 79\({}^{\circ}\)N): both sites exhibit significant linear trends; however, Site B has a significant W-distance while Site C does not.

Figure 1: (a) Wasserstein distance between probability distribution functions during 1998-2005 and 2005-2012. (b) Linear trend during 1998-2012. Only statistically significant regions (99% for Wasserstein distance and 95% for linear trend) are shown. Three sites, A (90\({}^{\circ}\)W, 0), B (66\({}^{\circ}\)E, 79\({}^{\circ}\)N), and C (131\({}^{\circ}\)E, 79\({}^{\circ}\)N), have been chosen for further analysis.

Figure 2: (a) Probability distribution functions (PDFs) for the periods 1998–2005 (blue) and 2005–2012 (red). The bottom tick labels represent observed temperature values from 2005 to 2012, whereas the top tick labels represent observed temperature values from 1998 to 2005. (b) The difference between the two PDFs of normalized temperature series at site A (90\({}^{\circ}\)W, 0). The frequency has been normalized as days per year. The mean value for each time period is shown by the bracketed number in the legend.

Figure 2 shows the SAT PDFs and their changes from the 1998-2005 period to the 2005-2012 period at Site A. The mean value and standard deviation stay relatively stable across the 1998-2005 and 2005-2012 periods. However, in the second period, the hot extremes (more than two standard deviations, roughly 27\({}^{\circ}\)C) decrease by more than ten days per year, while mild hot events (near one standard deviation) increase by roughly 30 days per year. Meanwhile, the cold extremes (less than -2 standard deviations, 19\({}^{\circ}\)C) increase by more than ten days per year, while mild cold events (near -1 standard deviation) decrease by more than 30 days per year. The decrease in hot extremes and increase in cold extremes indicate a La Nina-like SAT shift. Many early studies argue that the 21st-century slowdown period coincides with the negative phase of the Pacific Decadal Oscillation, and that this strong low-frequency variability canceled out part of the global warming trend (Douville et al., 2015; Kosaka & Xie, 2013; Lee et al., 2015; England et al., 2014). Our W-distance result agrees well with this conclusion and extends it from the perspective of PDF change. Indeed, even though the mean value does not show a significant trend, we can still observe the La Nina-like SAT shift from the change in extreme events, which is reflected by the strong PDF change and the corresponding significant W-distance.

Site B is another region with a significant W-distance, suggesting a strong PDF change from the 1998-2005 period to the 2005-2012 period (Figure 3 (a) and (c)). At Site B, SAT shows significant warming from -10.2\({}^{\circ}\)C to -6.5\({}^{\circ}\)C. Meanwhile, the standard deviation decreases, indicating a reduction of SAT variability. Since Site B is located within a zone covered by sea ice, the maximum SAT is strongly constrained to lie near the frozen line in both periods, which causes a sharp frequency jump near the frozen line (Figure 3 (a)). However, as the mean value increases and the standard deviation decreases during the second period, the frozen line at Site B shifts significantly to the left.
As a result, the frequency peak of hot events shifts to the left. Meanwhile, the slope near the frozen line becomes flatter, leading to a smoother PDF shape. In contrast, Site C exhibits a change that follows a similar pattern but is considerably smaller in extent in terms of mean value, standard deviation, and frozen line shift (Figure 3 (b) and (d)). Consequently, the W-distance at Site C is smaller than that at Site B and fails the 99% significance test.

In fact, the existence of sea ice strongly limits the maximum surface temperature and the corresponding surface air temperature near the frozen line. Therefore, the sharp peak shape of the SAT PDF appears in regions covered by sea ice, whereas regions with open-ocean conditions are expected to have a more Gaussian distribution. The alteration at Site B and Site C therefore suggests that the variability in sea ice concentration may play a role in the SAT PDF change from the first to the second period.

Figure 3: Same as Figure 2. Panels (a) and (c) are for Site B, and panels (b) and (d) are for Site C. The dashed lines in panels (a) and (b) represent the ocean frozen temperature (-1.7\({}^{\circ}\)C) in the two distributions (blue for 1998-2005 and red for 2005-2012).

Figure 4: Full daily 2m temperature time series of 1998-2005 (blue) and 2005-2012 (red) at Site B is plotted in panel (a), and the counterpart at Site C is plotted in panel (b). Panel (c) shows the sea ice concentration change (unit: %) between 1998-2005 and 2005-2012.

By examining the full time series in the two periods, we can better understand the PDF change at Site B and Site C. From the 1998-2005 period to the 2005-2012 period, the interval within \(\pm 1\) standard deviation shrinks and moves upward at Site B because of the change in mean value and standard deviation (Figure 4(a)). Clearly, this change leads to most of the SAT variability lying below +1 standard deviation, and also to more cold extremes. For Site C, we can see a similar change but with a much smaller magnitude (Figure 4 (b)). As discussed, this difference can be attributed to the sea ice and its strong temperature constraint. Figure 4 (c) shows the sea ice concentration change between 1998-2005 and 2005-2012. We can see that Site B is in a zone where sea ice is melting the most; hence the PDF change at Site B is the most intense. As a comparison, Site C also experiences sea ice loss, but to a relatively small extent. As the temperature increases, the sea ice concentration decreases, and the effect of the sea ice restriction gradually fades. This is why Site C also displays a similar PDF change but at a lower significance level. In summary, the significant W-distance change at Site B implies the removal of the physical temperature constraint imposed by sea ice, which causes the sharp PDF peak near the frozen line.

The sea ice-induced PDF change we discuss here is also related to modelling bias. IPCC AR6 indicated that observation-based estimates of global mean surface temperature face a significant issue in areas where sea ice melts or grows, resulting in a switch between air temperature and sea surface temperature. This bias towards reduced warming in anomalies primarily affects analyses combining surface air temperature anomalies over land and ice with sea surface temperature anomalies over the ocean. According to Richardson et al. (2018), this underestimated warming in historical model simulations amounts to about 3% of the observed warming.
Given that WSA is able to detect this sea ice-induced change in the PDF, it may be useful for future model development and data correction.

Overall, by detecting changes in the probability density function (PDF) between the early and later parts of the slowdown period, WSA is able to identify two physical processes during the warming slowdown: the La Nina-like tropical Pacific temperature shift and the physical value constraint due to sea ice.

## 4 Conclusion and Discussion

In this work, we develop a novel method called WSA to detect climate change signals. The results indicate that WSA shows advantages compared with classic mean value analysis in certain cases. The classic trend analysis based on mean values cannot detect the extreme event signals in the central eastern Pacific Ocean during the warming hiatus, while WSA can easily identify the La Nina-like temperature PDF shift. Additionally, due to the strong temperature constraint of sea ice, the sea ice covered area shows a different PDF pattern compared to the open ocean. As a result, unlike mean value analysis, which detects significant trend regions, WSA can identify the areas of strong sea ice loss during the warming hiatus, indicating a fundamental change in the physical properties of the surface.

It should be noted that the discussion of PDF change during the global warming slowdown period is incomplete in this study. We chose only a few typical sites to illustrate the abundant information contained in significant PDF changes and to verify the capability of the WSA algorithm. A more detailed discussion of the scattered significant signals in the Indian Ocean and South America is another interesting topic. Meanwhile, we only applied WSA to SAT in this study; WSA can similarly be applied to precipitation, wind speed, and pressure, which may offer more insights into climate change. However, the current WSA method can only detect whether the distribution has changed or not, and the direction of change is not well depicted. For example, Fig. 1(a) can only give us a clue to discover PDF change signals, but cannot directly tell us whether most of the regions have more cold extremes or more warm extremes. Therefore, how to measure the direction of PDF change is still a challenging topic and needs to be explored in the future.

## Data and Software

Surface 2-meter air temperature data are from the ERA5 reanalysis products of the European Centre for Medium-Range Weather Forecasts (ECMWF) (Hersbach et al., 2020), which can be accessed at the following link [https://doi.org/10.24381/cds.adbb2d47](https://doi.org/10.24381/cds.adbb2d47). Sea ice data are from the Hadley Centre Sea Ice and Sea Surface Temperature dataset (HadISST) of the UK Met Office (Rayner et al., 2003), which can be accessed via the following link [http://www.metoffice.gov.uk/hadobs/hadissst/](http://www.metoffice.gov.uk/hadobs/hadissst/). The code of the Wasserstein Stability Analysis can be found online via the link [https://doi.org/10.5281/zenodo.7839648](https://doi.org/10.5281/zenodo.7839648).

## Acknowledgement

The authors greatly appreciate the comments of Dr. Zhaolu Hou from the Ocean University of China and Dr. Shuailei Yao from the Institute of Atmospheric Physics, Chinese Academy of Sciences. This work is supported by the National Natural Science Foundation of China 42005039.

**Supplementary Material**

This material is the supporting information for the Wasserstein distance in optimal transport and the _W-distance test_ in Wasserstein Stability Analysis.
In the first section, we give an introduction to the Wasserstein distance and, in particular, list open-source code to compute it. In the second section, we give a detailed description of the _W-distance test_ of Wasserstein Stability Analysis, which is formulated in Algorithm 1.

## 1 Introduction to Wasserstein Distance and Optimal Transport

This section gives an introduction to optimal transport and the Wasserstein distance (W-distance). In practice, the W-distance can quantify the distance and similarities between two probability distributions. Compared to other similarity metrics like the Kullback-Leibler divergence, the Wasserstein distance is rigorously defined and satisfies the metric axioms (Figalli & Glaudo, 2021). The Wasserstein distance provides a powerful metric to quantify the similarities and discrepancies between probability distributions, and has been applied to climate science, for example in model evaluation (Vissio et al., 2020), oceanographic data analysis (Hyun et al., 2022), and data assimilation (Tamang, 2022).

In 1781, Gaspard Monge proposed the concept of optimal transport, motivated by a practical problem: if one uses soil to build fortifications, what is the cheapest way to transport the soil? Let \(\mu\) and \(\nu\) be two probability measures (distributions) on \(\mathbb{R}^{d}\); this scenario leads to the following Monge formulation of optimal transport

\[\inf_{T_{\#}\mu=\nu}\ \int_{\mathbb{R}^{d}}c(x,T(x))\ d\mu(x) \tag{3}\]

where \(c(x,T(x))\) is the cost of transporting unit mass from \(x\) to \(T(x)\), and \(\nu=T_{\#}\mu\) means that for any (measurable) set \(A\subset\mathbb{R}^{d}\),

\[\nu(A)=\mu(T^{-1}(A)) \tag{4}\]

Such a \(T\) is said to be a transport map, and since \(T\) is a map, the mass at \(x\) can only be transported to one destination, which means the Monge formulation does not allow mass splitting. In the 1940s, Leonid Kantorovich revisited Monge's problem and relaxed Monge's formulation by allowing mass splitting. The Kantorovich relaxation leads to the following formulation of optimal transport

\[\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}c(x,y)\ d\pi(x,y) \tag{5}\]

where \(\pi\in\Pi(\mu,\nu)\) means \(\pi\) is a joint distribution with marginals \(\mu\) and \(\nu\), i.e.

\[\Pi(\mu,\nu)=\{\pi\in\mathcal{P}(\mathbb{R}^{d}\times\mathbb{R}^{d}):\pi(A\times\mathbb{R}^{d})=\mu(A),\pi(\mathbb{R}^{d}\times B)=\nu(B),A,B\subset\mathbb{R}^{d}\} \tag{6}\]

When the cost function is \(c(x,y)=\left|x-y\right|^{p}\), the metric structure of the Kantorovich formulation makes it possible to quantify the similarities between probability measures \(\mu\) and \(\nu\) via the p-Wasserstein distance, defined as

\[\mathcal{W}_{p}(\mu,\nu)=\left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left|x-y\right|^{p}\ d\pi(x,y)\right)^{\frac{1}{p}} \tag{7}\]

where \(p\in[1,\infty)\). In this work, we use the earth mover's distance, i.e. \(p=1\),

\[\mathcal{W}_{1}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left|x-y\right|\ d\pi(x,y) \tag{8}\]

The earth mover's distance can be seen as the minimum amount of "work" required to transform the probability measure (distribution) \(\mu\) into another probability measure (distribution) \(\nu\), where the "work" is measured as the amount of distribution weight that must be moved, multiplied by the distance it has to be moved.
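For the one-dimensional case relevant here, the earth mover's distance reduces to the \(L_{1}\) distance between the two quantile functions, which makes it easy to compute directly from raw samples. The following from-scratch Python sketch illustrates this (the routine is ours for illustration only; for unequal sample sizes it approximates the quantile-function integral on a fixed grid rather than solving the transport problem exactly):

```python
import numpy as np

def w1_distance(x, y, n_grid=1000):
    """Earth mover's (W1) distance between two 1-D empirical samples.

    In 1-D, W1 equals the L1 distance between the quantile functions;
    for equally sized samples this is just the mean absolute difference
    of the sorted values (order statistics).
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    if len(x) == len(y):
        return np.mean(np.abs(x - y))
    # Unequal sizes: compare the quantile functions on a common grid.
    q = (np.arange(n_grid) + 0.5) / n_grid
    return np.mean(np.abs(np.quantile(x, q) - np.quantile(y, q)))

# Example: two Gaussian samples whose means differ by 0.5.
rng = np.random.default_rng(0)
print(w1_distance(rng.normal(0.0, 1.0, 5000), rng.normal(0.5, 1.0, 5000)))
```

In practice, the ready-made library implementations discussed next should be preferred.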
The code to calculate the earth mover's distance between two \(1D\) distributions has been integrated into the \(wasserstein\_distance\) function of the _SciPy_ package. The inputs of \(wasserstein\_distance\) are simply the two sequences of observed values from the empirical distributions, and the function returns the computed distance between these two distributions. The details can be found at (Scipy, 2023). Another option to compute the Wasserstein distance is to use the \(ot.emd2\) function in the Python optimal transport package \(POT\) (Flamary et al., 2021). The \(ot.emd2\) function can compute the earth mover's distance in \(1D\) as long as the cost matrix is given by the corresponding \(l_{1}\) distance. While \(POT\) is one of the most efficient exact optimal transport solvers, it has not been designed to handle large-scale optimal transport problems. Therefore, if one needs to solve optimal transport with a large number of samples, we do not recommend using \(POT\). Interested readers may refer to the \(POT\) homepage [https://PythonOT.github.io/](https://PythonOT.github.io/) for more information.

## 2 W-distance Test in Wasserstein Stability Analysis

Algorithm 1 provides details for the first step, the _W-distance test_, in Wasserstein Stability Analysis. Given two samples \(X\) and \(Y\), both series are first normalized individually by the following scaling

\[\frac{x-\mu}{\sigma} \tag{9}\]

where \(\mu\) is the mean value and \(\sigma\) is the standard deviation. The Wasserstein distance \(\mathcal{W}_{1}(X,Y)\) between \(X\) and \(Y\) is then obtained after normalization. It is worth noting that this normalization is necessary, since geographic differences may lead to a large variation in the W-distance. For example, the high surface temperature variability at high latitudes usually leads to a larger W-distance than its counterpart in tropical zones; nevertheless, it does not necessarily mean the PDF change at high latitudes is stronger than that at low latitudes. Furthermore, the lengths of the two samples \(X\) and \(Y\) in WSA can be different.

Our significance test is based on a Monte Carlo test (Xie et al., 2017). The null hypothesis \(H_{0}\) of WSA is that the PDF discrepancy between the two samples \(X\) and \(Y\) can be explained by white noise. Accordingly, the alternative hypothesis \(H_{1}\) is that the discrepancy between the PDFs of \(X\) and \(Y\) results from factors other than white noise. After computing the Wasserstein distance \(\mathcal{W}_{1}(X,Y)\), we generate two new samples, \(X_{N}\) and \(Y_{N}\), by adding white noise \(N(0,1)\) to \(X\) and \(Y\), where \(N(0,1)\) is the standard Gaussian distribution. One can then obtain the W-distance \(\mathcal{W}_{1}(X_{N},Y_{N})\) between the new samples \(X_{N}\) and \(Y_{N}\). This noise-addition operation is performed 500 times, yielding a sequence of W-distances \(\{\mathcal{W}_{1}(X_{N},Y_{N})\}_{N=1}^{500}\), from which a confidence interval (C.I.) of W-distances is obtained. Under the null hypothesis \(H_{0}\) that the distribution discrepancy between \(X\) and \(Y\) can be explained by the white noise \(N(0,1)\), the W-distance between the original samples \(X\) and \(Y\) must be indistinguishable from the counterpart distance between the perturbed samples \(X_{N}\) and \(Y_{N}\), i.e., \(\mathcal{W}_{1}(X,Y)\) must be indistinguishable from \(\mathcal{W}_{1}(X_{N},Y_{N})\).
Under the null hypothesis \(H_{0}\) with confidence level \(1-\alpha\), one would expect \(\mathcal{W}_{1}(X,Y)\) to lie within the confidence interval of \(\{\mathcal{W}_{1}(X_{N},Y_{N})\}_{N=1}^{500}\). If not, \(H_{0}\) is rejected, and one can claim, at confidence level \(1-\alpha\) with \(\alpha=0.01\), that the distribution discrepancy between \(X\) and \(Y\) does not stem from the white noise \(N(0,1)\). In this study, a higher confidence level of \(1-\alpha=99\%\), rather than the common 95%, is used, since the W-distance is very sensitive to PDF changes. Furthermore, when adding white noise to the normalized samples of \(X\) and \(Y\), we use the standard Gaussian distribution \(N(0,1)\): since a large sample size is used, according to the central limit theorem, the amplitude of the white noise is set to one standard deviation for each series.

After performing the W-distance test for all locations, we can detect the places where the W-distance is significant, which are regarded as the significant PDF change zones. Given the significant PDF change zones, we can then perform the _PDF analysis_: plotting the two PDFs and the corresponding change for each significant change zone, to understand the climate change mechanism that makes the W-distance significant.

```
1:procedure(\(H_{0}\): the discrepancy in distributions between \(X\) and \(Y\) is from white noise, with significance level \(\alpha=0.01\))
2:\(X\) and \(Y\) are normalized individually by the \(\frac{x-\mu}{\sigma}\) scaling.
3: Get \(\mathcal{W}_{1}(X,Y)\)
4:\(N\gets 1\)
5:while\(N\leq 500\)do
6:\(X_{N}\gets X+random(0,1)\)\(\triangleright\) add white noise \(N(0,1)\) to \(X\) and \(Y\)
7:\(Y_{N}\gets Y+random(0,1)\)
8: Get \(\mathcal{W}_{1}(X_{N},Y_{N})\)\(\triangleright\)\(X_{N}\) and \(Y_{N}\) are series with white noise
9:\(N\gets N+1\)
10:endwhile
11:if\(\mathcal{W}_{1}(X,Y)\) is out of the 99% C.I. of \(\{\mathcal{W}_{1}(X_{N},Y_{N})\}_{N=1}^{500}\)then
12: reject \(H_{0}\)\(\triangleright\) The discrepancy is not from \(N(0,1)\)
13:else
14:\(H_{0}\) is not rejected
15:endif
16:endprocedure
```
**Algorithm 1** Wasserstein Stability Analysis (WSA)
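For concreteness, Algorithm 1 translates into a few lines of Python on top of the SciPy routine mentioned above. The sketch below is a minimal reading of the algorithm, not the released WSA code; in particular, constructing the confidence interval from two-sided empirical quantiles of the Monte Carlo distances is our assumption:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wsa_test(x, y, n_trials=500, alpha=0.01, seed=None):
    """W-distance test (Algorithm 1): can the PDF discrepancy between
    x and y be explained by N(0,1) white noise?"""
    rng = np.random.default_rng(seed)
    # Normalize each series individually by (value - mean) / std.
    x = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, float) - np.mean(y)) / np.std(y)
    w_obs = wasserstein_distance(x, y)
    # Monte Carlo: W-distances between white-noise-perturbed copies.
    w_noise = np.array([
        wasserstein_distance(x + rng.standard_normal(x.size),
                             y + rng.standard_normal(y.size))
        for _ in range(n_trials)
    ])
    lo, hi = np.quantile(w_noise, [alpha / 2, 1 - alpha / 2])
    reject_h0 = not (lo <= w_obs <= hi)   # outside the 99% C.I.
    return w_obs, (lo, hi), reject_h0
```

Applied grid point by grid point to the daily SAT series of the two periods, the returned flag marks the significant PDF change zones passed on to the PDF analysis step.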
2308.15157
On the improvement of model-predictive controllers
This article investigates synthetic model-predictive control (MPC) problems to demonstrate that an increased precision of the internal prediction model (PM) automatically entails an improvement of the controller as a whole. In contrast to reinforcement learning (RL), MPC uses the PM to predict subsequent states of the controlled system (CS), instead of directly recommending suitable actions. To assess how the precision of the PM translates into the quality of the model-predictive controller, we compare a DNN-based PM to the optimal baseline PM for three well-known control problems of varying complexity. The baseline PM achieves perfect accuracy by accessing the simulation of the CS itself. Based on the obtained results, we argue that an improvement of the PM will always improve the controller as a whole, without considering the impact of other components such as action selection (which, in this article, relies on evolutionary optimization).
L. Féret, A. Gepperth, S. Lambeck
2023-08-29T09:39:12Z
http://arxiv.org/abs/2308.15157v1
# On the improvement of model-predictive controllers.

###### Abstract

This article investigates synthetic model-predictive control (MPC) problems to demonstrate that an increased precision of the internal prediction model (PM) automatically entails an improvement of the controller as a whole. In contrast to reinforcement learning (RL), MPC uses the PM to predict subsequent states of the controlled system (CS), instead of directly recommending suitable actions. To assess how the precision of the PM translates into the quality of the model-predictive controller, we compare a DNN-based PM to the optimal baseline PM for three well-known control problems of varying complexity. The baseline PM achieves perfect accuracy by accessing the simulation of the CS itself. Based on the obtained results, we argue that an improvement of the PM will always improve the controller as a whole, without considering the impact of other components such as action selection (which, in this article, relies on evolutionary optimization).

MPC, model-predictive control, machine learning, DNN, deep-neural-network

## I Introduction

The use of machine learning (ML) is widespread in different scientific and industrial use-cases. One of these areas is control science, in which data-driven ideas have long been prevalent, for example in data-driven system identification [5, 6]. Even some of the best-known use-cases of ML have their roots in control science, as many will associate ML with self-trained robots learning to walk [9, 10, 11] or fully automated industrial production [12]. Therefore, ML is not a new topic in the field of control science. In general, when systems are difficult to control with a traditional PID controller, or when the creation of a mathematical model for an MPC [1] or the training of reinforcement learning (RL) is problematic, the use of data-driven system identification is a common option. With the recent advances in neural networks, many researchers are now working on using neural networks as PMs for ML-MPCs [2, 3, 7, 8].

This article provides support for the claim that the quality of ML-based model-predictive control (ML-MPC) mainly depends on the prediction accuracy of its PM. From this, it is argued that current and upcoming research on this topic can focus strictly on the core components of an MPC, without demonstrating the improvements of the whole controller in practical setups. To provide a strong base for this argument, we employ a basic ML-MPC architecture as shown in Fig. 1. This allows us to eliminate any possible architectural reasons for a loss of control quality, except for the differences resulting from the different PMs. Consequently, only two points of interest remain, which are further discussed in sections V and VII. For an overview of the experimental setup, all core parts of both the ML-MPC and the comparison MPC (C-MPC) are discussed in sections II, III, IV and V. Results are presented in section VI. Sections VII and VIII argue that prediction accuracy is the main reason for any control discrepancy between the ML-MPC and the C-MPC.

### _Related Work_

As this article shows, ML-MPC can be split into data-driven system identification, non-linear optimization and model-predictive control. Thus, every contribution revolving around these topics is directly related to the building of an ML-MPC. For example, data-driven system identification is discussed in [13, 14], non-linear optimization in [15], and, within data science, in [16, 19, 20].

Fig. 1: A simplified control loop for an MPC and its core components.
MPC is an established technology within control engineering, while ML-MPC is discussed in several domains [18, 21] and is often analyzed with regard to different prediction models [21, 22, 23]. A prominent discussion of this aspect is given in [17], which advocates the use of an LSTM system customized with dropout layers to better predict noisy data while preventing overfitting, and uses it for model-predictive control. It is shown that the dropout LSTM and the co-teaching LSTM give better results on non-Gaussian noisy data than the standard LSTM network does. While the core topic is data-driven system identification, the trained PMs are still used in the context of an ML-MPC and applied to a real chemical system. This is technically unnecessary because, as shown in this article, any demonstrably better network prediction also directly improves any ML-MPC. The experimental setup could just be used to show how the prediction improved, without the need for an ML-MPC. This would reduce information overhead and complexity.

### _Contribution_

First of all, this article aims to give a good overview of the basic architecture of a simple ML-MPC and C-MPC and their components. More importantly, it suggests that the problem of finding good ML-MPCs can be split into sub-problems for future research, while giving experimental evidence to justify this decision. This way, the next iteration of work on ML-MPC can focus on the issues ahead, like research on better system identification or work on non-linear optimization algorithms. This will reduce the overhead of work and complexity, and will hopefully result in cleaner and more minimalistic future problem formulations within this field of research.

## II Simulations

In classical control theory, non-linear systems are often controlled by linearizing around fixed points and then applying solutions from linear control theory. This type of linearization can become quite difficult where the CS is strongly non-linear. Therefore, ML may be able to generate easier and even better control solutions. To generate training data and targets for the DNN, the following three simulations of non-linear physical dynamic systems with varying complexity are used. They are then used as the CS, as well as the PM for the C-MPC.

### _Pendulum_

The first simulation is an idealized, mathematical model of a pendulum, defined as a bob of constant mass \(m[kg]\) on a rigid, weightless rod of constant length \(l[m]\), which is fixed to a pivot point with one degree of freedom, as shown in Fig. 2. Here, the gravitational force is defined as \(F_{g}=m\cdot g\), where \(g\) is the acceleration of gravity on earth. The deflection angle \(\varphi[^{\circ}]\) is defined as the angle between the rod and the rest position of the pendulum, which is the position perpendicularly below the pivot point. If the pendulum is not at the rest position, a restoring force \(F_{tan}\) will act on it, pointing towards the rest position. The restoring force increases with the deflection angle \(\varphi\). As the force vector points toward the rest position, (1) has a negative sign.

\[F_{tan}(t)=-m\cdot g\cdot\sin(\varphi(t)) \tag{1}\]

Since the pendulum can only move in a circle around the pivot point, the acceleration is tangential, also called the tangential acceleration \(a_{tan}[m\cdot s^{-2}]\), which is defined as the angular acceleration \(\ddot{\varphi}[^{\circ}\cdot s^{-2}]\) multiplied by the length \(l\) of the pendulum.
\[a_{tan}(t)=l\cdot\ddot{\varphi}(t) \tag{2}\]

According to Newton's second law, the equation of motion for the restoring force can be written as:

\[F_{tan}(t)=m\cdot a_{tan}(t) \tag{3}\]

Using the previous equations, the relationship can be written as:

\[\ddot{\varphi}(t)+\frac{g}{l}\cdot\sin(\varphi(t))=0 \tag{4}\]

Since the net effect on an idealized point mass is the superposition of all involved force terms, the angular acceleration \(\ddot{\varphi}\) can be written as:

\[\ddot{\varphi}=F_{vel}+F_{acc}+F_{in} \tag{5}\]

with the velocity force \(F_{vel}\), the angular acceleration force \(F_{acc}\) and an arbitrary input force \(F_{in}\). To calculate the velocity force \(F_{vel}\), (6) is used, where the current velocity \(\dot{\varphi}\) is scaled by the friction coefficient \(b\) and the pendulum constants; the term opposes the motion and therefore includes a negative sign.

\[F_{vel}=-\frac{b}{m\cdot l^{2}}\cdot\dot{\varphi} \tag{6}\]

Fig. 2: Idealized, mathematical pendulum.

The angular acceleration force \(F_{acc}\) is the result of the gravitational pull acting on the pendulum towards its rest position, as discussed in the previous equations regarding the tangential restoring force \(F_{tan}\). From this, the angular acceleration \(\ddot{\varphi}\) can be calculated, as shown in (4) and in (7) in the context of the acceleration force \(F_{acc}\).

\[F_{acc}=-\frac{g}{l}\cdot\sin(\varphi) \tag{7}\]

The external input force \(F_{in}\) is an arbitrary force, given by an input value \(u[N]\) and the physical values of the pendulum, as shown in (8).

\[F_{in}=\frac{1}{m\cdot l^{2}}\cdot u \tag{8}\]

### _Cartpole_

The second model is the mathematical cartpole, which represents a cart of constant mass \(m_{c}[kg]\) with a pole of constant mass \(m_{p}[kg]\) attached to it. The pole has a constant length \(l_{p}[m]\) and a theoretical balanced position perpendicular to the top of the cart. The pole angle \(\theta[^{\circ}]\) is defined with respect to this balanced position. The cart has only one degree of freedom, and its position \(x[m]\) is known. This system is shown in Fig. 3. Four values represent the state of the system \(S\): the cart's position \(x\), the cart's velocity \(\dot{x}[m\cdot s^{-1}]\), the pole angle \(\theta[^{\circ}]\) and the angular velocity of the pole \(\dot{\theta}[^{\circ}\cdot s^{-1}]\). The simulation of the cartpole is performed in discrete time, and therefore an arbitrary timestep variable \(\tau[s]\) is used. For each time step within the simulation, the next system state must be calculated from the previous system state and the physical values of the cartpole. In conclusion, the system state is defined as \(S=[x,\dot{x},\theta,\dot{\theta}]\) and, correspondingly, the next system state as \(\hat{S}=[\hat{x},\dot{\hat{x}},\hat{\theta},\dot{\hat{\theta}}]\). Its components can be calculated using the following equations:

\[\hat{x} =x+\tau\cdot\dot{x} \tag{9}\]
\[\dot{\hat{x}} =\dot{x}+\tau\cdot\ddot{x}\]
\[\hat{\theta} =\theta+\tau\cdot\dot{\theta}\]
\[\dot{\hat{\theta}} =\dot{\theta}+\tau\cdot\ddot{\theta}\]

As the system state \(S\) is monitored, only the cart acceleration \(\ddot{x}[m\cdot s^{-2}]\) and the pole's angular acceleration \(\ddot{\theta}[^{\circ}\cdot s^{-2}]\) must be calculated. Both calculations are further discussed in [4].
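Putting the pieces together, one simulation step amounts to evaluating the two accelerations and applying the explicit Euler updates of (9). A minimal Python sketch is given below; it uses the acceleration expressions (10) and (11) derived in the following paragraphs, works in radians, and uses illustrative placeholder parameter values rather than the ones from our experiments:

```python
import math

G = 9.81  # gravitational acceleration [m / s^2]

def cartpole_step(x, x_dot, theta, theta_dot, f_in,
                  m_c=1.0, m_p=0.1, l_p=0.5, tau=0.02):
    """One explicit Euler step of the cartpole according to (9)-(11)."""
    total_m = m_p + m_c
    # Common term of (10) and (11): (F + m_p*l_p*theta_dot^2*sin) / (m_p + m_c)
    temp = (f_in + m_p * l_p * theta_dot ** 2 * math.sin(theta)) / total_m
    theta_ddot = (G * math.sin(theta) - math.cos(theta) * temp) / (
        l_p * (4.0 / 3.0 - m_p * math.cos(theta) ** 2 / total_m))       # eq. (10)
    x_ddot = temp - m_p * l_p * theta_ddot * math.cos(theta) / total_m  # eq. (11)
    # Explicit Euler updates of eq. (9).
    return (x + tau * x_dot,
            x_dot + tau * x_ddot,
            theta + tau * theta_dot,
            theta_dot + tau * theta_ddot)
```

Iterating `cartpole_step` over a sequence of input forces reproduces the discrete-time behavior the DNN is later trained to predict.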
As the cartpole system is controllable, an arbitrary input force \(F_{cin}[N]\) can be applied, pointing along the cart's single degree of freedom. With the input force \(F_{cin}\), the current system state \(S\) and the physical constants of the cartpole, the angular acceleration \(\ddot{\theta}\) of the pole can be computed as:

\[\ddot{\theta}=\frac{g\cdot\sin(\theta)-\cos(\theta)\cdot\frac{F_{cin}+m_{p}\cdot l_{p}\cdot\dot{\theta}^{2}\cdot\sin(\theta)}{m_{p}+m_{c}}}{l_{p}\cdot\left(\frac{4}{3}-\frac{m_{p}\cdot\cos(\theta)^{2}}{m_{p}+m_{c}}\right)} \tag{10}\]

The cart's acceleration \(\ddot{x}\) can be calculated using (11).

\[\ddot{x}=\frac{F_{cin}+m_{p}\cdot l_{p}\cdot\dot{\theta}^{2}\cdot\sin(\theta)}{m_{p}+m_{c}}-\frac{m_{p}\cdot l_{p}\cdot\ddot{\theta}\cdot\cos(\theta)}{m_{p}+m_{c}} \tag{11}\]

Given the previous equations, the next state \(\hat{S}\) can be calculated, while neglecting any friction.

Fig. 3: Abstracted cartpole.

### _Three-Tank_

The three-tank system consists of three connected tanks, as shown in Fig. 4. \(Tank_{1}\) and \(Tank_{3}\) are connected to an input valve. The level \(x_{2}[m]\) in \(Tank_{2}\) can only change through the flow in the pipes connecting it to \(Tank_{1}\) (\(q_{12}\)) and \(Tank_{3}\) (\(q_{23}\)). Outgoing flow is only possible through pipe \(q_{3}\), connected to \(Tank_{3}\).

Fig. 4: Abstracted three-tank system.

For each of the tanks, the change of contained mass \(\dot{M}_{t}\) is the incoming mass flow \(F_{tin}\) minus the outgoing mass flow \(F_{tout}\), as shown in (12).

\[\dot{M}_{t}=F_{tin}-F_{tout} \tag{12}\]

The change of mass \(\dot{M}_{t}\) depends on the constant cross-sectional area of the tank \(A[m^{2}]\), the density of the fluid \(p[kg\cdot m^{-3}]\) and the change of the fill level \(\dot{x}[m\cdot s^{-1}]\), as shown in (13).

\[\dot{M}_{t}=A\cdot p\cdot\dot{x}(t), \tag{13}\]

where the cross-sectional area of the tank \(A\) and the density of the fluid \(p\) are considered constant. In such idealized scenarios, the mass of the fluid is proportional to its volume. The change of volume \(\dot{V}[m^{3}\cdot s^{-1}]\) depends on the cross-sectional area of the tank \(A\) and the change of the level \(\dot{x}[m\cdot s^{-1}]\), which in turn depends on the in- and outgoing volume flows \(q_{in}(t)[m^{3}\cdot s^{-1}]\) and \(q_{out}(t)[m^{3}\cdot s^{-1}]\), as shown in (14).

\[\dot{V} =A\cdot\dot{x}=\sum q_{in}(t)-\sum q_{out}(t) \tag{14}\]
\[\dot{x} =\frac{1}{A}\cdot(\sum q_{in}(t)-\sum q_{out}(t))\]

This can be used to calculate the change of fill level \(\dot{x}\) of the individual tanks. Every tank has different in- and outgoing flows, as shown in (15).

\[\dot{x}_{1} =\frac{1}{A_{1}}\cdot(q_{in_{1}}(t)-q_{out_{12}}(t)) \tag{15}\]
\[\dot{x}_{2} =\frac{1}{A_{2}}\cdot(q_{in_{12}}(t)-q_{out_{23}}(t))\]
\[\dot{x}_{3} =\frac{1}{A_{3}}\cdot(q_{in_{3}}(t)+q_{in_{23}}(t)-q_{out_{3}}(t))\]

To calculate the in- and outgoing volume flows, the connected pipes must be simulated. As all of them are very short, their flow dynamics are not taken into account. Furthermore, the cross section of the pipes is very small in comparison to the cross section of the tanks. This results in volume flows through the connecting pipes between the tanks that only depend on the cross section and the pipe-end pressures \(P_{A_{i}}\) and \(P_{A_{j}}\), which are, in this idealized construction, equal to the pressure at the ground level of the tanks.
For a constant cross section, Bernoulli's equation (pressure equation) can describe the volume flow through the pipes, with the cross section \(a_{ij}\) of the pipe, an outflow constant \(\alpha_{ij}\) with \(0\leq\alpha_{ij}\leq 1\), the fluid density \(p\) and the ground pressures \(P_{A_{i}}\) and \(P_{A_{j}}\) of the tanks.

\[sgn(t) =sign(P_{A_{i}}(t)-P_{A_{j}}(t)) \tag{16}\]
\[q_{ij}(t) =\alpha_{ij}\cdot a_{ij}\cdot sgn(t)\cdot\sqrt{\frac{2}{p}\cdot\left|P_{A_{i}}(t)-P_{A_{j}}(t)\right|}\]

Furthermore, the pressures at ground level \(P_{A}\) depend on the current levels \(x\) of the connected tanks. The pressure at ground level \(P_{A}\) can be written as the force \(F_{A}[N]\) acting on the bottom of the tank divided by the cross-sectional area \(A\). The acting ground force \(F_{A}\) is the product of the contained mass \(M_{t}\) and the gravitational acceleration \(g\). As discussed before, the contained mass \(M_{t}\) can also be expressed by a volume \(V\), given the idealized system architecture. The volume \(V\) is calculated using the cross-sectional area \(A\) and the level of the tank \(x\). Therefore, the fraction can be simplified by canceling the cross-sectional area \(A\).

\[P_{A}(t) =\frac{F_{A}(t)}{A}=\frac{M_{t}(t)\cdot g}{A}=\frac{p\cdot V(t)\cdot g}{A} \tag{17}\]
\[\frac{p\cdot V(t)\cdot g}{A} =p\cdot\frac{A\cdot x(t)\cdot g}{A}=p\cdot g\cdot x(t)\]

Inserting (17) in (16), the complete flow equation can be written as:

\[sgn_{ij}(t) =sign(x_{i}(t)-x_{j}(t)) \tag{18}\]
\[q_{ij}(t) =\alpha_{ij}\cdot a_{ij}\cdot sgn_{ij}(t)\cdot\sqrt{2\cdot g\cdot|x_{i}(t)-x_{j}(t)|}\]

The volume flow \(q_{3}\) of pipe 3 is the outgoing flow of the system. The pressure at its open end equals the atmospheric reference pressure of zero. The final calculations of the volume flow of each pipe are shown in (19).

\[sgn_{ij}(t) =sign(x_{i}(t)-x_{j}(t)) \tag{19}\]
\[q_{12}(t) =\alpha_{12}\cdot a_{12}\cdot sgn_{12}(t)\cdot\sqrt{2\cdot g\cdot|x_{1}(t)-x_{2}(t)|}\]
\[q_{23}(t) =\alpha_{23}\cdot a_{23}\cdot sgn_{23}(t)\cdot\sqrt{2\cdot g\cdot|x_{2}(t)-x_{3}(t)|}\]
\[q_{3}(t) =\alpha_{3}\cdot a_{3}\cdot\sqrt{2\cdot g\cdot x_{3}(t)}\]

Inserting (19) into (15), the change of each individual level \(\dot{x}\) can be calculated.

## III Data Preparation

The previously described simulations are used to generate training data. The data format is a list of samples, each of which represents a unique measurement session of equal length. Each time step contains either the input, the output, or a combination of both. Random input is generated for each sample using the configuration given in table I under section IV. The simulations produce the corresponding output and are reset between samples.

Because DNN models require a combination of input and output data for learning and predicting, the input and output data are merged, as shown in Fig. 5. Now, one time step includes the inputs and their corresponding output. But this would imply including the targets in the training data. Therefore, the input and output of the merged data are shifted, so that each time step now contains the current input and the previous output, while the first input and the last output of the sample are deleted, as shown in Fig. 6.

Fig. 5: Data merge example with five time steps.

Since the simulated systems are dynamic, the DNN needs information from past states to be able to predict the future output. This information is termed _lookback_ and contains the last \(L\) previous values.
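Anticipating the grouping and collapsing steps described next (Figs. 7-10), the whole preparation of one sample fits into a short sketch. The snippet below is a minimal illustration under the assumption of a single input and a single output channel; the array shapes and the helper name are ours:

```python
import numpy as np

def prepare_sample(inputs, outputs, L):
    """Build DNN training pairs from one recorded sample: shift (Fig. 6),
    lookback grouping and collapse (Figs. 7-8), label cleanup (Fig. 9).

    inputs:  array of shape (T,) with the applied inputs,
    outputs: array of shape (T,) with the recorded outputs.
    """
    inputs, outputs = np.asarray(inputs, float), np.asarray(outputs, float)
    # Shift: pair each current input with the previous output; the first
    # input and the last output of the sample are dropped.
    merged = np.stack([inputs[1:], outputs[:-1]], axis=1)     # (T-1, 2)
    # Lookback: collapse the last L merged steps into one flat tuple.
    X = np.stack([merged[t - L + 1: t + 1].ravel()
                  for t in range(L - 1, len(merged))])        # (T-L, 2L)
    # Label cleanup: remove the first L outputs so targets align with X.
    y = outputs[L:]                                           # (T-L,)
    return X, y
```

Row \(k\) of `X` then contains the inputs up to step \(L+k\) and the outputs up to step \(L+k-1\), and `y[k]` is the output at step \(L+k\) that the DNN has to predict.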
Therefore, the data are grouped into packages, each containing the current time step and its lookback, as shown in Fig. 7. To get the data into a format the DNN can process, each lookback package is collapsed into a single tuple, as shown in Fig. 8. Since the input data preparation removes some of the recorded time steps, the corresponding label data must also be "cleaned": the first \(L\) time steps of the output data must be removed, equal to the number of values \(L\) in each lookback package. This is shown in Fig. 9. Finally, the samples of training data and targets are each collapsed into a list of time steps, as shown in Fig. 10.

Fig. 6: Data shift example with five time steps.
Fig. 7: Data lookback example with four time steps and a lookback of three.
Fig. 8: Data lookback collapse example with two time steps.
Fig. 9: Data label cleanup example for two time steps.
Fig. 10: Data collapse example for three samples.

## IV Prediction Models

Each MPC requires its own prediction model for the CS under consideration. The ML-MPC uses a single DNN architecture as its PM for each of the three problems, with the configuration shown in table I. The C-MPC uses the underlying simulation itself as its PM. The C-MPC therefore has access to predictions that are guaranteed to produce the best possible result that the ML-MPC could ever reach.

## V Model Predictive Control

Any MPC can be formulated as an optimization of the best next input (or action, in the language of RL) using a model of the CS for future predictions, based on an arbitrary reference value that defines the state the CS should reach, often called the setpoint in control science. Since no linearization of the discussed non-linear systems is used, the MPC includes a non-linear optimization algorithm, in particular a custom-built genetic algorithm. Using this approach, the ML-MPC and the C-MPC are not guaranteed to find the globally optimal action. However, since this deficiency is shared by both architectures, the comparison remains valid.

The control loop used is straightforward. The CS receives an input (action) and returns an output. Then, the MPC searches for the best subsequent input based on the set of recorded outputs and the given reference value. The found input is again applied to the CS, completing one control iteration. Within each control iteration, a non-linear optimization problem must be solved. Depending on the chosen algorithm, this takes time and energy, and often does not guarantee the global minimum. Within the optimization, the PM has to make multiple future predictions, which means that a slow prediction time of the DNN or the simulation is one core issue within the non-linear optimization algorithm. Furthermore, each prediction within the optimization has to start from the state the CS is currently in. As the C-MPC uses the same simulation as the CS for its PM, it can simply mirror the simulation's states one to one, which allows for 100% prediction accuracy. The ML-MPC, on the other hand, uses a DNN as its PM, whose inner states cannot be set to mirror the CS. Therefore, the current state of the CS is encoded within the input data. This is done by creating the first input state for each prediction from the past outputs of the CS and discarding any possible outputs from the DNN. This also means that the DNN can only start to predict from the correct state once enough output data of the CS has been collected to create the first lookback package for the input.
The process of encoding the states of the CS into the input data of the DNN is henceforth called state correction, and a DNN with a corrected state is called a corrected DNN (C-DNN). Fig. 11 compares prediction sessions of a DNN with and without state correction. It is evident that the state correction process effectively removes the accumulation of prediction errors over time. Furthermore, we observe that the state of the CS can indeed be encoded into the input data of the DNN.

With state correction, it is possible to start each future prediction within the optimization from the correct CS state. But this can only be done between control iterations; hence, the predictions within the optimization still include any accumulated prediction errors. However, as the state correction is effective, the DNN can be optimized to predict only the length needed for the optimization: the prediction horizon. Furthermore, any loss of control quality cannot be ascribed to a possibly incorrect state correction.

Fig. 11: Prediction session of the pendulum system with and without state correction.

## VI Comparison

The final control comparison is run for each of the simulations, with the results presented in the following subsections.

### _Comparison-Result: Pendulum_

The control session on the pendulum system is shown in Fig. 12.

Fig. 12: ML-MPC control versus C-MPC control with three reference jumps on the pendulum simulation.

### _Comparison-Result: Cartpole_

The control session on the cartpole system is shown in Fig. 13.

Fig. 13: ML-MPC control versus C-MPC control with three reference jumps on the cartpole simulation.

### _Comparison-Result: Three-Tank_

The control session on the three-tank system is shown in Fig. 14.

Fig. 14: ML-MPC control versus C-MPC control with three reference jumps on the three-tank simulation.

## VII Discussion

The presented results demonstrate that the control works with varying degrees of accuracy and quality, with the C-MPC achieving the better results. Since the MPCs only differ in their PMs, it is reasonable to assume that any discrepancy between their results comes from the prediction errors within the optimization. This is shown by plotting the predictions used within the optimization and comparing them to the ideal ones, as done in Fig. 15. Since the state correction was shown to be effective, the prediction errors within the control simulations are the result of the inherent prediction errors of the trained DNNs.

## VIII Conclusion

The experiments show that DNNs can be used for ML-MPCs, and that any diminished control quality relative to the C-MPC is the result of the prediction error only. Knowing this, the ML-MPC should be improved within its separate components, where better prediction accuracy of the ML algorithm directly results in better control quality. There is no need to develop and test on ML-MPCs, as any compatible MPC architecture can be improved without an ML-MPC in mind. Hence, separating the issues into the categories of machine learning (prediction accuracy and prediction time), control (MPC architectures) and non-linear optimization (prediction optimization algorithms like genetic algorithms or quadratic programming) is possible and recommended, while at the same time guaranteeing an improvement of any ML-MPC.

## Acknowledgment

We thank Dr. Johann Letnev and Dr. Ivan Jursic for useful hints about the manuscript.
2303.07621
Two-stage Neural Network for ICASSP 2023 Speech Signal Improvement Challenge
In the ICASSP 2023 Speech Signal Improvement Challenge, we developed a dual-stage neural model which improves the quality of speech signals degraded by different distortions in a stage-wise divide-and-conquer fashion. Specifically, in the first stage, the speech improvement network focuses on recovering the missing components of the spectrum, while in the second stage, our model aims to further suppress noise, reverberation, and artifacts introduced by the first-stage model. Achieving 0.446 in the final score and 0.517 in the P.835 score, our system ranks 4th in the non-real-time track.
Mingshuai Liu, Shubo Lv, Zihan Zhang, Runduo Han, Xiang Hao, Xianjun Xia, Li Chen, Yijian Xiao, Lei Xie
2023-03-14T04:19:41Z
http://arxiv.org/abs/2303.07621v1
# Two-Stage Neural Network for ICASSP 2023 Speech Signal Improvement Challenge

###### Abstract

In the ICASSP 2023 Speech Signal Improvement Challenge, we developed a dual-stage neural model which improves the quality of speech signals degraded by different distortions in a stage-wise divide-and-conquer fashion. Specifically, in the first stage, the speech improvement network focuses on recovering the missing components of the spectrum, while in the second stage, our model aims to further suppress noise, reverberation, and artifacts introduced by the first-stage model. Achieving 0.446 in the final score and 0.517 in the P.835 score, our system ranks 4th in the non-real-time track.

Mingshuai Liu\({}^{1}\), Shubo Lv\({}^{1}\), Zihan Zhang\({}^{1}\), Runduo Han\({}^{1}\), Xiang Hao\({}^{1}\), Xianjun Xia\({}^{2}\), Li Chen\({}^{2}\), Yijian Xiao\({}^{2}\), Lei Xie\({}^{1*}\)

\({}^{1}\)Audio, Speech and Language Processing Group (ASLP@NPU), School of Software, Northwestern Polytechnical University, Xi'an, China \({}^{2}\)ByteDance, China

speech distortion, speech enhancement

## 1 Introduction

During audio communication, speech signals may be degraded by multiple distortions, including coloration, discontinuity, loudness, noisiness, and reverberation. Existing methods achieve impressive performance in noise suppression, but how to repair several distortions simultaneously remains an open problem. To stimulate research in this direction, ICASSP 2023 held the first Speech Signal Improvement Challenge1 as a flagship event of the Signal Processing Grand Challenge.

Footnote 1: [https://www.microsoft.com/en-us/research/academic-program/speech-signal-improvement-challenge-icassp-2023/](https://www.microsoft.com/en-us/research/academic-program/speech-signal-improvement-challenge-icassp-2023/)

* Corresponding author.

Inspired by the idea of decoupling difficult tasks into multiple easier sub-tasks [1], we propose a neural speech signal improvement approach with a training procedure that involves two stages. In the first stage, we adopt DCCRN [2] as a repair tool and substitute its original convolution structure with a more powerful gated convolution, with the aim of mainly repairing the missing components of the spectrum. Thus, with simulated paired data of perfect and impaired speech, GateDCCRN learns to improve quality problems caused by coloration, discontinuity, and loudness. For the loss function, besides the SI-SNR loss and the power-law compressed loss [3], we also integrate an adversarial loss to further improve speech naturalness. In the second stage, a variant of S-DCCRN [4] is cascaded with the first-stage GateDCCRN model to remove noise and reverberation and to suppress possible artifacts introduced by the previous stage. Specifically, S-DCCRN is a powerful denoising model working on super-wide-band and full-band signals, consisting of two small-footprint DCCRNs - one operating on the sub-band signal and one on the full-band signal - benefiting from both local and global frequency information. Within this denoising model, we further update the bottleneck layers from LSTM to STCM [5] for better temporal modeling. The proposed system achieved 0.446 in the final evaluation score and 0.517 in the P.835 score, placing our submission 4th in the non-real-time track.

## 2 Approach

As illustrated in Fig. 1, the training procedure of our system consists of two stages. The details of the network architecture, training data, and loss function used in these two stages are introduced below.
### Stage 1: Repairing Net

Considering its impressive ability in signal mapping, we employ DCCRN [2] as the backbone of our first-stage speech repair network. DCCRN is a U-Net-structured complex network working on the complex-valued spectrum, where the encoder and decoder are both composed of layered convolutions, and an LSTM serves as the bottleneck for temporal modeling. Inspired by the superior performance of gated convolution in image inpainting [6], we update the complex convolution with gated complex convolution, as shown in Fig. 1, resulting in GateDCCRN. For the model training loss, in addition to the SI-SNR loss (\(\mathcal{L}_{\text{SI-SNR}}\)) and the power-law compressed loss (\(\mathcal{L}_{\text{PLC}}\)), we integrate an adversarial loss (\(\mathcal{L}_{\text{Adv}}\)) by adding the Multi-Period Discriminator [7] and Multi-Scale Discriminator [7] into the model optimization process to further improve speech naturalness. Thus the final loss function is \(\mathcal{L}_{\text{stage1}}=\mathcal{L}_{\text{SI-SNR}}+10\cdot\mathcal{L}_{\text{PLC}}+15\cdot\mathcal{L}_{\text{Adv}}\).

### Stage 2: Denoising Net

In the second stage, the pre-trained GateDCCRN and an S-DCCRN [4] are chained as the denoising structure. Specifically, as shown in Fig. 1 stage 2, two lightweight DCCRN sub-modules successively work on sub-band and full-band signals, designed to model the fine details of different frequency bands with further inter-band smoothing. Different from the original S-DCCRN, we substitute the LSTM with a squeezed temporal convolution module (STCM) [5] in the bottleneck layer of the two DCCRNs, which aims to further strengthen the temporal modeling ability. With this update, the new model is named S-DCCSN. During training, we add noise and reverberation to the data simulated in the first stage to train the second-stage model, which makes the model further achieve the ability to suppress noise, reverberation, and artifacts introduced by GateDCCRN. Note that the parameters of the pre-trained GateDCCRN are frozen in this training stage. We adopt the SI-SNR loss (\(\mathcal{L}_{\text{SI-SNR}}\)), the power-law compressed loss (\(\mathcal{L}_{\text{PLC}}\)), and a mean square error loss (\(\mathcal{L}_{\text{Mtg}}\)) to optimize the model parameters, and the final loss becomes \(\mathcal{L}_{\text{stage2}}=\mathcal{L}_{\text{SI-SNR}}+\mathcal{L}_{\text{PLC}}+\mathcal{L}_{\text{Mtg}}\).

## 3 Experiments

### Datasets

The training set is created using the DNS4 dataset, which includes a total of 750 hours of clean speech and 181 hours of noise. In addition, 50,000 RIR clips are simulated by the HYB method. The RT60 of the RIRs ranges from 0.2 s to 1.2 s, and the room size ranges from \(5\times 3\times 3m^{3}\) to \(8\times 5\times 4m^{3}\). In total, there are 110,248 RIR clips, combining the simulated RIR set and the RIR set provided by the challenge. We perform dynamic simulation of speech distortions during the training stage. In the first stage, we generate training data degraded by coloration, discontinuity, and loudness distortions, accounting for 60\(\%\), 25\(\%\), and 15\(\%\) respectively. For coloration, we follow the description in [8] to design a low-pass filter, convolve it with full-band speech, and resample the filtered result to generate low-band speech. Besides producing low-bandwidth distortions, we also restrict signal amplitudes within a range \([-\eta,+\eta]\) (\(\eta\in(0,1)\)) to simulate clipping.
When we produce coloration distortion, the low-bandwidth speech and clipped speech account for 60\(\%\) and 40\(\%\) respectively. Specifically, full-band speech is down-sampled to 4 kHz, 8 kHz, 16 kHz, or 24 kHz with equal probability. For discontinuity, speech samples are randomly set to zero with a window size of 20 ms, and the probability of zeroing a window is 10\(\%\). For loudness, we multiply the signal amplitudes by a scale within the range [0.1, 0.5]. In the second training stage, noise is further added to the first-stage training data with an SNR range of [0, 20] dB, and reverberation is further added to 50\(\%\) of the first-stage training data. Dynamic mixing is still used during training. We denote the first-stage and second-stage training data as S1 and S2 respectively, where S2 includes simulations of all signal distortions.

### Experiment Setup

The window length and frame shift are 20 ms and 10 ms respectively, resulting in 10 ms algorithmic latency and 10 ms buffering latency. The STFT length is 1024. The number of channels for DCCRN / GateDCCRN is {16, 32, 64, 128, 256, 256}, and the convolution kernel size and stride are set to (5,2) and (2,1) respectively. There are two LSTM layers with 256 nodes, followed by a 2048 \(\times\) 256 fully connected layer, between the encoder and decoder. The number of channels for the sub-band module and the full-band module of S-DCCRN / S-DCCSN is {64, 64, 64, 64, 128, 128}, and the convolution kernel size and stride are set to (5,2) and (2,1) respectively. For the CED and CFD of S-DCCRN / S-DCCSN, the number of channels is 32 and the depth of the DenseBlock is 5. In S-DCCSN, the hidden channels of the STCM adopted by the sub-band and full-band modules are 64. In S-DCCRN, there are instead two LSTM layers with 256 nodes, and a fully connected layer with 512 nodes is adopted after the last LSTM layer. Models are optimized by Adam with an initial learning rate of 0.001, which is halved if the validation loss does not decrease for two consecutive epochs.

### Results

We conduct ablation experiments to validate each proposed module. In Table 1, we train DCCRN and GateDCCRN with and without (w/o) the discriminator using the first-stage training data (S1). Then we freeze the parameters of the pre-trained GateDCCRN and cascade this model with S-DCCRN and S-DCCSN respectively; the cascaded models are trained using the second-stage training data (S2). In addition, we also train a GateDCCRN and an S-DCCSN with the second-stage training data only (S2). The DNSMOS results in Table 1 show that gated convolution and adversarial training with a discriminator effectively improve the signal improvement ability of DCCRN. Moreover, S-DCCSN surpasses S-DCCRN in all evaluation metrics with a small increase in model size, which demonstrates the better temporal modeling ability of the STCM. Finally, the single-stage models (GateDCCRN and S-DCCSN) trained using S2, which reflects all distortions, are inferior to the multi-stage model (GateDCCRN+S-DCCSN). Table 2 shows the subjective results of our submitted two-stage model (GateDCCRN+S-DCCSN) on the official test set. We can see that speech quality is clearly improved for the different types of distortions except for discontinuity. The bad cases may be attributed to our imperfect simulation of discontinuity, which deserves further investigation. The number of parameters of the submitted two-stage system is 10 M. The RTF is 1.478, tested on an Intel(R) Xeon(R) CPU E5-2678 v3 2.4 GHz using a single thread.
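Finally, as an illustration of the distortion simulation described in Sec. 3.1, the first-stage degradation pipeline can be sketched in a few lines of Python. This is a simplified approximation rather than the training code: the designed low-pass filter of [8] is replaced by naive sample-and-hold resampling, and the exact clipping threshold range is our assumption:

```python
import numpy as np

def simulate_distortion(speech, sr=48000, seed=None):
    """Sketch of the Sec. 3.1 first-stage degradations (simplified)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(speech, float).copy()
    kind = rng.choice(["coloration", "discontinuity", "loudness"],
                      p=[0.60, 0.25, 0.15])
    if kind == "coloration":
        if rng.random() < 0.6:       # low-bandwidth speech (60%)
            step = sr // int(rng.choice([4000, 8000, 16000, 24000]))
            # Crude stand-in for the low-pass filter + resampling of [8].
            x = np.repeat(x[::step], step)[: len(x)]
        else:                        # clipping within [-eta, +eta] (40%)
            eta = rng.uniform(0.1, 0.9)   # assumed range; paper: eta in (0, 1)
            x = np.clip(x, -eta, eta)
    elif kind == "discontinuity":
        win = int(0.020 * sr)        # 20 ms windows, each zeroed with p = 10%
        for start in range(0, len(x), win):
            if rng.random() < 0.10:
                x[start:start + win] = 0.0
    else:                            # loudness: scale drawn from [0.1, 0.5]
        x *= rng.uniform(0.1, 0.5)
    return x
```

For the second stage, noise at an SNR drawn from [0, 20] dB, and reverberation for half of the data, would be added on top of these outputs.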
2307.04807
The Dragon-II simulations -- III. Compact binary mergers in clusters with up to 1 million stars: mass, spin, eccentricity, merger rate and pair instability supernovae rate
Compact binary mergers forming in star clusters may exhibit distinctive features that can be used to identify them among observed gravitational-wave (GW) sources. Such features likely depend on the host cluster structure and the physics of massive star evolution. Here, we dissect the population of compact binary mergers in the \textsc{Dragon-II} simulation database, a suite of 19 direct $N$-body models representing dense star clusters with up to $10^6$ stars and $<33\%$ of stars in primordial binaries. We find a substantial population of black hole binary (BBH) mergers, some of them involving an intermediate-mass BH (IMBH), and a handful of mergers involving a stellar BH and either a neutron star (NS) or a white dwarf (WD). Primordial binary mergers, $\sim 30\%$ of the whole population, dominate ejected mergers. Dynamical mergers, instead, dominate the population of in-cluster mergers and are systematically heavier than primordial ones. Around $20\%$ of \textsc{Dragon-II} mergers are eccentric in the LISA band and $5\%$ in the LIGO band. We infer a mean cosmic merger rate of $\mathcal{R}\sim 12(4.4)(1.2)$ yr$^{-1}$ Gpc$^{-3}$ for BBHs, NS-BH, and WD-BH binary mergers, respectively, and discuss the prospects for multimessenger detection of WD-BH binaries with LISA. We model the rate of pair-instability supernovae (PISNe) in star clusters and find that surveys with a limiting magnitude $m_{\rm bol}=25$ can detect $\sim 1-15$ PISNe yr$^{-1}$. Comparing these estimates with future observations could help to pin down the impact of massive star evolution on the mass spectrum of compact stellar objects in star clusters.
Manuel Arca Sedda, Albrecht W. H. Kamlah, Rainer Spurzem, Francesco Paolo Rizzuto, Mirek Giersz, Thorsten Naab, Peter Berczik
2023-07-10T18:00:58Z
http://arxiv.org/abs/2307.04807v1
The Dragon-II simulations - III. Compact binary mergers in clusters with up to 1 million stars: mass, spin, eccentricity, merger rate and pair instability supernovae rate. ###### Abstract Compact binary mergers forming in star clusters may exhibit distinctive features that can be used to identify them among observed gravitational-wave (GW) sources. Such features likely depend on the host cluster structure and the physics of massive star evolution. Here, we dissect the population of compact binary mergers in the Dragon-II simulation database, a suite of 19 direct \(N\)-body models representing dense star clusters with up to \(10^{6}\) stars and \(<33\%\) of stars in primordial binaries. We find a substantial population of black hole binary (BBH) mergers, some of them involving an intermediate-mass BH (IMBH), and a handful of mergers involving a stellar BH and either a neutron star (NS) or a white dwarf (WD). Primordial binary mergers, \(\sim 30\%\) of the whole population, dominate ejected mergers. Dynamical mergers, instead, dominate the population of in-cluster mergers and are systematically heavier than primordial ones. Around 20% of Dragon-II mergers are eccentric in the LISA band and 5% in the LIGO band. We infer a mean cosmic merger rate of \(\mathcal{R}\sim 12(4.4)(1.2)\) yr\({}^{-1}\) Gpc\({}^{-3}\) for BBHs, NS-BH, and WD-BH binary mergers, respectively, and discuss the prospects for multimessenger detection of WD-BH binaries with LISA. We model the rate of pair-instability supernovae (PISNe) in star clusters and find that surveys with a limiting magnitude \(m_{\rm bol}=25\) can detect \(\sim 1-15\) PISNe yr\({}^{-1}\). Comparing these estimates with future observations could help to pin down the impact of massive star evolution on the mass spectrum of compact stellar objects in star clusters. keywords: methods: numerical - galaxies: star clusters: general - stars: general, black holes ## 1 Introduction In less than a decade, the LIGO-Virgo-Kagra (LVK) collaboration discovered 76 confident gravitational-wave (GW) sources associated with merging stellar black holes (BHs) and neutron stars (NSs) (The LIGO Scientific Collaboration et al., 2021). This number rises to 90 if one considers the population of events with a probability \(>0.5\) of having an astrophysical origin (The LIGO Scientific Collaboration et al., 2021), and it is destined to increase further by the end of the fourth observing run. Measurable quantities like the component masses, spins, and orbital eccentricity, together with the merger rates of different types of compact binary mergers, can represent the keys to identifying the signatures of different formation channels (Arca Sedda & Benacquista, 2019; Arca Sedda et al., 2020; Zevin et al., 2020; Arca Sedda et al., 2023; Bouffanais et al., 2021; Mapelli et al., 2022).
From the theoretical standpoint, there is a plethora of mechanisms proposed to explain the formation of compact binary mergers, like isolated binary evolution (Belczynski et al., 2002; Dominik et al., 2012; Belczynski et al., 2016; Giacobbo et al., 2018; Spera et al., 2019), dynamical pairing in dense star clusters (Miller & Hamilton, 2002; Downing et al., 2010; Rodriguez et al., 2016; Askar et al., 2017; Banerjee, 2018; Di Carlo et al., 2019; Rizzuto et al., 2022), formation in AGN disks (McKernan et al., 2012; Stone et al., 2017; McKernan et al., 2018; Tagawa et al., 2020), secular dynamics involving three compact objects (Antonini et al., 2018; Vigna-Gomez et al., 2021) or a binary orbiting a supermassive black hole (Antonini and Perets, 2012; Hoang et al., 2018; Fragione et al., 2019; Arca Sedda, 2020), and primordial BH evolution (Carr and Hawking, 1974; Carr et al., 2016; Sasaki et al., 2016). The majority of the aforementioned mechanisms relies on the assumption that compact objects are the relics of massive stars, and they therefore suffer from the uncertainties affecting stellar evolution. For example, the insurgence of pair instability supernova (PISN) and pulsational pair instability supernova (PPISN) mechanisms can carve in the BH mass spectrum the so-called upper-mass gap, a region extending over the range \(m_{\rm gap}=40-150\) M\({}_{\odot}\) where no remnants are expected. The boundaries of the gap are highly uncertain and depend on many poorly constrained quantities, like stellar rotation, nuclear reaction rates, and the adopted stellar evolution model (Woosley and Heger, 2021; Vink et al., 2021; Stevenson et al., 2019; Farmer et al., 2019; Costa et al., 2021; Iorio et al., 2022). The presence of several upper mass-gap BH candidates in the LVK source catalogue poses the question of the origin of these BHs. Stellar mergers, star-BH interactions, and repeated BH mergers represent possible pathways to overcome (P)PISN (e.g. Spera et al., 2019; Banerjee, 2022; Costa et al., 2022; Ballone et al., 2022) and produce merging compact objects in dense star clusters (e.g. Rodriguez et al., 2018; Di Carlo et al., 2020; Kremer et al., 2020; Di Carlo et al., 2021; Rizzuto et al., 2021; Arca-Sedda et al., 2021; Rizzuto et al., 2022). Spins could carry crucial information on the BH formation scenario and help place constraints on the evolution of massive stars, but little is known about the distribution of stellar BH natal spins. Observations of merging BHs indicate that the spin distribution follows a Maxwellian distribution, with a peak around \(\chi_{\rm BH}\sim 0.2-0.5\) (The LIGO Scientific Collaboration et al., 2021). However, stellar BHs detected in low-mass X-ray binaries (LMXBs) are characterised by spins broadly distributed over the whole allowed range (Fragos and McClintock, 2015), whilst those in high-mass X-ray binaries (HMXBs) involve BHs that are almost maximally spinning (see e.g. Qin et al., 2019; Reynolds, 2021). Although these differences may be affected by observational biases, they may represent peculiarities of different evolutionary pathways. Efficient angular momentum transport driven by magnetic instabilities could trigger the formation of BHs with natal spins as small as \(\chi_{\rm BH}\lesssim 0.01\), a mechanism proposed for BHs forming from single stars and in binaries with negligible mass transfer among the components (Fuller and Ma, 2019).
Significant mass transfer, instead, has been proposed to produce BHs with spin in a broad range in LMXBs, even for BHs spinless at birth (Fragos and McClintock, 2015), and nearly extremal BHs in HMXBs (Qin et al., 2019; Gallegos-Garcia et al., 2022). Common envelope evolution in massive stellar binaries can lead to merging BBHs consisting of a nearly non-rotating BH (Qin et al., 2018; Bavera et al., 2020), although this strongly depends on the stellar evolution adopted (Belczynski et al., 2020), and a BH companion with a spin spanning the whole allowed range of values (Qin et al., 2018; Bavera et al., 2020; Belczynski et al., 2020). Amplitude aside, the alignment of the spin vectors with each other and with the binary angular momentum can affect the waveform, the final merger remnant mass and spin, and the recoil kick (e.g. see Equation 10). From an "observational" perspective, measuring the spin components is intrinsically hard and their directions generally vary owing to precession, thus the spin of observed mergers can be characterised through the so-called effective spin parameter (Racine, 2008; Santamaria et al., 2010; Ajith et al., 2011) \[\chi_{\rm eff}=\frac{\vec{\chi}_{1}+q\vec{\chi}_{2}}{1+q}\cdot\hat{L}, \tag{1}\] where \(q<1\) is the binary mass ratio, \(\vec{\chi}_{1,2}\) are the two component spin vectors, and \(\hat{L}\) is the unit vector along the binary orbital angular momentum. Observations of BBH mergers suggest that \(\chi_{\rm eff}\) may increase with the binary mass ratio, although some merging binaries exhibit a negative value of \(\chi_{\rm eff}\) (The LIGO Scientific Collaboration et al., 2021), a feature generally associated with dynamical sources. The orbital eccentricity at merger could represent another distinguishing feature of compact binary mergers, as dynamical interactions could trigger the formation of fairly eccentric (\(>0.1\)) sources, contrary to mergers forming from isolated binaries (see e.g. Nishizawa et al., 2016). It has recently been claimed that up to four LVK sources may be eccentric (Romero-Shaw et al., 2019, 2020; Gayathri et al., 2022; Romero-Shaw et al., 2022), although the effects of eccentricity and precession can lead to degeneracies in GW data analysis, making the eccentricity a poorly constrained quantity (see e.g. Romero-Shaw et al., 2023). Alongside GWs, the detection of (P)PISNe can represent a key piece in understanding the final stages of massive stars' lives. So far, only a few, mostly controversial, PISN and PPISN candidates have been observed over the last two decades (Gal-Yam et al., 2009; Gomez et al., 2019; Woosley and Smith, 2022). The rarity of PISN observations sets an intrinsic limit on the frequency of PISNe in star clusters, a quantity poorly constrained in theoretical and numerical models. Dynamical interactions among stars in dense and massive star clusters can trigger both the formation of merging binaries and the development of PISNe, either from single massive stars or from stellar merger products. Young and intermediate-age star clusters are particularly interesting environments where these sources can form, because they are still in their dynamical youth, when cluster mass loss and expansion have not yet substantially affected the cluster structure and the interaction rate among stars is maximal.
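As a minimal illustration of the effective spin parameter in Eq. (1) above, the sketch below computes \(\chi_{\rm eff}\) for spins given as Cartesian vectors in a frame whose z-axis lies along the orbital angular momentum; the function name and the example values are ours.

```python
import numpy as np

def chi_eff(chi1, chi2, q, L_hat=np.array([0.0, 0.0, 1.0])):
    """Effective spin parameter, Eq. (1): the mass-weighted spin sum
    projected onto the orbital angular momentum unit vector L_hat.
    q < 1 is the binary mass ratio."""
    return np.dot((np.asarray(chi1) + q * np.asarray(chi2)) / (1.0 + q), L_hat)

# Example: moderately spinning, partly misaligned components
print(chi_eff([0.0, 0.0, 0.5], [0.3, 0.0, -0.2], q=0.8))  # ~0.19
```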
There is a vast literature investigating the formation and evolution of merging BHs in star clusters via different techniques, e.g. direct \(N\)-body simulations (Banerjee et al., 2010; Downing et al., 2010; Di Carlo et al., 2019; Rastello et al., 2020; Arca Sedda, 2020; Di Carlo et al., 2021; Banerjee, 2018, 2021; Wang et al., 2022; Chattopadhyay et al., 2022; Rizzuto et al., 2022), Monte Carlo simulations (Rodriguez et al., 2016; Askar et al., 2017; Kremer et al., 2019; Rodriguez et al., 2019; Ye et al., 2020; Kremer et al., 2020; Maliszewski et al., 2022), and semi-analytic tools (Fragione and Kocsis, 2018; Antonini et al., 2019; Arca Sedda and Benacquista, 2019; Arca Sedda et al., 2020; Antonini and Gieles, 2020; Gonzalez et al., 2021; Mapelli et al., 2021; Arca Sedda et al., 2023; Mapelli et al., 2022; Antonini et al., 2022; Kritos et al., 2022). However, there is a lack of direct \(N\)-body simulations of particularly dense (\(>10^{5}\) M\({}_{\odot}\) pc\({}^{-3}\)) and massive (\(>100,000\) M\({}_{\odot}\)) star clusters, owing to the computational cost required to simulate such systems. Exploring this range of masses and densities with \(N\)-body models can complement the already existing simulations and offer a point of comparison for Monte Carlo simulations (see Figure 1 in Arca Sedda et al. 2023a, hereafter AS-I). In this work, which represents the third of a series, we present results from the Dragon-II star cluster database, a suite of 19 direct \(N\)-body simulations of young and intermediate-age star clusters comprised of up to 1 million stars, with up to 33% of stars initially in binaries, and characterised by typical densities \(\rho=(1.2\times 10^{4}-1.5\times 10^{7})\) M\({}_{\odot}\) pc\({}^{-3}\). In our previous papers, we focused on the general properties of our cluster models and their compact object populations (paper AS-I) and on the processes that regulate the formation and growth of IMBHs (Arca Sedda et al. 2023b, hereafter AS-II). Here, we dissect the properties of the BH-BH, BH-NS, and BH-WD mergers developing in the Dragon-II clusters (details about these models are discussed in our companion paper AS-I), whose simulations were performed with the Nbody6++GPU code1. The paper is organised as follows: in Section 2 we briefly summarise the main features of our models; Section 3 discusses the main properties of compact binary mergers in our models, focusing on the component masses and mass ratios, the eccentricity at merger, and the possible signatures that can identify their formation history; in Section 4 we explore the impact of BH natal spins on the global properties of the population, and we adopt a cosmologically motivated framework to infer the compact binary merger rate, the detection prospects for future low-frequency GW detectors, and the frequency rate and detection prospects of PISNe in magnitude-limited surveys; Section 5 summarises the main results of this work. Footnote 1: [https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing](https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing) ## 2 Numerical Methods ### 2.1 The Dragon-II clusters The Dragon-II simulation database consists of 19 star cluster models characterised by an initial number of stars \(N=(1.2,\ 3,\ 6,\ 10)\times 10^{5}\), half-mass radius \(R_{\rm HM}=(0.48,\ 0.80,\ 1.76)\) pc, and an initial binary fraction \(f_{b}=0.05-0.2\).
In the following, we briefly summarise the main properties of Dragon-II clusters, referring the interested reader to our companion paper AS-I for more details on the run properties. To initialise the Dragon-II clusters we exploit the McLuster tool (Kupper et al., 2011; Kamlah et al., 2022; Leveque et al., 2022, Leveque in prep.). Each cluster is modelled according to a King (1966) profile with dimensionless potential well \(W_{0}=6\). We adopt an initial metallicity \(Z=0.0005\), typical of several clusters possibly hosting a dense sub-system of compact objects or an IMBH, like NGC3201 or NGC6254 (Askar et al., 2018; Arca Sedda et al., 2018; Weatherford et al., 2020). Star masses are drawn according to a Kroupa (2001) initial mass function limited to the range \(m_{\rm ZAMS}=(0.08-150)\) M\({}_{\odot}\). Stars in primordial binaries are paired depending on their mass, with stars heavier than \(5\) M\({}_{\odot}\) paired according to a flat mass-ratio distribution, and lighter stars paired randomly. Binary eccentricities are distributed according to a thermal distribution, \(P(<e)=e^{2}\), while initial semimajor axes are assigned according to a distribution flat in the logarithm, limited between the sum of the stellar radii and a maximum value of 50 AU. The host galaxy potential is modelled through a Keplerian potential assuming a total mass of \(M_{\rm gal}=1.78\times 10^{11}\) M\({}_{\odot}\). All Dragon-II clusters are placed on a circular orbit around this galaxy model at a distance of \(R_{\rm clu}=13.3\) kpc. The adopted galaxy mass and orbital radius lead to a value of the circular velocity compatible with what is observed in the Milky Way. The resulting tidal radius is much larger than the cluster half-mass radius. Therefore, Dragon-II models initially underfill their Roche lobe, which implies that the initial impact of the host galaxy potential is negligible. All simulations are terminated when either the mean BH mass falls below \(\langle m_{\rm BH}\rangle\lesssim 15\) M\({}_{\odot}\), there are no BHs with a mass above 30 M\({}_{\odot}\), or the simulated time exceeds at least one relaxation time. As a result, the simulation time in Dragon-II models spans the range \(T_{\rm sim}=0.1-2.3\) Gyr, corresponding to \(0.8-80\) times the initial half-mass relaxation time (see also Table 1). Over the simulated time, we find a nice overlap (see also Figure 2 in paper AS-I) between the evolution of the Dragon-II clusters' mass and half-mass radius and the observed properties of young and intermediate-age massive clusters in the Milky Way (Portegies Zwart et al., 2010), the Magellanic Clouds (Gatto et al., 2021), and other galaxies in the local Universe like Henize 2-10 (Nguyen et al., 2014) or M83 (Ryon et al., 2015). In this sense, Dragon-II models can represent one possible evolutionary pathway of (relatively) young massive clusters. ### 2.2 The Nbody6++GPU code Dragon-II simulations have been performed with the Nbody6++GPU code (Wang et al., 2015), a state-of-the-art direct \(N\)-body integrator that runs on high-performance computing hardware equipped with graphics processing units (GPUs, Spurzem, 1999; Nitadori & Aarseth, 2012; Wang et al., 2015). The code is part of the famous NBODY code series that was pioneered almost sixty years ago by Sverre Aarseth (Aarseth et al., 1974; Spurzem, 1999; Aarseth, 1999, 2003; Aarseth et al., 2008; Nitadori & Aarseth, 2012; Wang et al., 2015; Kamlah et al., 2022).
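As an aside, the primordial-binary orbital elements described in Section 2.1 are straightforward to sample. The sketch below is a minimal illustration; the function name and the inverse-transform step for the thermal eccentricity distribution are ours.

```python
import numpy as np

def sample_binary_elements(r1_plus_r2_au, a_max_au=50.0,
                           rng=np.random.default_rng()):
    """Sample (e, a) for a primordial binary as in Section 2.1.
    Eccentricity: thermal distribution, P(<e) = e^2  ->  e = sqrt(U).
    Semimajor axis: flat in log(a) between the sum of the stellar
    radii and 50 AU."""
    e = np.sqrt(rng.uniform())          # inverse transform of P(<e) = e^2
    log_a = rng.uniform(np.log(r1_plus_r2_au), np.log(a_max_au))
    return e, np.exp(log_a)

# Example: a binary whose stellar radii sum to ~0.02 AU
print(sample_binary_elements(0.02))
```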
The code implements a 4th-order Hermite integrator scheme with adaptive time-step based on the Ahmad-Cohen neighbour scheme (Ahmad & Cohen, 1973), and implements a treatment for close encounters and few-body dynamics via the Kustaanheimo-Stiefel regularisation (Stiefel & Kustaanheimo, 1965) and chain regularisation (Mikkola & Aarseth, 1993). Stellar evolution in Nbody6++GPU is based on an upgraded version of the population synthesis code BSE (Hurley et al., 2002). The main features of this state-of-the-art version, named BSE++, are described in detail in Kamlah et al. (2022) (but see also Banerjee et al., 2020). We adopt the so-called level B of stellar evolution (see Kamlah et al., 2022), whose main characteristics are: the delayed supernova (SN) scheme (Fryer et al., 2012), pair-instability and pulsational pair-instability supernovae (PISN and PPISN) treated following Belczynski et al. (2016) and Banerjee (2021), the fallback prescription for NS/BH natal kicks, and metallicity-dependent winds for massive stars (Vink et al., 2001; Belczynski et al., 2010). We refer the reader to Kamlah et al. (2022) and paper AS-I for further details. The common envelope phase in binaries is modelled through the widely known \(\alpha_{\rm CE}-\lambda_{\rm CE}\) scheme, which enables us to regulate the fraction of orbital energy injected into the envelope (\(\alpha_{\rm CE}\)) and to scale the binding energy of the envelope by a factor \(\lambda_{\rm CE}\). In this work, we adopt \(\alpha_{\rm CE}=3\) and \(\lambda_{\rm CE}=0.5\) (Giacobbo & Mapelli, 2018; Kamlah et al., 2022). The adopted stellar evolution recipes imply that the stellar BH mass spectrum in Dragon-II clusters is limited to \(m_{\rm BH,max}=40.5\) M\({}_{\odot}\) (Belczynski et al., 2016), unless BHs form from stellar mergers or star-BH interactions. In the latter case, Nbody6++GPU parametrises the amount of mass accreted in a strong star-BH interaction or collision via an accretion parameter \(f_{c}\) (Rizzuto et al., 2021, 2022), which we set to \(f_{c}=0.5\). We refer the reader to Rizzuto et al. (2021, 2022) for a discussion of the impact of \(f_{c}\) on BH evolution. #### 2.2.1 Modelling the final stages of compact object binary mergers The dynamics of relativistic binaries is followed via the orbit-averaged formalism (Peters, 1964), which enables us to follow the evolution of compact binaries and their coalescence inside the cluster, similarly to previous works (see e.g. Di Carlo et al., 2019, 2020, 2021; Rizzuto et al., 2021, 2022; Rastello et al., 2021; Torniamenti et al., 2022). In its current implementation, Nbody6++GPU follows the dynamics of relativistic binaries even if they are part of triples (see e.g. Rizzuto et al., 2021) or multiple systems, as well as if they form via hyperbolic interactions. However, the BBH evolution is not followed down to the merger; rather, the binary is decoupled from dynamics and promptly merged when the BBH pericentre falls below a critical value, which we set to \(10^{2}\) Schwarzschild radii, i.e. \(a_{\rm dec}=2kGm_{\rm bin}/c^{2}=kR_{\rm Sch}\) with \(k=100\). Adopting such a limiting separation ensures that the binary is unlikely to undergo any further interaction with surrounding stars before merging.
Considering the range of binary masses (\(1-300\) M\({}_{\odot}\)), star cluster masses (\(<10^{6}\) M\({}_{\odot}\)), and half-mass radii (\(0.1-3\) pc) explored in this work, it is easy to show that the binary-single interaction timescale \(t_{2-1}=(n\sigma\Sigma)^{-1}\) - with \(n\) the cluster density, \(\sigma\) the cluster velocity dispersion, and \(\Sigma\) the binary cross section - is generally \(>10^{8}\) times larger than the binary inspiral timescale, \(t_{\rm insp}\propto a^{4}/(m_{1}m_{2}m_{\rm bin})\) (Peters & Mathews, 1963). Moreover, the typical merger time for a binary with mass \(m_{\rm bin}<200\) M\({}_{\odot}\) and separation \(a_{\rm dec}\) is generally \(t_{\rm insp}<100\) yr, i.e. much smaller than the cluster crossing time, \(t_{\rm cross}\sim 10^{5}\) yr. Therefore, our procedure ensures reliable results while reducing the computational effort required to simulate the evolution of a binary with an orbital period of minutes or hours. The pre-merger stages of the merging binary orbits are reconstructed by retrieving the orbital parameters at decoupling and integrating the orbit via the Peters (1964) equations: \[\frac{{\rm d}a}{{\rm d}t} = -\frac{64}{5}\beta(m_{1},m_{2})\frac{F(e)}{a^{3}}, \tag{2}\] \[\frac{{\rm d}e}{{\rm d}t} = -\frac{304}{15}\beta(m_{1},m_{2})\frac{eG(e)}{a^{4}}, \tag{3}\] with \[F(e) = (1-e^{2})^{-7/2}\left(1+\frac{73}{24}e^{2}+\frac{37}{96}e^{4}\right); \tag{4}\] \[\beta(m_{1},m_{2}) = (G^{3}/c^{5})m_{1}m_{2}(m_{1}+m_{2});\] (5) \[G(e) = (1-e^{2})^{-5/2}\left(1+\frac{121}{304}e^{2}\right). \tag{6}\] Along with the orbital evolution, we calculate the associated GW strain and frequency (see e.g. Peters & Mathews, 1963; Kocsis et al., 2012; Arca Sedda et al., 2021).
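A minimal sketch of how Eqs. (2)-(3) can be integrated numerically, starting from the decoupling separation \(a_{\rm dec}=100\,R_{\rm Sch}\) defined in Section 2.2.1; the crude adaptive-step scheme and all names are our illustrative choices, not the implementation used in Nbody6++GPU.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8                      # SI units
MSUN = 1.989e30

def beta(m1, m2):
    """Eq. (5)."""
    return (G**3 / c**5) * m1 * m2 * (m1 + m2)

def F(e):
    """Eq. (4)."""
    return (1 - e**2) ** -3.5 * (1 + 73 / 24 * e**2 + 37 / 96 * e**4)

def G_ecc(e):
    """Eq. (6)."""
    return (1 - e**2) ** -2.5 * (1 + 121 / 304 * e**2)

def peters_inspiral(m1, m2, a0, e0, max_steps=10_000_000):
    """Integrate Eqs. (2)-(3) with an adaptive Euler scheme,
    stopping once the separation has shrunk by a factor of 1000."""
    a, e, t = a0, e0, 0.0
    for _ in range(max_steps):
        if a <= a0 / 1e3:
            break
        da = -64 / 5 * beta(m1, m2) * F(e) / a**3
        de = -304 / 15 * beta(m1, m2) * e * G_ecc(e) / a**4
        dt = 1e-3 * a / abs(da)                # shrink a by ~0.1% per step
        a, e, t = a + da * dt, max(e + de * dt, 0.0), t + dt
    return a, e, t

# Example: a 30+30 Msun BBH decoupled at a_dec = 100 Schwarzschild radii
m1 = m2 = 30 * MSUN
a_dec = 100 * 2 * G * (m1 + m2) / c**2         # a_dec = k R_Sch, k = 100
a, e, t = peters_inspiral(m1, m2, a_dec, e0=0.9)
print(f"residual e = {e:.3f}, inspiral time ~ {t / 3.156e7:.2e} yr")
```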
Natal spins of stellar BHs can be assigned according to different distributions, three of which are based on physical stellar models, namely the "Geneva", "MESA", and "Fuller" models (see e.g. Kamlah et al., 2022), while four are rather generic, namely zero spins, a uniform spin distribution, a Gaussian spin distribution with mean value \(\chi=0.5\) and dispersion \(\sigma_{\chi}=0.2\), and a Maxwellian distribution with dispersion \(\sigma_{\chi}=0.2\). In this work, whenever spins are taken into account during the simulation we assume a Gaussian distribution with \(\chi=0.5\) for stellar BHs, whilst for IMBHs we decide on a case-by-case basis, depending on the IMBH formation scenario (see paper AS-II). Compact binary merger products are assigned a final mass and spin calculated via numerical relativity fitting formulae (Jimenez-Forteza et al., 2017; Arca Sedda et al., 2020) and a relativistic recoil, generated by asymmetric GW emission (Campanelli et al., 2007; Lousto & Zlochower, 2008; Lousto et al., 2012), expressed via the following relations: \[\vec{v}_{\rm GW}= v_{m}\hat{e}_{\perp,1}+v_{\perp}(\cos\xi\,\hat{e}_{\perp,1}+\sin\xi\,\hat{e}_{\perp,2})+v_{\parallel}\hat{e}_{\parallel}, \tag{7}\] \[v_{m}= A\eta^{2}\sqrt{1-4\eta}(1+B\eta), \tag{8}\] \[v_{\perp}= \frac{H\eta^{2}}{1+q_{\rm BBH}}\left(S_{2,\parallel}-q_{\rm BBH}S_{1,\parallel}\right), \tag{9}\] \[v_{\parallel}= \frac{16\eta^{2}}{1+q_{\rm BBH}}\left[V_{11}+V_{A}\Xi_{\parallel}+V_{B}\Xi_{\parallel}^{2}+V_{C}\Xi_{\parallel}^{3}\right]\left|\vec{S}_{2,\perp}-q_{\rm BBH}\vec{S}_{1,\perp}\right|\cos(\phi_{\Delta}-\phi_{1}). \tag{10}\] Here, \(\eta\equiv q_{\rm BBH}/(1+q_{\rm BBH})^{2}\) is the symmetric mass ratio, \(\vec{\Xi}\equiv 2(\vec{S}_{2}+q_{\rm BBH}^{3}\vec{S}_{1})/(1+q_{\rm BBH})^{2}\), and the subscripts \(\perp\) and \(\parallel\) mark the perpendicular and parallel components of the BH spin vectors (\(\vec{S}\)) with respect to the direction of the binary angular momentum. We assume \(A=1.2\times 10^{4}\) km s\({}^{-1}\), \(B=-0.93\), \(H=6.9\times 10^{3}\) km s\({}^{-1}\), \(\xi=145^{\circ}\) (Gonzalez et al., 2007; Lousto & Zlochower, 2008), \(V_{11}=3677.76\) km s\({}^{-1}\), and \(V_{A,B,C}=(2.481,1.793,1.507)\times 10^{3}\) km s\({}^{-1}\). The vector \(\vec{\Delta}\) is defined as \(\vec{\Delta}\equiv(M_{a}+M_{b})^{2}(\vec{S}_{b}-q_{\rm BBH}\vec{S}_{a})/(1+q_{\rm BBH})\). The angle between the direction of the infall at merger and the in-plane component of \(\vec{\Delta}\), i.e. \(\phi_{\Delta}\), is drawn from a uniform distribution, while the binary phase \(\phi_{1}\) is drawn uniformly in \([0,2\pi]\). In Nbody6++GPU, the user can decide to set the GW recoil to zero or to a fixed value, or to calculate it self-consistently via Eqs. 7-10, in which case the kick is assigned to the remnant and the resulting energy correction is included in a similar way as is done for natal BH kicks. As described in detail in paper AS-II, in this paper series we adopt a simplified approach to investigate the impact of GW recoil in the simulations, owing to the fact that the relatively small sample of mergers does not enable us to filter out the inevitable stochastic effect of the BH spin directions and amplitudes on the kick amplitude. The approach consists of three steps. First, we run all simulations assuming no GW recoil. Second, for each merger event in each simulation we evaluate the GW recoil assuming different distributions for the BH natal spins and determine whether the remnant is likely to be retained in the cluster. Third, if a BH undergoes \(n\) mergers in a simulation with zero GW kick, we restart the simulation shortly before the \(n\)-th merger event and enable GW kicks, assuming a spin for the merging components that depends on the BH formation history. This enables us to verify whether the remnant can be retained in the cluster and eventually merge again in an \((n+1)\)-th merger generation. In paper AS-II, we have shown that this approach permits us to highlight the fact that even when GW kicks are not taken into account, Newtonian dynamics is sufficient to eject all BH merger remnants from the parent cluster via strong binary-single encounters. We note that none of the merger remnants with component masses \(<100\) M\({}_{\odot}\) undergoes multiple mergers. This suggests that even adopting zero GW recoil may have a negligible impact on the formation of compact binary mergers with mass \(<100\) M\({}_{\odot}\).
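The GW-recoil prescription of Eqs. (7)-(10) is easy to evaluate directly. Below is a minimal sketch that implements the equations exactly as written above, assuming the spins are given in a frame whose z-axis is the orbital angular momentum (z-component = parallel, xy = in-plane); all names and the example values are ours.

```python
import numpy as np

# Fitting constants quoted in the text (km/s where dimensional)
A, B, H = 1.2e4, -0.93, 6.9e3
XI = np.radians(145.0)
V11, VA, VB, VC = 3677.76, 2.481e3, 1.793e3, 1.507e3

def gw_recoil_kick(q, S1, S2, phi_delta, phi_1):
    """GW recoil magnitude (km/s) following Eqs. (7)-(10) as written."""
    S1, S2 = np.asarray(S1), np.asarray(S2)
    eta = q / (1.0 + q) ** 2                          # symmetric mass ratio
    v_m = A * eta**2 * np.sqrt(1.0 - 4.0 * eta) * (1.0 + B * eta)
    v_perp = H * eta**2 / (1.0 + q) * (S2[2] - q * S1[2])
    xi_par = 2.0 * (S2[2] + q**3 * S1[2]) / (1.0 + q) ** 2   # Xi_parallel
    s_perp = np.linalg.norm(S2[:2] - q * S1[:2])             # |S2_perp - q S1_perp|
    v_par = (16.0 * eta**2 / (1.0 + q)
             * (V11 + VA * xi_par + VB * xi_par**2 + VC * xi_par**3)
             * s_perp * np.cos(phi_delta - phi_1))
    # Combine the three components of Eq. (7) into a magnitude
    return np.sqrt((v_m + v_perp * np.cos(XI)) ** 2
                   + (v_perp * np.sin(XI)) ** 2 + v_par**2)

# Example: q = 0.8 with misaligned spins and random phases
rng = np.random.default_rng(1)
print(gw_recoil_kick(0.8, [0.1, 0.0, 0.3], [0.0, 0.2, -0.4],
                     rng.uniform(0, 2 * np.pi), rng.uniform(0, 2 * np.pi)))
```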
## 3 Results ### 3.1 The population of black hole binary mergers in Dragon-II clusters In this section we describe the main results of our simulations, focusing on the population of compact binary mergers. Table 1 summarises the main properties of Dragon-II clusters and their compact objects. The population of BHs formed in Dragon-II models and, in general, in star clusters likely suffers the effects of both single and binary stellar evolution and of stellar dynamics. To highlight this aspect, we show in Figure 1, for the models with \(R_{\rm HM}=0.8\) pc and \(N=300\)k, the so-called initial-to-final mass relation (IFMR) that links the masses of compact objects and their stellar progenitors. Figure 1: Zero-age main-sequence mass and final mass of stellar BH progenitors. We dissect the population into progenitors that are initially single (top panel, red circles) and those that are in a primordial binary (bottom panel, light blue triangles). The plot is dissected into BHs with a progenitor initially single or in a primordial binary system. The population of BHs forming from single stars generally follows the expectations of the adopted stellar evolution recipes (e.g. see Figure B1 in Kamlah et al., 2022). Deviations from the general trend owe to initially single stars that were captured into a pair and underwent mass transfer. The IFMR of BHs formed from stars in primordial binaries is more complex, being characterised, for example, by BHs in the upper mass-gap with masses in the range \(40.5-80\) M\({}_{\odot}\). This highlights the crucial role of binary stellar evolution and dynamics in sculpting the population of BHs in star clusters (e.g., see also Figure 2 in Di Carlo et al., 2019). #### 3.1.1 Component masses and formation channels The population of compact binary mergers in Dragon-II consists of 75 BH-BH, 2 NS-BH, and 1 WD-BH mergers. Among BH-BH mergers, 45 involve two BHs below the PPISN maximum mass (\(m_{\rm BH}<40.5\) M\({}_{\odot}\)), 12 involve two mass-gap BHs, and 21 involve one BH below the gap and a mass-gap BH. Six BH-BH mergers involve a primary with a mass \(m_{\rm BH,1}=(5.4-7.1)\) M\({}_{\odot}\) and a companion with mass \(m_{\rm BH,2}=(2.55-3.6)\) M\({}_{\odot}\), i.e. just above the threshold separating NSs and BHs in our models. All these low-mass mergers are in primordial binaries. As discussed in paper AS-II, the BHs in the upper-mass gap mostly form in a star-BH accretion event, either by purely dynamical interactions or by stellar evolution. We stress that, throughout our models, we assume that a fraction \(f_{c}=0.5\) of the star's mass is accreted onto the BH during an accretion event (for a discussion of the impact of \(f_{c}\) on simulations, see Rizzuto et al., 2021). When GW recoils are "switched off", 4 mergers involve a second- or third-generation BH, i.e. one that underwent one or two previous mergers. The inclusion of GW recoil reduces the number of total mergers to 74. For a detailed discussion of the impact of GW recoil, see paper AS-II. Figure 2 shows the component masses and mass ratios of Dragon-II mergers and of mergers observed during the first three LVK observation campaigns, collected in the so-called GWTC-3 catalogue (The LIGO Scientific Collaboration et al., 2021). The plot includes all mergers occurring inside the cluster or outside the cluster after being ejected via dynamical interactions, considering zero GW kicks. This plot illustrates the wealth of information hidden in the Dragon-II star clusters: we find mergers in the upper-mass gap, IMBHs2, repeated mergers, and in a handful of cases also BHs merging with either a NS or a WD. Interestingly, we find that mergers occurring inside the cluster are characterised by a primary with mass \(m_{\rm BH,1}>30\) M\({}_{\odot}\) and a companion with a mass in the range \(m_{\rm BH,2}=(20-50)\) M\({}_{\odot}\).
Conversely, mergers occurring outside the cluster -- or ejected mergers -- are characterised by a mass ratio \(q>0.6\) and a primary mass typically \(m_{\rm BH,1}<40\) M\({}_{\odot}\). Footnote 2: In this work we set a mass threshold of \(M_{\rm IMBH,min}=100\) M\({}_{\odot}\) to discern between BHs and IMBHs. The number of mergers occurring inside the cluster (31) is comparable to that of binaries that merge after being ejected from the cluster (47), thus suggesting that in-cluster mergers can make up \(\sim\)40% of the total merger population. Among all of them, 27 are from primordial binaries (3 inside, 24 ejected), whilst 51 (28 inside, 23 ejected) are from dynamical binaries. Figure 3 shows the primary and companion masses of mergers originating from primordial, dynamical, or mixed binaries, with the latter identifying binary mergers in which at least one component was originally in a primordial binary. The plot exhibits some interesting features: 1) mergers from primordial binaries tend to have nearly equal-mass components; 2) purely dynamical mergers have masses that occupy a tight region of the plane, with \(m_{\rm BH,1}=(20-50)\) M\({}_{\odot}\) and \(m_{\rm BH,2}=(20-40)\) M\({}_{\odot}\); 3) mergers with one component previously in a primordial binary are characterised by a heavy primary, \(m_{\rm BH,1}>40\) M\({}_{\odot}\), and a heavy companion, \(m_{\rm BH,2}>20\) M\({}_{\odot}\). A similar trend is observed in recent \(N\)-body simulations tailored to relatively light star clusters, i.e. with mass \(<8,000\) M\({}_{\odot}\) (Torniamenti et al., 2022). As deeply discussed in paper AS-II, the crucial role of primordial binary dynamics is highlighted by the fact that all the IMBHs in Dragon-II clusters but one have an ancestor that was a member of a primordial binary, regardless of the IMBH formation scenario. Dynamics and binary stellar evolution also deeply impact the properties of stellar-size mergers. For example, "dynamical" and "primordial" mergers occupy two well-separated regions of the primary mass - mass ratio plane. The vast majority of primordial binary mergers occupy a region delimited by \(q>0.6\) and \(m_{1}=(5-40)\) M\({}_{\odot}\), with the mass ratio weakly increasing with the primary mass: note that for \(m_{1}\lesssim 15\) M\({}_{\odot}\) mergers have mass ratio \(q=0.6-1\), whilst mergers with a heavier primary have mass ratio \(q>0.85\).
\begin{table} \begin{tabular}{c c c c c c c|c c|c c|c c|c c|c c|c c|c c} \hline \hline \(N_{\star}\) & \(M_{\rm c}\) & \(R_{h}\) & \(f_{b}\) & \(N_{\rm sim}\) & \(T_{\rm rlx}\) & \(T_{\rm seg}\) & \multicolumn{2}{c|}{\(T_{\rm sim}\)} & \multicolumn{2}{c|}{\(N_{\rm GW,in}\)} & \multicolumn{2}{c|}{\(N_{\rm GW,out}\)} & \multicolumn{2}{c|}{\(M_{\rm max}\)} & \multicolumn{2}{c|}{\(M_{\rm max,fin}\)} & \multicolumn{2}{c|}{\(N_{>30}\)} & \multicolumn{2}{c}{\(N_{>40}\)} \\ \(10^{3}\) & \(10^{5}\) M\({}_{\odot}\) & pc & & & Myr & Myr & \multicolumn{2}{c|}{Myr} & & & & & \multicolumn{2}{c|}{M\({}_{\odot}\)} & \multicolumn{2}{c|}{M\({}_{\odot}\)} & & & & \\ \hline 120 & 0.7 & 1.75 & 0.05 & 2 & 99 & 2.1 & 2379 & 2326 & 0 & 2 & 2 & 0 & 64 & 76 & 25 & 34 & 0 & 2 & 0 & 0 \\ 300 & 1.8 & 1.75 & 0.05 & 2 & 142 & 2.7 & 1196 & 1422 & 0 & 2 & 2 & 2 & 69 & 77 & 40 & 40 & 13 & 13 & 5 & 1 \\ 1000 & 5.9 & 1.75 & 0.05 & 2 & 233 & 3.4 & 207 & 194 & 1 & 1 & 4 & 4 & 81 & 146 & 52 & 70 & 149 & 169 & 72 & 85 \\ 120 & 0.7 & 1.75 & 0.2 & 2 & 99 & 2.1 & 1710 & 1540 & 2 & 2 & 0 & 2 & 232 & 81 & 38 & 28 & 2 & 0 & 0 & 0 \\ 300 & 1.7 & 1.75 & 0.2 & 2 & 142 & 2.7 & 519 & 793 & 1 & 0 & 7 & 5 & 92 & 77 & 65 & 47 & 26 & 26 & 8 & 14 \\ 600 & 3.5 & 1.75 & 0.2 & 2 & 189 & 3.4 & 205 & 126 & 0 & 0 & 2 & 5 & 87 & 144 & 59 & 84 & 95 & 103 & 45 & 65 \\ 120 & 0.7 & 0.80 & 0.2 & 2 & 30 & 0.7 & 1154 & 1201 & 4 & 3 & 4 & 2 & 120 & 132 & 21 & 27 & 0 & 0 & 0 & 0 \\ 300 & 1.7 & 0.80 & 0.2 & 2 & 44 & 0.8 & 307 & 309 & 1 & 0 & 1 & 0 & 93 & 107 & 40 & 43 & 15 & 11 & 2 & 2 \\ 120 & 0.7 & 0.47 & 0.2 & 2 & 14 & 0.3 & 1149 & 530 & 2 & 2 & 3 & 1 & 350 & 92 & 50 & 30 & 1 & 0 & 1 & 0 \\ 300 & 1.7 & 0.47 & 0.2 & 1 & 20 & 0.4 & 148 & - & 4 & - & 3 & - & 245 & - & 48 & - & 22 & - & 9 & - \\ \hline \end{tabular} \end{table} Table 1: Col. 1-4: initial number of stars, cluster mass and half-mass radius, primordial binary fraction. Col. 5: number of independent realisations. Col. 6-7: initial half-mass relaxation and segregation times. Col. 8: simulated time. Col. 9-10: number of compact object mergers inside and outside the cluster. Col. 11: maximum BH mass during the simulation. Col. 12: maximum BH mass at the end of the simulation. Col. 13-14: number of BHs with a mass \(m_{\rm BH}>30\) M\({}_{\odot}\) or \(>40\) M\({}_{\odot}\) at the last simulation snapshot. For columns 8-14, the two sub-columns refer to the two independent realisations of each model (a dash marks the model with a single realisation). Figure 2: Primary mass (x-axis) and mass ratio (y-axis) of BH mergers in Dragon-II simulations, occurring inside the cluster (points) or after dynamical ejection (diamonds). The colour map identifies the mass of the secondary. The dotted lines correspond to a companion mass of \(m_{\rm BH,2}=3,\ 5,\ 10,\ 30,\ 50,\ 100\) M\({}_{\odot}\). Mergers with a primary mass on the right side of the black dashed line produce an IMBH. The red shaded area roughly identifies the upper-mass gap region. The grey area identifies mergers containing a BH and another type of compact object: either a BH in the putative lower mass-gap, a NS, or a WD. Shaded grey points represent observed BH mergers from the GWTC-3 catalogue (The LIGO Scientific Collaboration et al., 2021). Dynamical mergers, instead, form in the right-hand side of Figure 3, generally at \(m_{1}>40.5\;\mathrm{M}_{\odot}\). In this case, the mass ratio decreases with the primary mass, as expected from the mass function limit, with companion masses in the range \(m_{2}=(30-50)\;\mathrm{M}_{\odot}\).
We can identify three relatively well-separated regions: low BH masses (\(m_{\mathrm{BH,1}}<15\;\mathrm{M}_{\odot}\)) and widely distributed mass ratios (\(q=0.6-1\)), dominated by primordial binary mergers; BH masses in the range \(m_{\mathrm{BH,1}}=(15-40.5)\;\mathrm{M}_{\odot}\) and high mass ratios (\(q>0.9\)), dominated by primordial binary mergers; and heavy BH primaries (\(m_{\mathrm{BH,1}}>40.5\;\mathrm{M}_{\odot}\)) with relatively massive companions (\(m_{2}=30-50\;\mathrm{M}_{\odot}\)), dominated by dynamical mergers. In Dragon-II clusters, most binaries merging outside the cluster originate from primordial binaries, and their ejection is typically triggered by the BH natal kick. However, all ejected mergers with component masses \(m_{1,2}>30\;\mathrm{M}_{\odot}\) have a dynamical origin, owing to the adopted stellar evolution recipes. We note that, given the limited simulation time, the population of mergers in Dragon-II clusters may lack some mergers that would form later in the cluster's life, beyond several relaxation times. These late mergers would unavoidably have a dynamical origin, or at most a "mixed" origin, because over the simulated time all BHs formed in primordial binaries in Dragon-II clusters either undergo a binary exchange or are ejected. Moreover, late mergers will likely have smaller masses compared to those shown in Figure 3. This is mostly due to the BH-burning process, by which the average BH mass decreases over time (see e.g. paper AS-I; Rodriguez et al., 2015; Chatterjee et al., 2017). As a consequence, some BH mergers forming at late times may have properties similar to the primordial binary mergers shown in Figure 3. Figure 4 shows the mass distribution of the primary BH in Dragon-II mergers, dissected into in-cluster/ejected mergers and primordial/dynamical ones. Ejected binaries dominate the \(m_{\mathrm{BH,1}}\lesssim 20\;\mathrm{M}_{\odot}\) mass range, whilst at larger primary masses their number and distribution are similar to those of in-cluster mergers. Dynamical mergers completely dominate the population of mergers with \(m_{\mathrm{BH,1}}>20\;\mathrm{M}_{\odot}\), while primordial mergers dominate the population of lighter mergers. Notably, the primary mass distribution of Dragon-II mergers nicely overlaps with that of the sample of mergers in the GWTC-3 catalogue, i.e. the catalogue of BBH mergers detected by the LVK collaboration (The LIGO Scientific Collaboration et al., 2021). However, a thorough comparison between modelled and observed mergers would require taking into account observational biases (see e.g. Fishbach & Holz, 2017; Arca Sedda et al., 2020, 2023). For this reason, we also overlay on our data the cosmic BH mass distribution inferred from GW detections. Comparing models and observations can be crucial to assess the impact of different formation channels on the population of BH-BH mergers (see e.g. Arca Sedda & Benacquista, 2019; Arca Sedda et al., 2020; Bavera et al., 2020; Zevin et al., 2021; Arca Sedda et al., 2023; Mapelli et al., 2022). Our Dragon-II models suggest, for example, that BH mergers developing in star clusters could include a substantial number of mergers from primordial binaries. The progenitor binary could, in some cases, suffer the impact of dynamical interactions, which may alter its orbital parameters.
Nonetheless, in most cases BH mergers from primordial binaries could represent "isolated binary merger impostors", because they have properties typical of merging binaries developing within the isolated formation scenario but form in a dynamical environment. Taking into account the impact of these sources, which have a sort of mixed formation channel, is crucial to correctly quantify the role of different formation channels in determining the shape of the mass distribution of detected merging BHs (see also Arca Sedda et al., 2023). Moreover, Dragon-II models highlight the role of dynamics in determining the formation of BH mergers with masses inside, and beyond, the mass gap, supporting and complementing previous works on the topic based either on smaller, or lower-density, \(N\)-body cluster models and on Monte Carlo simulations (e.g. Kremer et al., 2020; Di Carlo et al., 2020; Gonzalez et al., 2021; Rizzuto et al., 2022; Banerjee, 2022). #### 3.1.2 Delay times The delay time of Dragon-II mergers (\(t_{\mathrm{GW}}\)), defined as the time elapsed from the beginning of the simulation to the binary merger, is rather peculiar. As shown in Figure 5, it exhibits three peaks at \(t_{\mathrm{GW}}\simeq(0.5,\;1.5,\;10)\) Gyr. However, when the delay time is normalised to the initial half-mass relaxation time (\(t_{\mathrm{rlx}}\)) of the cluster, the \(t_{\mathrm{GW}}\) values nicely distribute around a main peak located at \(t_{\mathrm{GW}}/t_{\mathrm{rlx}}\simeq 8-30\). The exact location of the peak depends on the definition of \(t_{\mathrm{rlx}}\). For the sake of clarity, in the plot we use three different expressions of \(t_{\mathrm{rlx}}\), taken from Gatto et al. (2021) (GR21), Antonini & Rasio (2016) (AR16), or Rizzuto et al. (2021) (RN21). Figure 3: Masses of the primary and companion merging BHs from primordial binaries (yellow diamonds), dynamical binaries (red circles), and binaries with at least one component being a former primordial binary member (blue squares). The three peaks that appear in the \(t_{\mathrm{GW}}\) distribution find a clear explanation looking at the \(t_{\mathrm{GW}}/t_{\mathrm{rlx}}\) distribution. In fact, the first peak at \(t_{\mathrm{GW}}=500\;\mathrm{Myr}\) corresponds to mergers happening in simulations with \(t_{\mathrm{rlx}}=50-100\;\mathrm{Myr}\), whilst the second peak corresponds to mergers occurring in clusters with a longer relaxation time (see Table 1). This interesting feature suggests, on the one hand, that the delay time depends intrinsically on the cluster's initial properties, as they determine the relaxation time, and, on the other hand, that dynamical processes operate in a similar way over a relaxation time regardless of the cluster structure. The third peak, instead, corresponds to ejected binaries that merge outside the cluster, which are mostly products of primordial binaries ejected via the SN explosion during the formation of one of the BHs in the pair. #### 3.1.3 Eccentricities One intriguing question that has arisen since the first detection of GWs is whether it is possible to untangle different formation channels in the observed population of BH mergers. Among all the parameters at play, the orbital eccentricity could represent the key to answering this question.
Broadly speaking, in fact, most BH mergers forming via binary stellar evolution are expected to feature a negligible eccentricity close to merger, either because the BBH progenitor undergoes common envelope evolution, which shrinks and circularises the orbit, or because the BBH separation is initially so large that GW emission circularises the orbit before the merger. Binaries developing in star clusters, instead, can form with high eccentricity and sufficiently small separation that the merger occurs on a timescale shorter than the GW circularisation timescale. Figure 4: Mass distribution of primary BH mergers from the GWTC-3 catalogue (The LIGO Scientific Collaboration et al., 2021) (orange straight line) and from Dragon-II simulations (filled light blue steps). The BH mass distribution inferred from LVK data is overlaid on the histograms (black line), with the shaded area encompassing the 90% credible level. Top panel: simulated mergers are dissected into those occurring inside the cluster (dotted black line) and after dynamical ejection (dashed black line). Bottom panel: simulated mergers are dissected into those forming from primordial binaries (dotted black line) or via dynamical interactions (dashed black line). Figure 5: Top panel: delay time distribution for Dragon-II mergers. Bottom panel: as in the top panel, but with the time normalised to the initial cluster relaxation time calculated following Gatto et al. (2021) (red filled steps), Binney and Tremaine (2008) (red dashed steps), or Spitzer (1987) (red dotted steps). At the lowest-order level, binaries merging in galactic fields, often called isolated binary mergers, are expected to be circular GW sources, whilst at least some of those developing in star clusters and galactic nuclei, named dynamical mergers, can preserve a significant eccentricity (i.e. \(e>0.1\)) when entering the typical frequency bands of GW detectors. This simplistic division between isolated and dynamical binaries does not take into account several layers of complication. For example, it is well known that star clusters and stellar nurseries may contain a large fraction of binaries, especially among the population of massive stars, where the percentage of paired stars attains values as large as \(50-100\%\) (Sana et al., 2012; Moe & Di Stefano, 2017). If primordial binaries evolve on a timescale shorter than the typical timescale of dynamical interactions, star clusters could harbour a subpopulation of compact binary mergers with properties rather similar to those forming in galactic fields, e.g. low eccentricities or peculiar component masses and mass ratios. With up to \(33\%\) of stars initially paired, Dragon-II simulations offer us the possibility to search for differences between mergers forming entirely via dynamics and those forming from the evolution of primordial binaries. Figure 6 shows the semimajor axis and eccentricity of all BH-BH mergers in Dragon-II clusters calculated at the moment of decoupling, i.e. when GW emission starts dominating over dynamical perturbations. The plot dissects the Dragon-II population of BH mergers into those coming from the evolution of primordial binaries, those assembled purely via dynamical interactions, and those involving at least one component that was formerly a member of a primordial binary. The populations of dynamical and mixed binaries seem to follow two different sequences, although the low statistics make it hard to establish whether these sequences actually exist. The population of nearly circular primordial binaries is evident.
These mergers can be considered mimickers of the field merger population, and constitute \(33\%\) of the whole population of Dragon-II mergers. Only two of the primordial binaries exhibit a significant eccentricity and a relatively small separation. The first is a NS-BH binary; we postpone a discussion of this specific source to the next subsection. The second one involves two low-mass BHs, with component masses \(m_{\rm BH,1,2}=(7.1+2.55)\) M\({}_{\odot}\) and eccentricity \(e=0.997\). The progenitor of this merger was a binary that first underwent a common envelope phase, after which the first BH formed, and later underwent Roche-lobe overflow, at the end of which the second BH also formed and received a small kick (\(\sim 3\) km/s) that triggered the increase in eccentricity. As the binary shrinks and circularises because of GW emission, its frequency will increase. Therefore, a first step to determine whether a binary merger can appear eccentric in the sensitivity band of a specific GW detector is to compare the binary eccentricity with the corresponding GW frequency. We show in Figure 7 the characteristic strain - frequency evolution for all mergers in our sample, assuming that they are located at a redshift \(z=0.05\), i.e. at a luminosity distance of \(230\) Mpc. To calculate the GW strain of Dragon-II sources we follow the formalism laid out in Peters & Mathews (1963) and Peters (1964) and the implementation described in Arca Sedda et al. (2021) (see Eqs. 30-39). The GW signal from simulated mergers is overlaid on the sensitivity curves of current and future ground-based and space-based detectors like LIGO (Acernese et al., 2015), the Einstein Telescope (ET, Punturo et al., 2010; Branchesi et al., 2023), DECIGO (Kawamura et al., 2011; Isoyama et al., 2018; Kawamura et al., 2021), and LISA (Amaro-Seoane et al., 2013, 2022). The plot highlights how the eccentricity drops as the binary sweeps across different frequency bands. The top panel in Figure 8 shows the fraction of mergers with eccentricity above a given threshold, calculated when Dragon-II mergers sweep through five frequency bands centred at \(f_{\rm band}=10^{-3},\ 10^{-2},\ 10^{-1},\ 1,\ 10\) Hz, i.e. the typical sensitivity bands of space-borne detectors like LISA (\(<10^{-2}\) Hz), mid-frequency detectors like DECIGO (\(10^{-2}-1\) Hz), and ground-based detectors like LIGO-Virgo-Kagra or the Einstein Telescope (\(>1\) Hz). The plot highlights the fact that around \(20\%\), \(40\%\), and \(5\%\) of all mergers appear eccentric, i.e. \(e>0.1\), while moving through the \(f=10^{-3}\), \(10^{-1}\), and \(10\) Hz frequency bands, respectively. Clearly, the detectability of these mergers depends on many parameters, among which are the location of the merger and the detector properties. Nonetheless, the plot makes apparent the importance of future deci-Hz detectors in placing constraints on the population of eccentric BBH mergers. Moreover, comparing models with future observations will help to quantify the impact of star cluster dynamics on the cosmic population of merging BHs. Notably, the eccentricity carries information about the formation history of the merger. For example, we find that all mergers with an eccentricity \(e>0.1\) in both the \(0.05-1\) Hz and \(1-10\) Hz frequency bands occur inside the cluster.
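The discussion above compares eccentricities with the corresponding GW frequencies. One common way to map an orbit onto a detector band, not spelled out in the text, is the Wen (2003) peak-frequency approximation; below is a minimal sketch under that assumption, with all names ours.

```python
import numpy as np

G, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11

def gw_peak_frequency(m1_msun, m2_msun, a_m, e):
    """Peak GW frequency (Hz) of an eccentric binary using the
    Wen (2003) approximation; a_m is the semimajor axis in metres."""
    mtot = (m1_msun + m2_msun) * MSUN
    f_orb = np.sqrt(G * mtot / a_m**3) / (2.0 * np.pi)   # orbital frequency
    return 2.0 * f_orb * (1.0 + e) ** 1.1954 / (1.0 - e**2) ** 1.5

# Example: the dynamical NS-BH merger discussed in Section 3.2.2
# (a = 0.33 AU, e = 0.99778817) lands near the quoted f_GW ~ 0.01 Hz
print(gw_peak_frequency(14.96, 1.28, 0.33 * AU, 0.99778817))
```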
The number of eccentric binaries doubles in the \(10^{-2}-1\) Hz frequency band, but these eccentric binaries appear almost circular when reaching the ground-based detector band, explaining why it is more likely to find a Dragon-II merging binary with significant eccentricity while it sweeps through the deci-Hz band. Figure 6: Semimajor axis and eccentricity of compact binary mergers at decoupling for primordial (yellow diamonds), dynamical (red points), and mixed (blue squares) mergers. Labels indicate particularly interesting primordial mergers with a significant eccentricity at decoupling. Any binary merger will spend some time in the detector band before merging. In order to characterise the evolution of the eccentricity as the binary inspirals, we calculate the average binary eccentricity over the inspiral time, i.e. \(\langle e\rangle=t_{\rm merge}^{-1}\int_{0}^{t_{\rm merge}}e(t)\,{\rm d}t\). ### 3.2 Mergers involving a white dwarf or a neutron star The WD-BH merger occurs in a simulation with \(N=120\)k, \(R_{\rm HM}=0.47\) pc, the dynamical NS-BH merger develops in a simulation with \(N=300\)k, \(R_{\rm HM}=0.47\) pc, and the one forming from a primordial binary in a simulation with \(N=120\)k, \(R_{\rm HM}=0.8\) pc. These types of mergers are particularly rare in star clusters, because dynamical exchanges favour the replacement of the light component with another BH. Given their rarity, we discuss in the following the details of the formation and evolution of these interesting sources. #### 3.2.1 White dwarf - black hole mergers: implications for low-mass X-ray binaries The WD-BH binary consists of a BH with mass \(m_{\rm BH}=23.1\) M\({}_{\odot}\) and a carbon-oxygen white dwarf (COWD) with mass \(m_{\rm WD}=1.18\) M\({}_{\odot}\). Initially, dynamical interactions pair the BH with the WD progenitor, an MS star with mass \(m_{\rm MS,pro}=4.89\) M\({}_{\odot}\). The two objects undergo a common envelope phase during the late AGB phase of the companion, at the end of which the star turns into a WD, after \(\sim 105\) Myr. The resulting WD-BH binary has an "initial" eccentricity of \(e=0.2\) and a period of 900 days. The binary undergoes a series of strong scatterings that cause a rather chaotic variation of the binary semimajor axis and a systematic increase of the eccentricity from \(e=0.6\) up to \(e=0.99994930\) after 135 Myr, corresponding to \(\sim 4\) relaxation times. At this stage, GW emission becomes sufficiently effective to drive binary coalescence. Figure 9 shows the time variation of the WD-BH binary semimajor axis and eccentricity before coalescence. The WD Roche lobe is larger than the BH innermost stable circular orbit, hence the WD will likely undergo disruption and start feeding the BH, possibly evolving into a low-mass X-ray binary. In this regard, it is interesting to note the observation of an _ultracompact_ X-ray binary in the Galactic globular cluster 47 Tuc (Miller-Jones et al., 2015; Bahramian et al., 2017), likely comprised of a COWD and a BH (Bahramian et al., 2017), with the BH probably heavier than \(m_{\rm BH}>9\) M\({}_{\odot}\) (Church et al., 2017). Our model confirms the possibility of forming this type of low-mass X-ray binary via interactions of stars and BHs in a dense cluster, even in a relatively short time (\(t<200\) Myr).
Ultimately, the binary shrinkage driven by GW emission will disrupt the WD, and the mass falling onto the BH could possibly power jets that give rise to transients with peak energy \(10^{47}-10^{50}\) erg and a duration of a few minutes (Fernandez et al., 2019), i.e. a tidal disruption event (TDE). Although this source is the only one undergoing coalescence, we find a total of 50 WD-BH binaries in all Dragon-II clusters by the end of the simulations. None of them have orbits such as to trigger a TDE within a Hubble time, unless a strong interaction with some cluster member pushes the WD onto an extremely eccentric orbit. Pushing the orbit to at least \(e>0.9999(0.99999)\) would lead to 1(26) further WD-BH mergers. Note that the eccentricity value required to trigger a WD TDE may seem extreme, but it is comparable to the eccentricity achieved by the WD-BH merger, demonstrating that it is possible to reach such extreme eccentricity values in Dragon-II clusters. Figure 9: Time evolution of the semimajor axis (top panel) and eccentricity (bottom panel) of a dynamical WD-BH merger in Dragon-II clusters. #### 3.2.2 Neutron star - black hole mergers: implications for multimessenger astronomy Concerning NS-BH binaries, we find two mergers, one of dynamical origin and the other forming from the evolution of a primordial binary. The dynamical NS-BH binary has a NS with mass \(m_{\rm NS}=1.28\) M\({}_{\odot}\) and a BH with mass \(m_{\rm BH}=14.96\) M\({}_{\odot}\). The BH, whose progenitor had a mass \(m_{\rm MS}=26.7\) M\({}_{\odot}\), undergoes a series of chaotic interactions with a primordial binary containing the NS and its companion, which eventually leads to the merger. When the binary decouples from the cluster dynamics, it has a semimajor axis of \(a=0.33\) AU and an eccentricity \(e=0.99778817\), corresponding to a GW peak frequency \(f_{\rm GW}=0.01\) Hz. After decoupling, the binary evolution is completely dominated by GW emission and the variation of its orbital parameters can be described, at first order, via the Peters (1964) formalism. We find that, as the binary sweeps through the 0.01, 0.5, 1, and 10 Hz GW frequency bands, the NS-BH merger has a residual eccentricity of \(e_{\rm NSBH}=0.99779,\ 0.9974,\ 0.21,\ {\rm and}\ 0.02\), respectively, thus in principle measurable with future GW detectors, especially with those operating in the deci-Hz frequency band. The chirp mass of this merger, \({\cal M}_{\rm chirp}=3.4\) M\({}_{\odot}\), is typical of dynamically assembled NS-BH mergers (Arca Sedda, 2020; Rastello et al., 2020; Ye et al., 2020; Arca Sedda, 2021), but hard to produce with isolated binary evolution (Giacobbo & Mapelli, 2018; Zevin et al., 2020), although this strongly depends on the adopted stellar evolution scheme (Broekgaarden et al., 2021). The primordial NS-BH binary merger, instead, forms from a primordial binary with initial component masses \(m_{1,2}=(26.3+18.7)\) M\({}_{\odot}\) and evolves through a common envelope phase initiated by the primary, which eventually forms the BH. Shortly after, the binary undergoes a second common envelope phase and the companion eventually evolves into a NS. The merging binary consists of a BH with mass \(m_{\rm BH}=5.6\) M\({}_{\odot}\) and a NS with mass \(m_{\rm NS}=1.88\) M\({}_{\odot}\). Note that these properties are intriguingly similar to GW200115, a GW source detected by the LVK during the O3 observation campaign, which was characterised by a BH with
GW200115 was characterised by a BH with \(m_{\rm BH}=5.7^{+1.8}_{-2.1}\) M\({}_{\odot}\) and a NS with \(m_{\rm NS}=1.5^{+0.7}_{-0.3}\) M\({}_{\odot}\). When the NS forms, the common envelope has shrunk the binary from 2.5 R\({}_{\odot}\) to \(a=0.6\) R\({}_{\odot}\), whilst the natal kick imparted onto the NS at formation causes an enhancement of the eccentricity from nearly zero to \(e=0.57\). The new orbital parameters are such that GW emission dominates over dynamics and the binary coalesces in \(\sim 7\times 10^{4}\) yr. At decoupling, the binary peak frequency is \(f_{\rm GW}\sim 2\) mHz, right in the middle of the LISA sensitivity band. The development of a NS-BH binary merger from a primordial binary in a dense star cluster highlights the impact of primordial binaries in contributing to the population of mergers with properties similar to those forming in isolation, making it quite hard to untangle their actual origin.

Merging NS-BH binaries are thought to be possible progenitors of several electromagnetic (EM) transients, like short Gamma Ray Bursts (sGRBs) (e.g. Lee & Ramirez-Ruiz, 2007) and kilonovae (e.g. Metzger & Berger, 2012). A basic condition for the possible development of an EM transient is that part of the NS material remains bound to the BH, forming a disk. The fraction of NS mass in the disk depends on several quantities, among which the BH-to-NS mass ratio \(m_{\rm BH}/m_{\rm NS}\), the BH spin \(\chi\), and the NS compactness \(C\equiv Gm_{\rm NS}/c^{2}R_{\rm NS}\) (Foucart, 2012). As numerical simulations have shown, in general the larger the \(m_{\rm BH}/m_{\rm NS}\) ratio, the larger the minimum spin required for the NS material to form a disk around the BH, and the larger the spin, the larger the amount of matter bound to the BH (Foucart, 2012). Depending on the orbital parameters, the BH tidal field can tear apart the NS before it enters the BH event horizon, provided that the NS tidal radius

\[R_{\rm tid}=R_{\rm NS}\left(\frac{3m_{\rm BH}}{m_{\rm NS}}\right)^{1/3}, \tag{12}\]

exceeds the BH innermost stable circular orbit (ISCO), which for a spinning BH can be expressed as (Bardeen et al., 1972)

\[R_{\rm ISCO}=\frac{Gm_{\rm BH}}{c^{2}}\left[3+Z_{2}-{\rm sign}(\chi)\left[(3-Z_{1})(3+Z_{1}+2Z_{2})\right]^{1/2}\right], \tag{13}\]

where \(Z_{1,2}\) are functions of the BH dimensionless spin parameter \(\chi\). Whilst the condition \(R_{\rm tid}/R_{\rm ISCO}<1\) implies that the merger has no EM emission, the opposite does not ensure the detectability of an EM counterpart, as it depends on the geometry of the merger with respect to the observer and other possible observation biases.

In Dragon-II clusters, the dynamical NS-BH merger is characterised by \(m_{\rm BH}/m_{\rm NS}=11.7\) and compactness \(C=0.19\) (assuming a NS radius of 10 km). As shown in Figures 6-8 of Foucart (2012), the minimum BH spin required for an accretion disk with a mass of 10% of the NS mass to form around such a binary is \(\chi_{\rm BH}>0.98\). The BH formed in this binary did not undergo any major interaction with stellar companions that could spin it up (Qin et al., 2019; Bavera et al., 2020; Belczynski et al., 2020). Hence, it is possible that the BH formed with low spin, according to the Fuller & Ma (2019) model, hampering the formation of a massive accretion disk around the BH and minimizing the probability for an EM counterpart to develop. The isolated NS-BH merger, instead, is characterised by \(m_{\rm BH}/m_{\rm NS}=2.98\) and \(C=0.27\). Even in this case, the spin required for an accretion disk to form is \(\chi_{\rm BH}>0.9\).
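To make the disk-formation condition concrete, the following minimal sketch compares the tidal radius of Equation 12 with the Kerr ISCO of Equation 13 (using the standard Bardeen et al. 1972 expressions for \(Z_{1,2}\)) for the two Dragon-II NS-BH mergers, with \(R_{\rm NS}=10\) km as in the text; note that this simple \(R_{\rm tid}>R_{\rm ISCO}\) criterion is cruder than the Foucart (2012) disk-mass fits on which the quoted \(\chi_{\rm BH}>0.9-0.98\) thresholds are based.

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def r_isco(m_bh, chi):
    # Bardeen et al. (1972): ISCO radius of a Kerr BH, prograde for chi > 0
    z1 = 1 + (1 - chi**2)**(1/3) * ((1 + chi)**(1/3) + (1 - chi)**(1/3))
    z2 = np.sqrt(3 * chi**2 + z1**2)
    return (G * m_bh / c**2) * (3 + z2 - np.sign(chi)
                                * np.sqrt((3 - z1) * (3 + z1 + 2 * z2)))

def r_tid(m_bh, m_ns, r_ns=1.0e4):
    # Equation 12, all lengths in metres
    return r_ns * (3 * m_bh / m_ns)**(1/3)

for label, m_bh, m_ns in (("dynamical", 14.96, 1.28), ("primordial", 5.6, 1.88)):
    for chi in (0.0, 0.5, 0.9, 0.998):
        ratio = r_tid(m_bh * Msun, m_ns * Msun) / r_isco(m_bh * Msun, chi)
        print(f"{label:10s} chi = {chi:5.3f}: R_tid/R_ISCO = {ratio:.2f}")
```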
The BH in this binary undergoes a RLO phase, which could, in principle, spin the BH up to extremal values (e.g. Fragos & McClintock, 2015), although this strongly depends on the stellar evolution recipes and the binary properties (Qin et al., 2019; Bavera et al., 2020; Belczynski et al., 2020). The development of just 2 NS-BH mergers highlights how rare these types of objects are, on the one hand, and makes any statistical analysis poor, on the other hand. Nonetheless, the fact that the NS-BH mergers developed in Dragon-II clusters seem unlikely to feature an EM counterpart supports the idea that most NS-BH mergers proceed unseen in star clusters (Arca Sedda, 2020). For comparison, note that for isolated binaries typically \(m_{\rm BH}\sim 12\) M\({}_{\odot}\) and \(m_{\rm NS}=1.6\) M\({}_{\odot}\) (Broekgaarden et al., 2021), which implies a minimum BH spin of \(\chi_{\rm BH}\gtrsim 0.8\) to permit the formation of a fairly massive (mass \(>0.1m_{\rm NS}\)) disk around the BH (Fryer et al., 2012).

## 4 Discussion

### The impact of natal spins on the properties of stellar black hole mergers

Spin amplitude and mutual orientation at merger represent two possible quantities that can help discern whether a BBH merger results from isolated stellar binary evolution or stellar dynamics (e.g. Arca Sedda & Benacquista, 2019; Arca Sedda et al., 2020; Zevin et al., 2021; Arca Sedda et al., 2023; Mapelli et al., 2022; Banerjee, 2022; Banerjee et al., 2023). In order to explore the impact of different spin prescriptions on Dragon-II mergers, we devise two models. The first model (hereafter STEV) assumes that the spin is intrinsically related to the BH evolutionary pathways. For BHs formed from single stellar evolution, we assume a negligible spin (\(\chi_{\rm BH}=0.01\)) owing to efficient angular momentum transport triggered by the Tayler-Spruit dynamo (Spruit, 2002; Fuller & Ma, 2019). For upper-mass gap BHs formed from massive binary evolution we assume that final spins span the \(\chi_{\rm BH}=0.8-1\) range (Qin et al., 2018; Bavera et al., 2020; Belczynski et al., 2020; Schroder et al., 2020). For BHs in primordial binaries, instead, we assign to one BH a spin value of \(\chi_{\rm BH}=0.01\) and to the other \(\chi_{\rm BH}=0.1-1\) (Qin et al., 2018; Bavera et al., 2020). The second model (GAUS model) assumes, instead, that the spin distribution follows a Gaussian distribution with mean \(\bar{\chi}_{\rm BH}=0.5\) and dispersion \(\sigma_{\chi}=0.2\), regardless of the BH's past evolution, a case possibly supported by the population of observed BH-BH mergers (The LIGO Scientific Collaboration et al., 2021).

In our analysis, we assume that the spin vectors in dynamical mergers are isotropically distributed, whilst for primordial mergers we proceed as follows. We define an ad-hoc cumulative distribution function for the cosine of the angle between the spin of the \(i\)-th binary component and the binary angular momentum, \(\theta_{i}\), such that (Arca Sedda et al., 2023)

\[P(\cos\theta)=[(\cos\theta+1)/2]^{n_{\theta}+1}. \tag{14}\]

We set \(n_{\theta}=8\), which implies that binaries have a 20(55)% probability to have \(\theta_{1,2}\) that differ by less than 5(20)%. Note that \(n_{\theta}=0\) implies the isotropic distribution whilst \(n_{\theta}\gg 1\) implies fully aligned spins, i.e. \(\theta_{1}=\theta_{2}\).
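Reading Equation 14 as a cumulative distribution makes sampling straightforward, since its inverse gives \(\cos\theta=2U^{1/(n_{\theta}+1)}-1\) for \(U\) uniform in \([0,1]\). The following illustrative sketch draws tilts this way and builds the effective spin \(\chi_{\rm eff}=(m_{1}\chi_{1}\cos\theta_{1}+m_{2}\chi_{2}\cos\theta_{2})/(m_{1}+m_{2})\); the masses and the GAUS-like spin draw are placeholder choices, not Dragon-II outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_costheta(n_theta, size):
    # inverse of P(cos t) = [(cos t + 1)/2]^(n_theta + 1)
    u = rng.uniform(size=size)
    return 2.0 * u**(1.0 / (n_theta + 1)) - 1.0

n, m1, m2 = 100_000, 30.0, 25.0                  # illustrative masses [Msun]
chi1 = np.clip(rng.normal(0.5, 0.2, n), 0.0, 1.0)    # GAUS-model spins
chi2 = np.clip(rng.normal(0.5, 0.2, n), 0.0, 1.0)
for label, n_theta in (("primordial, n=8", 8), ("dynamical,  n=0", 0)):
    ct1, ct2 = sample_costheta(n_theta, n), sample_costheta(n_theta, n)
    chi_eff = (m1 * chi1 * ct1 + m2 * chi2 * ct2) / (m1 + m2)
    print(f"{label}: median chi_eff = {np.median(chi_eff):+.3f}")
```

As expected, \(n_{\theta}=0\) yields a median \(\chi_{\rm eff}\) close to zero, whereas \(n_{\theta}=8\) yields a clearly positive one.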
For each BBH merger in our sample we select 1,000 values of the spin amplitudes and spin directions depending on the aforementioned assumptions, in order to statistically assess the properties of Dragon-II mergers. The top panels in Figure 10 show the median value and 95th percentile of the effective spin parameter and remnant BH mass for all BBH mergers in Dragon-II models. As expected, we can clearly see a difference between primordial binaries, which have mildly aligned spins and thus \(\chi_{\rm eff}>0\), and dynamical binaries, for which \(\chi_{\rm eff}\sim 0\). The plots suggest that the STEV model, based on stellar evolution models, leads primordial binaries to have a \(\chi_{\rm eff}\) smaller, on average, than in the GAUS model. The bottom panels of Figure 10 overlay the observed mergers from GWTC-3 on a single realisation of the simulated data, for comparison's sake. Notably, the assumption that BHs form with a negligible spin unless matter accretion processes are at play (STEV model) leads to a sub-population of mergers with \(\chi_{\rm eff}\sim 0\) and \(m_{\rm bin}=(40-100)\) M\({}_{\odot}\), a feature that disappears when a global Gaussian spin distribution is adopted (GAUS model), as shown in Figure 10. If BH spins do not strongly depend on stellar evolution processes, but rather are well described by a general distribution, like a Gaussian, we can identify two populations in the plot, one with clearly positive \(\chi_{\rm eff}\) values and \(m_{\rm BH}<40\) M\({}_{\odot}\), and one widely distributed around zero \(\chi_{\rm eff}\) involving massive BHs, \(m_{\rm BH}>40\) M\({}_{\odot}\).

In order to improve the poor statistics, we proceed as follows: from the list of Dragon-II mergers we create an oversampled catalogue by repeating the spin assignment 100 times and, at each time, selecting a new "mock" BH mass in the range \(2.5-40.5\) M\({}_{\odot}\) if the Dragon-II BH merger mass is below the upper-mass gap, and in the range \(40.5-100\) M\({}_{\odot}\) otherwise. This way, each real Dragon-II merger will have 100 counterparts with BHs of the same class (upper-mass gap or not, merger in a primordial or dynamical binary), enabling us to build up a catalogue sufficiently rich to analyse the overall \(\chi_{\rm eff}\) distribution. Figure 11 shows the distribution of \(\chi_{\rm eff}\) for the augmented sample in the STEV and GAUS models. We see that the STEV model follows a narrower distribution than the GAUS model, and exhibits a clear peak around zero owing to the population of BHs formed from single stars (see also Figure 9 in Arca Sedda et al. 2023).

### Compact binary merger rates

#### 4.2.1 Merger efficiency

As described in the previous section, we have simulated a total Dragon-II mass of \(M_{\rm sim}=3.65\times 10^{6}\) M\({}_{\odot}\) and find in total 78 mergers when GW recoil is not accounted for, and 74 otherwise. Therefore, the resulting BH merger efficiency, defined as the ratio between the number of mergers and the total simulated mass (Ziosi et al. 2014), is given by

\[\eta_{\rm GW}=\frac{N_{\rm GW}}{M_{\rm sim}}\simeq(2.0-2.1)\times 10^{-5}\ {\rm M}_{\odot}^{-1}, \tag{15}\]

similar to what is inferred for young and open clusters with a similar metallicity (e.g. Di Carlo et al. 2020; Santoliquido et al. 2020; Rastello et al. 2021). Note that given the limited simulation time our estimate could represent a lower limit to the total merger efficiency in massive young and intermediate-age clusters.
Nonetheless, we note that as the cluster loses mass and expands, the binary formation rate and binary-single interaction rate will sharply decrease until the point at which it will be unlikely for tight binaries to form and merge within a Hubble time. Interestingly, at fixed value of the half-mass radius, the merger efficiency changes appreciably with the initial binary fraction, being

\[\eta_{\rm GW,fb}=\begin{cases}2.3\times 10^{-5}\ {\rm M}_{\odot}^{-1}&f_{b}=0.20,\\ 1.2\times 10^{-5}\ {\rm M}_{\odot}^{-1}&f_{b}=0.05.\end{cases} \tag{16}\]

This highlights the role of primordial binaries in determining the formation of merging compact objects. For comparison, note that the merger efficiency derived in Rastello et al. (2021) is based on star cluster models containing \(\sim 40\%\) of stars in primordial binaries. To further explore the impact of cluster properties on the merger efficiency, we show in Figure 12 the average merger efficiency per cluster, \(\epsilon_{\rm GW}(R_{\rm HM})\), as a function of the average cluster density \(\langle\rho_{\rm sim}\rangle\), using the following definitions

\[\epsilon_{\rm GW}(R_{\rm HM})=\frac{N_{\rm GW}}{(M_{\rm sim}/N_{\rm sim})}, \tag{17}\]
\[\langle\rho_{\rm sim}\rangle=\frac{M_{\rm sim}}{N_{\rm sim}R_{\rm HM}^{3}}, \tag{18}\]

where \(M_{\rm sim}\) is the total simulated mass and \(N_{\rm sim}\) is the number of simulations performed for a given value of the half-mass radius, \(R_{\rm HM}\). At fixed value of the binary fraction, this relation is well described by a power law of the form \(\epsilon_{\rm GW}=a(\langle\rho_{\rm sim}\rangle/1\,{\rm M}_{\odot}{\rm pc}^{-3})^{b}\), with \(a=(0.15\pm 0.07)\times 10^{-5}\) and \(b=0.25\pm 0.03\). The plot makes clear that increasing the cluster density by two orders of magnitude leads to \(\sim 2.5\times\) more mergers. Moreover, it further highlights the role of primordial binaries, showing that clusters with a lower binary fraction have a \(\sim 50\%\) smaller probability of developing a merger, at least in the case of \(R_{\rm HM}=1.75\) pc.

#### 4.2.2 Merger rate for black hole binaries

We define the cosmic merger rate following Santoliquido et al. (2020); Bouffanais et al. (2021)

\[\mathcal{R}(z)=\frac{\rm d}{{\rm d}t_{\rm h}(z)}\int_{0}^{z_{\rm max}}\psi_{\rm clus}(z^{\prime})\frac{{\rm d}t_{\rm h}(z^{\prime})}{{\rm d}z^{\prime}}{\rm d}z^{\prime}\times\int_{Z_{\rm min}}^{Z_{\rm max}}\eta_{\rm GW}(Z)\mathcal{F}(z^{\prime},z,Z){\rm d}Z, \tag{19}\]

where \(t_{\rm h}(z)\) is the lookback time at merger, \(\psi_{\rm clus}(z^{\prime})\) is the star cluster formation rate when the merging binary formed, \(\eta_{\rm GW}(Z)\) is the merger efficiency at metallicity \(Z\), and \(\mathcal{F}(z^{\prime},z,Z)\) is the number of mergers forming at redshift \(z^{\prime}\) and merging at redshift \(z\) in environments with metallicity \(Z\). The adoption of Equation 19 enables us to compare Dragon-II simulation results with those obtained for low-mass star clusters (Santoliquido et al. 2020). Note that this procedure does not take into account possible effects related to the initial cluster mass function, which could indeed have an impact on the overall merger rate (see e.g. Antonini and Gieles 2020).
Nonetheless, the similarity between the merger efficiency derived from Dragon-II simulations and that obtained by Santoliquido et al. (2020) for low-mass clusters suggests that it is possible to safely utilise the merger efficiency as a proxy for the overall number of mergers per unit mass in the whole range of possible cluster masses. This choice, although representing an approximation, permits us to avoid the inclusion of a cluster mass function in Equation 19 and all the related uncertainties, like the cluster mass function boundaries and functional form. We adopt a cosmic star cluster formation rate of the form

\[\psi_{\rm clus}(z)=\frac{0.01(1+z)^{2.6}f_{\rm CFE}}{1+[(1+z)/3.2]^{6.2}}~{}~{}\mathrm{M}_{\odot}\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}, \tag{20}\]

i.e. we rescale the cosmic star formation rate derived by Madau and Fragos (2017) by a factor \(f_{\rm CFE}\), which represents the cluster formation efficiency, i.e. the fraction of star formation that goes into bound clusters. Although uncertain, observations and models suggest that the cluster formation efficiency (CFE) can be as large as \(f_{\rm CFE,YC}=0.3\) for young clusters (Mapelli et al., 2021) and \(f_{\rm CFE,GC}=0.08\pm 0.03\) (Bastian, 2008) for globular clusters, regardless of the star formation history. In the following, we adopt both the young and globular cluster CFE values to constrain the BBH merger rate in our simulations. For dynamical mergers, it has been shown that the merger efficiency \(\eta_{\rm GW}(Z)\) remains almost constant in the range \(Z<10^{-3}\), and decreases roughly by an order of magnitude at solar values (Di Carlo et al., 2020, 2021; Rastello et al., 2021). Since our models all have the same metallicity, \(Z=0.005\), to infer the merger rate we assume that the merger efficiency is constant at \(Z<0.005\) and reduces by 10 times at larger metallicities (see e.g. Figure 1 in Santoliquido et al., 2020). Moreover, we factorise the function \(F(z,z^{\prime},Z)=p(Z,z^{\prime})N(z,z^{\prime})\), thus assuming that the number of mergers at redshift \(z\) that formed at \(z^{\prime}\) is independent of the metallicity distribution. The \(p(Z,z^{\prime})\) term represents the cosmic fraction of clusters with metallicity in the (\(Z\), \(Z+dZ\)) bin at redshift \(z^{\prime}\). We assume that the metallicity follows a log-normal distribution peaked at (Madau and Fragos, 2017)

\[\mathrm{Log}\left\langle\frac{Z(z)}{\mathrm{Z_{\odot}}}\right\rangle=0.153-0.074z^{1.34}, \tag{21}\]

with dispersion either \(\sigma_{Z}=0.2-0.5-0.8\) (Schroder et al., 2020; Bouffanais et al., 2021; Bavera et al., 2021; Zevin et al., 2021). Since all Dragon-II models have the same metallicity, to infer the simulated merger rate we integrate Equation 19 under two assumptions, one conservative and one optimistic. In the conservative case, we consider only clusters with a metallicity \(Z<0.005\) and assume that they have a similar merger rate efficiency (Santoliquido et al., 2020).

Figure 10: Top panels: median and 95th percentile value of remnant mass and effective spin parameters of Dragon-II mergers assuming a prescription for BH natal spin based on stellar evolution models (STEV, left panel) or a Gaussian distribution (GAUS, right panel). Bottom panels: effective spin parameter and remnant mass for one realisation of Dragon-II mergers (red points) and the observed population of LVK mergers in the GWTC-3 catalog.
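For illustration, the following sketch evaluates the two redshift-dependent ingredients just defined: the rescaled cluster formation rate of Equation 20 and the metal-poor cluster fraction implied by Equation 21, here treated for simplicity as the median of the log-normal; the solar normalisation \(Z_{\odot}=0.02\) is our assumption, as the text does not state one.

```python
import numpy as np
from scipy.stats import norm

ZSUN = 0.02                                     # assumed solar metallicity

def psi_clus(z, f_cfe):
    # Equation 20 [Msun / yr / Mpc^3]
    return 0.01 * (1 + z)**2.6 * f_cfe / (1 + ((1 + z) / 3.2)**6.2)

def frac_metal_poor(z, z_cut=0.005, sigma_z=0.5):
    # fraction of clusters with Z < z_cut from the log-normal of Equation 21
    mean_logz = 0.153 - 0.074 * z**1.34
    return norm.cdf(np.log10(z_cut / ZSUN), loc=mean_logz, scale=sigma_z)

for z in (0.2, 1.0, 2.0, 4.0):
    print(f"z = {z}: psi_clus = {psi_clus(z, 0.3):.2e}, "
          f"f(Z < 0.005) = {frac_metal_poor(z):.2f}")
```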
In the optimistic case, instead, we include in the integration also clusters with metallicity larger than the simulated one, reducing for metal-rich clusters the simulated merger efficiency by a factor of 10, as expected from low-\(N\) simulations of young clusters (Di Carlo et al., 2019). To compare with similar estimates in the literature, we first set \(f_{\mathrm{CFE}}=1\), i.e. we assume that all stars form in star clusters, and calculate a merger rate of \(\mathcal{R}=27\) yr\({}^{-1}\) Gpc\({}^{-3}\), in broad agreement with the rate inferred for low-mass star clusters (\(N=10^{2}-5\times 10^{4}\) M\({}_{\odot}\)) (Di Carlo et al., 2020; Rastello et al., 2021; Santoliquido et al., 2020) and semi-analytic models of young and globular clusters (see e.g. Mapelli et al., 2021). A more reliable estimate of the merger rate is shown in Figure 13 for both the conservative and optimistic cases, and assuming different values of the cluster formation efficiency, \(f_{\mathrm{CFE}}=0.08-0.3\). As shown in the plot, we find a simulated merger rate of \(\mathcal{R}_{\mathrm{GW}}=(12\pm 7)\) yr\({}^{-1}\) Gpc\({}^{-3}\) at redshift \(z=0.2\). At the same redshift, the BBH merger rate inferred by the LVK is \(\mathcal{R}_{\mathrm{LVK}}=17.9-44\) yr\({}^{-1}\) Gpc\({}^{-3}\) (The LIGO Scientific Collaboration et al., 2021).

#### 4.2.3 Merger rate for exotic mergers

In Dragon-II simulations we find 3 elusive mergers: one WD-BH and two NS-BH mergers. Although they are evidently too scarce to allow a statistical treatment, we can exploit them to attempt a rough, order-of-magnitude estimate of the merger rates for these two classes of GW sources assembled in star clusters as:

\[R_{\mathrm{sBH}}(<D)=\frac{N_{\mathrm{x}}}{M_{\mathrm{sim}}}f_{x}\delta M_{g\star}N(<D)t_{\mathrm{rel}}^{-1}, \tag{22}\]

where \(M_{g\star}\) is the galaxy stellar mass, \(\delta=0.001-0.01\) is the fraction of galaxy mass made up by star clusters (Spitler and Forbes, 2009; Georgiev et al., 2010; Harris et al., 2013; Arca-Sedda and Capuzzo-Dolcetta, 2014; Gnedin et al., 2014; Webb & Leigh, 2015), \(f_{x}\) is the fraction of clusters with a given property (e.g. age within a certain range), \(t_{\mathrm{rel}}\) is the cluster relaxation time, and \(N(<D)\) is the number of MW-equivalent galaxies within a given cosmological distance \(D\) (Abadie et al., 2010)

\[N(<D)=\frac{4\pi}{3}(2.26)^{-3}\left(\frac{D}{\rm Mpc}\right)^{3}\frac{\rho_{g}}{\rm Mpc^{-3}}, \tag{23}\]

where \(\rho_{g}=0.0116\) Mpc\({}^{-3}\) is the galaxy number density in the local Universe (Kopparapu et al., 2008). Moreover, we consider typical relaxation times of either globular clusters, \(t_{\rm rel}=10^{9}\) yr (Harris, 2010), or massive and relatively young clusters in the Small Magellanic Cloud (SMC), \(t_{\rm rel}=3.2\times 10^{7}\) yr (Gatto et al., 2021).

Table 2: Volumetric merger rate calculated in the local Universe for different GW sources in Dragon-II clusters.

| Source type | \(\mathcal{R}_{\mathrm{loc}}\) [yr\({}^{-1}\) Gpc\({}^{-3}\)] |
| --- | --- |
| BBH | \(5-19\) |
| NS-BH | \(0.027-8.7\) |
| WD-BH | \(3.8\times 10^{-4}-2.3\) |

Figure 11: Effective spin distribution for a population of mergers based on Dragon-II models assuming model STEV (filled red steps) or GAUS (dashed line).

Figure 12: Merger efficiency as a function of the average cluster density. Different symbols correspond to different values of the cluster half-mass radius. Open circles mark simulations with the lowest binary fraction.
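A minimal sketch of Equations 22-23 for the single WD-BH merger, anticipating the \(f_{x}=0.025-0.5\) range derived just below and combining the limiting values of \(\delta\) and \(t_{\rm rel}\) quoted in the text:

```python
import numpy as np

def n_galaxies(d_mpc, rho_g=0.0116):
    # Equation 23: number of MW-equivalent galaxies within d_mpc
    return (4 * np.pi / 3) * 2.26**-3 * d_mpc**3 * rho_g

def rate(n_x, m_sim, f_x, delta, m_gal, d_mpc, t_rel):
    # Equation 22 [yr^-1]
    return (n_x / m_sim) * f_x * delta * m_gal * n_galaxies(d_mpc) / t_rel

M_SIM, M_GAL, D = 3.65e6, 6.0e10, 1.0e3
lo = rate(1, M_SIM, 0.025, 1e-3, M_GAL, D, 1.0e9)    # old Galactic globulars
hi = rate(1, M_SIM, 0.5,   1e-2, M_GAL, D, 3.2e7)    # young SMC-like clusters
print(f"R_WDBH ~ {lo:.1e} - {hi:.1f} yr^-1")         # ~1.8e-3 - 10.8, as quoted
```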
Note that the relaxation time of Galactic clusters is inferred from their present-day properties. Depending on the amount of mass lost and the level of cluster expansion, it is possible that the initial relaxation time was shorter, and therefore that the number of _dynamically old_ globular clusters is larger than what we see at present. In this regard, note that the relaxation time of SMC clusters, which are generally younger than a few Gyr, is significantly smaller than for Milky Way globulars, possibly because relaxation processes did not have time to sufficiently influence the cluster dynamics. In the following calculations, we consider Milky Way-like galaxies only, with \(M_{g*}=6\times 10^{10}\) M\({}_{\odot}\) (Licquia & Newman, 2015), located within \(D=1\) Gpc. In the Milky Way, only \(\sim 4\) out of 155 globular clusters have an age larger than 1 relaxation time, whilst around half of the clusters in the SMC satisfy this requirement, thus \(f_{x}\sim 0.025-0.5\). This implies a frequency rate for WD-BH mergers in the local Universe of \(R_{\rm WDBH}=(1.8\times 10^{-3}-10.8)\) yr\({}^{-1}\), corresponding to a volumetric merger rate \(\mathcal{R}_{\rm WDBH}=R\,V_{\rm com}^{-1}(1\ {\rm Gpc})=(3.8\times 10^{-4}-2.3)\) yr\({}^{-1}\) Gpc\({}^{-3}\). In the case of NS-BH mergers, instead, the event occurs over a timescale of \(t_{\rm GW}=(0.04-0.5)t_{\rm rel}\). The fraction of clusters with an age longer than \(t_{\rm GW}\) is \(f_{x}\sim 0.94\) for clusters in both the Milky Way and the SMC; the resulting frequency rate for NS-BH mergers is \(R_{\rm NSBH}=(0.13-40.7)\) yr\({}^{-1}\), which implies a volumetric merger rate of \(\mathcal{R}_{\rm NSBH}=(0.027-8.7)\) yr\({}^{-1}\) Gpc\({}^{-3}\).

### Multimessenger sources: prospects for LISA detection

Over the next decade, the network of ground-based detectors will be complemented by LISA, possibly the first space-borne low-frequency detector. LISA may be able to catch the GW echoes of merging stellar BHs, IMBHs, and nearby WD and NS binaries. While we postpone a detailed discussion of BBHs forming in young massive clusters detectable with LISA to a forthcoming paper, we focus in the following on the handful of exotic mergers that develop in our Dragon-II models. Let us consider the case of the WD-BH merger. We have shown in Section 3.2 that such a source could appear as an X-ray binary and give rise to a TDE once the WD approaches the BH too closely. Assuming that the binary evolves solely via GW emission, and adopting the Peters & Mathews (1963) formalism to evolve the binary until the merger, we find that around 6 months prior to the merger the WD will overfill its Roche lobe and start the X-ray binary phase. At disruption, the frequency of the associated GW emission is given by (Hils & Bender, 1995; Kobayashi et al., 2004; Rosswog et al., 2009; Dai & Blandford, 2013; Fragione et al., 2020)

\[f_{\rm GW}\simeq 0.09\,{\rm Hz}\left(1+\frac{M_{\rm WD}}{M_{\rm BH}}\right)\left(\frac{M_{\rm WD}}{0.6\ {\rm M}_{\odot}}\right)^{1/2}\left(\frac{R_{\rm WD}}{10^{4}\,{\rm km}}\right)^{-3/2}=0.13\,{\rm Hz}, \tag{24}\]

where we have assumed \(R_{\rm WD}=10^{4}\) km. Note that an eccentricity between 0 and 1 would affect \(f_{\rm GW}\) by less than 20% (Fragione et al., 2020).
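A quick numerical check of Equation 24 for the Dragon-II WD-BH binary, assuming \(R_{\rm WD}=10^{4}\) km as in the text:

```python
def f_gw_disruption(m_wd, m_bh, r_wd_km=1.0e4):
    # Equation 24 [Hz]
    return 0.09 * (1 + m_wd / m_bh) * (m_wd / 0.6)**0.5 * (r_wd_km / 1.0e4)**-1.5

print(f"f_GW = {f_gw_disruption(1.18, 23.1):.2f} Hz")   # -> 0.13 Hz, as quoted
```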
The amplitude of the emitted signal at disruption will be (Robson et al., 2019; Fragione et al., 2020)

\[h_{c}\simeq 2\times 10^{-20}\left(\frac{T_{\rm obs}}{4\,{\rm yr}}\right)^{1/2}\left(\frac{D_{L}}{10\,{\rm Mpc}}\right)^{-1}\left(\frac{M_{\rm BH}}{10\ {\rm M}_{\odot}}\right)^{0.66}\left(\frac{M_{\rm WD}}{0.6\ {\rm M}_{\odot}}\right)^{1.58}\left(\frac{R_{\rm WD}}{10^{4}\,{\rm km}}\right)^{-1.75}\simeq 10^{-19}. \tag{25}\]

Since the WD will disrupt completely as it crosses its Roche limit, the associated GW emission will appear as a burst (Rosswog et al., 2009). For such a source, the corresponding signal-to-noise ratio (S/N) for LISA can be written as (see e.g. Robson et al., 2019)

\[{\rm(S/N)}=f^{2/3}\frac{h_{c}}{S_{c}}=1.2\left(\frac{D_{L}}{10\,{\rm Mpc}}\right)^{-1}, \tag{26}\]

where \(S_{c}\) is the detector sensitivity curve in terms of characteristic strain (Robson et al., 2019) and we have exploited the intrinsic dependence on the measurable GW strain and the source luminosity distance \(D_{L}\). If the merger occurs inside the Milky Way, i.e. at \(D<0.1\) Mpc, it would appear as a loud source in LISA, with (S/N)\(>120\). More generally, the maximum distance at which LISA could detect such a merger with a minimum signal-to-noise ratio of (S/N)\(>8(15)\) is \(D<1.5\) Mpc (0.7 Mpc). Note that the Andromeda galaxy is \(\sim 0.7-0.8\) Mpc away from us, therefore to roughly estimate the probability of a nearby WD-BH merger we can set \(N(<D)=2\) in Equation 22 and find an upper limit to the local merger rate of nearby WD-BH mergers of \(R_{\rm WDBH,close}<(8.4\times 10^{-10}-5.1\times 10^{-6})\) yr\({}^{-1}\).

Figure 13: Merger rate density for all simulations in our runs, assuming either metal-poor only (red straight line) or both metal-poor and metal-rich (black dashed line) clusters. The shaded area embraces two limiting values of the cluster formation efficiency, with the upper limit corresponding to \(f_{\rm CFE}=0.3\) and the lower one corresponding to \(f_{\rm CFE}=0.08\).
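The detectability numbers above can be reproduced from the quoted scalings alone. In the sketch below the signal-to-noise ratio uses only the normalisation \({\rm(S/N)}=1.2\) at \(D_{L}=10\) Mpc from Equation 26, so the \({\rm S/N}>15\) horizon comes out at \(\sim 0.8\) Mpc rather than the quoted 0.7 Mpc, presumably a rounding difference:

```python
def h_c(m_bh, m_wd, d_mpc, t_obs_yr=4.0, r_wd_km=1.0e4):
    # Equation 25: characteristic strain of the disruption burst
    return (2e-20 * (t_obs_yr / 4.0)**0.5 * (d_mpc / 10.0)**-1
            * (m_bh / 10.0)**0.66 * (m_wd / 0.6)**1.58
            * (r_wd_km / 1.0e4)**-1.75)

print(f"h_c(10 Mpc) = {h_c(23.1, 1.18, 10.0):.1e}")      # ~1e-19
for snr_min in (8.0, 15.0):                              # Equation 26 scaling
    print(f"S/N > {snr_min:.0f} out to D_L < {1.2 * 10.0 / snr_min:.1f} Mpc")
```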
### The pair-instability supernova rate for massive star clusters: perspectives for detection via magnitude-limited surveys

The onset of IMBH formation and the development of BBH mergers depend intrinsically on the cluster radius and initial density, the amount of stars initially in a binary, and the adopted stellar evolution recipes, e.g. the BH matter accretion efficiency and the physics of PISNe and PPISNe. In this regard, the fact that PISNe are rare events for which a smoking gun has not been observed yet (e.g. Woosley et al., 2007; Gal-Yam et al., 2009; Terreran et al., 2017; Gomez et al., 2019; Woosley and Smith, 2022) offers us the possibility to use this physical process as a diagnostic quantity in Dragon-II models. In practice, we can infer the PISN rate in Dragon-II simulations and compare such a rate with current observational limits to explore whether our simulations produce unrealistically large PISN frequency rates. As described in paper AS-II, in Dragon-II models PISNe develop either in single stars or in stellar merger products, provided that their He core reaches a mass in the range \((64-130)\) M\({}_{\odot}\). This offers us a unique possibility to explore the impact of PISNe in star clusters, taking simultaneously into account the impact of stellar mergers on the overall population of PISN progenitors.

According to the adopted stellar evolution, in a simple stellar population only stars heavier than \(m_{\rm ZAMS}\geq m_{\rm PISN}=150\) M\({}_{\odot}\) could undergo a PISN event, i.e. heavier than the maximum stellar mass adopted for the initial mass function. Nevertheless, in Dragon-II models we find 23 stars that undergo a PISN. All these stars are either in a primordial binary or are captured in a binary before the explosion, and undergo one or more stellar merger and accretion events that bring the stellar mass above \(m_{\rm PISN}\). Typical masses for Dragon-II PISN progenitors are in the range \((150-282)\) M\({}_{\odot}\). The simulated PISN efficiency can be defined similarly to the compact object merger rate, i.e. \(\eta_{\rm PISN}=N_{\rm PISN}/M_{\rm sim}=6.2\times 10^{-6}\) M\({}_{\odot}^{-1}\).

To calculate the PISN rate, we follow the approach adopted by du Buisson et al. (2020). Firstly, we assume that the Ni mass of the massive star that goes off as a PISN can be calculated via the following equation:

\[{\rm Log}\left(M_{\rm Ni}/\ {\rm M}_{\odot}\right)=r\left(M_{\rm He,f}/\ {\rm M}_{\odot}\right)^{s}+t, \tag{27}\]

where \(r=-5.02\times 10^{4}\), \(s=-2.159\), and \(t=2.887\) (Heger and Woosley, 2002; du Buisson et al., 2020), and \(M_{\rm He,f}\) is the final mass of the star's He core. The Ni mass is used to infer the peak bolometric magnitude exploiting an Arnett-like relation (Arnett, 1982)

\[M_{\rm bol,Ni}=-19.2-2.5\,{\rm Log}\left(M_{\rm Ni}/0.6\ {\rm M}_{\odot}\right), \tag{28}\]

which can be converted into an apparent bolometric magnitude via Pogson's relation

\[\mu_{\rm bol}=M_{\rm bol,Ni}+5\,{\rm Log}(D_{L}/10\,{\rm pc}), \tag{29}\]

where \(D_{L}\) is the luminosity distance. To simplify the calculations, we adopt for the He mass, which is the main ingredient to calculate the Ni mass, an average value of \(M_{\rm He,f}=90.4\) M\({}_{\odot}\) as extracted from our models. The value of \(\mu_{\rm bol}\) is used to determine whether a PISN can be detected in a magnitude-limited survey. Assuming a population of PISNe with apparent magnitudes distributed according to a Gaussian around \(\mu_{\rm bol}\) and a magnitude detection threshold \(\mu_{\rm lim}\), we define the fraction of detectable sources as

\[f_{\rm GSS}=0.5\left[1+{\rm erf}\left(\frac{\mu_{\rm lim}-\mu_{\rm bol}}{\sqrt{2}\sigma_{\mu}}\right)\right], \tag{30}\]

where we adopted \(\sigma_{\mu}=0.2\)3.

Footnote 3: We verified that varying \(\sigma_{\mu}\) in the range \(0.1-0.3\) has little effect on our results.

The PISN rate as a function of the redshift can thus be evaluated as:

\[{\cal R}_{\rm PISN}(z)=\int_{z_{1}}^{z_{2}}\frac{dV}{dz}\psi(z)\eta_{\rm PISN}f_{\rm GSS}(z)f_{\rm Z}(z)dz, \tag{31}\]

where \(dV/dz\) is the comoving volume element, \(\psi(z)\) is the cosmic star formation rate, for which we assume the cosmic star formation history of Equation 20 (Madau and Fragos, 2017) and the same limits for \(f_{\rm CFE}\) described in Section 4.2, and \(f_{\rm Z}(z)\) accounts for the assumption that only stars with a metallicity \(Z\leq 0.008\) undergo PISNe (Spera and Mapelli, 2017). Figure 14 shows the PISN rate for the intrinsic cosmic population and assuming different detection thresholds in magnitude-limited surveys, namely \(\mu_{\rm bol}=17,\ 20,\ 25\). Note that these thresholds roughly correspond to the typical maximum detectable magnitude of already completed surveys, like the Sloan Digital Sky Survey (SDSS4) or the Palomar Transient Factory (PTF5), ongoing surveys, e.g. the Dark Energy Survey (DES6), and future surveys, like the Large Synoptic Survey Telescope (LSST7), the Zwicky Transient Facility (ZTF8), or the EUCLID mission9 (Moriya et al., 2022).
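The observability chain of Equations 27-30 is easy to evaluate end to end; the sketch below does so at the average He-core mass \(M_{\rm He,f}=90.4\) M\({}_{\odot}\) quoted above, with a 1 Gpc luminosity distance as an arbitrary illustrative choice:

```python
import numpy as np
from scipy.special import erf

R_FIT, S_FIT, T_FIT = -5.02e4, -2.159, 2.887    # Equation 27 coefficients

def m_ni(m_he):
    # Equation 27: Ni mass [Msun] from the final He-core mass
    return 10.0**(R_FIT * m_he**S_FIT + T_FIT)

def m_bol_peak(m_ni_val):
    # Equation 28: peak absolute bolometric magnitude
    return -19.2 - 2.5 * np.log10(m_ni_val / 0.6)

def mu_bol(m_abs, d_l_pc):
    # Equation 29: Pogson's relation
    return m_abs + 5.0 * np.log10(d_l_pc / 10.0)

def f_detect(mu, mu_lim, sigma_mu=0.2):
    # Equation 30: detectable fraction in a magnitude-limited survey
    return 0.5 * (1.0 + erf((mu_lim - mu) / (np.sqrt(2.0) * sigma_mu)))

mni = m_ni(90.4)
mabs = m_bol_peak(mni)
print(f"M_Ni = {mni:.2f} Msun, peak M_bol = {mabs:.2f}")
mu = mu_bol(mabs, 1.0e9)                        # apparent magnitude at 1 Gpc
for mu_lim in (17.0, 20.0, 25.0):
    print(f"mu_lim = {mu_lim:.0f}: f_det = {f_detect(mu, mu_lim):.3f}")
```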
From Figure 14 we see that only future surveys (\(\mu_{\rm bol}\geq 25\)) will be able to probe the cosmological properties of PISNe, whilst current surveys could in principle place constraints on PISNe within a redshift \(z<0.3\).

Footnote 5: PTF home: [http://www.ptf.caltech.edu](http://www.ptf.caltech.edu)

Footnote 6: DES home: [http://www.darkenergysurvey.org](http://www.darkenergysurvey.org)

Footnote 7: LSST home: [http://www.lsst.org](http://www.lsst.org)

Footnote 8: ZTF home: [https://www.ztf.caltech.edu](https://www.ztf.caltech.edu)

Footnote 9: EUCLID home: [https://sci.esa.int/web/euclid](https://sci.esa.int/web/euclid)

Integrating Equation 31 over redshift returns the number of detected sources per year. The possible number of PISN detections per year for different values of the limiting bolometric magnitude, \(\mu_{\rm bol}\), and the cluster formation efficiency, \(f_{\rm CFE}\), is summarized in Table 3. From the table it is clear that the detection of PISNe from star clusters is still highly unlikely in completed and ongoing surveys, but it could lead to \(\sim 8\) detections per year with the next generation of surveys. Comparing future PISN detections with numerical models could serve a twofold aim. On the one hand, it will permit us to shed light on the actual contribution of massive stars in dense clusters to the overall population of PISNe. On the other hand, it will provide us with a useful term of comparison to determine the reliability of cluster simulations.

## 5 Conclusions

In this paper we have presented and discussed the properties of compact binary mergers and PISNe in the Dragon-II simulations, a suite of direct \(N\)-body models representing star clusters with up to 1 million stars and a relatively large (\(10\%-33\%\)) binary fraction. Our main results can be summarised as follows:

* We find a population of 75 BBH, 2 NS-BH, and 1 WD-BH mergers. Among them, 4 BBHs avoid merger when GW recoils are enabled. Mergers occurring inside the cluster make up \(\gtrsim 40\%\) of the whole population and are mostly due to binaries formed via dynamical interactions (dynamical mergers). The population of ejected mergers, which merge outside the parent cluster, is equally contributed by mergers formed dynamically and from primordial binaries (primordial mergers). Typically, in-cluster mergers have primaries with masses \(m_{\rm BH,1}>30\) M\({}_{\odot}\) and companions in the \(m_{\rm BH,2}=30-50\) M\({}_{\odot}\) mass range, whilst ejected mergers involve lighter primaries, \(m_{\rm BH,1}<40\) M\({}_{\odot}\), and are characterised by fairly large mass ratios, \(q>0.6\);
* Mergers forming from primordial binaries are characterised by large mass ratios and component masses clearly smaller than those formed dynamically. Among dynamical mergers, the most massive ones are those in which at least one component had an ancestor in a primordial binary;
* BBH mergers are characterised by delay times that distribute around \(10-30\) cluster relaxation times. This highlights the fact that the processes that trigger BBH formation and merger are intrinsically related to the cluster dynamical evolution;
* The population of mergers forming from dynamical interactions or primordial binaries is clearly distinguishable through the residual eccentricity of the binary as it enters the typical frequency bands of GW detectors, i.e. \(f=0.001-100\) Hz.
We find that practically all primordial binaries are circular at merger, implying that primordial binaries merge before dynamics can have an impact on their evolution, whilst around \(20-40-5\%\) of mergers preserve an eccentricity \(e>0.1\) when entering the LISA-DECIGO-LIGO bands. All mergers with \(e>0.1\) in the 0.05-1 Hz and 1-10 Hz bands occur inside the cluster, whilst half of the eccentric mergers in the mHz band are ejected. This hints at the possibility of distinguishing the formation history of a BBH merger from the frequency band in which it is observed;
* We identify three exotic mergers in our sample: a WD-BH binary formed dynamically and two NS-BH mergers, one formed dynamically and the other from a primordial binary. The WD-BH merger forms after \(\sim 4\) cluster relaxation times and is triggered by chaotic interactions that increase the eccentricity up to an extremal value of \(e=0.99994930\). Once the WD approaches the BH sufficiently closely, this type of source could appear as an ultraluminous X-ray source and, ultimately, be a source detectable by LISA if it occurs within 700 kpc from us, i.e. within the distance between the Milky Way and Andromeda. The dynamical NS-BH binary is characterised by a chirp mass \(\mathcal{M}=3.4\) M\({}_{\odot}\), larger than predicted by the isolated stellar evolution scenario, and preserves an eccentricity of \(e=0.9974(0.21)\) when crossing a frequency of \(f=0.5(1)\) Hz; thus future observations with ET could help probe the population of nearby, dynamically formed, NS-BH mergers. The primordial NS-BH binary is not affected by dynamics at all, thus it can be mistaken for a merger occurring in isolation. This highlights the importance of star clusters with a large binary fraction as contributors to the isolated scenario of compact binary mergers. Neither of the NS-BH mergers is expected to release EM emission, unless the BHs have a spin \(\chi>0.9\);
* We find that comparing the remnant mass and spin of BBH mergers could help untangle their origin. Using a model based on stellar evolution theories, we show that primordial binary mergers are characterised by remnant masses systematically smaller and effective spin parameters systematically larger than dynamical mergers;
* We derive a BBH merger efficiency of \(\sim 2\times 10^{-5}\) M\({}_{\odot}^{-1}\), comparable with the value estimated for low-mass star clusters. Interestingly, we find that the merger efficiency depends on the star cluster properties.

Figure 14: PISNe cosmological rate (blue straight line) and detection rate assuming a magnitude-limited survey at different threshold values, namely \(m_{\rm bol}=17\) (red dotted line), 20 (green dashed-dotted line), and 25 (orange dashed line), assuming a cluster formation efficiency of \(f_{\rm CFE}=1\) and assuming that only clusters with a metallicity \(Z\lesssim 0.008\) can host stars undergoing PISN.

Table 3: Cosmological and detectable PISN occurrence rates. Col. 1: cluster formation efficiency. Cols. 2-5: number of PISNe per year for the cosmological population, and for magnitude-limited surveys with different threshold magnitudes.

| \(f_{\rm CFE}\) | Cosmological | \(m_{\rm bol}=25\) | \(m_{\rm bol}=20\) | \(m_{\rm bol}=17\) |
| --- | --- | --- | --- | --- |
| 0.08 | 24 | 0.7 | 0.001 | \(1.8\times 10^{-5}\) |
| 0.3 | 89 | 2.6 | 0.004 | \(6.8\times 10^{-5}\) |
| 1 | 297 | 8.8 | 0.012 | \(2.3\times 10^{-4}\) |
Decreasing the binary fraction by a factor of 4, for example, leads to a decrease of the merger efficiency by a factor of \(\sim 2\). Moreover, the merger efficiency increases with the cluster density following a power law with slope \(\sim 0.25\). We adopt a series of cosmologically motivated assumptions for the cosmic star formation history, and use them to infer a merger rate density at redshift \(z<0.2\) of \(\mathcal{R}=5-19\ (0.027-8.7)\ (3.8\times 10^{-4}-2.3)\ \rm{yr}^{-1}\ \rm{Gpc}^{-3}\) for BBH (NS-BH) (WD-BH) mergers, respectively. We predict that, in a 4 yr-long mission, LISA could detect \(N_{\rm BBH}=12\pm 7(5\pm 3)\) BBH mergers (IMRIs) and could identify the WD-BH merger with a signal-to-noise ratio SNR\(>8(15)\) if it occurs within \(D_{L}<1.5(0.7)\) Mpc from us.
* We retrieve the cosmic frequency rate of PISNe, in order to explore the reliability of our simulations on the one hand, and to make predictions for the detection of PISNe from star clusters on the other hand. We find that future surveys with a limiting magnitude of \(m_{\rm bol}=25\) could detect \(N_{\rm PISN}=0.7-8.8\) PISNe per year. Comparing these estimates with future surveys could help place constraints on the population of massive stars in dense star clusters.

The Dragon-II clusters represent a further step forward in the modelling of young and intermediate-age star clusters, providing the first suite of simulations that models clusters with \(N>120,000\) stars (up to \(10^{6}\)), a high binary fraction (up to \(33\%\)), and an initial density of \(\rho=(1.2\times 10^{4}-1.6\times 10^{6})\ \rm{M}_{\odot}\ \rm{pc}^{-3}\). These simulations complement the vast literature of \(N\)-body simulations of lower-mass and lower-density star clusters (e.g. Di Carlo et al., 2019; Rizzuto et al., 2021; Rastello et al., 2021; Banerjee, 2021; Rizzuto et al., 2022; Banerjee, 2022), and provide the largest catalogue of BH mergers obtained in direct \(N\)-body simulations of metal-poor, dense and massive young clusters.

## Acknowledgements

The authors thank the referee for their constructive and helpful report. The authors warmly thank Agostino Leveque for their help and assistance in using their implementation of the McLuster code, and Giuliano Iorio, Sara Rastello, and Michela Mapelli for useful comments and discussion. This work benefited from the support of the Volkswagen Foundation Trilateral Partnership through project No. 97778 "Dynamical Mechanisms of Accretion in Galactic Nuclei", the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 138713538 - SFB 881 "The Milky Way System", and the COST Action CA16104 "GWverse". The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Julich Supercomputing Centre (JSC). Data analysis and part of the runs were conducted on the GRACE-BH HPC workstation, funded by the European Union under the research project GRACE-BH. MAS acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101025436 (project GRACE-BH, PI: Manuel Arca Sedda). AWHK is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). The work of PB was supported by the Volkswagen Foundation under the special stipend No. 9BS70.
PB acknowledges support within grant No. AP14869395 of the Science Committee of the Ministry of Science and Higher Education of Kazakhstan ("Triune model of Galactic center dynamical evolution on cosmological time scale"). The work of PB was also supported under the special program of the NRF of Ukraine "Leading and Young Scientists Research Support" - "Astrophysical Relativistic Galactic Objects (ARGO): life cycle of active nucleus", No. 2020.02/0346. RS thanks the Max Planck Institute for Astrophysics (Thorsten Naab) for hospitality during many visits. MG was partially supported by the Polish National Science Center (NCN) through grant No. 2021/41/B/ST9/01191. FPR acknowledges support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930). TN acknowledges the support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311 of the DFG Cluster of Excellence "ORIGINS".

## Data availability

The data from the runs of these simulations and their initial models will be made available upon reasonable request to the corresponding author. The Nbody6++GPU code is publicly available10. The McLuster version used in this work will soon be available. A similar version is described in Leveque et al. (2022).

Footnote 10: [https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing](https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing)
2306.02187
Zeroing the Output of a Nonlinear System Without Relative Degree
The goal of this paper is to establish some facts concerning the problem of zeroing the output of an input-output system that does not have relative degree. The approach taken is to work with systems that have a Chen-Fliess series representation. The main result is that a class of generating series called primely nullable series provides the building blocks for solving this problem using the shuffle algebra. It is shown that the shuffle algebra on the set of generating polynomials is a unique factorization domain so that any polynomial can be uniquely factored modulo a permutation into its irreducible elements for the purpose of identifying nullable factors. This is achieved using the fact that this shuffle algebra is isomorphic to the symmetric algebra over the vector space spanned by Lyndon words. A specific algorithm for factoring generating polynomials into its irreducible factors is presented based on the Chen-Fox-Lyndon factorization of words.
W. Steven Gray, Kurusch Ebrahimi-Fard, Alexander Schmeding
2023-06-03T19:50:22Z
http://arxiv.org/abs/2306.02187v1
# Zeroing the Output of a Nonlinear System Without Relative Degree

###### Abstract

The goal of this paper is to establish some facts concerning the problem of zeroing the output of an input-output system that does not have relative degree. The approach taken is to work with systems that have a Chen-Fliess series representation. The main result is that a class of generating series called _primely nullable series_ provides the building blocks for solving this problem using the shuffle algebra. It is shown that the shuffle algebra on the set of generating polynomials is a unique factorization domain so that any polynomial can be uniquely factored modulo a permutation into its irreducible elements for the purpose of identifying nullable factors. This is achieved using the fact that this shuffle algebra is isomorphic to the symmetric algebra over the \(\mathbb{R}\)-vector space spanned by Lyndon words. A specific algorithm for factoring generating polynomials into its irreducible factors is presented based on the Chen-Fox-Lyndon factorization of words.

keywords: nonlinear control systems, Chen-Fliess series, shuffle algebra

## 1 Introduction

Consider a smooth control-affine state space realization

\[\dot{z}=g_{0}(z)+g_{1}(z)u,\ \ z(0)=z_{0} \tag{1a}\]
\[y=h(z), \tag{1b}\]

where \(g_{0}\), \(g_{1}\), and \(h\) are defined on \(W\subseteq\mathbb{R}^{n}\). If the realization has a well defined relative degree at \(z_{0}\in W\), then it is a classical result that the corresponding input-output map \(F:u\mapsto y\) is left invertible [20; 26]. If the zero output is known to be in the range of \(F\) for some class of inputs \(\mathcal{U}\), then there exists a unique input \(u^{*}\in\mathcal{U}\) satisfying \(F(u^{*})=0\), which can be generated in real time using feedback [20; 26] or computed analytically using formal power series methods [12]. This construction leads to the notion of _zero dynamics_ [14; 20; 21; 26]. When the system fails to have relative degree, however, there appears to be little known about the problem of zeroing the output. Take as a simple example the system

\[\dot{z}_{1}=1-u,\ \ \dot{z}_{2}=z_{3}-u,\ \ \dot{z}_{3}=1,\ \ z(0)=0 \tag{2a}\]
\[y=z_{1}z_{2}. \tag{2b}\]

It is easily verified that this realization does not have relative degree at the origin. Nevertheless, there are two inputs which give the zero output: \(u^{*}(t)=1\), \(t\geq 0\) and \(u^{*}(t)=t\), \(t\geq 0\).

The general goal of this paper is to establish some facts concerning how to zero the output of a system that does not have relative degree at some point of interest. The approach taken here will be to work purely in the input-output setting using Chen-Fliess series representations. One advantage of this point of view is that the nonuniqueness of coordinate systems can be avoided. That is, the generating series for the input-output map of a state space realization is invariant under coordinate transformation. In addition, this framework is more general, as every analytic state space realization (1) has an input-output map with a Chen-Fliess series representation but not conversely. In order to avoid convergence issues associated with such series, the analysis will be done using _formal_ Fliess operators [17], that is, maps that take an infinite jet representing a formal input function to an infinite jet representing a formal output function. In this context, the problem of zeroing the output boils down to a purely algebraic problem.
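As a quick sanity check on this example, the following minimal sympy sketch integrates the state equations (2) in closed form for the two candidate inputs and confirms that both yield \(y\equiv 0\):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
for u in (sp.Integer(1), t):                     # the two nulling inputs
    z3 = sp.integrate(1, (t, 0, t))              # z3' = 1,      z3(0) = 0
    z1 = sp.integrate(1 - u, (t, 0, t))          # z1' = 1 - u,  z1(0) = 0
    z2 = sp.integrate(z3 - u, (t, 0, t))         # z2' = z3 - u, z2(0) = 0
    print(f"u = {u}:  y = z1*z2 = {sp.simplify(z1 * z2)}")
```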
The concept of a _nullable_ generating series is presented first. This is a formal power series representing a formal Fliess operator having the property that the zero output (jet) is in the range of the operator. A generating series is called _strongly nullable_ if there is a nonzero input that maps to the zero output and _primely nullable_ if this input is the only input with this property. These are in some sense the building blocks for constructing other nullable polynomials. A special class of primely nullable series are those having relative degree and one additional property. These will be called _linearly nullable_. While there is no known direct test for general nullability, linearly nullable series can be completely characterized, and in this case the nulling input can be directly computed. It is shown that the shuffle product of two linearly nullable series is always strongly nullable but not linearly nullable. The shuffle product corresponds to the parallel product interconnection of two systems [8].

The focus then turns to an inverse problem, namely, factoring a polynomial into its irreducible elements in the shuffle algebra. It is first established that the shuffle algebra as a commutative polynomial ring over a finite set of noncommuting indeterminates is a unique factorization domain. This is achieved by assembling existing results from algebra [4] and algebraic combinatorics [24]. Of particular importance is the fact that this shuffle algebra can be viewed as the symmetric algebra over the \(\mathbb{R}\)-vector space spanned by Lyndon words [24; 28]. Such factorizations can be used to identify any nullable factors. What is unknown at present is whether every nullable series can be written uniquely as the shuffle product of _primely_ nullable series. Finally, an algorithm is given to factor a polynomial into its irreducible shuffle components. This is done by first mapping the polynomial to the symmetric algebra using the Chen-Fox-Lyndon factorization of words [3; 19; 24; 28; 30]. The resulting polynomial is then factored using one of the many known algorithms for factoring multivariate commutative polynomials [32]. Then each factor is mapped back to the shuffle algebra.

The paper is organized as follows. In the next section, a brief summary is given of the mathematical tools used to establish the main results of the paper. In Section 3, the concept of nullable generating series is presented. The subsequent section addresses the problem of factoring generating series in the shuffle algebra. The final section provides the main conclusions of the paper. It should be stated that a shorter preliminary version of this paper was presented as a conference paper [15].

## 2 Preliminaries

An _alphabet_ \(X=\{x_{0},x_{1},\ldots,x_{m}\}\) is any nonempty and finite set of symbols referred to as _letters_. A _word_ \(\eta=x_{i_{1}}\cdots x_{i_{k}}\) is a finite sequence of letters from \(X\). The number of letters in a word \(\eta\), written as \(|\eta|\), is called its _length_. The empty word, \(\emptyset\), is taken to have length zero. The collection of all words having length \(k\) is denoted by \(X^{k}\). Define \(X^{*}=\bigcup_{k\geq 0}X^{k}\), which is a monoid under the concatenation product. Any mapping \(c:X^{*}\to\mathbb{R}^{\ell}\) is called a _formal power series_. Often \(c\) is written as the formal sum \(c=\sum_{\eta\in X^{*}}(c,\eta)\eta\), where the _coefficient_ \((c,\eta)\in\mathbb{R}^{\ell}\) is the image of \(\eta\in X^{*}\) under \(c\).
The _support_ of \(c\), \(\mathrm{supp}(c)\), is the set of all words having nonzero coefficients. A series \(c\) is called _proper_ if \(\emptyset\not\in\mathrm{supp}(c)\). The _order_ of \(c\), \(\mathrm{ord}(c)\), is the length of the shortest word in its support. By definition the order of the zero series is \(+\infty\). The set of all noncommutative formal power series over the alphabet \(X\) is denoted by \(\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\). The subset of series with finite support, i.e., polynomials, is represented by \(\mathbb{R}^{\ell}\langle X\rangle\). Each set is an associative \(\mathbb{R}\)-algebra under the concatenation product and an associative and commutative \(\mathbb{R}\)-algebra under the _shuffle product_, that is, the bilinear product uniquely specified by the shuffle product of two words

\[(x_{i}\eta)\,\shuffle\,(x_{j}\xi)=x_{i}(\eta\,\shuffle\,(x_{j}\xi))+x_{j}((x_{i}\eta)\,\shuffle\,\xi),\]

where \(x_{i},x_{j}\in X\), \(\eta,\xi\in X^{*}\), and with \(\eta\,\shuffle\,\emptyset=\emptyset\,\shuffle\,\eta=\eta\) [8]. For any letter \(x_{i}\in X\), let \(x_{i}^{-1}\) denote the \(\mathbb{R}\)-linear _left-shift operator_ defined by \(x_{i}^{-1}(\eta)=\eta^{\prime}\) when \(\eta=x_{i}\eta^{\prime}\) and zero otherwise. Higher order shifts are defined inductively via \((x_{i}\xi)^{-1}(\cdot)=\xi^{-1}x_{i}^{-1}(\cdot)\), where \(\xi\in X^{*}\). It acts as a derivation on the shuffle product.

### Chen-Fliess series

Given any \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) one can associate a causal \(m\)-input, \(\ell\)-output operator, \(F_{c}\), in the following manner. Let \(\mathfrak{p}\geq 1\) and \(t_{0}<t_{1}\) be given. For a Lebesgue measurable function \(u:[t_{0},t_{1}]\to\mathbb{R}^{m}\), define \(\left\|u\right\|_{\mathfrak{p}}=\max\{\left\|u_{i}\right\|_{\mathfrak{p}}:\ 1\leq i\leq m\}\), where \(\left\|u_{i}\right\|_{\mathfrak{p}}\) is the usual \(L_{\mathfrak{p}}\)-norm for a measurable real-valued function, \(u_{i}\), defined on \([t_{0},t_{1}]\). Let \(L_{\mathfrak{p}}^{m}[t_{0},t_{1}]\) denote the set of all measurable functions defined on \([t_{0},t_{1}]\) having a finite \(\left\|\cdot\right\|_{\mathfrak{p}}\) norm and \(B_{\mathfrak{p}}^{m}(R)[t_{0},t_{1}]:=\{u\in L_{\mathfrak{p}}^{m}[t_{0},t_{1}]:\left\|u\right\|_{\mathfrak{p}}\leq R\}\). Assume \(C[t_{0},t_{1}]\) is the subset of continuous functions in \(L_{1}^{m}[t_{0},t_{1}]\). Define inductively for each word \(\eta=x_{i}\bar{\eta}\in X^{*}\) the map \(E_{\eta}:L_{1}^{m}[t_{0},t_{1}]\to C[t_{0},t_{1}]\) by setting \(E_{\emptyset}[u]=1\) and letting

\[E_{x_{i}\bar{\eta}}[u](t,t_{0})=\int_{t_{0}}^{t}u_{i}(\tau)E_{\bar{\eta}}[u](\tau,t_{0})\,d\tau,\]

where \(x_{i}\in X\), \(\bar{\eta}\in X^{*}\), and \(u_{0}=1\). The _Chen-Fliess series_ corresponding to \(c\in\mathbb{R}^{\ell}\langle\langle X\rangle\rangle\) is

\[y(t)=F_{c}[u](t)=\sum_{\eta\in X^{*}}(c,\eta)\,E_{\eta}[u](t,t_{0})\]

[8]. If there exist real numbers \(K_{c},M_{c}>0\) such that

\[|(c,\eta)|\leq K_{c}M_{c}^{|\eta|}|\eta|!,\ \ \forall\eta\in X^{*}, \tag{3}\]

then \(F_{c}\) constitutes a well defined mapping from \(B_{\mathfrak{p}}^{m}(R)[t_{0},t_{0}+T]\) into \(B_{\mathfrak{q}}^{\ell}(S)[t_{0},t_{0}+T]\) for sufficiently small \(R,T>0\) and some \(S>0\), where the numbers \(\mathfrak{p},\mathfrak{q}\in[1,\infty]\) are conjugate exponents, i.e., \(1/\mathfrak{p}+1/\mathfrak{q}=1\) [16]. (Here, \(|z|:=\max_{i}|z_{i}|\) when \(z\in\mathbb{R}^{\ell}\).) Any series \(c\) satisfying (3) is called _locally convergent_, and \(F_{c}\) is called a _Fliess operator_. The subset of all locally convergent series is denoted by \(\mathbb{R}^{\ell}_{LC}\langle\langle X\rangle\rangle\).
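Before moving on, the shuffle product defined above can be made concrete with a minimal Python sketch of its recursion, with words over \(X=\{x_{0},x_{1}\}\) encoded as strings of letter indices ('01' stands for \(x_{0}x_{1}\), '' for \(\emptyset\)) and polynomials as dictionaries from words to coefficients; the encoding is our own convention for illustration.

```python
from collections import defaultdict

def shuffle_words(w1, w2):
    """Shuffle of two words, returned as {word: multiplicity}."""
    if not w1: return {w2: 1}
    if not w2: return {w1: 1}
    out = defaultdict(int)
    for w, k in shuffle_words(w1[1:], w2).items():   # first letter from w1
        out[w1[0] + w] += k
    for w, k in shuffle_words(w1, w2[1:]).items():   # first letter from w2
        out[w2[0] + w] += k
    return dict(out)

def shuffle_poly(p, q):
    """Bilinear extension to polynomials (dicts word -> coefficient)."""
    out = defaultdict(int)
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            for w, k in shuffle_words(w1, w2).items():
                out[w] += c1 * c2 * k
    return {w: c for w, c in out.items() if c}

print(shuffle_words('1', '1'))   # {'11': 2},  i.e. x1 sh x1 = 2 x1 x1
print(shuffle_words('0', '1'))   # {'01': 1, '10': 1}
```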
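Similarly, the iterated integrals \(E_{\eta}[u]\) can be approximated numerically by repeated cumulative trapezoidal quadrature; in this illustrative sketch the word is again an index string, read so that its first letter corresponds to the outermost integral, with \(u_{0}=1\).

```python
import numpy as np

def E(eta, u, tgrid):
    vals = np.ones_like(tgrid)                   # E_empty[u] = 1
    for letter in reversed(eta):                 # innermost integral first
        integrand = (u(tgrid) if letter == '1' else 1.0) * vals
        vals = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(tgrid))))
    return vals

tg = np.linspace(0.0, 1.0, 2001)
print(E('0', np.sin, tg)[-1])    # E_{x0}[u](1,0) = 1
print(E('01', np.sin, tg)[-1])   # int_0^1 int_0^tau sin(s) ds dtau = 1 - sin(1)
```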
A Fliess operator \(F_{c}\) defined on \(B_{\mathfrak{p}}^{m}(R)[t_{0},t_{0}+T]\) with \(\ell=1\) is said to be _realizable_ when there exists a state space realization (1) with each \(g_{i}\) being an analytic vector field expressed in local coordinates on some neighborhood \(W\) of \(z_{0}\in\mathbb{R}^{n}\), and the real-valued output function \(h\) is an analytic function on \(W\) such that (1a) has a well defined solution \(z(t)\), \(t\in[t_{0},t_{0}+T]\) for any given input \(u\in B_{\mathfrak{p}}^{m}(R)[t_{0},t_{0}+T]\), and \(y(t)=F_{c}[u](t)=h(z(t))\), \(t\in[t_{0},t_{0}+T]\). Denoting the _Lie derivative_ of \(h\) with respect to \(g_{i}\) by \(L_{g_{i}}h\), it can be shown that for any word \(\eta=x_{i_{k}}\cdots x_{i_{1}}\in X^{*}\)

\[(c,\eta)=L_{g_{\eta}}h(z_{0}):=L_{g_{i_{1}}}\cdots L_{g_{i_{k}}}h(z_{0}) \tag{4}\]

[8; 20; 26].

### System interconnections

Given Fliess operators \(F_{c}\) and \(F_{d}\), where \(c,d\in\mathbb{R}^{\ell}_{LC}\langle\langle X\rangle\rangle\), the parallel and product connections satisfy \(F_{c}+F_{d}=F_{c+d}\) and \(F_{c}F_{d}=F_{c\,\shuffle\,d}\), respectively [8]. It is also known that the composition of two Fliess operators \(F_{c}\) and \(F_{d}\) with \(c\in\mathbb{R}^{\ell}_{LC}\langle\langle X\rangle\rangle\) and \(d\in\mathbb{R}^{m}_{LC}\langle\langle X\rangle\rangle\) always yields another Fliess operator with generating series \(c\circ d\), where the _composition product_ is given by

\[c\circ d=\sum_{\eta\in X^{*}}(c,\eta)\,\psi_{d}(\eta)(\mathbf{1}) \tag{5}\]

[7]. Here \(\psi_{d}\) is the continuous (in the ultrametric sense) algebra homomorphism from \(\mathbb{R}\langle\langle X\rangle\rangle\) to the vector space endomorphisms on \(\mathbb{R}\langle\langle X\rangle\rangle\), \(\operatorname{End}(\mathbb{R}\langle\langle X\rangle\rangle)\), uniquely specified by \(\psi_{d}(x_{i}\eta)=\psi_{d}(x_{i})\circ\psi_{d}(\eta)\) with \(\psi_{d}(x_{i})(e)=x_{0}(d_{i}\,\shuffle\,e)\), \(i=0,1,\ldots,m\) for any \(e\in\mathbb{R}\langle\langle X\rangle\rangle\), and where \(d_{i}\) is the \(i\)-th component series of \(d\) (\(d_{0}:=\mathbf{1}:=1\emptyset\)). By definition, \(\psi_{d}(\emptyset)\) is the identity map on \(\mathbb{R}\langle\langle X\rangle\rangle\). It can be verified directly that

\[x_{j}^{-1}(c\circ d)=\begin{cases}x_{0}^{-1}(c)\circ d+\sum_{i=1}^{m}d_{i}\,\shuffle\,(x_{i}^{-1}(c)\circ d)&:\ j=0\\ 0&:\ j\neq 0.\end{cases} \tag{6}\]

If \(c,d\in\mathbb{R}\langle\langle X\rangle\rangle\) with \(m=\ell=1\) and \(d\) non-proper, then one can define the quotient \(c/d=c\,\shuffle\,d^{\shuffle-1}\) so that \(F_{c}/F_{d}=F_{c/d}\) with the shuffle inverse of \(d\) defined as

\[d^{\shuffle-1}=((d,\emptyset)(\mathbf{1}-d^{\prime}))^{\shuffle-1}=(d,\emptyset)^{-1}(d^{\prime})^{\shuffle*},\]

where \(d^{\prime}=\mathbf{1}-(d/(d,\emptyset))\) is proper and \((d^{\prime})^{\shuffle*}:=\sum_{k\geq 0}(d^{\prime})^{\shuffle k}\) [12]. The following lemma will be useful.
**Lemma 2.1**.: _For any \(c,d,e\in\mathbb{R}\langle\langle X\rangle\rangle\) with \(d\) non-proper, the following identity holds_ \[(c/d)\circ e=(c\circ e)/(d\circ e).\] _Proof:_ It can be shown directly from the definition of the composition product that if \(d\) is non-proper then so is \(d\circ e\). In fact, \((d\circ e,\emptyset)=(d,\emptyset)\neq 0\). Thus, both sides of the equality in question are at least well defined formal power series. In light of the known identity \[(c\shuffle d)\circ e=(c\circ e)\shuffle(d\circ e) \tag{7}\] for any \(c,d,e\in\mathbb{R}\langle\langle X\rangle\rangle\)[9], it is sufficient to show that \[d^{\boldsymbol{\upmu}-1}\circ e=(d\circ e)^{\boldsymbol{\upmu}-1}. \tag{8}\] It is clear via induction that for any \(k\in\mathbb{N}\), \[d^{\boldsymbol{\upmu}k}\circ e=(d\circ e)^{\boldsymbol{\upmu}k}.\] Therefore, since \(d\) is non-proper, it follows that \[d^{\boldsymbol{\upmu}-1}\circ e=(d,\emptyset)^{-1}\lim_{n\to \infty}\sum_{k=0}^{n}(d^{\prime})^{\boldsymbol{\upmu}k}\circ e\] \[=(d\circ e,\emptyset)^{-1}\lim_{n\to\infty}\sum_{k=0}^{n}(d^{ \prime}\circ e)^{\boldsymbol{\upmu}k}\] \[=(d\circ e)^{\boldsymbol{\upmu}-1}.\] As \(d^{\prime}\) and \(d^{\prime}\circ e\) are both proper, all the limits above (in the ultrametric sense) exist, and thus, the claim is verified. \(\blacksquare\) ### Formal Fliss operators Suppose \(X=\{x_{0},x_{1}\}\) and define \(X_{0}=\{x_{0}\}\). Then every series \(c_{u}\in\mathbb{R}[[X_{0}]]\) can be identified with an infinite jet \(j_{l_{0}}^{\infty}(u)\) for any fixed \(t_{0}\in\mathbb{R}\). By Borel's Lemma, there is a real-valued function \(u\in C^{\infty}(t_{0})\) whose Taylor series corresponds to \(j_{l_{0}}^{\infty}(u)\). In the event that the coefficients of \(c_{u}\) satisfy the growth bound (3), then \(u\) is real-analytic. In which case, for any \(c\in\mathbb{R}_{LC}\langle\langle X\rangle\rangle\), \(F_{c_{y}}[v]=y=F_{c}[u]=F_{c}[F_{c_{u}}[v]]=F_{c\circ c_{u}}[v]\), where \(v\) is just a placeholder in this chain of equalities. If the Taylor series for \(u\) does not converge, it is viewed as a formal function. Nevertheless, the mapping \(c\circ:\mathbb{R}[[X_{0}]]\to\mathbb{R}[[X_{0}]]:c_{u}\mapsto c_{y}=c\circ c_{u}\) is still well defined and takes the input infinite jet to the output infinite jet. This is called a _formal Fliss operator_[17]. The advantage of working with these formal objects is that their algebraic properties can be characterized independently of their analytic nature. This will be the approach taken below. ### Relative degree of a generating series Observe that \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) can always be decomposed into its natural and forced components, that is, \(c=c_{N}+c_{F}\), where \(c_{N}:=\sum_{k\geq 0}(c,x_{0}^{k})x_{0}^{k}\) and \(c_{F}:=c-c_{N}\). **Definition 2.1**.: _[_12_]_ _Given \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) with \(X=\{x_{0},x_{1}\}\), let \(r\geq 1\) be the largest integer such that \(\operatorname{supp}(c_{F})\subseteq x_{0}^{r-1}X^{*}\). Then \(c\) has \(\operatorname{relative}\)\(\operatorname{degree}\)\(r\) if the linear word \(x_{0}^{r-1}x_{1}\in\operatorname{supp}(c)\), otherwise it is not well defined._ It is immediate that \(c\) has relative degree \(r\) if and only if there exists some \(e\in\mathbb{R}\langle\langle X\rangle\rangle\) with \(\operatorname{supp}(e)\subseteq X^{*}/\{X_{0}^{*},x_{1}\}\) such that \[c=c_{N}+c_{F}=c_{N}+Kx_{0}^{r-1}x_{1}+x_{0}^{r-1}e \tag{9}\] and \(K\neq 0\). 
This notion of relative degree coincides with the usual definition given in a state space setting [13]. ## 3 Nullable Generating Series It is assumed for the remainder of the paper that all systems are single-input, single-output, i.e., \(m=\ell=1\) so that \(X=\{x_{0},x_{1}\}\) and all series coefficients are real-valued. Consider the following classes of generating series. **Definition 3.1**.: _A series \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) is said to be nullable if the zero series is in the range of the mapping \(c\circ:\mathbb{R}[[X_{0}]]\to\mathbb{R}[[X_{0}]],c_{u}\mapsto c\circ c_{u}\). That is, there exists a nulling series \(c_{u^{*}}\in\mathbb{R}[[X_{0}]]\) such that \(c\circ c_{u^{*}}=0\). The series is strongly nullable if it has a nonzero nulling series. A strongly nullable series is primely nullable if its nulling series is unique._ Observe that from (5) it follows that \((c\circ c_{u},\emptyset)=(c,\emptyset)\) for all \(c_{u}\in\mathbb{R}[[X_{0}]]\). Thus, if \(c\) is nullable, then necessarily \(c\) must be proper. Also, every series \(c=c_{F}\) satisfies \(c\circ 0=0\). Thus, it is nullable. If \(c=c_{N}+c_{F}\) with \(c_{N}\neq 0\), then \(c\circ 0=c_{N}\). Therefore, if \(c\) is nullable, it must be strongly nullable. **Example 3.1**.: Observe that \(c=x_{0}^{2}-x_{1}x_{0}\) is primely nullable since \(c\circ\mathbf{1}=x_{0}^{2}-x_{0}^{2}=0\), and \(c_{u^{*}}=\mathbf{1}\) is the only series with this property. **Example 3.2**.: The polynomial \(c=x_{0}+x_{0}x_{1}\) is not nullable since \(c\circ c_{u}=x_{0}+x_{0}^{2}c_{u}\neq 0\) for all \(c_{u}\in\mathbb{R}[[X_{0}]]\). A sufficient condition for a series to be primely nullable is given in the following theorem. **Theorem 3.1**.: _If \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) has relative degree \(r\), and \(\operatorname{supp}(c_{N})\subseteq x_{0}^{r}X_{0}^{s}\) is nonempty, then \(c\) is primely nullable._ _Proof:_ Since \(c_{N}\neq 0\) by assumption, any nulling series must be nonzero. The claim is that \(c\) has a unique nonzero nulling series. Applying (6) to \(c_{y}=c\circ c_{u}\) with \(m=1\) (let \(d_{1}=d\)) under the assumption that \(c\) has relative degree \(r\) gives \[c_{y} =c\circ c_{u}\] \[x_{0}^{-1}(c_{y}) =x_{0}^{-1}(c)\circ c_{u}\] \[\vdots\] \[x_{0}^{-r+1}(c_{y}) =x_{0}^{-r+1}(c)\circ c_{u}\] \[x_{0}^{-r}(c_{y}) =x_{0}^{-r}(c)\circ c_{u}+c_{u}\mathop{\hbox{\vrule height 6.999 905pt width 0.5pt depth 0.0pt\vrule height 6.999905pt width 0.5pt depth 0. **Example 3.4**.: The polynomial \(c=x_{0}^{2}-x_{1}x_{0}\) in Example 3.1 does not have relative degree. So it is primely nullable but not linearly nullable. **Example 3.5**.: The polynomial \(c=x_{0}+x_{0}x_{1}\) in Example 3.2 has relative degree 2 and was shown not to be nullable. Observe \(c_{N}=x_{0}\not\in x_{0}^{2}X_{0}^{*}\), which is consistent with Theorem 3.1. **Example 3.6**.: Consider the series \(c=\sum_{\eta\in X^{+}}|\eta|!\,\eta\), where \(X^{+}:=X^{*}/\{\emptyset\}\). The series has relative degree 1 and is linearly nullable. In this instance, the corresponding Chen-Fliess series has the closed-form expression \[F_{c}[u]=\frac{F_{x_{0}+x_{1}}[u]}{1-F_{x_{0}+x_{1}}[u]}.\] Therefore, the unique nulling series for \(c\) is \(c_{u^{*}}=-\mathbf{1}\). Let \(c\in\mathbb{R}\langle\langle X\rangle\rangle\) be nullable. Define the (two-sided) principal ideal \[I_{c}=(c):=\{c\boldsymbol{\shuffle}d:d\in\mathbb{R}\langle\langle X\rangle\rangle\}\] in the shuffle algebra on \(\mathbb{R}\langle\langle X\rangle\rangle\). 
**Lemma 3.1**.: _Every series in \(I_{c}\) is nullable. If \(c\) is strongly nullable, then every series in \(I_{c}\) is strongly nullable._ _Proof:_ Applying (7) it follows that \((c\boldsymbol{\shuffle}d)\circ c_{u^{*}}=(c\circ c_{u^{*}})\boldsymbol{ \shuffle}(d\circ c_{u^{*}})=0\) if \(c_{u^{*}}\) is selected so that \(c\circ c_{u^{*}}=0\), which is always possible since \(c\) is nullable by assumption. The second claim is now obvious. The first theorem below is obvious given the definition of primely nullable. The second theorem is less trivial. **Theorem 3.2**.: _If \(c,d\in\mathbb{R}\langle\langle X\rangle\rangle\) are primely nullable with \(c_{u^{*}}\neq d_{u^{*}}\), then \(c\boldsymbol{\shuffle}d\) is strongly nullable but not primely nullable._ **Theorem 3.3**.: _If \(c,d\in\mathbb{R}\langle\langle X\rangle\rangle\) are linearly nullable, then \(c\boldsymbol{\shuffle}d\) is strongly nullable but not linearly nullable._ _Proof:_ The strong nullability property follows directly from the lemma above. Regarding the second assertion, if \(c\boldsymbol{\shuffle}d\) is linearly nullable, then necessarily \(c\boldsymbol{\shuffle}d\) must have relative degree, say \(s\), and \((c\boldsymbol{\shuffle}d)_{N}\in x_{0}^{s}X_{0}^{*}\). Observe that \[c\boldsymbol{\shuffle}d =(x_{0}^{r_{c}}e_{0}+K_{c}x_{0}^{r_{c}-1}x_{1}+x_{0}^{r_{c}-1}e_{ 1})\boldsymbol{\shuffle}\] \[(x_{0}^{r_{d}}f_{0}+K_{d}x_{0}^{r_{d}-1}x_{1}+x_{0}^{r_{d}-1}f_{1})\] has the property that \((c\boldsymbol{\shuffle}d)_{N}\in x_{0}^{r_{c}+r_{d}}X_{0}^{*}\). But the assertion is that \(c\boldsymbol{\shuffle}d\) cannot have relative degree \(r_{c}+r_{d}\). This would require that the shortest linear word in \(\operatorname{supp}(c\boldsymbol{\shuffle}d)_{F}\) are \(x_{0}^{r_{c}+r_{d}-1}x_{1}\) and all other words in \(\operatorname{supp}((c\boldsymbol{\shuffle}d)_{F})\) must have the prefix \(x_{0}^{r_{c}+r_{d}-1}\). This linear word will only be present if \[K_{c}(f_{0},\emptyset)+K_{d}(e_{0},\emptyset)\neq 0. \tag{10}\] This means that at least one of the constant terms \((e_{0},\emptyset)\) or \((f_{0},\emptyset)\) must be nonzero. In addition, note that every word in the support of \[(e_{0},\emptyset)x_{0}^{r_{c}}\boldsymbol{\shuffle}K_{d}x_{0}^{r_{c}-1}x_{1} +(f_{0},\emptyset)x_{0}^{r_{d}}\boldsymbol{\shuffle}K_{c}x_{0}^{r_{c}-1}x_{1}\] has length \(r_{c}+r_{d}\), and these words must have the required prefix \(x_{0}^{r_{c}+r_{d}-1}\) since no other words in the larger shuffle product are short enough to cancel these words. But the only way to remove an illegal word would violate (10). For example, if \(r_{c}=r_{d}=1\), then \[(e_{0},\emptyset)x_{0}\boldsymbol{\shuffle}K_{d}x_{1}+(f_{0}, \emptyset)x_{0}\boldsymbol{\shuffle}K_{c}x_{1}\] \[=K_{d}(e_{0},\emptyset)(x_{0}x_{1}+x_{1}x_{0})+K_{c}(f_{0}, \emptyset)(x_{0}x_{1}+x_{1}x_{0}).\] The illegal word \(x_{1}x_{0}\) cannot be canceled without removing the required linear word \(x_{0}x_{1}\). Thus, \(c\boldsymbol{\shuffle}d\) cannot be linearly nullable. **Example 3.7**.: Suppose \(c=x_{0}-x_{1}\) and \(d=x_{0}^{2}-x_{1}\). Both series are linearly nullable with relative degree 1. The nulling series for \(c\) is \(c_{u^{*}}=\mathbf{1}\), and the nulling series for \(d\) is \(d_{u^{*}}=x_{0}\). Observe \[c\boldsymbol{\shuffle}d=-x_{0}x_{1}-x_{1}x_{0}+2x_{1}^{2}+3x_{0}^{3}-x_{0}^{2}x _{1}-x_{0}x_{1}x_{0}-x_{1}x_{0}^{2}\] does not have relative degree. Therefore \(c\boldsymbol{\shuffle}d\) is strongly nullable, but not linearly nullable and not primely nullable. 
In fact, if the coefficients for the realization (2) are computed from (4), one will find directly that the generating series is the polynomial given above. This is the origin of the example given in the introduction. **Example 3.8**.: Suppose \(c=x_{0}+x_{1}\) and \(d=\mathbf{1}+x_{1}\). In this case, \(c\) is linearly nullable with relative degree 1, and \(d\) also has relative degree 1 but is not nullable as it is not proper. Observe \[c\boldsymbol{\shuffle}d=x_{0}+x_{1}+x_{0}x_{1}+x_{1}x_{0}+2x_{1}^{2}\] is also linearly nullable with relative degree 1. That is, Theorem 3.3 does not preclude the possibility that primely nullable series can have shuffle factors that are not nullable. **Example 3.9**.: Suppose \(c=d=x_{0}-x_{1}\) so that both series are linearly nullable with relative degree 1. As expected, \[c\boldsymbol{\shuffle}d=2x_{0}^{2}-2x_{0}x_{1}-2x_{1}x_{0}-2x_{1}^{2}\] is not linearly nullable as it does not have relative degree, but it is primely nullable since \(c_{u^{*}}=d_{u^{*}}=\mathbf{1}\) is the only nulling series for \(c\boldsymbol{\shuffle}d\) as the shuffle product is an integral domain. That is, in general \((c\boldsymbol{\shuffle}d)\circ e_{u}=(c\boldsymbol{\shuffle}e_{u})\boldsymbol{ \shuffle}(d\circ e_{u})=0\) if and only if at least one argument in the second shuffle product is the zero series. In summary, if \(\mathbb{R}_{p}\langle\langle X\rangle\rangle\) is the set of all proper series in \(\mathbb{R}\langle\langle X\rangle\rangle\), then the following inclusions hold: \(\mathbb{R}_{p}\langle\langle X\rangle\rangle\supset\) nullable series \(\supset\) strongly nullable series \(\supset\) primely nullable series \(\supset\) linearly nullable series. In light of Theorems 3.2 and 3.3, only the set of nullable series and strongly nullable series are closed under the shuffle product. The final example provides an application of nullable series. **Example 3.10**.: In optimal control problems, it is often necessary to determine critical points of quadratic objective functions. From the calculus of variations, this is accomplished by computing the critical points of a variational derivative. For a system described only in terms of a Chen-Fliess series, this would involve determining the critical points of a variational derivative of a Chen-Fliess series. The Frechet derivative of \(F_{c}\), for example, can be computed by introducing a variational alphabet associated with \(X=\{x_{0},x_{1}\}\), say \(\delta X=\{\delta x_{0},\delta x_{1}\}\). Define the mapping \(\delta:X\to\delta X\) by \(\delta(x_{0})=\delta x_{0}=0\) and \(\delta(x_{1})=\delta x_{1}\). Extend the definition of \(\delta\) to \(X^{*}\) by letting it act as a derivation with respect to concatenation. Further extend the definition to \(\mathbb{R}\langle\langle X\rangle\rangle\) by linearity. In which case, the Frechet derivative of \(F_{c}\) at \(u\) is the linear functional \(DF_{c}[u][h]=F_{\delta(c)}[u,h]\)[5]. Consider the simple example where \(c=x_{0}x_{1}+x_{1}x_{0}+x_{1}^{2}\) so that \(\delta(c)=(x_{0}+x_{1})\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule {6.5pt}{0.4pt}\rule[6.5pt]{6.5pt}{0.4pt}}}\,\delta x_{1}\). Identifying \(u\) with \(x_{1}\) and \(h\) with \(\delta x_{1}\) from some admissible set of functions \(\mathcal{U}\), it follows that \[DF_{c}[u][h]=F_{x_{0}+x_{1}}[u]E_{\delta x_{1}}[h],\] where \(DF_{c}[u][\cdot]\) is clearly linear. Critical points in this context are the inputs \(u^{*}\in\mathcal{U}\) such that \(DF_{c}[u^{*}][h]=0\) for all \(h\in\mathcal{U}\). 
Here it is evident since \(x_{0}+x_{1}\) is linearly nullable that \(u^{*}(t)=-1\), \(t\geq 0\). ## 4 Factorizations in the Shuffle Algebra The shuffle product on \(\mathbb{R}\langle X\rangle\) forms a commutative ring. Such structures appear in the following chain of class inclusions: \[\begin{array}{l}\mbox{commutative rings}\supset\mbox{integral domains}\supset\mbox{ integrally closed domains}\supset\mbox{GCD domains}\supset\mbox{unique factorization domains}\supset\mbox{principal ideal domains}\supset\mbox{Euclidean domains}\supset\mbox{fields}\\ \hline\mbox{main}\supset\mbox{Euclidean domains}\supset\mbox{fields}\\ \end{array}\] [1, 23]. The integral domain property of the shuffle algebra was proved in [29, Theorem 3.2]. The following theorem identifies the strongest structure available on this ring. **Theorem 4.1**.: _The shuffle algebra on \(\mathbb{R}\langle X\rangle\) is a unique factorization domain but not a principal ideal domain._ _Proof:_ The claim that the shuffle algebra on \(\mathbb{R}\langle X\rangle\) is a unique factorization domain follows from existing results. It is known from [28] (see also [18, Section 6] and [24, Chapter 5]) that the shuffle algebra on \(\mathbb{R}\langle X\rangle\) is isomorphic to the symmetric algebra on the \(\mathbb{R}\)-vector space \(V\) having basis \(L=\{l_{i}\}_{i\geq 0}\), the set of Lyndon words. The symmetric algebra \(S(V)\) in turn canonically isomorphic to the free polynomial algebra \(\mathbb{R}[L]\). Thus, there exists an \(\mathbb{R}\)-linear map \(\mathscr{L}:\mathbb{R}\langle X\rangle\to\mathbb{R}[L]\) such that \[\mathscr{L}(\eta_{1}\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt} \rule{6.5pt}{0.4pt}\rule[6.5pt]{6.5pt}{0.4pt}}}\,\eta_{2})=\mathscr{L}(\eta_{1} )\mathscr{L}(\eta_{2}),\ \ \forall\eta_{i}\in X^{*} \tag{11}\] with \(\mathscr{L}(\mathbf{1})=\mathbf{1}\), and, in particular, \(\mathscr{L}(l_{i})=l_{i}\). Put another way, the shuffle algebra on \(\mathbb{R}\langle X\rangle\) is freely generated by the set of all Lyndon words [30, Theorem 6.1]. It is shown in [4, Section 4, Corollary 1] that any such polynomial ring is a unique factorization domain (see also [2]). To be a principal ideal domain, it is necessary that every ideal in \(\mathbb{R}\langle X\rangle\) be generated by a single element. The classical argument that this is not the case in the present context goes as follows (e.g., see [25, p. 153]). The assertion is that the set of all proper polynomials in \(\mathbb{R}\langle X\rangle\), \(\mathbb{R}_{p}\langle X\rangle\), is an ideal which is not principal. It is clear that \(\mathbb{R}_{p}\langle X\rangle\) is an ideal. Now suppose \(\mathbb{R}_{p}\langle X\rangle\) has a single generator \(p\) in the shuffle algebra, i.e., \(\mathbb{R}_{p}\langle X\rangle=(p):=\{p\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5 pt}\rule{6.5pt}{0.4pt}\rule[6.5pt]{6.5pt}{0.4pt}}}\,q:q\in\mathbb{R} \langle X\rangle\}\). Since \(x_{0},x_{1}\in\mathbb{R}_{p}\langle X\rangle\), there must exist \(q_{0},q_{1}\in\mathbb{R}\langle X\rangle\) such that \(x_{0}=p\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt}\rule [6.5pt]{6.5pt}{0.4pt}}}\,q_{0}\) and \(x_{1}=p\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt}\rule [6.5pt]{6.5pt}{0.4pt}}}\,q_{1}\). In light of the degrees of \(x_{0}\) and \(x_{1}\), this would require a generator of the form \(p=\alpha_{0}x_{0}+\alpha_{1}x_{1}\), \(\alpha_{1}\in\mathbb{R}\). If \(\alpha_{0}=0\), then \(p\) will generate \(x_{1}\) but not \(x_{2}\). 
Likewise, if \(\alpha_{1}=0\) then \(p\) will generate \(x_{0}\) but not \(x_{1}\). Thus, the ideal \(\mathbb{R}_{p}\langle X\rangle\) has two basis elements, that is, \(\mathbb{R}_{p}\langle X\rangle=(x_{0},x_{1}):=\{x_{0}\,\raisebox{-1.72pt}{\hbox{ \rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt}\rule[6.5pt]{6.5pt}{0.4pt}}}\,q_{0}+x_{1} \,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt}\rule[6.5pt]{6.5 pt}{0.4pt}}}\,q_{1}:q_{0},q_{1}\in\mathbb{R}\langle X\rangle\}\), and thus is not principal. The main theorem of this section is presented next. **Theorem 4.2**.: _Let \(c\in\mathbb{R}\langle X\rangle\) with \(c_{N}\neq 0\) and unique factorization \(c=c_{1}\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt}\rule [6.5pt]{6.5pt}{0.4pt}}}\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt} \rule[6.5pt]{6.5pt}{0.4pt}}}\,\cdots\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt}\rule[6.5pt]{6.5pt}{0.4pt}}}\,c_{n}\) (modulo a permutation), where each \(c_{i}\) is irreducible as a polynomial in the shuffle algebra. Then \(c_{u^{*}}\neq 0\) is a nulling series for \(c\) if and only if it is a nulling series for at least one of the factors \(c_{i}\)._ _Proof:_ If \(c_{u^{*}}\neq 0\) is a nulling series for \(c_{i}\), then directly from Lemma 3.1 it is a nulling series for \(c\). Conversely, if \[c\circ c_{u^{*}}=(c_{1}\circ c_{u^{*}})\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5 pt}\rule{6.5pt}{0.4pt}\rule[6.5pt]{6.5pt}{0.4pt}}}\,(c_{2}\circ c_{u^{*}})\, \raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt}\rule[6.5pt]{6.5 pt}{0.4pt}}}\,\cdots\,\raisebox{-1.72pt}{\hbox{\rule{0.4pt}{6.5pt}\rule{6.5pt}{0.4pt} \rule[6.5pt]{6.5pt}{0.4pt}}}\,(c_{n}\circ c_{u^{*}})=0\] for some \(c_{u^{*}}\neq 0\), then since the shuffle algebra is an integral domain, at least one series \(c_{i}\circ c_{u^{*}}\) must be the zero series, and the theorem is proved. It is important to point out what the theorem above is not saying, namely, that every nullable series can be factored into a shuffle product of _primely_ nullable series. While it is easy to demonstrate that a primely nullable series need not be irreducible (Examples 3.8 and 4.3 corresponding lexicographical ordering on \(X^{+}\). Recall that a word \(\eta\in X^{+}\) is called a _Lyndon word_ if all factorizations \(\eta=\xi\nu\) with \(\xi,\nu\in X^{+}\) have the property that \(\eta<\nu\xi\). In this case, the first few Lyndon words are \(L=\{l_{i}\}_{i\geq 0}=\{x_{0},x_{1},x_{0}x_{1},x_{0}x_{1},x_{0}x_{1}^{2},x_{0}x_{1} ^{3},x_{0}x_{1},x_{0}^{2}x_{1}^{2},x_{0}x_{1}^{3},\ldots\}\), where here the ordering is by increasing word length and then lexicographically among words of the same length.1 The Chen-Fox-Lyndon factorization of a word \(\eta\in X^{+}\) is a unique non-increasing product of Lyndon words so that Footnote 1: This ordering is only for convenience in displaying results and does not play any mathematical role in this presentation. \[\eta=l_{i_{1}}l_{i_{2}}\cdots l_{i_{n}},\,\,\,l_{i_{1}}\geq l_{i_{2}}\geq\cdots \geq l_{i_{n}}\] [3, 19, 24, 30]. 
A consequence of this factorization is that \[l_{i_{1}}\,\hbox{\hbox{\vrule height 6.998933pt width 0.4pt depth 0.0pt \vrule height 0.0pt width 4.3pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.98933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 
0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.98933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.99893pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.998933pt width 0.4pt depth 0.0pt\vrule height 6.98933pt width 0.4pt depth 0.0pt\vrule height which gives \[c_{u_{2}^{*}}=-\mathbf{1}+x_{0}^{2}-3x_{0}^{4}+15x_{0}^{6}-105x_{0}^{8}+\cdots.\] To empirically verify that \(c\circ c_{u_{1}^{*}}=0\), it is necessary to truncate \(c_{u_{1}^{*}}\). This means that \(c\circ c_{u_{1}^{*}}\) will not be exactly zero, but instead zero up to some word length depending on the number of terms retained in \(c_{u_{1}^{*}}\). For example, truncating both \(c_{u_{1}^{*}}\) to words of maximum length six gives \[c\circ c_{u_{1}^{*}} =87380x_{0}^{10}+2946560x_{0}^{12}+153856528x_{0}^{14}+O(x_{0}^{16})\] \[c\circ c_{u_{2}^{*}} =2100x_{0}^{10}-840840x_{0}^{14}+57657600x_{0}^{16}-O(x_{0}^{18}).\] **Example 4.3**.: Reconsider Example 3.8 where \(c_{L}=l_{0}+l_{1}\) and \(d_{L}=\mathbf{1}+l_{1}\). As observed earlier, \(c\shuffle d\) is primely nullable but not linearly nullable. Clearly \((c\shuffle d)_{L}=c_{L}d_{L}\) is reducible with one linearly nullable factor \(c_{L}\). 
**Example 4.4**.: Recall that for polynomials in one variable, the class of irreducible polynomials depends on the base field. For example, over the real field, the irreducible polynomials are either of degree 1 or degree 2 (e.g., \(x_{0}^{2}+1\)). Over the complex field, there are only degree 1 irreducibles [22, Chapter IV.1]. However, in every multivariate polynomial ring there are irreducible elements of higher degree. Consider the polynomial \(c=6x_{1}^{3}-2x_{1}x_{0}^{2}-2x_{0}x_{1}x_{0}-2x_{0}^{2}x_{1}-24x_{0}^{4}\in \mathbb{R}_{p}\langle X\rangle\). It does not have relative degree, and thus, it is not linearly nullable. There is at present no direct test for any other form of nullability. In the Lyndon basis, it follows that \(c_{L}=l_{1}^{3}-l_{0}^{2}l_{1}-l_{0}^{4}\in\mathbb{R}[L]\). Now if \(c_{L}\) is reducible, one could write \[c_{L}=(l_{1}-p_{1}(l_{0}))(l_{1}^{2}+p_{2}(l_{0})l_{1}+p_{3}(l_{0})) \tag{14}\] for some polynomials \(p_{i}(l_{0})\). Since \(l_{0}^{4}=p_{1}(l_{0})p_{3}(l_{0})\), necessarily \(p_{1}(l_{0})=al_{0}^{n}\) and \(p_{3}(l_{0})=bl_{0}^{4-n}\) for some \(n\in\{0,1,2,3,4\}\) and \(a,b\in\mathbb{R}\) with \(ab=1\). Substituting these forms into (14) shows directly that there are no values of \(n\) that can yield \(c_{L}\). Thus, \(c_{L}\) is an irreducible multivariate polynomial of degree 4 as an element in \(\mathbb{R}[L]\). ## 5 Conclusions Working entirely in a Chen-Fliess series setting, it was shown that a class of generating series called primely nullable series provides building blocks in the shuffle algebra for the problem of zeroing the output. Next, it was shown that the shuffle algebra on \(\mathbb{R}\langle X\rangle\) is a unique factorization domain so that any polynomial can be uniquely factored into its irreducible elements for the purpose of identifying nullable factors. This factorization is done by viewing this shuffle algebra as the symmetric algebra over the vector space spanned by Lyndon words. A specific polynomial factorization algorithm was given based on the Chen-Fox-Lyndon factorization of words. ## Acknowledgments KEF and AS were supported by the Research Council of Norway through project 302831 Computational Dynamics and Stochastics on Manifolds (CODYSMA).
2301.06346
Truncated Wigner approximation for the bosonic model of large spin baths
The central spin model has a wide applicability, it is ideally suited to describe a small quantum system, for instance a quantum bit, in contact to a bath of spins, e.g., nuclear spins, or other small quantum systems in general. According to previous work~[R\"ohrig \textit{et al.}, Phys. Rev. B {\bf 97}, 165431 (2018)], a large bath of quantum spins can be described as a bath of quantum harmonic oscillators. But the resulting quantum model is still far from being straightforward solvable. Hence we consider a chain representation for the bosonic degrees of freedom to study how well a truncated Wigner approximation of the effective model of harmonic oscillators works in comparison with other approximate and exact methods. Numerically, we examine the effect of the number of bath spins and of the truncation level, i.e., the chain length.
Mohsen Yarmohammadi, Katrin Bolsmann, Yvonne Ribbeheger, Timo Gräßer, Götz S. Uhrig
2023-01-16T10:42:20Z
http://arxiv.org/abs/2301.06346v1
# Truncated Wigner approximation for the bosonic model of large spin baths ###### Abstract The central spin model has a wide applicability, it is ideally suited to describe a small quantum system, for instance a quantum bit, in contact to a bath of spins, e.g., nuclear spins, or other small quantum systems in general. According to previous work [Rohrig _et al._, Phys. Rev. B **97**, 165431 (2018)], a large bath of quantum spins can be described as a bath of quantum harmonic oscillators. But the resulting quantum model is still far from being straightforward solvable. Hence we consider a chain representation for the bosonic degrees of freedom to study how well a truncated Wigner approximation of the effective model of harmonic oscillators works in comparison with other approximate and exact methods. Numerically, we examine the effect of the number of bath spins and of the truncation level, i.e., the chain length. ## I Introduction The central spin model (CSM) is a well-known model describing the interaction of a single "central" spin with surrounding spins [1; 2], for instance, the interaction of the spin of a localized electron with nuclear spins in quantum dots [3; 4; 5]. In view of the intense search for physical realizations of quantum bits [6], a localized electron in a quantum dot can be seen as a two-level system and thus as a promising candidate for quantum bits [7; 8; 9]. The CSM is a quantum many-body system and major progress has been made to understand its properties in its applications for phenomena in material science and quantum information technology [10; 11; 12; 13]. Polarization recovery in a longitudinal field [14; 15], nuclei-induced frequency focusing [16; 17; 18], spin precession mode locking [19; 20], the effect of spin inertia [21; 22], spin noise [23; 24; 25; 26; 27], and many other effects belong to the particularly rich physics of the CSM. Furthermore, the CSM is also used to understand the dynamics of quantum sensors [28] which helps to reach high sensitivities. For a finite, not too large number of bath spins [29], it is possible to use the Bethe ansatz [1; 30; 31] to diagonalize the CSM Hamiltonian and to analyze rigorous restrictions of the central spin dynamics stemming from conserved quantities [32; 33]. If all couplings are equal the CSM reduces to the so called _box model_ allowing one to compute the spin dynamics for large spin baths essentially analytically [34; 35; 36]. However, the complexity of the CSM in practical applications is related mainly to the electron spin decoherence when interacting with an (almost) infinite number of nuclei spins [37; 38; 39; 40; 41; 42]. In this scenario, the initial polarization and information on the spin state is quickly and irreversibly lost. To describe this decoherence of the central spin and to conceive strategies against it, various approaches have been conceived. Density-matrix renormalization group (DMRG) deals with up to 1000 spins, but only up to relatively short times [43; 44] due to the fast growth of entanglement. The linked-cluster and cluster-correlation expansions [45; 46; 47; 48] investigate the long-time spin decoherence, but of finite, relatively small spin baths. Moreover, considering the nuclear-electric quadrupolar interactions for a few spins, the spin-noise spectrum at various timescales has been calculated using Chebyshev polynomials [49; 50; 27]. 
Furthermore, a coherent interface between electron and nuclear spins was recently developed [51] with the vision to realize long-lived quantum memory. Although a classical description of CSM with a large-enough number of nuclear spins can be justified over a long time, it neglects all quantum mechanical aspects [43; 44] which are vital for quantum bits. This originates from the fact that the central spin is a truly quantum mechanical object and its back-action on the bath spins is not classical. The truncated Wigner approximation (TWA) [52] is a general semi-classical approach in which quantum fluctuations are partly taken into account through random initial conditions for the classical equations of motion. Although the equations of motion themselves are still purely classical, correlations and the probabilities of quantum measurements can be simulated to a certain degree. The TWA has often been used to simulate the dynamics of the CSM [10; 53; 54]. The spins are taken as classical vectors precessing around local classical fields. We abbreviate this semi-classical approach to spins sTWA. It can be implemented for moderate numbers of spins (\(N\approx 200\)) if one has to simulate long times. Experimentally, the bath sizes range from \(10^{4}\) to \(10^{6}\) still exceeding numerical resources by far even though a hierarchical chain representation based on generalized Overhauser fields helps to reconcile large spin baths and long-time simulations [54]. In this framework, a fully quantum mechanical approach [55] based on iterated equations of motion (iEoM) has been suggested for large spin baths. The asset of this approach is that it is particularly suited to capture very large or even infinitely large spin baths. The bath of spins is mapped to a bath of hierarchically coupled bosons and the central spin is mapped to a four-dimensional impurity. But the fully quantum mechanical evaluation of the dynamics of the effective bosonic model for long times represents still a tremendous challenge. Hence, it is interesting to study approximate ways to treat this effective bosonic model. In this work, we study the application of the TWA to the mapped effective bosonic model resulting from iEoM [55], i.e., to the harmonic oscillators. The impurity is described by two spins with \(S=1/2\) which, in turn, are treated as classical vectors. In order to distinguish this TWA from the one resulting from the classical treatment of the spins we call it bosonic TWA (bTWA). Clearly, the bTWA would remove the restrictions on the maximum number of bosonic modes which can be simulated. The immediate aim is to describe the experimental spin noise spectra [56; 26; 57]. To benchmark the bTWA, we compare our data to data from some of the above-mentioned techniques under the same conditions. This paper is organized as follows. In Sec. II, we review the CSM and in Sec. III, we present its bosonic formulation. In Sec. IV, we present our results and compare them with results from other techniques. Finally, the paper is summarized in Sec. V. ## II Initial model In this section, we briefly introduce the CSM. For our proof-of-principle study, we restrict ourselves to the paradigmatic isotropic version of the CSM. This implies that we neglect dipole-dipole interaction [38; 58], quadrupole-pair couplings [59; 60; 61; 62], and spin-orbit couplings [63; 64; 65; 66; 67] of the nuclear spins which usually become relevant on very long timescales. 
We start with the CSM comprising a central spin \(\hat{\vec{S}}_{0}\) with \(S=1/2\) interacting through the hyperfine coupling with a bath of \(N\) spins \(\hat{\vec{S}}_{i}\). The Hamiltonian reads \[\hat{\mathcal{H}}=\sum_{i=1}^{N}\,J_{i}\,\hat{\vec{S}}_{0}\cdot\hat{\vec{S}}_ {i}\,, \tag{1}\] where \(J_{i}\) denotes the hyperfine coupling of the \(i\)-th spin in the bath. In electronic quantum dots, the coupling constants \(J_{i}\) are proportional to the probability that the electron is present at the site of the nucleus \(i\)[58; 38] which is given by the modulus squared of the electronic wave function. It is convenient to define a composite field for the effect of the bath spins, \(\hat{\vec{B}}=\sum_{i=1}^{N}\,J_{i}\,\hat{\vec{S}}_{i}\), which is called the Overhauser field. With its help, the Hamiltonian can simply be rewritten as \(\hat{\mathcal{H}}=\hat{\vec{S}}_{0}\cdot\hat{\vec{B}}\). Let us consider an infinite spin bath (\(N\to\infty\)) with decreasing couplings. We consider the generic parametrization \(J_{i}=C\exp(-i\gamma)\)[31; 33; 54; 68; 30; 34] with \(i\in[1,N]\), where the prefactor \(C\) sets the energy scale. For \(\gamma>0\), the exponential term is decreasing with \(i\). The meaning of \(\gamma\) is elucidated by the following argument. Even if \(N\to\infty\), there is only a finite number of bath spins which is appreciably coupled to the central spin. We denote this number by \(N_{\rm eff}\) and define it by the ratio of the squared sum of all couplings and the sum of all squared couplings [38; 43; 44; 54; 58; 69], i.e., \(N_{\rm eff}:=\Big{(}\sum_{i=1}^{N}J_{i}\Big{)}^{2}/J_{\rm Q}^{2}\), where \(J_{\rm Q}^{2}:=\sum_{i=1}^{N}J_{i}^{2}\). Inserting our parametrization \(J_{i}\) into \(N_{\rm eff}\) in the limit \(N\to\infty\), we find for small values \(\gamma\) \[N_{\rm eff}=\frac{2}{\gamma}+\mathcal{O}(\gamma)\,. \tag{2}\] So \(\gamma\) is about twice the inverse number of effectively coupled spins. The electron spin in quantum dots is coupled to a very large number of bath spins [70; 71; 38], \(N_{\rm eff}\approx 10^{4}\) to \(10^{6}\), so, \(\gamma\approx 10^{-4}\) to \(10^{-6}\) is a realistic estimate. Moreover, we set the energy scale for all simulations by requiring \(J_{\rm Q}=1\). This results in \(C\simeq\sqrt{2\gamma}\approx 10^{-2}\) to \(10^{-3}\), which is a very small number implying that the contribution of an individual bath spin is negligible. Only suitable sums over all spins have a sizable impact. In contrast, for large \(\gamma\), we deal with a small number of bath spins, see Eq. (2), and the dynamics of the central spin can be determined using fully quantum mechanical descriptions [48; 49; 27]. ## III Effective model and semi-classical approach In what follows, we sketch the mapping of the spin bath on a bosonic bath (iEoM [55]). Then, we introduce the semi-classical approach bTWA based on a hierarchical chain representation to describe the _long-time_ spin dynamics. ### Objective We begin with the application of the Heisenberg equation of motion to the CSM, \(\partial_{t}\hat{\mathcal{A}}=\texttt{i}[\hat{\mathcal{H}},\hat{\mathcal{A}}]\) (throughout the present work, \(\hbar\) is set to unity), where \(\hat{\mathcal{A}}\) are operators of the CSM forming a suitable operator basis for the products of all components of spin operators at all sites [55]. 
In the end, we are interested in the \(\alpha\) component of the spin-spin autocorrelation function of the central spin at infinite temperature \[S^{\alpha}(t)=\langle\hat{S}_{0}^{\alpha}(t)\hat{S}_{0}^{\alpha}(0)\rangle\,, \tag{3}\] for small values of the parameter \(\gamma\) corresponding to large spin baths. In particular, the long-term behavior of \(S^{z}(t)\) provides information about the fate of state with the central spin aligned along the \(z\)-axis initially, i.e., at \(t=0\). Assuming infinite temperature is well justified because the thermal energy in the bath at temperatures of a few Kelvin is at least one order of magnitude larger than the individual couplings in a quantum dot [72]. For motivation, we provide the autocorrelation if a con stant external or internal magnetic field is applied [38; 43] \[\hat{\vec{S}}_{0}(t) = \vec{n}\big{[}\vec{n}\cdot\hat{\vec{S}}_{0}(0)\big{]}+\big{\{}\hat{ \vec{S}}_{0}(0)-\vec{n}\big{[}\vec{n}\cdot\hat{\vec{S}}_{0}(0)\big{]}\big{\}} \cos(Bt) \tag{4}\] \[-\,\big{[}\vec{n}\times\hat{\vec{S}}_{0}(0)\big{]}\sin(Bt)\,,\] where \(\vec{n}\) points in the direction of the magnetic field. This formula is identical to the classical one since \(\vec{B}\) is a classical vector and the equations of motion are linear in the spin operators. Obviously, powers of \(B\) up to infinite order occur so that a suitable operator basis needs operators including high powers of the Overhauser field if we want to capture its intrinsic quantum character and the ensuing dynamics. If one neglects the dynamics of the Overhauser field completely, the frozen Overhauser field approximation is retrieved for which one averages over all random directions and random strengths of the Overhauser field [38; 43] yielding \[S^{\alpha}(t)=\frac{1}{12}\left[2e^{-J_{0}^{2}t^{2}/8}(1-J_{0}^{2}t^{2}/4)+1 \right]\,. \tag{5}\] This analytic result is convenient as reference, see the figures below. ### Effective model with higher powers of the Overhauser field The orthogonal Hermite polynomials of the Overhauser field and similar composite weighted sums of the bath spin have been introduced by Rohrig _et al._[55] as suitable operator basis. These polynomials are orthogonal for a Gaussian weight function [73], and can be applied to different components of generalized Overhauser fields by the recursive relation \[G_{j}^{\alpha}H_{n}(G_{j}^{\alpha})=\sqrt{n}H_{n-1}(G_{j}^{\alpha})+\sqrt{n+1}H _{n+1}(G_{j}^{\alpha}), \tag{6}\] where \(\alpha=\{x,y,z\}\) and \(H_{0}(G_{j}^{\alpha})=1\) by definition. The polynomials \(H_{n}(G_{j}^{\alpha})\) are the Hermite polynomials of degree \(n\) in the generalized Overhauser field vectors \(\vec{G}_{j}\). These fields are defined by \[G_{j}^{\alpha}:=2\sum_{i=1}^{N}\mathcal{P}_{j}(J_{i})\hat{S}_{i}^{\alpha}\,, \tag{7}\] where the real orthogonal polynomials \(\mathcal{P}_{j}(x)\) are defined such that they comply with the orthogonality relation [54; 55] \[\delta_{j,m}=\sum_{i=1}^{N}\mathcal{P}_{j}(J_{i})\mathcal{P}_{m}(J_{i})\,. \tag{8}\] The polynomials \(\mathcal{P}_{j}(J_{i})\) describe the weight of each bath spin \(\vec{S}_{i}\). The established EoM for this basis of operators tells us that a single \(H_{n}(G_{j}^{\alpha})\) is transformed into the terms \(\sqrt{n}H_{n-1}(G_{j}^{\alpha})\) and \(\sqrt{n+1}H_{n+1}(G_{j}^{\alpha})\). This is identical to the effect of an annihilation (\(\hat{a}\)) and creation (\(\hat{a}^{\dagger}\)) bosonic operator, respectively, applied to the eigenstates \(|n\rangle\) of an harmonic oscillator. 
Eventually, a quantum mechanical representation of large spin baths by means of the iEoM for the generalized Overhauser fields including an external magnetic field has been obtained and developed, see Ref. [55] for further details. It is shown that in the limit \(N\to\infty\) the isotropic CSM can be mapped onto a four-dimensional impurity coupled to a non-interacting bosonic bath yielding the effective Hamiltonian \(\hat{\mathcal{H}}_{\text{eff}}=\hat{\mathcal{H}}_{\text{eff}}^{\text{CS}}+ \hat{\mathcal{H}}_{\text{eff}}^{\text{ch}}+\hat{\mathcal{H}}_{\text{eff}}^{2}\) in the presence of an external Zeeman magnetic field \(h\) along the \(z\)-direction. It is given by \[\hat{\mathcal{H}}_{\text{eff}}^{\text{CS}} =\frac{1}{2}\sum_{\alpha=1}^{3}\hat{K}_{\alpha}\big{(}\hat{a}_{1, \alpha}^{\dagger}+\hat{a}_{1,\alpha}\big{)}\,, \tag{9a}\] \[\hat{\mathcal{H}}_{\text{eff}}^{\text{ch}} =\frac{\text{i}}{2}\sum_{j=1}^{N_{\text{tr}}}\sum_{\alpha,\beta, \delta=1}^{3}\epsilon_{\alpha\beta\delta}\hat{M}_{\beta}\big{[}\chi_{j}\hat{a }_{j,\delta}^{\dagger}\hat{a}_{j,\alpha}\] \[\qquad\qquad\qquad\qquad+\eta_{j}(\hat{a}_{j+1,\delta}^{\dagger} \hat{a}_{j,\alpha}-\hat{a}_{j,\alpha}^{\dagger}\hat{a}_{j+1,\delta})\big{]}\,,\] (9b) \[\hat{\mathcal{H}}_{\text{eff}}^{\text{Z}} =\,-\,h\hat{K}_{z}\,, \tag{9c}\] where \(\hat{\mathcal{H}}_{\text{eff}}^{\text{CS}}\) refers to the central spin located at the head of a bosonic chain, whereas \(\hat{\mathcal{H}}_{\text{eff}}^{\text{CD}}\) acts on a bosonic chain with flavors \(\alpha\) as depicted in Fig. 1. In the above equations, \(\epsilon_{\alpha\beta\delta}\) is the Levi-Civita tensor. The couplings \(\eta_{j}\) and \(\chi_{j}\) result from the recursion of the orthogonal polynomials \(\mathcal{P}_{j}\) which can be expressed in the matrix form \[\hat{\mathcal{T}}=\begin{pmatrix}\chi_{1}&\eta_{1}&0&0&\cdots\\ \eta_{1}&\chi_{2}&\eta_{2}&0&\cdots\\ 0&\eta_{2}&\chi_{3}&\eta_{3}&\cdots\\ \vdots&\vdots&\ddots&\ddots&\ddots\end{pmatrix}\,, \tag{10}\] with \(J_{i}\underline{\mathcal{P}}_{j}(J_{i})=\hat{\mathcal{T}}\underline{\mathcal{P }}_{j}(J_{i})\) using the vector of polynomials \(\underline{\mathcal{P}}_{j}(J_{i})=\left[\mathcal{P}_{1}(J_{i}),\mathcal{P}_{2 }(J_{i}),\cdots\mathcal{P}_{n}(J_{i})\right]^{\top}\). By definition, we have \(\eta_{0}=0\). While the chain is half-infinite for an infinite bath, it is truncated at \(j_{\text{max}}\) in practical calculations [54; 55] so that we also have \(\eta_{j_{\text{max}}}=0\). (In Ref. [54], the truncation level was denoted by \(N_{\text{tr}}=j_{\text{max}}\).) The commutation and anticommutation of the operators of the central spin with \(\hat{\sigma}_{\alpha}\) (Pauli matrices) in the chain are expressed by the matrices \(\hat{K}_{\alpha}\) and \(\hat{M}_{\alpha}\), respectively, with matrix elements \(\langle\langle n|\hat{K}_{\alpha}|m\rangle\rangle=\frac{1}{2}\langle\langle \hat{\sigma}_{n}|[\hat{\sigma}_{\alpha},\hat{\sigma}_{m}]\rangle\rangle\) and \(\langle\langle n|\hat{M}_{\alpha}|m\rangle\rangle=\frac{1}{2}\langle\langle \hat{\sigma}_{n}|[\hat{\sigma}_{\alpha},\hat{\sigma}_{m}]\rangle\rangle\) for \(\{m,n\}\in\{x,y,z\}\). The notation \(\langle\langle\dots\rangle\rangle\) is used for the scalar product of operators for which we use \(\langle\langle\hat{A}|\hat{B}\rangle\rangle:=\langle\hat{A}^{\dagger}\hat{B} \rangle_{T=\infty}\), i.e., the expectation value at infinite temperature. 
Straightforwardly, we find \[\hat{K}_{\alpha}=\text{i}\begin{pmatrix}0&0&0&0\\ 0&0&\delta_{\alpha,z}&-\delta_{\alpha,y}\\ 0&-\delta_{\alpha,z}&0&\delta_{\alpha,x}\\ 0&\delta_{\alpha,y}&-\delta_{\alpha,x}&0\end{pmatrix}\,, \tag{11a}\] \[\hat{M}_{\alpha} =\begin{pmatrix}0&\delta_{\alpha,x}&\delta_{\alpha,y}&\delta_{ \alpha,z}\\ \delta_{\alpha,x}&0&0&0\\ \delta_{\alpha,y}&0&0&0\\ \delta_{\alpha,z}&0&0&0\end{pmatrix}\,. \tag{11b}\] We emphasize that the chain Hamiltonian \(\hat{\mathcal{H}}_{\text{eff}}^{\text{ch}}\) induces only a slow dynamics because the coupling between the head of the chain and its next site is \(J_{\text{Q}}=1\), while the coupling between different chain sites as well as the hopping processes between different flavors at each site is of order \(\sqrt{\gamma}J_{\text{Q}}\approx 10^{-2}\) to \(10^{-3}\). Therefore, the quantum effects such as the dynamics in the bath and eventually dephasing and relaxation of the polarization of the central spin due to the presence of the bath of spins is slow. Finally, we state that the autocorrelation expressed by the derived effective model reads \[S^{\alpha}(t)=\frac{1}{4}\langle e_{\alpha},\mathbf{0}|e^{-\mathbf{i}\hat{ \mathcal{H}}_{\text{eff}}}|e_{\alpha},\mathbf{0}\rangle\,, \tag{12}\] with \(e_{\alpha}=(0,\delta_{\alpha,x},\delta_{\alpha,y},\delta_{\alpha,z})^{\top}\) and \(\mathbf{0}\) being the vacuum of all bosons. The autocorrelation (12) can be reformulated with the help of the matrix \(\hat{M}_{\alpha}\) and \(e_{1}=(1,0,0,0)^{\top}\) \[S^{\alpha}(t) =\frac{1}{4}\langle e_{1},\mathbf{0}|\hat{M}_{\alpha}e^{-\mathbf{ i}\hat{\mathcal{H}}_{\text{eff}}}\hat{M}_{\alpha}|e_{1},\mathbf{0}\rangle\,, \tag{13a}\] \[=\frac{1}{4}\langle e_{1},\mathbf{0}|e^{i\hat{\mathcal{H}}_{ \text{eff}}}\hat{M}_{\alpha}e^{-i\hat{\mathcal{H}}_{\text{eff}}}\hat{M}_{ \alpha}|e_{1},\mathbf{0}\rangle\,,\] (13b) \[=\frac{1}{4}\langle e_{1},\mathbf{0}|\hat{M}_{\alpha}(t)\hat{M}_{ \alpha}(0)|e_{1},\mathbf{0}\rangle\,, \tag{13c}\] where we used the fact that \(\hat{\mathcal{H}}_{\text{eff}}|e_{1},\mathbf{0}\rangle=0\) since \(\hat{K}_{\alpha}e_{1}=0\) and all bosonic terms in the chain part annihilate the bosonic vacua. ### The bosonic truncated Wigner approximation In order to apply a TWA to the effective model defined in the previous section we need to represent the four-dimensional impurity by objects which have classical counterparts. Here we choose two spins with \(S=1/2\) which together span a four dimensional Hilbert space. We denote their singlet state by \(\ket{s}\) and their three triplet states by \(\ket{t_{\alpha}}\) for \(\alpha\in\{x,y,z\}\), identified with the four-dimensional Cartesian vectors \(\ket{s}=\begin{pmatrix}1&0&0&0\end{pmatrix}^{\top}\) and \(\ket{t_{\alpha}}=\begin{pmatrix}0&\delta_{\alpha x}&\delta_{\alpha y}&\delta_ {\alpha z}\end{pmatrix}^{\top}\). Elementary linear algebra [74] yields the action of the spin operators on these states \[\hat{S}_{\nu,\alpha}\ket{s}= -\frac{(-1)^{\nu}}{2}\sum_{\beta}\delta_{\alpha\beta}\ket{t_{ \beta}}\,, \tag{14a}\] \[\hat{S}_{\nu,\alpha}\ket{t_{\beta}}= -\frac{1}{2}\big{[}2(-1)^{\nu}\delta_{\alpha\beta}\ket{s}- \mathbf{i}\sum_{\delta}\epsilon_{\alpha\beta\delta}\ket{t_{\delta}}\big{]}\,, \tag{14b}\] where \(\nu=\{1,2\}\) labels the spin \(\hat{S}_{1}\) and \(\hat{S}_{2}\), respectively. With these definitions, the matrices \(\hat{K}\) and \(\hat{M}\) in Eqs. 
(11a) and (11b) can be expressed in terms of these spin operators \[\hat{K}_{\alpha}= -\left(\hat{S}_{1,\alpha}+\hat{S}_{2,\alpha}\right), \tag{15a}\] \[\hat{M}_{\alpha}= \hat{S}_{1,\alpha}-\hat{S}_{2,\alpha}\,. \tag{15b}\] The annihilation and creation operators of the harmonic oscillators can be expressed by position and momentum operators in the standard way \[\hat{r}_{j,\alpha}= \frac{1}{\sqrt{2}}(\hat{a}_{j,\alpha}^{\dagger}+\hat{a}_{j, \alpha})\,, \tag{16a}\] \[\hat{p}_{j,\alpha}= \frac{\mathbf{i}}{\sqrt{2}}(\hat{a}_{j,\alpha}^{\dagger}-\hat{a}_{j,\alpha})\,. \tag{16b}\] With these relations, the Hamiltonian in Eq. (9) can be rewritten into \[\hat{\mathcal{H}}_{\text{eff}}^{\text{CS}}= -\frac{1}{\sqrt{2}}(\hat{\bar{S}}_{1}+\hat{\bar{S}}_{2})\cdot\hat{ \bar{r}}_{1}\,, \tag{17a}\] \[\hat{\mathcal{H}}_{\text{eff}}^{\text{ch}}= \frac{1}{2}\sum_{j=1}^{N_{\text{tr}}}(\hat{\bar{S}}_{2}-\hat{\bar {S}}_{1})\cdot[(\chi_{j}\hat{\bar{r}}_{j}+\eta_{j-1}\hat{\bar{r}}_{j-1}+\eta_{j} \hat{\bar{r}}_{j+1})\times\hat{\bar{p}}_{j}],\] (17b) \[\hat{\mathcal{H}}_{\text{eff}}^{\text{Z}}= h(\hat{\bar{S}}_{1,z}+\hat{\bar{S}}_{2,z})\,. \tag{17c}\] The ensuing time evolution of the operators \(\hat{\bar{r}}\), \(\hat{\bar{p}}\), \(\hat{\bar{S}}_{1}\), and \(\hat{\bar{S}}_{2}\) according to the Heisenberg equation of motion reads \[\frac{\mathrm{d}}{\mathrm{d}t}\hat{\bar{r}}_{1}=\frac{\chi_{1}}{2}(\hat{\bar{S} }_{2}-\hat{\bar{S}}_{1})\times\hat{\bar{r}}_{1}+\frac{\eta_{1}}{2}(\hat{\bar{S}}_ {2}-\hat{\bar{S}}_{1})\times\hat{\bar{r}}_{2}\,, \tag{18a}\] Figure 1: Sketch of the CSM described by Eqs. (9a) and (9b). The central spin and the bosons in the chain are shown by the black and light gray solid spheres, respectively. The solid two-sided arrows inside the boxes illustrate the couplings \(\chi_{j}/2\) between bosons of different flavors at the same site of the chain, while the dotted ones indicate the couplings \(\eta_{j}/2\) between bosons on adjacent sites. \[\frac{\mathrm{d}}{\mathrm{d}t}\hat{\vec{p}}_{1} =\frac{\chi_{1}}{2}(\hat{\vec{S}}_{2}-\hat{\vec{S}}_{1})\times\hat{ \vec{p}}_{1}+\frac{\eta_{1}}{2}(\hat{\vec{S}}_{2}-\hat{\vec{S}}_{1})\times\hat{ \vec{p}}_{2}\] \[\quad+\frac{1}{\sqrt{2}}(\hat{\vec{S}}_{2}+\hat{\vec{S}}_{1})\,, \tag{18b}\] for \(j=1\) while for general \(j>1\) we obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\hat{\vec{r}}_{j} =\frac{\chi_{j}}{2}(\hat{\vec{S}}_{2}-\hat{\vec{S}}_{1})\times \hat{\vec{r}}_{j}+\frac{\eta_{j}}{2}(\hat{\vec{S}}_{2}-\hat{\vec{S}}_{1}) \times\hat{\vec{r}}_{j+1}\] \[\quad+\frac{\eta_{j-1}}{2}(\hat{\vec{S}}_{2}-\hat{\vec{S}}_{1}) \times\hat{\vec{r}}_{j-1}\,, \tag{19a}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\hat{\vec{r}}_{j} =\frac{\chi_{j}}{2}(\hat{\vec{S}}_{2}-\hat{\vec{S}}_{1})\times \hat{\vec{p}}_{j}+\frac{\eta_{j}}{2}(\hat{\vec{S}}_{2}-\hat{\vec{S}}_{1}) \times\hat{\vec{p}}_{j+1}\] \[\quad+\frac{\eta_{j-1}}{2}(\hat{\vec{S}}_{2}-\hat{\vec{S}}_{1}) \times\hat{\vec{p}}_{j-1}\,,\] (19b) \[\frac{\mathrm{d}}{\mathrm{d}t}\hat{\vec{S}}_{\nu} =\frac{1}{\sqrt{2}}\hat{\vec{S}}_{\nu}\times\hat{\vec{r}}_{1}+ \frac{3-2\nu}{2}\hat{S}_{\nu}\times\sum_{j=1}^{N_{\mathrm{tr}}}\left[\chi_{j} (\hat{\vec{r}}_{j}\times\hat{\vec{p}}_{j})\right.\] \[\quad\left.+\eta_{j}(\hat{\vec{r}}_{j+1}\times\hat{\vec{p}}_{j})+ \eta_{j-1}(\hat{\vec{r}}_{j-1}\times\hat{\vec{p}}_{j})\right]-h\hat{\vec{S}}_ {\nu}\times\vec{z}\,, \tag{19c}\] where we use \(\vec{z}=\begin{pmatrix}0&0&1\end{pmatrix}^{\top}\) in the last term of Eq. (19c). 
The sought autocorrelation (3) has been expressed for the effective model in (13c) which implies \[S^{\alpha}(t)=\frac{1}{4}\langle(\hat{S}^{\alpha}_{1}(t)-\hat{S}^{\alpha}_{2}( t))(\hat{S}^{\alpha}_{1}(0)-\hat{S}^{\alpha}_{2}(0))\rangle\,, \tag{20}\] where the expectation value is taken with respect to the singlet state of spin 1 and 2 and the bosonic vacua. Applying the standard TWA [52], the leading quantum corrections are recovered by averaging classical trajectories over distributions of initial conditions. The equations of motions (18) and (19) are viewed as differential equations for classical vectors starting from random initial conditions. For this purpose, normal distributions have turned out to be particularly suitable for the initial conditions. Their asset is that only the mean value and the variance are needed to determine the distribution fully. We choose a normal distribution for spin \(\vec{S}_{1}\) with vanishing mean value and variance \(1/4\) for each component because \((\hat{S}^{\alpha})^{2}=1/4\) for \(S=1/2\)[53]. Since we mimic a singlet state \(\vec{S}_{2}\) is always chosen to be \(-\vec{S}_{1}\) initially. The position and momentum components are also drawn from a normal distribution with vanishing means. The variances are straightforwardly computed considering (16) yielding \(\langle\hat{r}_{j,\alpha}^{2}\rangle=1/2=\langle\hat{p}_{j,\alpha}^{2}\rangle\). In practice, the time-evolution of the central spin \(S^{\alpha}(t)\) in Eq. (20) is calculated for configuration average over \(\mathcal{M}\) classical trajectories with \(\mathcal{M}\) being of the order of \(10^{6}-10^{7}\) to keep statistical errors low. ## IV Numerical results Here we show results of the two TWAs which are the protagonists of this study. The sTWA relies on the classical equations of motion for the spin operators of original CSM. Either each spin is tracked individually or a hierarchical chain representation is used. This does not make any discernible difference. In contrast, the bTWA solves the classical equations of motion for the effective model obtained by mapping the large spin bath to a bath of bosons. Since \(J_{\mathrm{Q}}\) is the energy unit in the numerical calculations, all times are measured in units of \(1/J_{\mathrm{Q}}\) having set \(\hbar\) to unity. The equations of motion have no lower or upper validity cutoff in time and, thus, can be applied to discuss the spin-spin correlation from \(t=0\) to \(t\to\infty\). The effective number of coupled spins \(N_{\mathrm{eff}}\) can also be chosen arbitrarily, but we keep in mind that the mapping to the effective model becomes exact in the limit of large spin baths. Further details of the effect of \(N_{\mathrm{eff}}=2/\gamma\) in the bTWA are provided in App. A. Figure 2 shows the autocorrelation of the central spin in absence of external magnetic fields. This is the central result of this paper. Clearly, we see that both approaches, sTWA and bTWA, are converged with respect to the truncation level \(j_{\mathrm{max}}\) (for further details of the effect of \(j_{\mathrm{max}}\) in the bTWA, see App. B). The curves for \(j_{\mathrm{max}}=16\) do not differ discernibly from those for \(j_{\mathrm{max}}=32\). In the inset, we focus on the behavior on short to moderate times. Here the agreement between both approaches is very good. Since we know from previous studies [44] that the sTWA represents the quantum mechanical result very well we deduce that the bTWA also works well in this temporal regime. In the main panel of Fig. 
In the main panel of Fig. 2 we discern a significant discrepancy between the sTWA and the bTWA. This is quite surprising in view of the nice agreement up to \(t\approx 30/J_{\mathrm{Q}}\). The convincing results obtained previously with sTWA [44] agree with rigorous bounds [32; 33] indicating a very slow decay of the autocorrelation. This suggests the conclusion that the bTWA does not approximate the long-time behavior of the CSM well. Still, it is (i) desirable to corroborate this conclusion further and (ii) important to understand whether the mapping to the effective bosonic model introduces the observed difference or whether it is the TWA applied to the bosonic model which induces this discrepancy.

Figure 2: Comparison of \(S^{z}(t)\) obtained by sTWA and by bTWA for the truncation levels \(j_{\mathrm{max}}=16\) and \(32\), fixed number of bath spins \(N=1000\), \(\gamma=0.01\) (\(N_{\mathrm{eff}}=200\)), and zero external magnetic field.

Among the other approaches we employ the Bethe ansatz (BA), from which we use the data published in Ref. [33]. The BA works perfectly for long times, but only for a moderate number of bath spins. Second, in systematically controlled numerical DMRG calculations we consider 4096 states [43] with a threshold of 0.001 for the accumulated discarded weight and a second-order Trotter-Suzuki decomposition. The DMRG is not able to follow the dynamics for long times due to the rapid growth of entanglement, but up to \(t\approx 50/J_{\rm Q}\) it is reliable. The quantum mechanical evaluation of the iEoM up to \(j_{\rm max}=3\) with {181,8,1} bosons, respectively, yields reliable data as well up to \(t\approx 50/J_{\rm Q}\) [55]. Data from these methods are depicted in Fig. 3 for two different sets of \(N\) and \(N_{\rm eff}\). The results from BA and DMRG agree very nicely for all times except for a tiny discrepancy at the minimum, which we attribute to numerical inaccuracies. Note that the BA is evaluated based on Monte Carlo importance sampling, implying small statistical fluctuations [30; 31]. The iEoM approach, i.e., the quantum mechanical evaluation of the effective bosonic model, also agrees well with the BA and DMRG data, in particular for the slow decay beyond \(t\approx 6/J_{\rm Q}\). Only the wiggles at \(t\approx 50/J_{\rm Q}\) indicate that the evaluation with the limited number of bosons is at the verge of its validity at this time. The discrepancies between the iEoM data and the BA and DMRG data can be attributed to the fact that the mapping to the effective model is valid for large spin baths only, see the discussion in Ref. [55]. The sTWA data does not capture the minimum particularly well, but it agrees with the other approaches (BA, DMRG, iEoM) for longer times. The frozen Overhauser data from Eq. (5) is characterized by the constant plateau at long times because no dynamics of the Overhauser field is included. What is the behavior of the data from bTWA? As we have already seen in Fig. 2, for short and moderate times the agreement with sTWA and thus with the other data is good. In view of the long-time discrepancy observed in Fig. 2, we focus on the times beyond \(20/J_{\rm Q}\). We discern that the data from bTWA clearly lies below the other data, which coincide very well (except for the frozen Overhauser curve). This observation corroborates our finding in Fig. 2 that the TWA applied to the effective bosonic model does not approximate the long-time behavior reliably.
In addition, we learn that the iEoM data, i.e., the quantum mechanical evaluation of the effective bosonic model, works fine at these times. Hence, Fig. 3 provides evidence that it is not the mapping to the effective bosonic bath which is responsible for the discrepancy, but the bTWA. Thus, the two questions posed above are answered. This raises the question why the TWA is not as efficient as when it is applied directly to the spins. We do not yet have a conclusive answer, but the hypothesis suggesting itself is that the conserved quantities of the quantum effective bosonic model and of its classical counterpart are not the same. In the CSM, the conserved quantities of the quantum and of the classical model are the same, which makes their dynamics very similar [44]. Finally, we address the CSM in a finite magnetic field, which has been well investigated both theoretically and experimentally [75; 76]. Data from DMRG, iEoM, and bTWA are depicted in Fig. 4 for a magnetic field in the \(z\)-direction. In the main panel, all data sets agree very well. All of them show the clear signature of Larmor precession with a period \(T_{\rm L}=2\pi/\sqrt{h^{2}+J_{\rm Q}^{2}/2}\approx 0.63/J_{\rm Q}\), cf. Refs. [27; 44]. The envelope function of the Larmor precession is given by \(S_{\rm env.\ func.}(t)=\frac{1}{4}\exp(-J_{\rm Q}^{2}t^{2}/8)\) [38]. If we zoom far into the behavior at longer times, after the signal has dephased, only minor discrepancies occur. This behavior is not unexpected since we learned already from the previous figures that the bTWA works well for times below \(30/J_{\rm Q}\). Hence the Larmor precessions and the Gaussian dephasing shown by the black envelope function are retrieved reliably. Only the small discrepancies at later times indicate that the approximate treatment is not perfect at long times. But in a magnetic field the signal has essentially vanished anyway in the long-time regime.

Figure 3: Comparison of \(S^{z}(t)\) from various approaches (BA, sTWA, DMRG, and iEoM) for a fixed number of bath spins \(N=36\) and two different values (a) \(\gamma=1/18\) (\(N_{\rm eff}=N=36\)) and (b) \(\gamma=1/12\) (\(N_{\rm eff}=24\)); see App. A for further details of the effect of \(\gamma\) in the bTWA. The analytic data for a random static (frozen) Overhauser field (fOver) from Eq. (5) is included for comparison as well. In both the iEoM and the TWAs, we use the truncation level \(j_{\rm max}=3\); see App. B for further details of the effect of \(j_{\rm max}\) in the bTWA.

## V Summary and discussion

In this article, we theoretically studied the spin dynamics of the central spin in the central spin model (CSM). The CSM describes a so-called central spin coupled to spins in its environment in a star-like topology, i.e., without coupling between pairs of bath spins. This model is relevant for a plethora of physical systems where a small quantum system is coupled to a bath of other small quantum systems. A particularly interesting framework is the realization of quantum bits and their decoherence mechanisms due to their interaction with spin baths. For many phenomena, the long-time dynamics of large spin baths needs to be described reliably, which poses an insurmountable challenge to brute-force numerical approaches because of the exponential growth of the quantum Hilbert space. Hence, accurate, systematically controlled approximative approaches are needed.
One of them is the mapping of the CSM with a large spin bath to a bath of bosons, i.e., to an effective bosonic model, including a four-dimensional impurity at the head of the chain. The bosonic degrees of freedom can be represented in a star topology or in a chain topology [55]. The latter has the advantage that one can add site by site of the chain in order to reach a reliable description up to longer and longer times. Thus, we employed this representation here. Still, the quantum mechanical evaluation of the resulting central spin dynamics is a great numerical challenge. For this reason, we studied in the present article how well a truncated Wigner approximation (TWA) for the bosonic effective model, dubbed bTWA, captures the sought dynamics. This kind of approximation averages correlations of classical trajectories over distributions of initial conditions and thereby describes the leading quantum corrections [52]. We found that the bTWA works very nicely for short and moderate times if the spin bath is large. This condition on the size of the spin bath does not result from the TWA, but from the mapping of the CSM to the effective bosonic model. Only a few bosonic sites in the chain representation of the bosonic bath are necessary. Much to our surprise, however, we found a qualitative discrepancy of the bTWA results compared to other approaches at long times. In this regime, the bTWA results display a significantly faster decay than the results of a direct application of the TWA to the CSM, dubbed sTWA. This discrepancy does not stem from the sTWA, but from the bTWA. Inspecting and comparing the behavior at moderate times, where results from other approaches such as Bethe ansatz and DMRG are available, clearly indicates that the correlations from the bTWA are the deviating ones, decaying too fast. Although the origin of this unexpected discrepancy is still unclear, we presume that the _classical_ effective bosonic model, from which the trajectories averaged in the bTWA over initial conditions are derived, has different, probably fewer, conserved quantities than the quantum effective bosonic model or the original CSM. Note that the quantum and the classical CSM share the same conserved quantities [32; 33; 44], so that their very similar behavior is plausible. But clearly, further studies are called for to (i) identify unambiguously the origin of the discrepancy and (ii) conceive reliable and efficient evaluation techniques for the effective bosonic model. One idea suggesting itself is to use numerical renormalization group techniques to evaluate its dynamics. Surely, this will enhance our understanding of decoherence and relaxation of small quantum systems suitable for realizing quantum bits or quantum sensors.

Figure 4: Comparison of the \(x\)-component of the spin-spin autocorrelation obtained from DMRG, iEoM, and the bTWA at finite magnetic field \(h/J_{\rm Q}=10\) along the \(z\)-direction. The parameters are \(N=500\), \(j_{\rm max}=3\), and \(N_{\rm eff}=200\). The period of the Larmor precession is given by \(T_{\rm L}=2\pi/\sqrt{h^{2}+J_{\rm Q}^{2}/2}\approx 0.63/J_{\rm Q}\). The envelope function shown as the black line is given by \(S_{\rm env.\ func.}(t)=\frac{1}{4}\,\exp(-J_{\rm Q}^{2}t^{2}/8)\). For the effect of \(h\) as well as the \(z\)-component of the spin-spin autocorrelation obtained from the bTWA, see App. C.

###### Acknowledgements.

We would like to thank P. Schering for useful discussions and for providing data from other approximate and exact approaches. This study has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) and the Russian Foundation for Basic Research in the International Collaborative Research Centre TRR 160 (GSU), by the DFG in project
UH 90/14-1 (TG and GSU), and by the Studienstiftung des Deutschen Volkes (KB). In addition, we are grateful for the computing time provided on the Linux HPC cluster LiDO3 at TU Dortmund University. M.Y. gratefully acknowledges financial support from the National Science Foundation through award numbers DMR-1945529, PHY-1607611, and NSF PHY-1748958, as well as from the Welch Foundation through award number AT-2036-20204001.

## Appendix A Effect of the effective number of bath spins \(N_{\text{eff}}\) in the bTWA

The effective number of bath spins \(N_{\text{eff}}\) is one of the parameters influencing the minimum of the autocorrelation at intermediate time scales as well as the decoherence rates at long time scales. Hence, in the bTWA, it is important to investigate a range of \(N_{\text{eff}}\) at fixed \(j_{\text{max}}=3\) and \(N=500\), as depicted in Fig. 11, namely \(N_{\text{eff}}=200\), 100, 40, 25, and 20, respectively corresponding to \(\gamma=0.01\), 0.02, 0.05, 0.08, and 0.10. We obtain a square-root behavior of \(S_{\text{min}}^{z}(t_{\text{min}})\) for increasing \(\gamma\) (decreasing \(N_{\text{eff}}\)), as shown in the inset of Fig. 11; a minimal fitting sketch is given at the end of the appendices. The coefficients \(a=0.285\pm 0.005\) and \(b=S_{\text{min}}^{z}(t_{\text{min}}J_{\text{Q}}=\sqrt{12})\) in the fitting function \(f(\gamma)=a\sqrt{\gamma}+b\) depend on the set of the other parameters. The spin-spin autocorrelation for \(\gamma=0\) equals the one for the frozen Overhauser field with \(S_{\text{min}}^{z}(t_{\text{min}}J_{\text{Q}}=\sqrt{12})\simeq 0.009\) as a benchmark, see Eq. (5). This behavior stems from the hyperfine coupling to the \(i\)-th bath spin being proportional to the square root of \(\gamma\). For values of \(\gamma\) beyond \(\simeq 0.08\), we observe that further changes of \(\gamma\) no longer alter the curves, at least up to moderate times. This observation agrees with what was found by sTWA [54].

Figure 11: The effect of the effective number of bath spins characterized by \(\gamma=2/N_{\text{eff}}\) in the bTWA on the spin-spin correlation at fixed \(j_{\text{max}}=3\) and \(N=500\). The dotted fitting function in the inset is \(f(\gamma)=a\sqrt{\gamma}+b\) with \(a=0.285\pm 0.005\) and \(b=S_{\text{min}}^{z}(t_{\text{min}}J_{\text{Q}}=\sqrt{12})\), which confirms the square-root dependence of the minimum value of the correlation on \(\gamma\).

## Appendix B Effect of the truncation level \(j_{\text{max}}\) in the bTWA

Here we study the effect of the maximum number of bosonic modes \(j_{\text{max}}\) in the bTWA, see Fig. 10, at a fixed number of bath spins \(N=500\) and \(\gamma=0.01\) (corresponding to \(N_{\text{eff}}=200\)). The curve for \(j_{\text{max}}=0\) shows the result for the frozen Overhauser field in Eq. (5). For \(j_{\text{max}}=1\), only a very small temporal evolution of the Overhauser bath is induced because the central spin is coupled only to a single harmonic oscillator, which has a small effect on the position of the minimum. The long-time plateau value of the autocorrelation, however, stays close to the frozen Overhauser one for the studied times. Taking into account a larger number of bosonic modes, \(j_{\text{max}}\geq 2\), the difference between the static, frozen Overhauser result and the dynamic autocorrelations increases further. The frozen Overhauser curve (dashed line) always lies below the other curves at short timescales. Clearly, the decay of the autocorrelation sets in only for \(t>\tau\), after a specific time \(\tau\simeq 10/J_{\text{Q}}\) which is almost independent of the set of parameters. For the shown time interval, the curves no longer change significantly for \(j_{\text{max}}\geq 3\), in accordance with previous results [55].

Figure 10: The effect of the truncation level characterized by \(j_{\text{max}}\) in the bTWA on the spin-spin autocorrelation at fixed \(N=500\) and \(\gamma=0.01\) (\(N_{\text{eff}}=200\)).

## Appendix C Effect of the external magnetic field on the spin-spin autocorrelation in the bTWA

In this appendix, we address the role of a longitudinal magnetic field in the bTWA with the parameters \(j_{\text{max}}=3\), \(N=500\), and \(\gamma=0.01\) (\(N_{\text{eff}}=200\)) in Fig. C.1. In this case, the solution of Eq. (19c) displays the precession of the central spin about the effective
## Appendix C Effect of the external magnetic field on the spin-spin autocorrelation in the bTWA In this appendix, we address the role of a longitudinal magnetic field in the bTWA with the parameters \(j_{\text{max}}=3\), \(N=500\), and \(\gamma=0.01\) (\(N_{\text{eff}}=200\)) in Fig. 11. In this case, the solution of Eq. (19c) displays the precession of the central spin about the effective Figure 11: The effect of the effective number of bath spins characterized by \(\gamma=2/N_{\text{eff}}\) in the bTWA on the spin-spin correlation at fixed \(j_{\text{max}}=3\) and \(N=500\). The dotted fitting function in the inset is \(f(\gamma)=a\sqrt{\gamma}+b\) with \(a=0.285\pm 0.005\) and \(b=S_{\text{min}}^{z}(t_{\text{min}}J_{\text{Q}}=\sqrt{12})\), which confirms the square root proportionality of the minimum value of the correlation on \(\gamma\). Figure 10: The effect of the truncation level characterized by \(j_{\text{max}}\) in the bTWA on the spin-spin autocorrelation at fixed \(N=500\) and \(\gamma=0.01\) (\(N_{\text{eff}}=200\)). magnetic field, i.e., the Overhauser field plus the external magnetic field. Depending on the considered spin component the Zeeman effect implies different behavior. For the \(z\)-autocorrelation of the central spin, Fig. C.1(a), one finds that the decoherence rate is strongly suppressed by the magnetic field in a way that it approaches zero at strong fields where the spin-spin autocorrelation becomes almost time-independent and tends to take the initial value of \(1/4\). This implies that the central spin polarization parallel to the external magnetic field is stabilized for \(h\gg J_{\mathrm{Q}}\). Upon increasing magnetic field, the minimum of the longitudinal autocorrelation occurs earlier and earlier before it is reduced to small oscillations and eventually to an almost constant plateau. In contrast to the longitudinal dynamics of the central spin, the transversal dynamics, Fig. C.1(b), displays pronounced Larmor precessions with fast decreasing amplitude due to the dephasing induced by the fluctuations of the Overhauser field.
2306.03299
The Gallium Anomaly
In order to test the end-to-end operations of gallium solar neutrino experiments, intense electron-capture sources were fabricated to measure the responses of the radiochemical SAGE and GALLEX/GNO detectors to known fluxes of low-energy neutrinos. Such tests were viewed at the time as a cross-check, given the many tests of $^{71}$Ge recovery and counting that had been routinely performed, with excellent results. However, the four $^{51}$Cr and $^{37}$Ar source experiments yielded rates below expectations, a result commonly known as the Ga anomaly. As the intensity of the electron-capture sources can be measured to high precision, the neutrino lines they produce are fixed by known atomic and nuclear rates, and the neutrino absorption cross section on $^{71}$Ga is tightly constrained by the lifetime of $^{71}$Ge, no simple explanation for the anomaly has been found. To check these calibration experiments, a dedicated experiment BEST was performed, utilizing a neutrino source of unprecedented intensity and a detector optimized to increase statistics while providing some information on counting rate as a function of distance from the source. The results BEST obtained are consistent with the earlier solar neutrino calibration experiments, and when combined with those measurements, yield a Ga anomaly with a significance of approximately $4\sigma$, under conservative assumptions. But BEST found no evidence of distance dependence and thus no explicit indication of new physics. In this review we describe the extensive campaigns carried out by SAGE, GALLEX/GNO, and BEST to demonstrate the reliability and precision of their experimental procedures, including $^{71}$Ge recovery, counting, and analysis. We also describe efforts to define uncertainties in the neutrino capture cross section. With the results from BEST, an anomaly remains.
Steven R. Elliott, Vladimir Gavrin, Wick Haxton
2023-06-05T22:49:51Z
http://arxiv.org/abs/2306.03299v1
# The Gallium Anomaly

###### Abstract

In order to test the end-to-end operations of gallium solar neutrino experiments, intense electron-capture sources were fabricated to measure the responses of the radiochemical SAGE and GALLEX/GNO detectors to known fluxes of low-energy neutrinos. Such tests were viewed at the time as a cross-check, given the many tests of \({}^{71}\)Ge recovery and counting that had been routinely performed, with excellent results. However, the four \({}^{51}\)Cr and \({}^{37}\)Ar source experiments yielded rates below expectations, a result commonly known as the Ga anomaly. As the intensity of the electron-capture sources can be measured to high precision, the neutrino lines they produce are fixed by known atomic and nuclear rates, and the neutrino absorption cross section on \({}^{71}\)Ga is tightly constrained by the lifetime of \({}^{71}\)Ge, no simple explanation for the anomaly has been found. To check these calibration experiments, a dedicated experiment, BEST, was performed, utilizing a neutrino source of unprecedented intensity and a detector optimized to increase statistics while providing some information on counting rate as a function of distance from the source. The results BEST obtained are consistent with the earlier solar neutrino calibration experiments, and when combined with those measurements, yield a Ga anomaly with a significance of approximately \(4\sigma\), under conservative assumptions. But BEST found no evidence of distance dependence and thus no explicit indication of new physics. In this review we describe the extensive campaigns carried out by SAGE, GALLEX/GNO, and BEST to demonstrate the reliability and precision of their experimental procedures, including \({}^{71}\)Ge recovery, counting, and analysis. We also describe efforts to define uncertainties in the neutrino capture cross section, which now include estimates of effects at the \(\lesssim 0.5\%\) level such as radiative corrections and weak magnetism. With the results from BEST, an anomaly remains even if one retains only the transition to the \({}^{71}\)Ge ground state, whose strength is fixed by the known lifetime of \({}^{71}\)Ge. We then consider the new-physics solution most commonly suggested to resolve the Ga anomaly, oscillations into a sterile fourth neutrino, \(\nu_{e}\rightarrow\nu_{s}\). We find such a solution generates substantial tension with several null experiments, owing to the large mixing angle required. While this does not exclude such solutions - the sterile sector might include multiple neutrinos as well as new interactions - it shows the need for more experimental constraints, if we are to make progress in resolving the Ga and other low-energy neutrino anomalies. We conclude by considering the role future low-energy electron-capture sources could play in this effort.
keywords: solar neutrinos, electron capture, radiochemistry, oscillations, sterile neutrinos

† journal: Progress in Particle and Nuclear Physics

###### Contents

* 1 Introduction
* 2 History
* 2.1 Gallium Radiochemical Radioactive Source Measurements
* 2.2 SAGE \({}^{51}\)Cr and \({}^{37}\)Ar Source Measurements
* 2.3 GALLEX \({}^{51}\)Cr Source Measurements
* 2.4 Results from the Early Source Measurements
* 3 The BEST Experiment
* 3.1 The \({}^{51}\)Cr Source
* 3.1.1 Source Fabrication
* 3.1.2 Source Activity
* 3.2 BEST Experimental Operations
* 3.2.1 The Extractions
* 3.2.2 \(\mathrm{GeH_{4}}\) Synthesis
* 3.2.3 The Proportional Counters
* 3.2.4 Waveform Analysis and Likelihood Fits
* 4 Auxiliary Experimental Tests
* 4.1 Extraction and Synthesis Efficiency
* 4.2 Counter Efficiency
* 4.3 Detector Effective Path Length and Distribution
* 5 The \({}^{71}\)Ga Capture Cross Section
* 5.1 The Ground State Transition Strength
* 5.2 The Ground State Neutrino Capture Cross Section
* 5.3 Excited-State Contributions to the Neutrino Capture Cross Section
* 5.4 Comparisons to Past Work
* 5.5 Nuclear and Atomic Data Uncertainties
* 6 The Ga Anomaly and its Possible Implications for Sterile Neutrinos
* 7 Summary and Outlook

## 1 Introduction

The gallium anomaly can be stated: _The measurements of the charged-current capture rate of neutrinos on \({}^{71}\)Ga from strong radioactive sources have yielded results below those expected, based on the known strength of the principal transition supplemented by theory._

The mystery of this anomaly, which has persisted for over two decades, deepened with recent, high-precision results from the Baksan Experiment on Sterile Transitions (BEST) [1, 2]. The data that initially gave rise to the anomaly were obtained from calibration tests of two radiochemical detectors, SAGE and GALLEX, that were designed to probe low-energy components of the solar neutrino flux. It was anticipated that the calibration tests would be unusually free of uncertainties. First, the neutrino sources employed are well understood, as their intensities can be measured in multiple ways, to sub-1% precision, and as the spectra they produce are lines with precisely known energies and branching ratios. Second, the cross section for neutrino absorption on \({}^{71}\)Ga is tightly constrained by the known electron-capture lifetime of \({}^{71}\)Ge, which establishes a lower bound on the cross section, leaving only a \(\sim\)6% correction due to transitions to excited states in \({}^{71}\)Ge to be determined through a combination of experiment and theory: the cross section has been recently re-examined in an analysis that carefully propagates all known sources of uncertainty [3]. Third, the efficiency of the extraction of Ge from the Ga targets is independently verified by tracer experiments, in every experimental run. While questions have been raised about detector operations including the \({}^{71}\)Ge extraction efficiencies, no plausible experimental explanation for the anomaly has been identified. Consequently, the discrepancies found in the calibration experiments are concerning, and indeed have been taken as evidence for sterile neutrinos (\(\nu_{s}\)). The gallium anomaly found in the four original calibration experiments represents about a 2.5\(\sigma\) deviation from expectations. With the completion of BEST, the deviation has risen to 6\(\sigma\). This level of significance might not normally warrant suggestions of new physics.
It has happened in this case because the gallium anomaly is one of several that have arisen in low-energy neutrino experiments. A broad overview of sterile neutrinos and the anomalies motivating them can be found in various community white papers [4, 5, 6]. Yet, despite this supporting evidence as well as the theoretical enthusiasm for additional neutrino species, no compelling overall sterile-neutrino explanation for the collection of anomalies has emerged: there is some tension among the sterile neutrino parameters required to account for each anomaly. Indeed, the freedom one has in introducing sterile neutrinos - the specific mechanism, their number, and their masses and mixings - is considerable, making it difficult to either rule out or confirm such an explanation. Further, additional neutrino species can be accompanied by other new neutrino physics. The \(6\times 6\) mixing matrix that arises for three neutrino flavors with both Dirac and Majorana mass terms contains various mixing angles and phases [7] associated with possible CP [8] or CPT violation [9]. Other new physics could include non-standard neutrino interactions [10], neutrino decay [11], Lorentz violation [12], extra dimensions [13], energy-dependent mixing parameters [14], dark photons [15], neutrinos coupled to fuzzy dark matter or dark energy [16], and bulk neutrinos [17]. So despite the expectation that new neutrino species may exist, the flexibility of the theory has made it difficult to assess the plausibility of sterile neutrinos as the explanation for the various anomalies. For the same reason, it is difficult to design an experiment to either verify or falsify a hypothesized sterile neutrino. Specifically, BEST was not designed for this purpose. Instead, it was envisioned as a high-sensitivity test of the gallium anomaly. While it increased the significance of the anomaly, it failed to provide more specific evidence of oscillations through a tell-tale variation of signal with distance. In this review, we describe the status of the gallium anomaly, with special emphasis on the most recent results from BEST, which utilized a \({}^{51}\)Cr neutrino source of unprecedented strength. The resulting increase in the significance of the gallium anomaly is notable, though the results of BEST and the four earlier calibrations could still be attributed to an unlikely statistical fluctuation. We discuss the various cross-checks on the experimental methods that have been made over the three decades of gallium detector operations. We conclude by discussing possible steps that could be taken in the future to resolve this perplexing situation.

## 2 History

In 1968 the first results from the Homestake chlorine solar neutrino experiment [18] were announced. Ray Davis's radiochemical detector made use of the reaction \({}^{37}\)Cl(\(\nu_{e},e^{-}\))\({}^{37}\)Ar to observe \({}^{8}\)B and \({}^{7}\)Be solar neutrinos. This technique was first suggested by Pontecorvo [19], then explored in more detail by Alvarez [20], who was interested in doing a reactor experiment to test whether neutrinos were Majorana particles. The Davis detector consisted of 615 tons of perchloroethylene (C\({}_{2}\)Cl\({}_{4}\)), placed inside a steel containment vessel that had been constructed on the 4850-ft level of the Homestake gold mine. As a noble gas that does not interact chemically, argon can be extracted with high efficiency (\(\sim\)95%) from large volumes of organic liquid.
The \(\sim\)35-d half-life of \({}^{37}\)Ar is nearly ideal, allowing tank concentrations to build up over a saturation time of about two months, yet permitting \({}^{37}\)Ar counting via electron capture (EC). At two-month intervals, approximately ten \({}^{37}\)Ar atoms produced by solar neutrino reactions would be extracted from the volume and counted in small proportional counters, which recorded the emitted x rays and Auger electrons produced after \({}^{37}\)Ar EC, as the K-shell vacancy is filled. Measurements continued until 2002, when the Homestake mine was closed. The end result was a neutrino capture rate of 2.56\(\pm\)0.16\(\pm\)0.16 SNU,1 about one-third that predicted by the standard solar model (SSM) [21, 22, 23, 24, 25].

Footnote 1: 1 SNU (solar neutrino unit) = \(1\times 10^{-36}\) /(target atom \(\cdot\) s).

Approximately 75% of the events in the chlorine detector came from the capture of more energetic \({}^{8}\)B neutrinos. In the SSM the flux of these neutrinos varies as \(\phi(^{8}\mathrm{B})\sim T_{c}^{22}\), where \(T_{c}\) is the solar core temperature. This prompted many early solutions of the so-called "solar neutrino problem" in which the SSM was modified in ways that would reduce the core temperature by \(\sim\) 5%, thereby eliminating the discrepancy between observation and theory. Other proposed solutions invoked new weak-interaction physics, including neutrino oscillations and neutrino decay, or questioned whether there might be a hidden flaw in the experiment, as the radiochemical method is indirect. An account of the many inventive solutions can be found in the entertaining review of Ref. [26]. The chlorine detector was ahead of its time: two decades passed before others could build detectors to cross-check the Homestake result. In the early 1980s the proton decay experiment Kamiokande I was re-instrumented to detect lower energy events, which enabled Kamiokande II/III to measure the high-energy portion of the \({}^{8}\)B solar neutrino flux via the reaction \(\nu+e\rightarrow\nu^{\prime}+e^{\prime}\). The Cerenkov light produced by the recoiling electron was observed in the three-kiloton water detector. Kamiokande II operated with a 9 MeV threshold for the electron energy, which was lowered to 7 MeV in Kamiokande III [27]. The first results from Kamiokande II were announced in 1989. The neutrino event rate was approximately 50% of that expected, based on the SSM \({}^{8}\)B flux prediction. The fact that Kamiokande measured neutrinos event-by-event, provided some information on the shape of the neutrino spectrum, and largely confirmed the results of the Homestake experiment had significant impact. Kuzmin [28] had proposed using \({}^{71}\)Ga as the target for a radiochemical solar neutrino experiment due to the 234 keV threshold of the reaction \({}^{71}\)Ga(\(\nu_{e}\),\(e^{-}\))\({}^{71}\)Ge (see Fig. 2.1) and the 11.43 d half-life of \({}^{71}\)Ge. The low threshold provides sensitivity to the low-energy pp neutrinos - those generated in the first step of the pp chain via proton-proton (pp) fusion - which are produced in a \(\beta\)-decay spectrum with an endpoint of 420 keV. The SSM flux of pp neutrinos is about four orders of magnitude higher than that of the \({}^{8}\)B neutrinos. Further, in contrast to the temperature-dependent \({}^{8}\)B neutrinos, the flux of the pp neutrinos is constrained by the Sun's luminosity, assuming only a steady-state Sun and standard-model weak interaction physics.
With these assumptions, a minimum counting rate of 79 SNU was predicted for this detector. Consequently, a rate lower than this bound would point to new neutrino physics, an exciting result. In the 1970s work began on two possible approaches to the chemistry of this detector, one employing gallium as a GaCl\({}_{3}\) solution and the other as a metal. In the former, after an exposure of about three weeks, the produced germanium was recovered as GeCl\({}_{4}\) by bubbling nitrogen through the solution, then scrubbing the gas. The Ge was further concentrated and purified, converted into GeH\({}_{4}\), then counted in miniaturized gas proportional counters similar to those used in the chlorine experiment. This procedure was employed in the GALLEX/GNO experiment. The SAGE experiment exploited the fact that gallium metal is a liquid at slightly above room temperature. The produced \({}^{71}\)Ge is separated from the metal by mixing into the gallium a solution of hydrogen peroxide and dilute hydrochloric acid, which produces an emulsion, with the germanium migrating to the surface of the emulsion droplets where it is oxidized and dissolved by hydrochloric acid. The Ge is extracted as GeCl\({}_{4}\), purified and concentrated, synthesized into GeH\({}_{4}\), then counted as in the GALLEX experiment. In both GALLEX and SAGE, the overall efficiency of the chemical procedures can be determined by introducing Ge carrier. The SAGE and GALLEX/GNO experiments (Fig. 2.2) began in the late 1980's with operations stretching into the new century. The deduced counting rates (about 70 SNU [29, 30]) were again low compared to the SSM prediction (137 SNU [25]). They were also just below the minimum astronomical value, suggesting that the solar neutrino problem extended beyond \({}^{8}\)B neutrinos, affecting also the lower energy portion of the solar neutrino flux. Perhaps most important, the combination of the Cl, Kamiokande, and SAGE/GALLEX results indicated a pattern of solar neutrino fluxes incompatible with any choice of the solar core temperature T\({}_{c}\). Consequently, a new solution involving new particle physics became plausible, giving impetus to three new experiments, Borexino, Super-Kamiokande, and the Sudbury Neutrino Observatory, that were to vastly improve our knowledge of both the Sun and the basic physics of neutrinos. Super-Kamiokande [31] measurements of both atmospheric and solar neutrinos and SNO [32] measurements separating electron from heavy-flavor solar neutrinos showed that neutrinos are massive and undergo flavor oscillations. Borexino's patient campaign to map out the entire spectrum of solar neutrinos revealed the transition between vacuum-dominated and matter-dominated solar neutrino oscillations, thereby providing the first information on the ordering of the three light neutrino mass eigenstates. There are several excellent reviews summarizing the history of the solar neutrino problem and its resolution (see [33, 34, 35]). This review focuses on follow-up calibration measurements done by the SAGE and GALLEX/GNO experiments that generated new questions, still not resolved. The critical role of gallium experiments in underscoring the seriousness of the solar neutrino problem, combined with the recognition that the chemistry of these detectors was more complicated than that of the chlorine detector, led to proposals to make end-to-end cross checks of operations, by exposing the detectors to well-calibrated artificial neutrino sources. 
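To appreciate the scale these rates imply, a rough back-of-the-envelope estimate, using the minimum astronomical rate of 79 SNU quoted above and the 12 t of \({}^{71}\)Ga in the GALLEX target described in Sec. 2.3:
\[N(^{71}\mathrm{Ga})\simeq\frac{1.2\times 10^{7}\,\mathrm{g}}{71\,\mathrm{g\,mol^{-1}}}\times 6.02\times 10^{23}\,\mathrm{mol^{-1}}\approx 1.0\times 10^{29}\,,\]
so a rate of 79 SNU corresponds to
\[79\times 10^{-36}\,\mathrm{s^{-1}\,atom^{-1}}\times 1.0\times 10^{29}\approx 8\times 10^{-6}\,\mathrm{s^{-1}}\approx 0.7\ \mathrm{atoms/day}\,,\]
and, with \(\lambda(^{71}\mathrm{Ge})=\ln 2/11.43\,\mathrm{d}\), a saturated population of only \(R/\lambda\approx 12\) \({}^{71}\)Ge atoms in tens of tons of gallium - hence the need to count individual atoms.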
In the mid 1990's and early 2000's, four high-activity, artificial-source experiments were conducted, three with \({}^{51}\)Cr and one with \({}^{37}\)Ar. These electron-capture line sources produce low energy neutrinos, similar to the \({}^{7}\)Be solar neutrino line source. As the source intensities were typically on the order of one MCi, the counting rates the sources induced in the detectors were high, by solar neutrino counting-rate standards. The intensities of the sources were calibrated by several means, and very well established. Yet in combination they showed a rate 13% below that expected, albeit with a \(\pm\)5% uncertainty. By the time Super-Kamiokande and SNO had produced their results, other nagging neutrino-physics discrepancies had been pointed out. One of the earliest and perhaps most widely discussed came from the LSND experiment [36], which searched for \(\bar{\nu}_{\mu}\rightarrow\bar{\nu}_{e}\) using \(\bar{\nu}_{\mu}\)s from the decay of muons at rest and \(\nu_{\mu}\rightarrow\nu_{e}\) using \(\nu_{\mu}\)s from \(\pi^{+}\)s decaying in flight, observing events in a liquid scintillator detector. Difficulties the collaboration encountered in understanding the spectrum of events led them to suggest oscillations into sterile neutrino states [37]. These and other similar claims, reviewed in [4, 5, 6], provided additional motivation for BEST - a fifth gallium calibration experiment employing a source of unprecedented intensity, performed with the existing SAGE detector, though redesigned (see Fig. 2.2) to have some sensitivity to oscillation baselines. The results from that experiment, as well as those from the four earlier gallium neutrino source experiments, form the focus of this report.

### Gallium Radiochemical Radioactive Source Measurements

Exposing the gallium to a known source of \(\nu_{e}\)s and then carrying out the routine procedures of \({}^{71}\)Ge extraction and counting provides an end-to-end cross-check of all experimental procedures, including the \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge cross section. The efficiencies of various extraction and counting steps during solar neutrino operations were already calibrated through auxiliary measurements, providing a systematic verification of the experimental performance of both GALLEX and SAGE. Thus one would not have expected a different result in a neutrino calibration experiment. Indeed, because carrier experiments verify that the extraction of Ge is highly efficient, it had been argued that a high-statistics neutrino source experiment could be viewed as a measurement of the neutrino absorption cross section [38, 39]. Though tightly constrained by experiment, the cross section does have some residual dependence on theory due to transitions to two excited states in \({}^{71}\)Ge. Yet we will see that the BEST/gallium anomalies cannot be attributed solely, or even primarily, to this theory uncertainty. The electron-capture lines produced by \({}^{37}\)Ar and \({}^{51}\)Cr sources are given in Table 2.1, taking into account K, L, and M capture and, in the case of \({}^{51}\)Cr, the \(\sim\)10% probability of capture to the first excited state of \({}^{51}\)V.

Figure 2.1: The \({}^{71}\)Ga-\({}^{71}\)Ge level diagram compared to the neutrino spectra from pp fusion (red curve), \({}^{51}\)Cr (lower blue bars), and \({}^{37}\)Ar (top black bar). The shaded region denotes the portion of the pp spectrum below threshold for neutrino capture on \({}^{71}\)Ga.
\begin{table} \begin{tabular}{c c c c} \hline Isotope & \(\tau_{1/2}\) (d) & \(E_{\nu}\) (keV) & \(f_{E_{\nu}}\) (\%) \\ \hline \hline \({}^{37}\)Ar & 35.0 & 813.5 & 8.66\(\pm\)0.01 \\ & & 810.7 & 90.23\(\pm\)0.01 \\ \hline \({}^{51}\)Cr & 27.7 & 751.8 & 8.42\(\pm\)0.01 \\ & & 746.5 & 80.25\(\pm\)0.01 \\ & & 432.3 & 0.15\(\pm\)0.01 \\ & & 431.7 & 0.92\(\pm\)0.01 \\ & & 426.4 & 8.86\(\pm\)0.01 \\ \hline \hline \end{tabular} \end{table}
Table 2.1: Line neutrinos from the source isotopes, taking into account the probabilities for K, L, and M capture [40, 41, 42].

Figure 2.2: Left: Sketch of one of the SAGE reactors used for solar neutrino measurements. Right: Sketch of the GALLEX neutrino source experimental layout. (Figure from [https://www.mpi-hd.mpg.de/lin/images/tank.gif](https://www.mpi-hd.mpg.de/lin/images/tank.gif))

The \({}^{51}\)Cr and \({}^{37}\)Ar sources can be produced by irradiating isotopically enriched \({}^{50}\)Cr (as the natural abundance of this isotope is just 4.3%) or natural Ca or CaO targets in a high-flux reactor, making use of the \((n,\gamma)\) and \((n,\alpha)\) reactions, respectively. As \({}^{71}\)Ge has an 11.43-d half-life [43], the exposures for \({}^{71}\)Ga(\(\nu_{e},e^{-}\)) were typically 5-10 days (5 d for GALLEX, and 10 d for BEST). The produced Ge was then extracted, along with Ge carrier, and a counter gas synthesized. This gas was inserted into a small proportional counter and counted for a few months. The important nuclear and atomic physics data for \({}^{71}\)Ge decay are given in Table 2.2. The source experiments are described in the following subsections.

### SAGE \({}^{51}\)Cr and \({}^{37}\)Ar Source Measurements

The SAGE source experiments followed the same procedures used in SAGE solar neutrino measurements, with one important difference. During solar neutrino operations each reactor contained a stirring mechanism, installed to evenly disperse throughout the Ga target the small quantity of natural Ge carrier that was added. This mechanism took up a great deal of space and would have interfered with source installation. Hence a reactor without that mechanism was employed for the source exposures. This special reactor held 13 t of Ga compared to the 7 t it would have held had the stirring mechanism remained in place. However, lacking a stirring mechanism, this reactor could not be used for the extraction chemistry. Instead, after exposure, the Ga was pumped to other reactors for this step. After Ge extraction, the procedures followed those used for decades in solar neutrino running. The results of the \({}^{51}\)Cr and \({}^{37}\)Ar source measurements are given in Table 3.1 and depicted in Fig. 2.3. The GALLEX, SAGE, and BEST experiments followed very similar procedures in fabricating intense \({}^{51}\)Cr sources, as summarized in Sec. 3.1. In addition, SAGE performed one calibration with a \({}^{37}\)Ar source, which produces a neutrino line very similar in energy to the solar \({}^{7}\)Be line. That source was produced by irradiating CaO (12.36 kg Ca) in the fast neutron breeder reactor BN-600 at Zarechny, Russia. The CaO was dissolved in a nitric acid solution and the \({}^{37}\)Ar was extracted by a He purge [46]. The source activity of the Ar was estimated by six distinct procedures. First, the volume and isotopic composition of the produced gas was measured.
Second, the mass of Ar introduced into the source container was measured. Third and fourth, the heat output of the source was measured by calorimetry, at Zarechny before shipping to Baksan and after arrival using the same apparatus as for the SAGE Cr measurements, noting that the energy released by \({}^{37}\)Ar is \(2.751\pm 0.021\) keV/decay [46]. Fifth, after exposures of the Ga were completed, the source was returned to Zarechny and the remaining \({}^{37}\)Ar activity of samples was measured. Lastly, these final samples were analyzed for isotope dilution. The final value for the activity (409\(\pm\)2 kCi), determined from the weighted average, has an uncertainty of 0.5%.

\begin{table} \begin{tabular}{l c l} \hline Shell Capture & \(f_{c}\) & Emissions \\ \hline \hline & & 41.5\% 10.367-keV Auger e\({}^{-}\) \\ K & 88\% & 41.2\% 1.2-keV Auger e\({}^{-}\) \& 9.2-keV x ray \\ & & 5.3\% 0.12-keV Auger e\({}^{-}\) \& 10.26-keV x ray \\ \hline L & 10.3\% & 1.2-keV Auger e\({}^{-}\) \\ \hline M & 1.7\% & 0.12-keV Auger e\({}^{-}\) \\ \hline \hline \end{tabular} \end{table}
Table 2.2: Key features of the EC decay of \({}^{71}\)Ge [44, 45].

### GALLEX \({}^{51}\)Cr Source Measurements

The GALLEX tank was filled with 101 tons of GaCl\({}_{3}\) solution, acidified to 2 M in HCl. The target contained 30.3 t of natural Ga (12 t \({}^{71}\)Ga). Ge forms the volatile molecule GeCl\({}_{4}\), which can be extracted from the non-volatile GaCl\({}_{3}\) by bubbling an inert gas through the solution [47]. The GALLEX synthesis of GeH\({}_{4}\) follows procedures similar to those employed by SAGE and BEST, described in Sec. 3. The counting and analysis procedures used by GALLEX and SAGE are also very similar. The GALLEX experimental setup is depicted in Fig. 2.2. The GALLEX source had a 50-cm diameter, whereas the SAGE source had a diameter of only 8 cm. Furthermore, the GaCl\({}_{3}\) solution in GALLEX had a lower \({}^{71}\)Ga atomic density than the metallic Ga used by SAGE. Consequently, though the GALLEX sources were a factor of three stronger than those used in SAGE, the Ge production rates and hence the precision of the calibrations were similar in the two experiments. The GALLEX solar neutrino results were reanalyzed [48] after all solar neutrino runs were completed in 2003. The counter efficiencies could then be measured with high statistics using internal radioactive sources that would have compromised their performance for low-background measurements. In addition, advanced pulse shape analysis [50] was implemented to better distinguish signal from background. The results of the two \({}^{51}\)Cr measurements are given in Table 3.1 and shown in Fig. 2.3.

### Results from the Early Source Measurements

A summary of the GALLEX, SAGE, and BEST results is given in Table 3.1. It should be noted that GALLEX updated their analysis and efficiencies [30, 48]. The SAGE reference [46] quoted different values for the measured-to-predicted ratios than those that were eventually established, based on private communications regarding the upcoming efficiency updates. The differences are modest compared to the \(\sim\)10% uncertainties (1.01 instead of 1.00, and 0.84 instead of 0.81). The typical precision achieved in each GALLEX and SAGE source experiment was \(\gtrsim\) 10%.

Figure 2.3: The ratio of the measured \({}^{71}\)Ge production rate to the predicted rate for all 6 measurements. The dotted blue line (shading) is the best fit (uncertainty) to all 6 results.
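The combination quoted next can be checked with a simple inverse-variance average of the four early ratios in Table 3.1. The sketch below symmetrizes the quoted uncertainties; the published combination treats asymmetric errors and correlations more carefully, and all names are illustrative.

```python
import numpy as np

# (ratio, symmetrized 1-sigma) for the four early calibrations, from Table 3.1
ratios = np.array([[0.95, 0.12],     # SAGE Cr
                   [0.79, 0.095],    # SAGE Ar
                   [0.953, 0.11],    # GALLEX Cr-1
                   [0.812, 0.105]])  # GALLEX Cr-2

w = 1.0 / ratios[:, 1] ** 2                      # inverse-variance weights
r_mean = np.sum(w * ratios[:, 0]) / np.sum(w)
r_err = 1.0 / np.sqrt(np.sum(w))
print(f"R = {r_mean:.2f} +- {r_err:.2f}")        # ~0.87 +- 0.05
```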
When combined, the four experiments yield a ratio of observed to expected counts of \(R=0.87\pm 0.05\) [29], a deviation of \(\sim 2.5\sigma\) from 1. Although not statistically convincing, the discrepancy generated a great deal of speculation as to possible causes, and became known as the "Ga anomaly." In particular, the discrepancy has been often cited as tentative evidence for sterile neutrinos and \(\nu_{e}\to\nu_{s}\) oscillations [4, 5, 6]. If one assumes a two-component oscillation into a sterile state, the survival probability for a neutrino of energy \(E_{\nu}\) detected a distance \(L\) from its source is
\[P_{ee}(E_{\nu},L)=1-\sin^{2}\!2\theta\,\sin^{2}\left(\frac{\pi L}{L_{\rm osc}}\right)\quad{\rm with}\quad\frac{\pi}{[L_{\rm osc}/{\rm m}]}=1.27\,\frac{\left[\Delta m^{2}/{\rm eV}^{2}\right]}{[E_{\nu}/{\rm MeV}]} \tag{2.1}\]
where \(\Delta m^{2}=m_{2}^{2}-m_{1}^{2}\), \(m_{i}\) is the mass of the \(i\)th mass eigenstate, and \(\theta\) is the mixing angle. To determine the impact of oscillations on the detection rate in the Ga calibration and BEST experiments, one must take into account the complex geometry of the detector and the source. Denoting the target and source volumes by \(V_{d}\) and \(V_{s}\), respectively, the neutrino capture rate \(r\) is
\[r=\sum_{i=1}^{6}f_{i}\ \int_{V_{d}}d\vec{x}_{d}\ n(\vec{x}_{d})\ \int_{V_{s}}d\vec{x}_{s}\ a(\vec{x}_{s})\ \frac{1}{|\vec{x}_{d}-\vec{x}_{s}|^{2}}\ \sigma(E_{\nu}^{i})\ P_{ee}(E_{\nu}^{i},|\vec{x}_{d}-\vec{x}_{s}|) \tag{2.2}\]
where \(i\) indexes the neutrino lines produced in \({}^{51}\)Cr EC (the six lines listed in Table 2.1), \(f_{i}\) is the fraction of the neutrinos emitted in line \(i\), \(E_{\nu}^{i}\) is the line energy, \(\sigma(E_{\nu}^{i})\) is the \({}^{71}\)Ga neutrino absorption cross section at that energy, \(n\) is the target \({}^{71}\)Ga number density, \(a\) is the source activity density, and \(P_{ee}(E_{\nu}^{i},D)\) is the oscillation survival probability for that line at a distance \(D\equiv|\vec{x}_{d}-\vec{x}_{s}|\) from the source. Under the assumption that the target number and source activity densities are uniform, this can be rewritten as
\[r=\sum_{i=1}^{6}f_{i}\ \sigma(E_{\nu}^{i})\,N\,A\ \int dD\ \frac{1}{D^{2}}\ P_{ee}(E_{\nu}^{i},D)\ P(D) \tag{2.3}\]
where \(N\) is the number of \({}^{71}\)Ga nuclei in the target, \(A\) is the total source activity, and
\[P(D)\equiv\frac{1}{V_{d}}\int_{V_{d}}d\vec{x}_{d}\ \ \frac{1}{V_{s}}\int_{V_{s}}d\vec{x}_{s}\ \delta(D-|\vec{x}_{d}-\vec{x}_{s}|)\ \ \ \ {\rm with}\ \ \ \ \int dD\ P(D)=1. \tag{2.4}\]
Together, Eqs. (2.3) and (2.4) factor the cross section and oscillation physics from the target and source geometry, encoding the latter in a probability distribution \(P(D)\) describing the likelihood that a given neutrino interacting in the target did so after traveling a distance \(D\) from the point where it was produced. This quantity is computed by Monte Carlo integration, due to the complexity of the source and detector geometry, after which it can be used in detailed oscillation studies. One can also define an effective path length in the detector
\[\langle L\rangle\equiv\frac{1}{4\pi}\int_{V_{d}}d\vec{x}_{d}\ \ \frac{1}{V_{s}}\int_{V_{s}}d\vec{x}_{s}\ \frac{1}{|\vec{x}_{d}-\vec{x}_{s}|^{2}}\ \to\ \frac{1}{4\pi}\int_{V_{d}}d\vec{x}_{d}\ \frac{1}{|\vec{x}_{d}|^{2}} \tag{2.5}\]
where on the right we have taken the limit in which the source radius is much smaller than the distance to the detecting region, so that the source can be approximated as a point.
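A minimal sketch of how Eqs. (2.1), (2.3), and (2.4) are used in practice: sample production and absorption points, histogram the baseline \(D\), and fold with the survival probability. The geometry below is a toy stand-in (nested cylinders around a point source; the real BEST geometry, the cross-section weighting \(\sigma(E_{\nu}^{i})\), and all absolute normalizations are omitted), and all names and dimensions are illustrative.

```python
import numpy as np

def sample_cylinder(rng, r_in, r_out, height, n):
    """Uniform points in an annular cylinder centred on the origin."""
    r = np.sqrt(rng.uniform(r_in**2, r_out**2, n))     # uniform in area
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.uniform(-height / 2.0, height / 2.0, n)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def P_ee(E_mev, D_m, dm2_ev2, sin2_2theta):
    """Two-flavour survival probability, Eq. (2.1)."""
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_ev2 * D_m / E_mev) ** 2

# 51Cr lines (E_nu in MeV, branching fraction) as listed in Table 2.1
LINES = [(0.7518, 0.0842), (0.7465, 0.8025),
         (0.4323, 0.0015), (0.4317, 0.0092), (0.4264, 0.0886)]

def suppression(D, dm2, s22t):
    """Oscillation suppression of the capture rate, cf. Eqs. (2.3)-(2.4):
    ratio of <P_ee/D^2> to <1/D^2> with D drawn from P(D)."""
    w = 1.0 / D**2
    num = sum(f * np.sum(w * P_ee(E, D, dm2, s22t)) for E, f in LINES)
    den = sum(f * np.sum(w) for E, f in LINES)
    return num / den

rng = np.random.default_rng(1)
# toy point source at the origin inside two nested target volumes
D_in = np.linalg.norm(sample_cylinder(rng, 0.05, 0.65, 1.2, 200_000), axis=1)
D_out = np.linalg.norm(sample_cylinder(rng, 0.70, 1.10, 2.2, 200_000), axis=1)
print(suppression(D_in, 2.15, 0.24), suppression(D_out, 2.15, 0.24))
```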
One can see that \(\langle L\rangle\) is basically the average thickness of the detection region: in a detector like BEST with inner and outer regions, if each has approximately the same \(\langle L\rangle\), the detection rate in each volume will be approximately equal, in the absence of oscillations. The sensitivity to oscillation lengths is also governed by \(\langle L\rangle\). If \(L_{\rm osc}\gg\langle L\rangle\), the detector event rate will not be affected by oscillations; if \(L_{\rm osc}\ll\langle L\rangle\), the detection rate will be reduced by the factor \(1-\frac{\sin^{2}\!2\theta}{2}\), but the rapidity of the oscillations will make it impossible to see variations as a function of distance. However, if \(\langle L\rangle\sim L_{\rm osc}\) and a detector records the positions of events, variations can be seen. We relate \(\langle L\rangle\) to a weighted distribution involving \(P(D)\) by inserting \(1=\int dD\,\delta(D-|\vec{x}_{d}-\vec{x}_{s}|)\) in Eq. (2.5), yielding
\[\langle L\rangle=\frac{V_{d}}{4\pi}\int dD\,\frac{1}{D^{2}}P(D)\equiv\int dD\,L(D) \tag{2.6}\]
In a Monte Carlo evaluation of Eq. (2.5), if events are binned according to the distance \(D\) between the neutrino production point in the source and the absorption point in the target, one has in the resulting dimensionless distribution \(L(D)\) the information needed to simulate oscillations for any choice of oscillation parameters. \(L(D)\) has the normalization
\[\int dD\,D^{2}L(D)=\frac{V_{d}}{4\pi} \tag{2.7}\]
One can then re-express the rate in Eq. (2.3) in terms of \(L(D)\),
\[r=\sum_{i=1}^{6}f_{i}\ \sigma(E_{\nu}^{i})\,4\pi\,n\,A\ \int dD\ P_{ee}(E_{\nu}^{i},D)\ L(D) \tag{2.8}\]
making it clear that \(L(D)\) is the weight one must fold with the oscillation probability to get the rate \(r\). If one interprets the \(\sim 2.5\sigma\) discrepancy that emerged from the four SAGE and GALLEX calibration experiments as evidence for a sterile neutrino, the best fit to oscillation parameters yields [6] \(\Delta m^{2}\sim\) 2.15 (2.24) eV\({}^{2}\) and \(\sin^{2}2\theta\sim\) 0.24 (0.5), taking the neutrino absorption cross section from Ref. [49] (Ref. [39]). However, the allowed region is very large and the minimum in \(\chi^{2}\) quite flat. When this flatness is taken into account, it was found [6] that \(\sin^{2}\theta\gtrsim\) 0.07 and \(\Delta m^{2}\gtrsim\) 0.35 eV\({}^{2}\) at 95% C.L., using the cross section of [39]. There is modest tension between these oscillation parameters and those found from some other experimental anomalies. For example, LSND analyses allowed large \(\Delta m^{2}\) consistent with the Ga results, but only if correlated with smaller \(\sin^{2}2\theta\). As noted in [6], however, one could nicely account for both the gallium and reactor neutrino anomalies by postulating an oscillation into a sterile fourth neutrino. These inconclusive comparisons with other experiments provided additional motivation for a higher statistics neutrino source experiment, stimulating work on BEST.

## 3 The BEST Experiment

The BEST goals included a higher-intensity source, to improve counting statistics, and the introduction of two nested target volumes, so that if oscillations were occurring, variations in the flux with distance might be identified. Figure 3.1 shows the experimental layout. Figure 3.2 shows the nested target volume during construction, and Fig. 3.3 shows photos of the BEST lab at the Baksan Neutrino Observatory. The BEST source, approximately six times stronger than the SAGE sources, is described in Sec. 3.1.
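A back-of-the-envelope check using Eq. (2.1) and the best-fit parameters quoted above shows why metre-scale baselines are the relevant ones. For the dominant 747 keV \({}^{51}\)Cr line and \(\Delta m^{2}\approx 2.15\) eV\({}^{2}\),
\[L_{\rm osc}=\frac{\pi\,[E_{\nu}/{\rm MeV}]}{1.27\,[\Delta m^{2}/{\rm eV}^{2}]}\ {\rm m}=\frac{\pi\times 0.747}{1.27\times 2.15}\ {\rm m}\approx 0.86\ {\rm m}\,,\]
comparable to the dimensions of the Ga targets, which is why nested volumes of roughly this size could in principle resolve a rate variation with distance.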
Figure 3.1: A cartoon of the BEST experiment configuration.

Figure 3.2: A photograph of the two-target volume during assembly.

The procedure began with adding Ge carrier to each of the two zones and then installing the source into the center of the two-zone target volume, for an exposure of about 10 d. The source would then be moved to the calorimeter to measure its activity, while the Ga was pumped to the chemical reactors to perform the \({}^{71}\)Ge extraction. The extraction of the Ge (carrier and \({}^{71}\)Ge) was conducted over about a day. The GeH\({}_{4}\) gas was synthesized, mixed with Xe, and inserted into proportional counters. The gas was then counted for 60-150 d. The following subsections discuss these key experimental activities in further detail.

### The \({}^{51}\)Cr Source

#### Source Fabrication

Chromium has four isotopes. \({}^{50}\)Cr has a low natural abundance (4.35%) and \({}^{53}\)Cr has a high neutron capture cross section. As a result, one must enrich the Cr in isotope 50 and deplete it in 53 to reach the desired activities [51]. The Cr isotope enrichment for GALLEX and SAGE was done at the Kurchatov Institute by gas centrifugation of CrO\({}_{2}\)F\({}_{2}\) [52, 53]. The CrO\({}_{2}\)F\({}_{2}\) was then hydrolyzed to Cr\({}_{2}\)O\({}_{3}\), followed by reduction to metallic Cr. Impurities in the Cr would activate while in the reactor, creating a potential health hazard and impacting the source strength measurement. Therefore, great care was taken to ensure no contamination occurred during processing. To verify this, the samples were chemically analyzed by mass spectroscopy prior to irradiation. SAGE extruded the Cr metal into rods, which were irradiated at the BN-350 fast breeder nuclear reactor in Aktau, Kazakhstan to produce a 516.6 kCi source [54]. GALLEX irradiated its Cr in the Siloé reactor at Grenoble, as chips within a zircaloy tube [55]. Both of these reactor facilities produce intense thermal neutron fluxes and allow for the loading of large samples for irradiation. BEST used 4 kg of 97%-enriched \({}^{50}\)Cr formed into 26 metal disks, which were irradiated for \(\sim\)100 d in the SM-3 reactor at the State Scientific Center Research Institute of Atomic Reactors, Dimitrovgrad, Russia. After irradiation the \(3.1414\pm 0.008\) MCi source was delivered to Baksan on 5 July 2019, with exposures beginning at 14:02 that same day. This was taken as the source strength reference time. The source is shown in Fig. 3.4.

#### Source Activity

All three experiments used calorimetry as their primary, and most precise, tool to estimate the activity of the sources. All three also used a variety of additional methods to cross-check the activity determination. The activity of the SAGE \({}^{51}\)Cr source was measured three ways [54]: a calorimeter was built to measure the heating power of the source [56]; the 320 keV \(\gamma\)-rays produced following EC to the first excited state in \({}^{51}\)V (\(\sim\)10% branch) were counted; and the activity was estimated from reactor physics.
\begin{table} \begin{tabular}{l c c c} \hline Measurement & Activity (\(10^{15}\) Bq) & Activity (MCi) & Measured/Expected \\ \hline \hline SAGE Cr & \(19.11\pm 0.22\) & \(0.5166\pm 0.0060\) [54] & \(0.95\pm 0.12\) [54, 46] \\ SAGE Ar & \(15.1\pm 0.7\) & \(0.409\pm 0.002\) [46] & \(0.79^{+0.09}_{-0.10}\) [46] \\ GALLEX Cr-1 & \(63.4^{+1.1}_{-1.6}\) [57] & \(1.714^{+0.03}_{-0.043}\) & \(0.953\pm 0.11\) [48] \\ GALLEX Cr-2 & \(69.1^{+3.3}_{-2.1}\) [57] & \(1.868^{+0.09}_{-0.057}\) & \(0.812^{+0.10}_{-0.11}\) [48] \\ BEST-inner & \(116.23\pm 0.03\) & \(3.1414\pm 0.008\) [1] & \(0.79\pm 0.05\) [1] \\ BEST-outer & \(116.23\pm 0.03\) & \(3.1414\pm 0.008\) [1] & \(0.77\pm 0.05\) [1] \\ \hline \hline \end{tabular} \end{table}
Table 3.1: Summary of the source activities and measured-to-predicted ratios for each of the six experiments. The experiments used different units to quote activities; therefore we give both here.

Figure 3.4: A photograph of the BEST source as it is being removed from its transport container. To the right side of the photo, the calorimeter can be seen.

GALLEX [57] also made several measurements of source activity. Samples of the irradiated Cr chips were collected, and the 320-keV \(\gamma\) line was counted. This was done independently by three groups (Saclay, Karlsruhe, and BNL), along with inductively coupled plasma-atomic emission spectroscopy. The activity of the total source was also measured by calorimetry, and finally by measuring the \({}^{51}\)V content of the source (Karlsruhe and BNL) after all irradiations were complete (at which time most of the \({}^{51}\)Cr had decayed). The BEST calorimetry results [58, 59] were complemented by measurements of the \(\gamma\) radiation from the source between exposures. Figures 3.3 and 3.4 show photos of the calorimeter. The precision of the calorimetry, a technique with a long and successful history, combined with multiple supporting measurements as described above, gives one confidence that the source intensities were very well determined.

### BEST Experimental Operations

BEST performed 10 extractions from each of the two volumes for a total of 20 measurements of the \(\nu_{e}\) flux.

#### 3.2.1 The Extractions

The procedure for Ge extraction from metal Ga is described in detail in Ref. [60], with improvements employed in solar neutrino and source measurements after 1998 described in Ref. [46]. The extraction efficiency is measured by introducing a Ge carrier isotope to the Ga target at the beginning of each neutrino exposure. In the procedure followed since 2005, 2.4 \(\mu\)mol of Ge enriched in either \({}^{72}\)Ge (92%) or \({}^{76}\)Ge (95%) is added. The contents of the reactor are then stirred to thoroughly disperse the carrier throughout the target. At the end of the exposure, an extraction solution consisting of HCl and 30% H\({}_{2}\)O\({}_{2}\) is added and the Ga is intensively stirred. This causes the Ga to form into fine droplets which are covered with a Ga oxide film. This film prevents fusion of the droplets and holds the Ga as a fine emulsion. The dissolved Ge in the Ga migrates to the surface of the droplets, oxidizes, and is incorporated into the oxide film. Once the H\({}_{2}\)O\({}_{2}\) is consumed, the emulsion breaks down. To dissolve the oxide containing the Ge, a quantity of 7 M HCl is added and the Ga is briefly stirred. This solution is decanted and concentrated by evaporation to a volume suitable for sweeping.
The Ge is swept from this volume as GeCl\({}_{4}\), which is volatile and thus can be swept out with a flow of air. The Ge is extracted into CCl\({}_{4}\) and then back extracted into low-tritium water. The process is repeated three times to concentrate the Ge into a small volume of water. A much more detailed discussion of these chemical procedures can be found in Ref. [60]. Based on tracer recovery, the overall Ge extraction efficiency for the BEST runs was 98%. #### 3.2.2 GeH\({}_{4}\) Synthesis GALLEX, SAGE, and BEST followed similar procedures for synthesizing the GeH\({}_{4}\) gas [47, 60]. The Ge-loaded water from the extraction has a final volume of about 100 ml. NaOH is added to adjust the pH and the solution is placed in a reduction flask. Low-tritium NaBH\({}_{4}\) dissolved in low-tritium water is added and the mixture is heated to 70 °C. At this temperature the Ge is reduced by the NaBH\({}_{4}\), making GeH\({}_{4}\). The produced H\({}_{2}\) and flowing He sweep the GeH\({}_{4}\) into a chromatography unit where it is captured in a cold trap. After the reaction completes, the column temperature is raised and the GeH\({}_{4}\) is eluted with the He and frozen on another trap. It is then released, mixed with Xe and added into a miniature proportional counter. The overall synthesis efficiency for the BEST runs was 96%. #### 3.2.3 The Proportional Counters The small (\(\sim\)0.5 cm\({}^{3}\)) proportional counters used by BEST were identical to those SAGE employed after 2001 and also similar to those of GALLEX. The counters had a thin layer of carbon deposited on the inner surface of a quartz body using thermal decomposition of isobutane. This layer served as the cathode and minimized dead volume. The special design, with the walls rounded inwards near the cathode ends, minimized edge effects. Connections to the cathode and anode were made of molybdenum band, which provided a good gas seal and guaranteed stability of amplification. This design had lower background and higher volume efficiency than the earlier SAGE design, while maintaining stable high gas amplification and good energy resolution. The counters were manufactured from radiopure materials. The counter bodies were fabricated of synthetic quartz (Suprasil® [61]). The thickness of this body wall was etched with hydrofluoric acid to about 200 \(\mu\)m. This kept the background from the Suprasil® very low. Earlier SAGE counters used a zone-refined iron sleeve as the cathode. These were low background, but had a dead volume behind the sleeve. The GALLEX counters were similar to the early SAGE counters with an iron or silicon cathode sleeve and tungsten anode wire with a body made of Suprasil® [47]. #### 3.2.4 Waveform Analysis and Likelihood Fits A great deal of information is encoded in the rise times of the digitized waveforms from the proportional counters. The Auger electrons arising from the decay of \({}^{71}\)Ge deposit all their energy in a small volume. As the charge from this point-like energy deposition arrives at the central counter wire, it produces a pulse with fast rise time, as shown in the top panel of Fig. 3.6. In contrast, background events from \(\beta\) particles or Compton electrons may deposit a similar amount of energy, but produce an extended track, so that the ionization arrives at the central wire over an interval. This generates a pulse with a slower rise time, as shown in the bottom panel of Fig. 3.6. This difference has been exploited to improve the signal-to-background ratio [50, 62].
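To make the rise-time discrimination concrete, the sketch below computes a simple 10%-90% rise time for two toy pulses, one with a fast leading edge (point-like Auger deposition) and one with a slow edge (extended track). This is a minimal illustration only: the pulse shapes and time constants are invented for the example, and the experiments fit the full digitized waveform rather than a two-point rise time.

```python
import numpy as np

def rise_time_10_90(t, v):
    """10%-90% rise time of a baseline-subtracted pulse.

    t : sample times (ns); v : pulse amplitudes, assumed to rise
    monotonically to their maximum.  Returns the time for the leading
    edge to climb from 10% to 90% of the peak, a simple proxy for how
    quickly the ionization arrives at the anode wire.
    """
    peak = v.max()
    i_peak = int(np.argmax(v))
    lead_v, lead_t = v[: i_peak + 1], t[: i_peak + 1]
    t10 = np.interp(0.1 * peak, lead_v, lead_t)
    t90 = np.interp(0.9 * peak, lead_v, lead_t)
    return t90 - t10

t = np.linspace(0.0, 600.0, 1201)    # ns
fast = 1.0 - np.exp(-t / 10.0)       # point-like deposition (tau = 10 ns)
slow = 1.0 - np.exp(-t / 80.0)       # extended track (tau = 80 ns)
print(rise_time_10_90(t, fast))      # ~22 ns
print(rise_time_10_90(t, slow))      # ~176 ns, easily separable
```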
The power of rise-time techniques to distinguish signal from background is apparent from Fig. 3.7. The two panels show data taken early during counting when \({}^{71}\)Ge has not fully decayed away and hence shows its signature, and late in the counting after the \({}^{71}\)Ge has decayed and only background remains. Events selected by energy and rise time are used in a likelihood fit [63]. Here the time of each event is used to determine its probability of being a signal event or background. The number of signal events that maximizes the likelihood is used to determine the production rate. To determine the quality of fit, Cramér-von Mises and Anderson-Darling statistical tests were used [64]. The final BEST results are displayed in Fig. 2.3 along with those obtained in the four earlier solar neutrino calibration experiments. The significantly improved precision of the BEST measurements is clear. The measured-to-expected ratios found for the BEST inner and outer vessels are \(R_{in}=0.79\pm 0.05\) and \(R_{out}=0.77\pm 0.05\), respectively. These values differ significantly from unity, but agree with each other within uncertainties, revealing no tell-tale sign of oscillations. Figure 3.5: A photograph of a BEST proportional counter. Figure 3.6: Top: A candidate signal pulse. Bottom: A candidate background pulse. Figure 3.7: The candidate events from the BEST experiment separated into time bins after start of counting [2]. Top: Energy vs rise-time histogram of all events of the outer target after the shield-open cut observed in all ten exposures during the first 30 days after extraction. The live time is 249 days, and 1387 events are shown. Bottom: The same histogram for the 504 events that occurred during an equal live-time interval beginning at 40 days after extraction. The results for all six measurements are given in Table 3.1. The auxiliary tests that have been performed to check whether an experimental artifact is the cause of the observed deficits are discussed in Sec. 4. Cross section uncertainties are discussed in Sec. 5. The various nuclear and atomic physics inputs that might affect the result are discussed in Sec. 5.5. Some of the interest in BEST and other Ga calibration tests comes from suggestions that \(\nu_{e}\rightarrow\nu_{s}\) might be responsible for the low values of \(R\), as well as other neutrino anomalies. This possibility and the tension with other experiments are discussed in Sec. 6. ## 4 Auxiliary Experimental Tests The SAGE and GALLEX programs considered various steps in the Ge recovery and counting for which the efficiencies might have been overestimated. No evidence of such was found. The following subsections describe the tests that were performed. ### Extraction and Synthesis Efficiency The GALLEX extraction efficiency is based on the volatility of GeCl\({}_{4}\) with Ge in the tetravalent state. Efficient extraction of Ge(IV) carrier and \({}^{71}\)Ge(IV) was confirmed by a number of tests. One such test considered whether \({}^{71}\)Ge produced by \({}^{51}\)Cr \(\nu_{e}\) capture retained the molecular form for extraction. GALLEX performed a _hot-atom chemistry_ test with \({}^{71}\)As [65]. Hot-atom chemistry refers to the feature that the produced \({}^{71}\)Ge has a recoil energy resulting from \(\nu_{e}\) capture and subsequent \(\beta\) emission. As the recoil energy is comparable to the \(\sim\)3-4 eV chemical binding energy, the Ge-Cl bond could be broken, resulting in a depressed extraction efficiency.
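Because the hot-atom question turns on the size of this recoil, the kinematics are worth checking numerically. The short Python sketch below computes the maximal \({}^{71}\)Ge recoil for back-to-back lepton kinematics, taking 747 keV as a round value for the dominant \({}^{51}\)Cr line and 420 keV for the pp endpoint (both values adopted here for illustration); it reproduces the \(\sim\)20 eV and \(\sim\)6 eV recoils quoted in the next paragraph.

```python
import math

M_GE71 = 70.925 * 931494.1    # 71Ge mass, u -> keV/c^2 (approximate)
M_E = 510.999                 # electron mass, keV/c^2
Q_EC = 232.44                 # 71Ge - 71Ga EC Q-value, keV

def max_recoil_ev(e_nu_kev):
    """Maximum 71Ge recoil (eV) after 71Ga(nu_e, e-)71Ge.

    The recoil momentum is largest when the electron exits opposite
    to the incoming neutrino (p_rec = p_nu + p_e); the heavy recoil
    is then treated nonrelativistically.
    """
    e_e = e_nu_kev - Q_EC + M_E            # total electron energy
    p_e = math.sqrt(e_e**2 - M_E**2)       # electron momentum
    p_rec = e_nu_kev + p_e                 # back-to-back configuration
    return 1.0e3 * p_rec**2 / (2.0 * M_GE71)

print(max_recoil_ev(747.0))   # dominant 51Cr line: ~20 eV
print(max_recoil_ev(420.0))   # pp endpoint:        ~6 eV
```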
Also, EC produces an inner shell vacancy that, as it fills, can produce shake-off electrons altering the charge state, again resulting in molecule breakup. If the Ge ends up in a non-extractable form such as Ge(II) instead of the expected Ge(IV), the carrier recovery measurements might not be an accurate measure of the extraction efficiency. One can check hot-chemistry effects using \({}^{71}\)As, which decays by EC and \(\beta^{+}\), producing \({}^{71}\)Ge with kinematics resembling those resulting from \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge for \({}^{51}\)Cr source \(\nu_{e}\)s. By adding a known amount of \({}^{71}\)As (\(\tau_{1/2}\)=2.72 d) and counting the number of extracted \({}^{71}\)Ge atoms produced, GALLEX performed a high-statistics check of potential hot-chemistry effects. The recovery of \({}^{71}\)Ge was 100\(\pm\)1%, indicating that the produced \({}^{71}\)Ge ends up as volatile extractable GeCl\({}_{4}\). As there is no known technique for dissolving and stirring As within Ga metal, similar tests for SAGE and BEST have not been performed. Instead, SAGE performed Ga metal hot-atom chemistry tests by doping the detector with radioactive \({}^{70,72}\)Ga produced by neutron activation [54]. This checked whether introduction of the carriers \({}^{70,72}\)Ge via _in situ_ decay, and thus with significant recoil energy, would influence the efficiency of their recovery. No change in recovery efficiency was seen. However, as the maximum recoil energy of \({}^{70}\)Ge after \(\beta\) decay, 32 eV, is larger than both the 20 eV recoil of \({}^{71}\)Ge after \({}^{51}\)Cr \(\nu_{e}\) capture and the 6.1 eV recoil after pp \(\nu_{e}\) capture, there is not a precise equivalence between the carrier kinematics and those produced in the neutrino capture reactions. On the Earth's surface, \({}^{68}\)Ge (\(\tau_{1/2}\sim\)271 d) is produced cosmogenically within Ga. When the Ga was initially brought underground, extractions were conducted by SAGE to remove \({}^{68}\)Ge. The reduction of this isotope during these extractions followed that of the Ge carrier [54]. GALLEX also did extractions to remove \({}^{68}\)Ge. A small fraction of the \({}^{68}\)Ge was retained: they attributed the extraneous activity to some trace impurity interacting with the \({}^{68}\)Ge, releasing it slowly [47]. They note the effect is very small and only observable because of the very high initial level of \({}^{68}\)Ge. SAGE prepared a sample of carrier doped with a known number of \({}^{71}\)Ge atoms, adding the isotope to a reactor containing seven tons of Ga. Three extractions were performed and the measured rate was as expected from the stable carrier determination [54]. The pp solar neutrino flux measurements of SAGE and GALLEX/GNO agree, but also can be compared to the result from Borexino. If the radiochemical Ga experiments had an efficiency lower than claimed, this would be revealed by a higher rate in Borexino's event-by-event detection. The Ga result, \((6.0\pm 0.8)\times 10^{10}\)/cm\({}^{2}\)s [29], agrees with that from Borexino, \((6.1\pm 0.5^{+0.3}_{-0.5})\times 10^{10}\)/cm\({}^{2}\)s [66]. However, the comparison is not definitive due to the \(\sim\)12% uncertainty of each measurement. ### Counter Efficiency A variety of tests were performed by SAGE, GALLEX and BEST to ensure their counter efficiencies were estimated correctly. SAGE performed a counter efficiency test with \({}^{69}\)Ge [67]. This isotope decays 27% of the time by EC with a signal identical to that of \({}^{71}\)Ge. 
The remainder of the decays are EC followed by emission of a 1106-keV \(\gamma\). By placing a proportional counter near an HPGe detector, both the \(\gamma\)s and Auger emissions could be measured, confirming the Auger detection efficiency. SAGE also measured the volume efficiency using counters filled with \({}^{37}\)Ar and \({}^{71}\)Ge. The radioactive gas mixtures were measured in a typical counter and then transferred to a large counter specifically designed for very high efficiency [60]. These radioactive-gas fills also provided data to verify the peak counting efficiency. The volume efficiency was corrected for gas pressure and mixture (GeH\({}_{4}\) fraction). GALLEX used \({}^{71}\)Ge-filled counters [68, 69] to establish their efficiencies and optimize counter design. BEST measured the volume efficiency of each counter with \({}^{37}\)Ar and \({}^{71}\)Ge after the experimental measurements were completed. These measurements also determined the peak selection efficiency and the rise-time cut efficiency [2]. ### Detector Effective Path Length and Distribution The effective neutrino path length \(\langle L\rangle\) through each zone of the BEST detector can be computed from Eq. (2.5). \(\langle L\rangle\), target Ga mass, and the distribution \(L(D)\) were evaluated by Monte Carlo integration, for both the near and far zones. The as-built geometry for BEST was used to define the integration volumes. Uncertainties in these quantities arise from the precision of the as-built measurements, the statistical precision of the Monte Carlo integration, and the comparison between the calculated and measured mass [2]. The total uncertainty is estimated to be \(\pm 0.3\%\), and thus the geometry is a negligible uncertainty in computing any of these quantities. The values for \(\langle L\rangle\) are given in Table 4.1. The two volumes were designed to provide similar values for \(\langle L\rangle\) and thus similar event rates, in the absence of oscillations. In Fig. 4.1 we plot the dimensionless quantity \(L(D)\), which encodes the source and detector geometry effects needed in determining event rates in the detector as a function of baseline, the distance between the points of neutrino production in the \({}^{51}\)Cr source and detection in the Ga. See the discussion in Sec. 2.4. In the presence of oscillations, the event rate \(r\) is proportional to the integral over possible baselines \(D\) of the convolution of \(L(D)\) with the oscillation probability \(P_{ee}(E_{\nu}^{i},D)\). \begin{table} \begin{tabular}{l c c} \hline \hline Measurement & \(\langle L\rangle\) (cm) & Oscillation Length Range (cm) \\ \hline \hline SAGE Cr & \(72.6\pm 0.2\)[54] & 0-109 \\ SAGE Ar & \(72.6\pm 0.2\)[54] & 0-109 \\ GALLEX Cr-1 & \(190\pm 1\)[70] & 0-194 \\ GALLEX Cr-2 & \(190\pm 1\)[70] & 0-194 \\ BEST-inner & \(52.03\pm 0.18\)[2] & 0-67 \\ BEST-outer & \(54.41\pm 0.18\)[2] & 67-152 \\ \hline \hline \end{tabular} \end{table} Table 4.1: The mean path lengths for the various measurements and the range of values for the oscillation length. The oscillation length ranges are our estimates based on the given geometry. Figure 4.1: The dimensionless distribution \(L(D)\) which encodes detector and source geometry needed to compute the distribution of neutrino captures within the Ga volumes as a function of baseline (the distance \(D\) between the points where the neutrino is produced in the source and detected in the Ga volume).
The total rate in the presence of oscillations is proportional to the integral over the convolution of \(L(D)\) with the oscillation probability \(P_{ee}(E_{\nu}^{i},D)\) (see Sec. 2.4). Figure courtesy of Ralph Massarczyk. ## 5 The \({}^{71}\)Ga Capture Cross Section A critical issue in the analysis of the BEST and earlier Ga neutrino source calibrations is the cross section for \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge, a common systematic in all of the experiments. The nuclear physics is shown in Fig. 5.1. Here we summarize work very recently completed in which the four aspects of this problem were re-examined [3]: 1. The strength of the transition from the \(\frac{3}{2}^{-}\) ground state of \({}^{71}\)Ga to the \(\frac{1}{2}^{-}\) ground state of \({}^{71}\)Ge. This transition dominates the neutrino absorption and is tightly constrained by the known electron-capture lifetime of \({}^{71}\)Ge. 2. The associated partial neutrino cross section \({}^{71}\)Ga\((\text{gs})(\nu_{e},e^{-})^{71}\)Ge\((\text{gs})\), computed from the EC rate after a series of electroweak corrections are made. 3. Cross section contributions of two kinematically accessible excited states in \({}^{71}\)Ge, at 175 keV \((\frac{5}{2}^{-})\) and 500 keV \((\frac{3}{2}^{-})\). 4. The line neutrino spectra from the EC sources \({}^{51}\)Cr and \({}^{37}\)Ar that are folded with the cross section in rate estimations. The fourth point was addressed early in this overview: the cross sections presented below represent weighted averages over the neutrino lines from each source, with the weights determined by the measured EC branching ratios. The needed data are given in Table 2.1. ### The Ground State Transition Strength The allowed strength for the neutrino-driven transition from \({}^{71}\)Ga to the ground state of \({}^{71}\)Ge is \[\text{B}_{\text{GT}}^{(\nu,\text{e})}(\text{gs})\equiv\frac{1}{2j_{i}+1}\left|\langle j_{f}^{\pi}=\frac{1}{2}^{-}||\sum_{i=1}^{A}\sigma(i)\tau_{-}(i)||j_{i}^{\pi}=\frac{3}{2}^{-}\rangle\right|^{2} \tag{5.1}\] where \(\sigma(i)\) is the Pauli spin matrix, \(\tau_{-}(i)\) is the isospin lowering operator, and \(||\) denotes a matrix element reduced in angular momentum. As shown in [3], the Gamow-Teller (GT) transition strength \(\text{B}_{\text{GT}}\) can be extracted from the precisely measured electron-capture half-life of \({}^{71}\)Ge [43] \[\tau_{\frac{1}{2}}\big{[}{}^{71}\text{Ge}\big{]}=11.43\pm 0.03\ \text{d}\] through the relation \[\omega=\frac{\ln 2}{\tau_{\frac{1}{2}}}=\frac{G_{F}^{2}\cos^{2}\theta_{C}}{2\pi}\ |\phi_{1s}^{av}|^{2}\ E_{\nu,1s}^{2}\ \left[2(1+\epsilon_{o}^{1s})\left(1+\frac{P_{L}+P_{M}}{P_{K}}\right)\right]\ g_{A}^{2}\ [2\,\text{B}_{\text{GT}}^{(\nu,\text{e})}(\text{gs})]\ [1+g_{v,b}]_{EC}\ [1+\epsilon_{\text{q}}] \tag{5.2}\] This expression gives the total EC rate in terms of the partial rate for \(1s\)-capture. The terms in Eq. (5.2) are: 1. The weak couplings. This expression was evaluated in [3] using Particle Data Group values for the Fermi coupling constant \(G_{F}\) and the Cabibbo angle \(\theta_{C}\) and the Perkeo II value for the axial vector coupling \(g_{A}\). 2. The energy of the emitted neutrino \(E_{\nu,1s}\), computed from the EC Q-value and the \(1s\) binding energy of 10.37 keV \[Q_{EC}=M[^{71}\text{Ge}]-M[^{71}\text{Ga}]=232.443\pm 0.093\ \text{keV}\] \[Q_{EC}=E_{\nu,1s}+10.37\ \text{keV}\ \Rightarrow E_{\nu,1s}=222.1\pm 0.1\ \text{keV}.\] 3. The wave function probability, averaged over the nuclear volume, for a \(1s\) electron, \(|\phi_{1s}^{av}|^{2}\).
This quantity is taken from atomic many-body theory, and its uncertainty is addressed in the next section. 4. The factor in the first square bracket relates the single-electron \(1s\) capture rate to the total EC rate. It includes a) a factor of two, as there are two \(1s\) electrons; b) an exchange and overlap correction, taken from theory, that accounts for the fact that an instantaneous hole in the atomic cloud of \({}^{71}\)Ge does not overlap precisely with a similar hole in the daughter atom \({}^{71}\)Ga; and c) the contributions from L and M capture relative to K capture, expressed in terms of the experimentally known capture probabilities \(P_{K}\), \(P_{L}\), and \(P_{M}\). 5. The second square bracket expresses the allowed matrix element for EC in terms of that for \((\nu_{e},e^{-})\), \(\mathrm{B}^{\mathrm{EC}}_{\mathrm{GT}}=2\ \mathrm{B}^{(\nu,\mathrm{e})}_{\mathrm{GT}}(\mathrm{gs})\). 6. The factor \([1+g_{v,b}]_{EC}\) is the contribution from radiative corrections (the exchange of virtual photons and bremsstrahlung). 7. The factor \([1+\epsilon_{\mathrm{q}}]\) represents contributions beyond the allowed approximation. This contribution is dominated by the term linear in the three-momentum transfer \(q\) arising from interference between the allowed amplitude and weak magnetism. While the weak magnetism contribution is constrained by the known isovector magnetic moment, there are additional contributions that must be taken from nuclear theory [3]. The various terms in Eq. (5.2) have uncertainties, which are discussed in detail in [3] and, partially, in the next section of this paper. One finds that the gs \(\leftrightarrow\) gs transition strength for \((\nu_{e},e^{-})\) is constrained by the known EC rate to about 1%, when all such uncertainties are considered: \[\mathrm{\tilde{B}}^{(\nu,\mathrm{e})}_{\mathrm{GT}}(\mathrm{gs})\equiv\mathrm{B}^{(\nu,\mathrm{e})}_{\mathrm{GT}}(\mathrm{gs})\ [1+g_{v,b}]_{EC}=0.0864\,\pm\,0.0010\ \ (95\%\,\mathrm{C.L.})\] ### The Ground State Neutrino Capture Cross Section The cross section for \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge(gs) can be expressed in terms of the allowed matrix element defined above, \[\sigma_{\mathrm{gs}}=\frac{G_{F}^{2}\cos^{2}\theta_{C}}{\pi}\,p_{e}E_{e}\mathcal{F}(Z_{f},E_{e})\,g_{A}^{2}\,\mathrm{\tilde{B}}^{(\nu,\mathrm{e})}_{\mathrm{GT}}(\mathrm{gs})\,\frac{[1+g_{v,b}]_{(\nu,e)}}{[1+g_{v,b}]_{EC}}\,[1+\epsilon_{q}]. \tag{5.3}\] The terms in Eq. (5.3) are: 1. \(E_{e}\) and \(p_{e}\) are the energy and three-momentum of the produced electron, related to the Q-value by \[E_{e}=E_{\nu}-Q_{EC}+m_{e}-0.09\,\mathrm{keV}\] where the last term on the right is a small correction for the energy lost to electronic rearrangement following the neutrino reaction (a numerical illustration of these kinematic factors is given just after this list). 2. \(\mathcal{F}(Z_{f},E_{e})\) is the correction for the effects of Coulomb distortion of the outgoing electron. It corresponds to the Dirac solution for a nuclear charge distribution, here described as a Fermi distribution constrained to reproduce the experimental r.m.s. radius, with additional corrections to account for atomic screening. 3. The ratio \([1+g_{v,b}]_{(\nu,e)}/[1+g_{v,b}]_{EC}\) accounts for the differential effects of radiative correction on the inverse reactions of EC and \((\nu_{e},e^{-})\). This difference, generated by the bremsstrahlung contribution, was calculated as in [71]. 4. The factor \([1+\epsilon_{q}]\) is the correction for forbidden contributions, again dominated by the interference term involving weak magnetism.
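The kinematic factors in items 1 and 2 control most of the difference between the \({}^{51}\)Cr and \({}^{37}\)Ar ground-state cross sections. As a rough check, the Python sketch below evaluates only the bare phase-space factor \(p_{e}E_{e}\), averaged over approximate line energies and branching fractions (round stand-ins for the measured values of Table 2.1, which is not reproduced here); the Coulomb and radiative corrections of items 2-4 are omitted and account for the few-percent difference from the quoted ratio.

```python
import math

M_E = 510.999    # keV
Q_EC = 232.44    # keV, 71Ge - 71Ga

def phase_space(e_nu):
    """Bare lepton phase-space factor p_e * E_e for 71Ga(nu_e, e-)71Ge."""
    e_e = e_nu - Q_EC + M_E - 0.09          # item 1 of the list above
    p_e = math.sqrt(e_e**2 - M_E**2)
    return p_e * e_e

# Approximate neutrino lines (keV) and branching fractions; these are
# illustrative stand-ins, not the evaluated values of Table 2.1.
cr51 = [(747.0, 0.816), (752.0, 0.090), (427.0, 0.085), (432.0, 0.009)]
ar37 = [(811.0, 0.902), (813.0, 0.098)]

def avg(lines):
    return sum(w * phase_space(e) for e, w in lines)

print(avg(ar37) / avg(cr51))   # ~1.22 from phase space alone
print(6.45 / 5.39)             # ~1.20, the quoted sigma_gs ratio
```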
When these terms are combined and associated errors propagated, one finds for the \({}^{51}\)Cr and \({}^{37}\)Ar sources [3] \[\sigma_{gs}=\left\{\begin{array}{cc}(5.39\pm 0.06)\times 10^{-45}\ \mathrm{cm}^{2}&{}^{51}\mathrm{Cr}\\ (6.45\pm 0.07)\times 10^{-45}\ \mathrm{cm}^{2}&{}^{37}\mathrm{Ar}\end{array}\right.\quad(95\%\,\mathrm{C.L.}) \tag{5.4}\] ### Excited-State Contributions to the Neutrino Capture Cross Section The strength of the well-determined ground state cross section is already sufficient to generate a Ga anomaly. The contributions of the \(\frac{5}{2}^{-}\) (175 keV) and \(\frac{3}{2}^{-}\) (500 keV) excited states, in contrast, are not constrained by the EC rate: they must be extracted from forward-angle (p,n) charge-exchange measurements, in which the measured amplitude contains, in addition to the Gamow-Teller amplitude \(M_{\rm GT}\), a subdominant tensor contribution, \[M^{\rm(p,n)}\simeq M_{\rm GT}+\delta\,M_{T}, \tag{5.6}\] with \(\delta\) a strength constant. Equation (5.6) states the GT strength can still be measured through forward-angle (p,n) measurements, provided one subtracts the contribution from \(M_{T}\). While analyses were done in [38, 39], the recent work of [3] was the first to adequately assess whether \(M^{\rm(p,n)}\) could quantitatively account for known weak transition rates. In this work, a) the transitions were carefully selected so that only data with well-established errors were included; b) the estimates of \(M_{T}\), which must be taken from theory, were computed for multiple interactions, to provide a measure of that uncertainty; and c) the pattern of the results was displayed as a function of the B\({}_{\rm GT}\) strength, to clearly display the effects of the subdominant amplitude. The many details of this analysis are given in [3]. The results, displayed in Fig. 5.2, are quite dramatic. A naive use of (p,n) reactions to map B\({}_{\rm GT}\) works well for strong transitions, but deteriorates as the transition strengths lessen, becoming highly unreliable for transitions of the strength of current interest, given by Eq. (5.5). However, with the inclusion of the typically subdominant tensor operator, the correlation between the (p,n) measurements and known weak strengths is restored. In using Eq. (5.6), Ref. [3] takes into account uncertainties in measurements, in the determination of the strength constant \(\delta\), and in variations in the theoretical estimates of \(M_{T}\). The end result is a determination of the tensor strength, \(\delta=0.075\pm 0.008\) (\(1\sigma\)). The effective operator can then be used in conjunction with the (p,n) measurements to extract the needed B\({}_{\rm GT}\) strengths for \({}^{71}\)Ga. The results can be expressed as [3] \[\frac{\mathrm{B_{GT}}(\frac{5}{2}^{-})}{\mathrm{B_{GT}}(\mathrm{gs})}<0.089\quad(68\%\,\mathrm{C.L.})\qquad\qquad\frac{\mathrm{B_{GT}}(\frac{3}{2}^{-})}{\mathrm{B_{GT}}(\mathrm{gs})}=0.121\pm 0.026\quad(68\%\,\mathrm{C.L.}) \tag{5.8}\] The total cross section can be expressed in terms of these ratios as [38] \[\sigma=\sigma_{\rm gs}\left[1+\xi(\tfrac{5}{2}^{-})\,\frac{\mathrm{B_{GT}}(\frac{5}{2}^{-})}{\mathrm{B_{GT}}(\mathrm{gs})}+\xi(\tfrac{3}{2}^{-})\,\frac{\mathrm{B_{GT}}(\frac{3}{2}^{-})}{\mathrm{B_{GT}}(\mathrm{gs})}\right] \tag{5.9}\] Figure 5.2: In blue: correspondence between the (p,n) amplitude \(|M^{\rm(p,n)}|\) and the beta decay amplitude \(|M_{\rm GT}|\) is excellent when B\({}_{\rm GT}\) is strong, but deteriorates for weaker B\({}_{\rm GT}\). In red: the agreement is restored with the inclusion of \(|M_{\rm T}|\). The two excited states that contribute to the BEST cross section have weak transition strengths that would place them in the shaded region, where the agreement between (p,n) cross sections and B\({}_{\rm GT}\) is typically poor unless the tensor correction is included.
where the phase-space coefficients are [3] \[{}^{51}\text{Cr}:\ \ \xi(\tfrac{5}{2}^{-})=0.669\ \ \ \xi(\tfrac{3}{2}^{-})=0.220\qquad\qquad{}^{37}\text{Ar}:\ \ \xi(\tfrac{5}{2}^{-})=0.696\ \ \ \xi(\tfrac{3}{2}^{-})=0.264\] This then leads to the following total cross sections \[\sigma(^{51}\text{Cr})=\left\{\begin{array}{c}5.71^{+0.27}_{-0.10}\ \ \ 68\%\,\text{C.L.}\\ \\ 5.71^{+0.51}_{-0.23}\ \ \ 95\%\,\text{C.L.}\end{array}\right\}\times 10^{-45}\,\text{cm}^{2} \tag{5.10}\] \[\sigma(^{37}\text{Ar})=\left\{\begin{array}{c}6.88^{+0.34}_{-0.13}\ \ \ 68\%\,\text{C.L.}\\ \\ 6.88^{+0.63}_{-0.28}\ \ \ 95\%\,\text{C.L.}\end{array}\right\}\times 10^{-45}\,\text{cm}^{2}\] This analysis leads to somewhat larger excited-state contributions because the GT and tensor operators interfere destructively for these transitions, which then requires a larger \(M_{\text{GT}}\) to compensate. The excited states increase the total \({}^{71}\text{Ga}(\nu_{e},e^{-})^{71}\text{Ge}\) cross sections by \(\sim\)6.0% and \(\sim\)6.6% for \({}^{51}\text{Cr}\) and \({}^{37}\text{Ar}\), respectively. Reference [3] also includes an analysis of excited-state contributions based on forward-angle (\({}^{3}\)He,t) charge-exchange cross sections [74]. ### Comparisons to Past Work There are a number of older cross section estimates available in the literature, summarized in Table 5.1. Some of their relevant attributes include: 1. Bahcall (1997) [49]: This work included, in its estimate of \(\sigma_{gs}\), overlap and exchange atomic effects, and used the then prevailing value of \(Q_{\text{EC}}=232.69\pm 0.15\) keV. Excited state B\({}_{\text{GT}}\) values were taken from the (p,n) values of [72] without any added corrections. 2. Haxton (1998) [39]: Extending the arguments of [38], this paper explored in detail the possible consequences of interfering GT and tensor contributions to the (p,n) cross section for the 175 keV excited state, in extracting the neutrino absorption cross section. The broad error assigned reflects two factors: a 30% larger value for \(\delta\), which can be traced to deficiencies in the (p,n) data then available (see [3]), and a SM estimate of the tensor amplitude \(M_{T}\) for the \(\tfrac{3}{2}^{-}\rightarrow\tfrac{5}{2}^{-}\) transition that was nearly half of the single-particle \(2p_{3/2}\leftrightarrow 1f_{5/2}\) value. As the tensor and GT amplitudes interfere, this allows for a large GT matrix element. The paper discusses the importance of using the full \(2p_{3/2}1f_{5/2}1p_{1/2}1g_{9/2}\) SM space in order to properly describe the shape-coexistence properties of Ga and Ge, but only the simplest effects of the \(1g_{9/2}\) shell were included in the calculations done, as the technology of the time limited bases to dimensions \(\lesssim 10^{6}\). (Use of the full SM space generates m-scheme bases exceeding \(10^{8}\).) Consequently, anticipating that the absence of correlation would make the truncated SM estimate of \(|M_{T}|\) too large, the author treated the SM estimate as an upper bound, yielding the broad range of cross sections shown in Table 5.1. 3. Barinov _et al._ (2018) [75]: This work used weak couplings updated to 2018, including a value for \(g_{A}\) of \(1.272\pm 0.002\), and adopted \(Q_{\text{EC}}=233.5\pm 1.2\) keV, which came from a Penning trap measurement of the mass difference [76], though this value had been superseded by a more accurate trapping result from [77].
This choice of \(Q_{EC}\) accounts in part for the slightly larger cross section obtained, compared to [49]. 4. Kostensalo _et al._ (2019) [78]: This work followed [39] by including the tensor correction in its analysis of excited-state contributions. The analysis employs SM estimates of the GT and tensor matrix elements. In [3] it is noted that calculations using the identical interaction did not reproduce the tabulated GT and tensor matrix elements of [78]. 5. Semenov (2020) [79]: This work follows [49] quite closely, treating the excited states as was done there, but utilizing updated weak couplings and taking \(Q_{EC}\) from [77], which remains the best value. 6. Haxton _et al._ (2023) [3]: As described here, this work includes a much more advanced extraction of the needed excited-state contributions, propagating all identified experimental and theoretical errors in the determination of \(\delta\) and estimation of \(M_{T}\). That is, the excited-state treatment follows the plan of [39], but with improved data and error propagation and without SM limitations. Current Particle Data Group and Perkeo II weak couplings were used, and both radiative corrections and the contributions of weak magnetism were included in the EC and \((\nu_{e},e^{-})\) calculations. The original BEST analysis [1, 2] was done using the cross section from Bahcall [49], which is in reasonably good agreement with the recent determination of [3]. In [3], a variety of small changes - updated values for weak couplings and for \(Q_{EC}\), the inclusion of radiative corrections, the inclusion of weak magnetism, and the computation of Coulomb corrections using a realistic charge distribution consistent with the \({}^{71}\)Ga r.m.s. charge radius - combine to lower \(\sigma_{gs}\) by about 2.5%, while the excited state contribution increases to a bit over 6% when the effects of \(M_{T}\) are included in the extraction of B\({}_{\rm GT}\) from (p,n) cross sections. The net result is a total cross section \(\sigma\sim\)1.5% smaller than that of [49]. For the \(\nu_{e}\rightarrow\nu_{s}\) oscillation bounds we derive later in this paper, the updated cross section of [3] is used. ### Nuclear and Atomic Data Uncertainties Various atomic and nuclear parameters are needed in the \({}^{51}\)Cr and \({}^{37}\)Ar cross section calculations for \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge, and as noted at various points above, small changes have occurred in their values, as new measurements became available over the years. Here we briefly comment on these data uncertainties, emphasizing that they are quite modest and thus cannot account for the observed discrepancy in \(R\).
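Before examining the individual data inputs, the internal consistency of Eqs. (5.9) and (5.10) can be checked with a few lines of Python. The central value of \(\mathrm{B_{GT}}(\frac{5}{2}^{-})/\mathrm{B_{GT}}(\mathrm{gs})\) is not quoted above (only its 68% C.L. upper limit of 0.089), so the sketch simply inverts Eq. (5.9) to find the value implied by the quoted central totals; this is a back-of-envelope check, not part of the analysis of [3].

```python
sigma_gs = {"51Cr": 5.39, "37Ar": 6.45}                # 1e-45 cm^2, Eq. (5.4)
sigma_tot = {"51Cr": 5.71, "37Ar": 6.88}               # Eq. (5.10) central values
xi = {"51Cr": (0.669, 0.220), "37Ar": (0.696, 0.264)}  # (xi_5/2, xi_3/2)
b32 = 0.121                                            # B_GT(3/2-)/B_GT(gs)

# Invert Eq. (5.9): sigma = sigma_gs * (1 + xi5*b52 + xi3*b32)
for src in ("51Cr", "37Ar"):
    x5, x3 = xi[src]
    b52 = (sigma_tot[src] / sigma_gs[src] - 1.0 - x3 * b32) / x5
    print(src, round(b52, 3))   # ~0.05 for both, well below the 0.089 limit
```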
\begin{table} \begin{tabular}{l c c c c} \hline \hline Author & Year & \(\sigma(^{51}\)Cr) & \(\sigma(^{37}\)Ar) & \(Q_{\rm EC}(^{71}\)Ge) (keV) \\ \hline Bahcall [49] & 1997 & \(5.81^{+0.21}_{-0.16}\) & \(7.00^{+0.49}_{-0.21}\) & 232.69(15) \\ Haxton [39] & 1998 & \(6.39\pm 0.68\) & – & 232.69(15) \\ Barinov _et al._[75] & 2018 & \(5.91\pm 0.11\) & \(7.14\pm 0.15\) & 233.5(1.2) \\ Kostensalo _et al._[78] & 2019 & \(5.67\pm 0.06\) & \(6.80\pm 0.08\) & 232.49(22) \\ Semenov [79] & 2020 & \(5.94\pm 0.12\) & \(7.17\pm 0.15\) & 232.44(9) \\ Haxton _et al._[3] & 2023 & \(5.71^{+0.27}_{-0.10}\) & \(6.88^{+0.34}_{-0.13}\) & 232.44(9) \\ \hline \hline \end{tabular} \end{table} Table 5.1: A summary of the published neutrino reaction cross section estimates for \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge in units of \(10^{-45}\) cm\({}^{2}\). The value for \(Q_{EC}\) used in each calculation is shown. All results are given at 68% C.L. See text for details. _Nuclear Q values:_ The EC Q-values for \({}^{37}\)Ar, \({}^{51}\)Cr, and \({}^{71}\)Ge and their uncertainties are, from the most recent available data evaluation [80], \[Q_{EC}(^{37}\text{Ar})=813.87\pm 0.20\text{ keV}\qquad Q_{EC}(^{51}\text{Cr})=752.39\pm 0.15\text{ keV}\qquad Q_{EC}(^{71}\text{Ge})=232.47\pm 0.09\text{ keV} \tag{5.11}\] The uncertainties range from 0.02% to 0.04%, implying an impact on rates or cross sections of \(\sim\)0.04% to 0.08%. This is far below levels of current concern. _The \({}^{71}\)Ge half-life:_ The \({}^{71}\)Ge half-life used in the BEST, GALLEX/GNO, and SAGE analyses is \(11.43\pm 0.03\) d. This value and its 0.3% uncertainty come from [43]. The neutrino capture cross section depends directly on this value as described above. A recent paper [81] questioned the reliability of this half-life. Their suggestion of a longer half-life and much larger uncertainty for the \({}^{71}\)Ge EC half-life was based on four selected measurements, two of which were performed nearly 70 years ago. One of these older papers drives the conclusions of [81]. The arguments of [81] reflect a misunderstanding of both [43] and, more generally, of how nuclear data are evaluated and utilized. The lifetime of [43] represents not one measurement, but the combined results of six distinct measurements that were performed by the authors using different source preparation methods and two distinct counting techniques, in contrast to older work that the authors of [81] weighted equally. (The authors of [43] were of course aware of older efforts, noting the age of these measurements as motivation for their efforts to modernize the EC measurement.) Perhaps more important, the entire body of data on this decay - which includes [43] and ten other publications listed in the ENSDF files of the National Nuclear Data Center - was recently re-evaluated [82]. The evaluation included publications as of January 17, 2023, all of which would have been critically assessed. The resulting recommended value is also \(11.43\pm 0.03\) d [82]. Consequently, the speculations of [81] are not supported by evaluation of the existing body of data on this decay. This said, new measurements of this important EC half-life would of course be welcome, provided they are of the quality of those reported in [43]. _The \({}^{51}\)Cr \(\gamma\)-decay branching ratio:_ The most precise method of determining \({}^{51}\)Cr activity is calorimetry. Calorimetry requires knowledge of the energy release per decay (\(\kappa\)), excluding the unmeasured neutrino contribution.
The value \(\kappa=37.750\pm 0.084\) keV/decay [83] is dominated by EC to the first excited state of \({}^{51}\)V, for which the branching fraction is \(9.91\pm 0.02\)%. The associated \(\gamma\) has an energy of \(320.0835\pm 0.0004\) keV. This contribution accounts for 86% of \(\kappa\). As noted by the BEST collaboration [2] and studied in Ref. [16], to first order, if the branching ratio for this decay were incorrect, the deduced source activity would be incorrect by the same factor. As this branching ratio is known to a precise 0.2%, this is also not an uncertainty of concern. Furthermore, any change would not affect the \({}^{37}\)Ar result. _The \({}^{71}\)Ge electron density at the nucleus:_ The dominant ground-state cross section for \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge is tightly constrained by the known EC rate of \({}^{71}\)Ge. This connection was exploited by Bahcall [49] and by almost all subsequent investigators. The newest cross section calculation [3] has now taken into account radiative corrections and nuclear operator contributions beyond the allowed approximation (dominated by the interference term with weak magnetism). These corrections do alter the relationship between EC and \((\nu_{e},e^{-})\), but enter at the level of \(\lesssim\) 0.5%. In addition, this relationship depends on important input from atomic theory, the \(1s\) atomic wave function probability averaged over the nuclear transition density (see discussion in [3]). This averaging generates the probability \(|\phi^{av}_{1s}|^{2}\) that appears in Eq. (5.2). Uncertainties in \(|\phi^{av}_{1s}|^{2}\) directly impact the calculated EC rate. In [49] Bahcall cites as private communications three relativistic, self-consistent Hartree-Fock calculations that took into account the finite extent of the nucleus, the Breit interaction, vacuum polarization, and self-energy corrections, averaging the resulting wave functions over the nuclear volume to obtain \(|\phi_{i}^{av}|^{2}\) for \(K\), \(L\), and \(M\) capture. The calculations, performed by independent groups, agree at the \(\pm 0.2\%\) level. We are not aware of any subsequent calculations that are as complete. While [49] includes references to the atomic methods employed by the three groups, details on the specific calculations performed for \({}^{71}\)Ge do not appear to be published. This is somewhat unfortunate, given the importance of \(|\phi_{i}^{av}|^{2}\) in the \({}^{71}\)Ge EC calculation. The relationship between the dimensionless numerical quantity given in [49] and the density \(|\phi_{1s}^{av}|^{2}\) is not entirely obvious: see [3] for discussion. When converted to more conventional units, one finds the result \[(\hbar c)^{3}|\phi_{1s}^{av}|^{2} = (7.21\pm 0.03)\times 10^{-4}\ \mathrm{MeV}^{3} = \mathcal{R}\,\frac{(Z\alpha m_{e}c^{2})^{3}}{\pi}\Big{|}_{Z=32} \tag{5.12}\] with \(\mathcal{R}=1.333\). The \(0.4\%\) uncertainty (\(95\%\) C.L.) is determined from the standard deviations of the three atomic calculations reported in [49] and from differences in theoretical estimates of the overlap and exchange corrections, as computed by Bahcall and Vatai (see [3]). These two sources of theoretical uncertainty were combined in quadrature. This procedure accounts for differences apparent from the spread among competing calculations, but not those that could arise if the calculations being compared employed common but flawed assumptions.
But unless some major mistake has been made in the atomic physics, atomic uncertainties are far below the level of current concern. In the second line of Eq. (5.12) the result has been re-expressed in terms of the Schrödinger density for an electron bound to a point charge \(Z\), evaluated at the origin. The dimensionless proportionality factor \(\mathcal{R}\) is not too different from unity. _K, L, and M capture ratios:_ Additional atomic data input uncertainties enter through the experimental K, L, and M EC probabilities for the \({}^{51}\)Cr and \({}^{37}\)Ar sources, listed in Table 2.1. We see that the absolute branching ratios are known to a typical accuracy of \(\sim\)\(10^{-4}\). Further, any error in these quantities would simply redistribute strength over an atomic energy scale, further diluting any impact. Consequently these uncertainties are far below levels of concern. The K, L, and M EC probabilities for \({}^{71}\)Ge appear in the theoretical expression for the capture, Eq. (5.2), used in [3] and in the neutrino analysis presented here. The uncertainties in these quantities, as well as in the exchange and overlap corrections one needs to relate theoretical instantaneous EC rates to the physical rates observed in \({}^{71}\)Ga, are described in [3]. This constitutes the dominant uncertainty in extracting B\({}_{\mathrm{GT}}\)(gs) from the EC rate. This uncertainty is propagated into the cross section calculation and reflected in the \(1.5\%\) uncertainty (\(95\%\) C.L.) assigned to \(\sigma_{gs}\). See [3] for details. ## 6 The Ga Anomaly and its Possible Implications for Sterile Neutrinos The preceding two sections summarize the steps taken to cross-check BEST and earlier Ga calibration experiments. Despite a great deal of effort, no candidate explanation has been found involving either a flaw in experimental procedures or uncertainties in the theoretical input used in the extraction of the \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge rate. Indeed, the experiments are unusually free of both neutrino source and detector cross section uncertainties. The sources generate simple line neutrino spectra, calorimetry and other techniques tightly constrain source intensity, and the known EC rate of \({}^{71}\)Ge establishes a minimum value for the neutrino capture cross section on \({}^{71}\)Ga. It is possible that the Ga anomaly is a statistical fluctuation - though a highly improbable one, if all uncertainties have been correctly estimated. The published BEST results for the inner and outer volumes [1, 2], using the 1997 Bahcall neutrino absorption cross section, can be compared with those obtained using the updated cross section of [3], \[\begin{array}{ll}\frac{R_{\rm out}}{R_{\rm out}^{\rm expected}}=0.77\pm 0.05&\qquad\frac{R_{\rm out}}{R_{\rm out}^{\rm expected}}=0.78\pm 0.05\\ \frac{R_{\rm in}}{R_{\rm in}^{\rm expected}}=0.79\pm 0.05&\qquad\frac{R_{\rm in}}{R_{\rm in}^{\rm expected}}=0.80\pm 0.05\end{array} \tag{6.1}\] Similarly, for the original Ga anomaly obtained from the weighted average of the four calibration experiments [29] \[\frac{R}{R^{\rm expected}}\Big{|}_{\rm calibration}=0.87\pm 0.05\quad\Rightarrow\quad\frac{R}{R^{\rm expected}}\Big{|}_{\rm calibration}=0.88\pm 0.05 \tag{6.2}\] When all of these data are combined with appropriate weighting, one finds for the updated cross section \[\frac{R}{R^{\rm expected}}\Big{|}_{\rm combined}=\left\{\begin{array}{ll}0.82\pm 0.03&\mbox{uncorrelated}\\ 0.81\pm 0.05&\mbox{correlated}\end{array}\right. \tag{6.3}\]
depending on whether one assumes the uncertainties in the five \({}^{51}\)Cr experiments are uncorrelated or correlated. The dominant correlated uncertainty is that associated with the cross section. While the original Ga anomaly had a significance of about \(2.2\sigma\), using the updated cross section and with the inclusion of the BEST results it has grown to \(\sim 4\sigma\) under conservative assumptions. These estimates are based on our current best knowledge of all input experimental and theoretical uncertainties. The anomaly cannot be attributed to nuclear physics uncertainties in the capture cross section: using only capture to the \({}^{71}\)Ge ground state, one obtains \[\frac{R}{R^{\rm expected}}\Big{|}_{\rm combined}^{\rm minimum~{}cross~{}section}=\left\{\begin{array}{ll}0.87\pm 0.03&\mbox{uncorrelated}\\ 0.87\pm 0.05&\mbox{correlated}\end{array}\right. \tag{6.4}\] reducing the significance of the anomaly to approximately \(2.6\sigma\) under the most conservative assumptions, but not eliminating it. The BEST results have been attributed to \(\nu_{e}\to\nu_{s}\), but the absence of any distance dependence, from comparing rates in the inner and outer volumes, means that there is no direct evidence supporting this hypothesis. The rates observed in the two volumes were each low and consistent within their \(1\sigma\) uncertainties. But if \(\nu_{e}\to\nu_{s}\) is invoked to account for the Ga anomaly, one can check the consistency of this hypothesis with other experiments. There exist both null results constraining the properties of sterile neutrinos, and other experimental anomalies that have been linked to their existence. For a recent review, see Ref. [4]. As discussed in the introduction to this paper, this is not an easy task as the number, masses, and couplings of possible sterile neutrinos are among the variables one should consider. Furthermore, sterile neutrinos can be accompanied by other new physics. As is apparent from Refs. [4, 5, 6], the modeling possibilities have resulted in an extensive literature, much of it generated in the last few years. Figure 6.1 shows the BEST constraints on the simplest (3+1) \(\nu_{e}\to\nu_{s}\) scenario involving a single sterile state, as well as the constraints when BEST is combined with the SAGE and GALLEX calibration experiments. The updated cross section of [3] has been used. The parameter space is very flat, particularly along the \(\Delta m^{2}\) direction. The contours exclude the origin and thus are consistent with the assumption that \(\nu_{e}\to\nu_{s}\) is occurring. One can then consider whether other results support or are in tension with the hypothesis of \(\nu_{e}\to\nu_{s}\) for the parameters indicated in Fig. 6.1. BEST's inner/outer detector geometry corresponds to oscillation lengths for \(\Delta m^{2}\sim 1\) eV\({}^{2}\), which one sees reflected in Fig. 6.1. Values much smaller than 1 eV\({}^{2}\), corresponding to longer oscillation lengths, are excluded by BEST's reduced counting rate, \(R\sim 0.8\). For \(\Delta m^{2}\gtrsim 2\) eV\({}^{2}\), BEST loses sensitivity to \(\Delta m^{2}\), as the oscillation length is short relative to detector dimensions. Only the average oscillation is relevant. Consequently BEST results are compatible with a wide range of relatively heavy sterile neutrinos. In contrast, because \(R\) is significantly less than 1, a relatively large mixing angle is indicated. This creates tension with other experimental constraints on sterile neutrinos.
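This averaging argument is easy to make quantitative. In the sketch below, the 3+1 survival probability is averaged over a flat stand-in for the baseline distribution \(L(D)\) of Fig. 4.1 (a crude simplification adopted here only for illustration), for the dominant \(\sim\)0.75 MeV \({}^{51}\)Cr line. For \(\Delta m^{2}\) well above \(\sim\)2 eV\({}^{2}\) the \(\sin^{2}\) factor averages to 1/2, so the suppression saturates at \(R\to 1-\sin^{2}2\theta/2\), which for the best-fit mixing reproduces \(R\sim 0.8\).

```python
import numpy as np

def p_ee(e_mev, l_m, sin2_2theta, dm2_ev2):
    """3+1 short-baseline nu_e survival probability."""
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_ev2 * l_m / e_mev) ** 2

E = 0.747                            # MeV, dominant 51Cr line
L = np.linspace(0.05, 1.52, 4000)    # m, roughly the BEST baseline span

for dm2 in (0.2, 1.0, 6.1):          # eV^2
    # Flat weighting over L is a crude stand-in for the true L(D).
    print(dm2, np.mean(p_ee(E, L, 0.41, dm2)).round(3))

# Fully averaged regime: <sin^2> -> 1/2, so R -> 1 - sin2_2theta/2
print(1.0 - 0.41 / 2.0)              # 0.795, matching R ~ 0.8
```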
A number of other experiments have produced results impacting the sterile neutrino interpretations of the Ga anomaly: DANSS [84], Prospect [85], Stereo [86], RENO & NEOS [87, 88] and KATRIN [89] all quote limits and provide exclusion regions. As a collective they exclude most, but not all, of the BEST allowed space. The reactor anti-neutrino anomaly (RAA) [90], and the reactor experiment Neutrino-4 [91], claim evidence for \(\nu_{e}\rightarrow\nu_{s}\). The allowed regions for Neutrino-4 and BEST overlap. The allowed regions for RAA and BEST overlap near \(\sin^{2}\!2\theta\sim\)0.2, but marginally. Similarly, limits from solar neutrinos [92] exclude almost all of the BEST allowed region, except for the lowest allowed mixing angles. The joint MiniBooNE-MicroBooNE results [93] yield an allowed region that overlaps poorly with the Ga results. Although the MicroBooNE results are limited by low statistics and hence do not significantly alter the MiniBooNE exclusion region, taken by themselves they are consistent with the Ga data [94]. Readers can find in Ref. [4] various exclusion plots summarizing existing constraints on the 3+1 sterile neutrino scenario. On balance no clear evidence has emerged from these experiments that supports the simplest new-physics hypothesis of \(\nu_{e}\rightarrow\nu_{s}\) oscillations to a fourth, sterile neutrino state as an explanation for the BEST results. Of course, this does not exclude more complicated scenarios with additional beyond-the-Standard-Model degrees of freedom. On the other hand, as we have described in this review, many cross-checks of the experimental procedures have been made, yielding no evidence of significant issues in either the BEST experiment or the four earlier Ga calibration efforts. Nor is there any identified theory uncertainty that could possibly account for a \(\sim\)20% reduction in the counting rate. Thus at this time we lack an explanation for the results that have been obtained. Figure 6.1: Left: The allowed region for oscillations into a sterile state determined from the BEST inner and outer results, using the updated neutrino capture cross section from [3]. The best-fit point is \(\sin^{2}\!2\theta=0.41\), \(\Delta m^{2}=6.1\) eV\({}^{2}\), denoted above by b.f.p. Right: Allowed regions when the constraints from the two GALLEX and two SAGE calibration experiments are added. The best-fit point is \(\sin^{2}\!2\theta=0.32\), \(\Delta m^{2}=1.25\) eV\({}^{2}\). The parameter space, however, is very flat. Figure courtesy of Tanya Ibragimova. ## 7 Summary and Outlook The BEST experiment was designed as a test of the Ga anomaly that would achieve a higher counting rate, by using a \({}^{51}\)Cr source of unprecedented intensity to expose a large mass of Ga metal to the neutrino flux. The two-volume design provided sensitivity to the oscillation lengths associated with \(\Delta m^{2}\sim 1\) eV\({}^{2}\). No baseline dependence was observed. In both volumes a counting rate was obtained that was \(\sim\)80% of that expected, a result consistent with the earlier calibration experiments while also strengthening the statistical significance of the Ga anomaly. In this review we described the experimental procedures of BEST and the earlier calibration experiments, emphasizing the detailed checks that have been made to verify Ge extraction, proportional counter efficiency, and analysis procedures. We discussed the atomic physics of the sources and the multiple checks that have been made of source intensity.
We described the nuclear physics of the \({}^{71}\)Ga\((\nu_{e},e^{-})^{71}\)Ge cross section, which is tightly constrained by the known EC rate for \({}^{71}\)Ge, including recent work that has provided a solid basis for estimating the uncertainty of the remaining \(\sim\)6% contribution from \({}^{71}\)Ge excited states. Two effects that can alter the relationship between \({}^{71}\)Ge EC and neutrino capture on \({}^{71}\)Ga, radiative corrections and weak magnetism, have been evaluated and found to enter at the level of \(\lesssim\)0.5%. No conventional explanation of the anomaly has been identified, apart from the possibility of an unfortunate statistical fluctuation. While it is clearly not possible to rule out some undiscovered experimental artifact altering either observed rates or current estimates of uncertainties, there is a marked contrast between the various efficiency tests performed, which typically verified procedures at the level of 1%, and the \(\sim\)20% counting deficit found in the BEST experiment. The lack of more conventional explanations for the anomaly has led to suggestions that new physics might be at play, specifically an oscillation into a fourth sterile neutrino \(\nu_{e}\rightarrow\nu_{s}\). The BEST and SAGE/GALLEX calibration results are consistent with such an explanation for a broad range of \(\Delta m^{2}\gtrsim 1\) eV\({}^{2}\) and large mixing angles in the range \(\sin^{2}2\theta\sim 0.3-0.4\). While sterile neutrinos have been invoked to account for other anomalies, in general the oscillation parameters indicated by BEST lead to conflicts with various short-baseline null experiments. While the 3+1 scenario explored is simple and many other possibilities exist, in our view one would need additional supporting evidence before claiming \(\nu_{e}\rightarrow\nu_{s}\) as a likely solution to the Ga anomaly. The most productive path forward might be to perform another high-intensity source experiment, improving the precision and helping to further rule out the possibility that the Ga anomaly is totally or partially a statistical fluctuation. Given the success in producing one high-intensity \({}^{51}\)Cr source (3.14 MCi), one has confidence that a second could be fabricated. But there are alternatives, including one experiment that would be sensitive to somewhat shorter oscillation lengths, corresponding to higher neutrino mass differences. Probing shorter oscillation lengths by making the inner volume of the BEST configuration smaller would be impractical, but developing a higher-energy neutrino source is an intriguing alternative. \({}^{65}\)Zn is an EC isotope with a small \(\beta^{+}\) decay branch (1.421%). Roughly half the electron captures are to an excited state at 1115.5 keV with the remainder to the ground state with a Q-value of 1352.1 keV. This results in K-capture \(\nu_{e}\)s of energies 1342.4 keV and 226.9 keV, the latter below the \({}^{71}\)Ga threshold. Therefore 48.35\(\pm\)0.11% of \({}^{65}\)Zn decays produce \(\nu_{e}\) that can interact. The longer half-life of \({}^{65}\)Zn (244.01\(\pm\)0.09 d [42]) means that many more extractions can be done, compared to the \({}^{51}\)Cr and \({}^{37}\)Ar source experiments previously performed [95]. Furthermore, the \({}^{65}\)Zn cross section is about three times larger. Even though only 48% of the decays produce useful \(\nu_{e}\), the count rates would be higher. 
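A rough figure of merit for a \({}^{65}\)Zn source can be sketched numerically. The estimate below assumes a hypothetical 0.5 MCi source (the scale discussed in the next paragraph), a cross section three times that of \({}^{51}\)Cr, and the 48.35% useful branch, integrating each source over its full lifetime; ignoring the finite exposure schedule makes this an idealized comparison, but it shows how the longer half-life compensates the lower activity.

```python
import math

def lifetime_captures(a0_mci, t_half_d, f_useful, sigma_rel):
    """Relative 71Ge yield integrated over the source lifetime.

    Total decays = A0 * tau (tau = mean life), scaled by the fraction
    of decays emitting a nu_e above threshold and by the capture
    cross section in arbitrary relative units.
    """
    tau = t_half_d / math.log(2.0)
    return a0_mci * tau * f_useful * sigma_rel

cr = lifetime_captures(3.14, 27.7, 1.0, 1.0)      # BEST 51Cr source
zn = lifetime_captures(0.5, 244.0, 0.4835, 3.0)   # hypothetical 65Zn source
print(zn / cr)   # ~2: comparable or better integrated statistics
```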
A first assessment of source fabrication indicates that with 6-7 kg of enriched \({}^{64}\)Zn, a 0.5 MCi source could be produced. However, \({}^{65}\)Zn neutrinos can populate higher energy excited states in \({}^{71}\)Ge at 708 keV, 808 keV and 1096 keV in addition to the states at 175 and 500 keV that contributed to the \({}^{51}\)Cr source experiment. An estimate of the \({}^{65}\)Zn cross section for \({}^{71}\)Ga(\(\nu_{e}\),\(e^{-}\))\({}^{71}\)Ge of \((1.82\pm 0.05)\times 10^{-44}\) cm\({}^{2}\) [75] has been made, based on excited-state B\({}_{\rm GT}\) values extracted from forward-angle (\({}^{3}\)He,t) scattering. Some 20-30% of the cross section is due to such states [75]. This is problematic, as the model-dependent contribution to the \({}^{65}\)Zn \({}^{71}\)Ga(\(\nu_{e},e^{-}\))\({}^{71}\)Ge cross section would exceed the size of the anomaly one is testing. In contrast to the (p,n) analysis of [3], no systematic effective operator study of (\({}^{3}\)He,t) as a probe of weak Gamow-Teller strengths has been made. Thus it is not presently clear whether a \({}^{65}\)Zn neutrino source experiment could achieve the precision required, even though certain experimental attributes of this source are attractive. Huber [96] proposed using the Ce-doped, inorganic scintillating crystal Gd\({}_{3}\)Al\({}_{2}\)Ga\({}_{3}\)O\({}_{12}\) (Ce:GAGG) to test the anomaly by exposing a 1.5-ton detector of crystals to a BEST-like \({}^{51}\)Cr source ten times. The charged-current (CC) interaction rate on \({}^{71}\)Ga and the elastic scattering (ES) rate on the electrons within the crystal would both be measured. The ES cross section is well known and therefore the comparison of the two rates is a direct test of the CC cross section. With few previous measurements of CC cross sections in this energy range, this would be a useful measurement even without the motivation of the Ga anomaly. The absolute activity of the \({}^{51}\)Cr source would cancel out in forming the ratio of the two rates. There are clearly advantages to event-by-event detection, compared to the less direct radiochemical method that requires extraction and counting of event products. The CC signature would be an energy deposit of 510 keV (\(E_{\nu}-Q\)), a number unfortunately near the positron annihilation \(\gamma\) energy. The continuum of ES events extends up to the Q value. A careful background study will be required before the feasibility of this scheme is known. Ten reproductions of the \({}^{51}\)Cr source would also pose a challenge. Another possibility would be to place a strong \(\bar{\nu}_{e}\) source near a liquid scintillator detector with position sensitivity and large proton density [97]. The SOX collaboration [98] had planned to place a \(\sim\)500 PBq \({}^{144}\)Ce source near the Borexino detector. The \(\bar{\nu}_{e}\) spectrum from this \(\beta\) decay extends up to 3.0 MeV, well above the 1.806 MeV inverse beta decay (IBD) threshold on hydrogen. The sensitivity of Borexino to IBD and its position sensitivity meant that an oscillation curve could be mapped out. Unfortunately the fabrication of the source failed [99], causing the experiment to be abandoned. The line neutrinos produced in EC, combined with calorimetry and other methods that measure source intensities to high precision, make the source experiments described above quite attractive. Furthermore, the cross sections for the reactions they induce are often more constrained than would be the case for higher energy neutrinos.
In the example we have treated here, 94% of the \({}^{51}\)Cr neutrino cross section for \({}^{71}\)Ga(\(\nu_{e},e^{-}\))\({}^{71}\)Ge can be determined from the \({}^{71}\)Ge EC rate, independent of nuclear models. Thus the further development of this field is important, given our incomplete knowledge of neutrino physics and the need for high precision tests of neutrino properties. ## Acknowledgements We thank Hamish Robertson for helpful discussions. This work was supported by the Department of Energy, Office of Nuclear Physics under Federal Prime Agreement LANLEM78 (SRE); by the Ministry of Science and Higher Education of the Russian Federation under agreement no. 14.619.21.0009 (unique project identifier no. RFMEF161917X0009) (VG); and by the US Department of Energy under grants DE-SC0004658, DE-SC0015376, and DE-AC02-05CH11231, the National Science Foundation under cooperative agreement 2020275, and the Heising-Simons Foundation under award 00F1C7 (WH).
2308.13526
Interstellar radiation as a Maxwell field: improved numerical scheme and application to the spectral energy density
The existing models of the interstellar radiation field (ISRF) do not produce a Maxwell field. Here, the recent model of the ISRF as a Maxwell field is improved by considering separately the different frequencies at the stage of the fitting. Using this improved procedure: (i) It is checked in detail that the model does predict extremely high values of the spectral energy density (SED) on the axis of a galaxy, that however decrease very rapidly when $\rho $, the distance to the axis, is increased from zero. (ii) The difference between the SED values (with $\rho =1\,$kpc or $8\,$kpc), as predicted either by this model or by a recent radiation transfer model, is reduced significantly. (iii) The slower decrease of the SED with increasing altitude $z$, as compared with the radiation transfer model, is confirmed. We also calculate the evolutions of the SED at large $\rho $. We interpret these evolutions by determining asymptotic expansions of the SED at large $z$, and also ones at large $\rho $.
Mayeul Arminjon
2023-08-04T13:39:16Z
http://arxiv.org/abs/2308.13526v1
Interstellar radiation as a Maxwell field: improved numerical scheme and application to the spectral energy density

###### Abstract

The existing models of the interstellar radiation field (ISRF) do not produce a Maxwell field. Here, the recent model of the ISRF as a Maxwell field is improved by considering separately the different frequencies at the stage of the fitting. Using this improved procedure: (i) It is checked in detail that the model does predict extremely high values of the spectral energy density (SED) on the axis of a galaxy, which however decrease very rapidly when \(\rho\), the distance to the axis, is increased from zero. (ii) The difference between the SED values (with \(\rho=1\,\)kpc or \(8\,\)kpc), as predicted either by this model or by a recent radiation transfer model, is reduced significantly. (iii) The slower decrease of the SED with increasing altitude \(z\), as compared with the radiation transfer model, is confirmed. We also calculate the evolutions of the SED at large \(\rho\). We interpret these evolutions by determining asymptotic expansions of the SED at large \(z\), and also at large \(\rho\).

## 1 Introduction

The interstellar radiation field (ISRF) in a galaxy is an electromagnetic (EM) field in a very high vacuum, hence it should be a solution of the Maxwell equations. However, the existing models of the ISRF do not take into account the full EM field with its six components coupled through the Maxwell equations. Consider, for example, the model of Chi & Wolfendale [1]. It assumes an axisymmetric distribution of the volume emissivities \(j_{i}(\lambda,\rho,z)\) of four stellar components \((i)\) \((i=1,...,4)\): \(j_{i}\) decreases exponentially with both the distance \(\rho\) to the galactic axis and the altitude \(z\) over the galactic central disk. The contribution of component \((i)\) to the energy density of the ISRF at some position \((\rho^{\prime},z^{\prime})\) and wavelength \(\lambda\) is obtained by integrating \(j_{i}(\lambda,\rho,z)g/l^{2}\) over the whole galactic volume. Here \(l\) is the distance between the studied position and the running point in the galactic volume; \(g\) describes the dust absorption and is obtained by integrating the visual extinction per unit path length over the linear path joining the studied position and the running point in the galactic volume. Other models, e.g. by Mathis, Mezger and Panagia [2], Gordon et al. [3], Robitaille [4], and Popescu et al. [5], are based on similar principles: all of these models consider quantities such as the stellar emissivity and luminosity, and the dust opacity, and they evolve the light intensity emitted by the stars by taking into account (in addition to the distance) the radiative transfer, in particular dust absorption/re-emission. Clearly, those models do not produce an EM field, let alone one that is a solution of the Maxwell equations.

In a recent work [6], we proposed a model, applicable to the relevant idealized case of an axisymmetric galaxy, that provides for the ISRF such an exact solution of the Maxwell equations -- a feature which, to the best of our knowledge, is fully new. This is needed to study the relevance of a possible candidate for dark matter that emerges [7] from an alternative, scalar theory of gravity. However, it is also of astrophysical interest independently of the latter, since, as we noted, the ISRF must be an exact Maxwell field and this condition is not fulfilled by the existing models.
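As a rough illustration of the integration principle just described (a minimal sketch, not any published code from Refs. [1-5]), one can integrate an assumed exponential-disk emissivity, attenuated by a toy dust factor \(g\) and diluted by \(1/l^{2}\), over a crude galactic grid. All scale lengths and the extinction coefficient below are illustrative assumptions.

```python
import numpy as np

RHO_D, Z_D = 3.5, 0.3   # assumed radial/vertical scale lengths (kpc)
KAPPA = 0.05            # assumed visual extinction per kpc (toy value)

def emissivity(rho, z):
    """Axisymmetric exponential-disk volume emissivity (arbitrary units)."""
    return np.exp(-rho / RHO_D - np.abs(z) / Z_D)

def energy_density(rho_p, z_p, n=40):
    """Integrate j * g / l^2 over the galactic volume on a crude grid."""
    rho = np.linspace(0.05, 15.0, n)
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.linspace(-2.0, 2.0, n)
    R, P, Z = np.meshgrid(rho, phi, z, indexing="ij")
    # Cartesian separation between the running point and the field point
    # (the field point is placed at azimuth phi = 0, by axial symmetry).
    dx = R * np.cos(P) - rho_p
    dy = R * np.sin(P)
    dz = Z - z_p
    l2 = dx**2 + dy**2 + dz**2 + 1e-9   # soften accidental coincidences
    g = np.exp(-KAPPA * np.sqrt(l2))    # crude straight-line dust absorption
    dV = R * (rho[1] - rho[0]) * (phi[1] - phi[0]) * (z[1] - z[0])
    return np.sum(emissivity(R, Z) * g / l2 * dV)

print(energy_density(8.0, 0.02))        # e.g. near the solar position
```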
As a step in checking the model proposed in Ref. [6], its application to predicting the variation of the spectral energy density (SED) in our Galaxy was subjected to a first test [8]. For this purpose, the model was adjusted by requiring that the SED predicted for our local position in the Galaxy coincide with the SED determined from space missions by Henry, Anderson & Fastie [9], Arendt et al. [10], Finkbeiner et al. [11], and Porter & Strong [12]. It was found in that work [8] that the spatial variation of the SED thus obtained with our model does not differ too much in magnitude from that predicted by the recent radiation transfer model of Ref. [5], but that the SED predicted by our model: (i) is extremely high on the axis of the Galaxy -- i.e., on the axis of the axial symmetry that is assumed for the model of the Galaxy; (ii) has rather marked oscillations as a function of the wavelength; and (iii) seems to decrease more slowly when the altitude \(z\) (or rather \(|z|\)) increases, as compared with the radiation transfer model. The aim of this paper is to present an improved numerical scheme to operate that "Maxwell model of the ISRF", and to apply this improved scheme to check the findings (i)-(iii) above. Section 2 provides a summary of the model. Section 3 describes the improvement of the numerical scheme. In Sect. 4, we check whether the model really predicts extremely high values of the SED on the axis of the Galaxy. Section 5 studies the spatial variation of the SED and compares it with results from the literature. In Sect. 6, asymptotic expansions are used to interpret the findings of the foregoing section. The Conclusion (Sect. 7) is followed by Appendix A, which discusses the relation between the discrete and continuous descriptions of the SED.

## 2 Short presentation of the model

This model has been presented in detail in Ref. [6]. An axisymmetric galaxy is modelled as a finite set of point-like "stars", the azimuthal distribution of which is uniform. Those points \({\bf x}_{i}\) (\(i=1,...,i_{\rm max}\)) are obtained by pseudo-random generation of their cylindrical coordinates \(\rho,\phi,z\) with specific probability laws, ensuring that the distribution of \(\rho\) and \(z\) is approximately the one valid for the star distribution in the galaxy considered, and that the set \(\{{\bf x}_{i}\}\) is approximately invariant under azimuthal rotations of any angle \(\phi\) [6]. In the present work, as in Refs. [6, 8], \(16\times 16\times 36\) triplets \((\rho,z,\phi)\) were thus generated, so that \(i_{\rm max}=9216\), and the distribution of \(\rho\) and \(z\) is approximately that of the star distribution in the Milky Way. The ISRF is also assumed axisymmetric, and thus depends only on \(\rho\) and \(z\). Since we want to describe, not the field inside the sources and in their vicinity, but instead the smoothed-out field at the intragalactic scale, we search for a solution of the source-free Maxwell equations. In the axisymmetric case, any time-harmonic source-free Maxwell field is the sum of two Maxwell fields: (**i**) one deriving from a vector potential having just the axial component \(A_{z}\) non-zero, with \(A_{z}\) obeying the standard wave equation, and (**ii**) one deduced from a solution of the form (**i**) by EM duality [13].
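The following minimal sketch shows one way to generate such a star catalogue. The exponential law in \(\rho\) and the Laplace profile in \(z\) are illustrative assumptions standing in for the Milky-Way laws of Ref. [6]; the azimuths are placed on a regular grid here for simplicity (which enforces approximate rotational invariance of the set), whereas Ref. [6] generates them pseudo-randomly.

```python
import numpy as np

rng = np.random.default_rng(0)
N_RHO, N_Z, N_PHI = 16, 16, 36     # 16 x 16 x 36 triplets, as in the text

# Assumed illustrative probability laws (the actual Milky-Way laws of
# Ref. [6] may differ): exponential disk in rho, Laplace profile in z.
rho = rng.exponential(scale=3.5, size=N_RHO)                 # kpc
z = rng.laplace(loc=0.0, scale=0.3, size=N_Z)                # kpc
phi = np.linspace(0.0, 2.0 * np.pi, N_PHI, endpoint=False)   # azimuths

# Cartesian positions of the i_max = 9216 point-like "stars"; the regular
# phi grid makes the set invariant under rotations by multiples of 10 deg.
R, Zg, P = np.meshgrid(rho, z, phi, indexing="ij")
stars = np.stack([(R * np.cos(P)).ravel(),
                  (R * np.sin(P)).ravel(),
                  Zg.ravel()], axis=1)
print(stars.shape)                 # (9216, 3)
```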
We consider for simplicity a model ISRF that has a finite frequency spectrum \((\omega_{j})_{j=1,...,N_{\omega}}\), hence we may apply the foregoing result to each of its time-harmonic components \((j)\), and then just sum these components. Moreover, we envisage the ISRF as being indeed an EM _radiation_ field, thus excluding from consideration the purely magnetic part of the interstellar EM field [14]. Hence the ISRF is made of "totally propagating" EM waves, i.e., ones without any "evanescent" component [6, 16]. Specifically, we assume that the two scalar potentials \(A_{j\,z}\) and \(A^{\prime}_{j\,z}\) that define the decomposition (**i**)-(**ii**) of each time-harmonic component \((j)\), mentioned above, are themselves totally propagating. In that case, both \(A_{j\,z}\) and \(A^{\prime}_{j\,z}\) have the explicit form [15, 16]: \[\psi_{\omega_{j}\,\,S_{j}}\left(t,\rho,z\right)=e^{-{\rm i}\omega_{j}t}\int_{-K_{j}}^{+K_{j}}\,J_{0}\left(\rho\sqrt{K_{j}^{2}-k^{2}}\right)\,\,e^{{\rm i}k\,z}\,S_{j}(k)\,{\rm d}k, \tag{1}\] with \(\omega_{j}\) the angular frequency, \(K_{j}:=\omega_{j}/c\), \(J_{0}\) the first-kind Bessel function of order 0, and where \(S_{j}\) is some (generally complex) function of \(k\in[-K_{j},+K_{j}]\). For a totally propagating, axisymmetric, but otherwise fully general EM field, the two potentials \(A_{j\,z}\) and \(A^{\prime}_{j\,z}\) may be different, i.e., may correspond to different "spectra" in Eq. (1), say \(S_{j}\) and \(S^{\prime}_{j}\) [13]. To determine these potentials, that is, to determine the spectrum functions \(S_{j}\), we use a sum of potentials emitted by the "stars". We assume that every "star", each at some point \({\bf x}_{i}\), contributes to the global potential \(A_{j\,z}\) of a given frequency \(\omega_{j}\) (\(j=1,...,N_{\omega}\)) by a spherically symmetric scalar wave of the same frequency \(\omega_{j}\), whose emission center is its spatial position \({\bf x}_{i}\) -- so that all the directions starting from the star are equivalent. Thus, consider time-harmonic spherically symmetric solutions of the wave equation that have a given angular frequency \(\omega\). It is easy to check by direct calculation that they can be either an outgoing wave, an ingoing wave, or the sum of an ingoing wave and an outgoing one, and that, up to an amplitude factor, the following is the only outgoing wave: \[\psi_{\omega}\,\,(t,{\bf x})=\frac{e^{{\rm i}(Kr-\omega t)}}{Kr},\qquad K:=\frac{\omega}{c},\qquad r:=|{\bf x}|\,. \tag{2}\] Clearly, only that outgoing solution is relevant here, given that the point-like "stars" must indeed be _sources_ of radiation. Thus, the contributions of the \(i\)-th star to the potentials \(A_{j\,z}\) and \(A^{\prime}_{j\,z}\) can differ only in amplitude, since both must be a multiple of \[\psi_{{\bf x}_{i}\,\omega_{j}}\ (t,{\bf x}):=\psi_{\omega_{j}}\ (t,{\bf x}-{\bf x}_{i})=\frac{e^{{\rm i}(K_{j}r_{i}-\omega_{j}t)}}{K_{j}r_{i}}, \tag{3}\] where \(K_{j}:=\frac{\omega_{j}}{c}\), \(\quad r_{i}:=|{\bf x}-{\bf x}_{i}|\). But there is no apparent physical reason to assign different amplitudes to the contributions of the \(i\)-th star to \(A_{j\,z}\) and to \(A_{j\,z}^{\prime}\), hence we assume both of them to be equal to \(\psi_{{\bf x}_{i}\,\omega_{j}}\).
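A hedged numerical sketch of these two building blocks follows (toy units with \(c=1\) and \(O(1)\) arguments; for real galactic scales \(K\rho\sim 10^{25}\) and, as noted below, far more than double precision is needed). The potential (1) is evaluated by quadrature, and the spherical wave (3) is coded directly:

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def psi(t, rho, z, omega, S, c=1.0):
    """Quadrature evaluation of the totally propagating potential, Eq. (1),
    for a given spectrum function S(k) on [-K, K]."""
    K = omega / c
    f = lambda k: j0(rho * np.sqrt(K * K - k * k)) * np.exp(1j * k * z) * S(k)
    re, _ = quad(lambda k: f(k).real, -K, K, limit=200)
    im, _ = quad(lambda k: f(k).imag, -K, K, limit=200)
    return np.exp(-1j * omega * t) * (re + 1j * im)

def psi_star(t, x, x_i, omega, c=1.0):
    """Outgoing spherical wave, Eq. (3), emitted by the 'star' at x_i."""
    K = omega / c
    r = np.linalg.norm(np.asarray(x) - np.asarray(x_i))
    return np.exp(1j * (K * r - omega * t)) / (K * r)

# toy flat spectrum S(k) = 1:
print(psi(t=0.0, rho=2.0, z=0.5, omega=3.0, S=lambda k: 1.0))
print(psi_star(0.0, [2.0, 0.0, 0.5], [0.3, -0.2, 0.1], omega=3.0))
```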
To determine the global potentials \(A_{j\,z}\) and \(A_{j\,z}^{\prime}\) (\(j=1,...,N_{\omega}\)), which generate the axisymmetric model ISRF with a finite frequency spectrum (\(\omega_{j}\)), the sum of the spherical potentials (3) emanating from the point stars is fitted to the form (1). As noted in Ref. [6], this does not assume that the ISRF is indeed the sum of the radiation fields emitted by the different stars (which would not be correct, due to the radiation transfers) -- because (_a_) the equalities (4), (20) or (21) below are not exact equalities but ones in the sense of the least squares, and (_b_) nothing is really assumed regarding the EM field of the "star" itself; in particular, we do not need to assume that it has the form (**i**)-(**ii**) above (e.g. the one corresponding to two equal potentials \(A_{i\,j\,z}=A^{\prime}_{i\,j\,z}=\psi_{{\bf x}_{i}\,\omega_{j}}\)). In the previous works [6, 8], this fitting was done for all frequencies at once. That is, the following least-squares problem was considered: \[\sum_{j=1}^{N_{\omega}}\sum_{i=1}^{i_{\rm max}}w_{j}\psi_{{\bf x}_{i}\,\omega_{j}}\cong\sum_{j=1}^{N_{\omega}}\,\psi_{\omega_{j}\,S_{j}}\qquad\mbox{on $G$}, \tag{4}\] where the sign \(\cong\) indicates that the equality is in the sense of the least squares (the arguments of the functions varying on some spatio-temporal grid \(G\)), and where the numbers \(w_{j}>0\) are the weights assigned to the different frequencies. In view of the axial symmetry, the spatial position \({\bf x}\) is restricted to the plane \(\phi=0\), so \({\bf x}={\bf x}(\rho,z)\) and \[G=\{(t_{l},\rho_{m},z_{p}),\ 1\leq l\leq N_{t},\ 1\leq m\leq N_{\rho},\ 1\leq p\leq N_{z}\}. \tag{5}\] Since the contributions of the \(i\)-th star to \(A_{j\,z}\) and to \(A^{\prime}_{j\,z}\) have both been assumed equal to \(\psi_{{\bf x}_{i}\,\omega_{j}}\), there is no way to distinguish between \(A_{j\,z}\) and \(A^{\prime}_{j\,z}\) either -- whence \(\psi_{\omega_{j}\,\,S_{j}}=A_{j\,z}=A^{\prime}_{j\,z}\) on the r.h.s. of (4). The unknowns of the problem are the spectrum functions \(S_{j},\quad j=1,...,N_{\omega}\). We determine \(S_{j}\) by the (generally complex) values \[S_{nj}:=S_{j}(k_{nj})\quad(n=0,...,N), \tag{6}\] where \[k_{nj}=-K_{j}+n\delta_{j}\quad(n=0,...,N), \tag{7}\] with \(\delta_{j}:=2K_{j}/N\), is a regular discretization of the interval \([-K_{j},+K_{j}]\) for \(k\) in the integral (1). Calculating those integrals with the composite Simpson \(\frac{3}{8}\) rule, (4) becomes the computable least-squares problem \[\sum_{j=1}^{N_{\omega}}\sum_{i=1}^{i_{\rm max}}w_{j}\psi_{{\bf x}_{i}\,\omega_{j}}\cong\sum_{j=1}^{N_{\omega}}\sum_{n=0}^{N}f_{nj}\,S_{nj}\qquad\mbox{on $G$}, \tag{8}\] with \[f_{nj}(t,\rho,z)=a_{nj}\,J_{0}\left(\rho\sqrt{K_{j}^{2}-k_{nj}^{2}}\right)\exp\left[{\rm i}\,(k_{nj}z-\omega_{j}\,t)\right]. \tag{9}\] The \(S_{nj}\)'s are the solved-for parameters in the least-squares problem (8). In Eq. (8), \(N\) must be a multiple of 3, and in Eq. (9) we have \[a_{nj} = (3/8)\,\delta_{j}\qquad(n=0\ \mbox{or}\ n=N), \tag{10}\] \[a_{nj} = 2\times(3/8)\,\delta_{j}\qquad(\mathrm{mod}(n,3)=0\ \mbox{and}\ n\neq 0\ \mbox{and}\ n\neq N), \tag{11}\] \[a_{nj} = 3\times(3/8)\,\delta_{j}\qquad\mbox{otherwise}.
\tag{12}\] Part (**i**) of the decomposition of the model ISRF is then obtained as follows [6]: \[E_{\phi}=B_{\rho}=B_{z}=0, \tag{13}\] \[B_{\phi}(t,\rho,z)=\sum_{n=0}^{N}\sum_{j=1}^{N_{\omega}}R_{n}\,J_{1}\left(\rho\frac{\omega_{j}}{\omega_{0}}R_{n}\right)\,{\cal R}e\left[F_{nj}(t,z)\right]+O\left(\frac{1}{N^{4}}\right), \tag{14}\] \[E_{\rho}(t,\rho,z)=\sum_{n=0}^{N}\sum_{j=1}^{N_{\omega}}\frac{c^{2}}{\omega_{0}}k_{n}R_{n}\,J_{1}\left(\rho\frac{\omega_{j}}{\omega_{0}}R_{n}\right)\,{\cal R}e\left[F_{nj}(t,z)\right]+O\left(\frac{1}{N^{4}}\right), \tag{15}\] \[E_{z}(t,\rho,z)=\sum_{n=0}^{N}\sum_{j=1}^{N_{\omega}}\left(\frac{c^{2}}{\omega_{0}}k_{n}^{2}-\omega_{0}\right)\,J_{0}\left(\rho\frac{\omega_{j}}{\omega_{0}}R_{n}\right)\,{\cal I}m\left[F_{nj}(t,z)\right]+O\left(\frac{1}{N^{4}}\right), \tag{16}\] with \(R_{n}=\sqrt{K_{0}^{2}-k_{n}^{2}}\) and \[F_{nj}(t,z)=\left(\frac{\omega_{j}}{\omega_{0}}\right)^{2}\,a_{n}\exp\left[{\rm i}\left(\frac{\omega_{j}}{\omega_{0}}k_{n}z-\omega_{j}\,t\right)\right]\,S_{nj}. \tag{17}\] (Here \(k_{n}\) and \(a_{n}\) (\(0\leq n\leq N\)) are as \(k_{nj}\) and \(a_{nj}\) in Eqs. (7) and (10), replacing \(K_{j}\) by \(K_{0}=\frac{\omega_{0}}{c}\), with \(\omega_{0}\) some (arbitrary) reference frequency.) Since we assume \(A_{j\,z}=A^{\prime}_{j\,z}\) for the global potentials generating the model ISRF, part (**ii**) of its decomposition is deduced from the first part by the EM duality: \[{\bf E}^{\prime}=c{\bf B},\quad{\bf B}^{\prime}=-{\bf E}/c. \tag{18}\] It follows from this and from (13) that the model ISRF, the sum of these two parts, has the components (14)-(16), and that the other components are simply \[E_{\phi}=cB_{\phi},\qquad B_{\rho}=-E_{\rho}/c,\qquad B_{z}=-E_{z}/c. \tag{19}\]

## 3 Frequency-by-frequency fitting of the potentials

Equation (4) may be split into the different frequencies (marked by the index \(j\)), simply by removing the sum over \(j\) from both sides; the same is true for Eq. (8). Naturally, the weight \(w_{j}\) may then also be removed from the l.h.s., by absorbing the inverse \(1/w_{j}\) into the unknown spectrum function \(S_{j}\) on the r.h.s. Equation (8) thus becomes \[\sum_{i=1}^{i_{\rm max}}\psi_{{\bf x}_{i}\,\omega_{j}}\cong\sum_{n=0}^{N}f_{nj}\,S_{nj}\qquad\mbox{on $G$}\qquad(j=1,...,N_{\omega}). \tag{20}\] At this point, one notes that both \(\psi_{{\bf x}_{i}\,\omega_{j}}\) [Eq. (3)] and \(f_{nj}\) [Eq. (9)] have the same time dependence, \(\exp(-{\rm i}\omega_{j}t)\), which we can hence remove as well, to obtain a least-squares problem involving merely the spatial variables \(\rho\) and \(z\): \[\sum_{i=1}^{i_{\rm max}}\frac{e^{{\rm i}K_{j}r_{i}}}{K_{j}r_{i}}\cong\sum_{n=0}^{N}g_{nj}\,S_{nj}\qquad\mbox{on $G^{\prime}$}\qquad(j=1,...,N_{\omega}), \tag{21}\] where \(G^{\prime}=\{(\rho_{m},z_{p}),\ 1\leq m\leq N_{\rho},\ 1\leq p\leq N_{z}\}\) is the spatial grid, and \[g_{nj}(\rho,z)=a_{nj}\,J_{0}\left(\rho\sqrt{K_{j}^{2}-k_{nj}^{2}}\right)\exp\left({\rm i}k_{nj}z\right). \tag{22}\] Separating the fitting of the sum of the potentials emitted by the "stars" into the different frequencies is consistent with the linearity of the wave equation and the Maxwell equations. Moreover, the elimination of the time variable from the fitting represents an appreciable gain in computing time. We recall that, for the EM field in a galaxy, the arguments of the Bessel function \(J_{0}\) and of the complex exponential, e.g. in Eq.
(22), have the huge magnitude \(\left|{\bf x}\right|/\lambda\sim 10^{25}\), which forces us to use better than quadruple precision in the computer programs, thus leading to slow calculations [6]. Note that the "separate fitting", i.e. the least-squares problem (21), is not exactly equivalent to the "grouped fitting", i.e. the least-squares problem (8) (this will be confirmed by the numerical results below): the two are slightly different ways of adjusting the global potentials (1). 2 However, equations (13)-(19) apply with the separate fitting as well -- although the relevant values \(S_{nj}\) are different. The separate fitting is more appropriate, because solutions corresponding to different frequencies behave independently in the Maxwell equations, and each frequency can be treated with more precision by considering it alone. A very important point is that, by switching to the separate fitting, we improve the situation regarding "overfitting", i.e., we decrease the ratio \(R\) of the number of parameters \(N_{\rm para}\) to the number of data \(N_{\rm data}\): now, for each value of the frequency index \(j\), we have to solve the least-squares problem (21), with \(N_{\rm para}=N+1\) unknown parameters and \(N_{\rm data}=N_{\rho}\times N_{z}\) data (the "data" are the values taken by the l.h.s. of (21) on the spatial grid \(G^{\prime}\)). With the formerly used grouped fitting, in contrast, we had to solve a single least-squares problem (8), with \(N_{\rm para}=(N+1)\times N_{\omega}\) unknown parameters and \(N_{\rm data}=N_{t}\times N_{\rho}\times N_{z}\) data.

Footnote 2: If Eq. (20), or equivalently Eq. (21), were an exact equality instead of being an equality in the sense of the least squares, then of course it would imply Eq. (8) (with \(w_{j}\equiv 1\)) as an exact equality.

On the other hand, through the processes of radiative transfer, there are indeed transfers of radiation intensity from some frequency domains to others; e.g. the interaction with dust leads to a transfer from higher to lower frequencies (see e.g. Fig. 3 in Ref. [3]). But these processes are not directly taken into account by the present model: no more with the grouped fitting than with the separate fitting. They are indirectly taken into account through the adjustment of the energy density [8], which we briefly recall now. The time-averaged volume energy density of an EM field having a finite set of frequencies, \((\omega_{j})_{j=1,...,N_{\omega}}\), is given by [8] \[\overline{U}({\bf x}):=\frac{\overline{\delta W}}{\delta V}({\bf x})=\sum_{j=1}^{N_{\omega}}u_{j}({\bf x}),\qquad u_{j}({\bf x}):=\frac{1}{4}\sum_{q=1}^{6}\alpha_{q}\left|C_{j}^{(q)}({\bf x})\right|^{2}, \tag{23}\] where the complex numbers \(C_{j}^{(q)}({\bf x})\) (\(q=1,...,6\)) are the coefficients in the expansion, in time-harmonic functions, of each of the six components of the EM field: \[F^{(q)}(t,{\bf x})={\cal R}e\left(\sum_{j=1}^{N_{\omega}}C_{j}^{(q)}({\bf x})e^{-{\rm i}\omega_{j}t}\right)\qquad(q=1,...,6); \tag{24}\] and where \(\alpha_{q}=\epsilon_{0}\) for an electric field component, whereas \(\alpha_{q}=\epsilon_{0}c^{2}\) for a magnetic field component (here \(\epsilon_{0}\) is the vacuum permittivity, with \(\epsilon_{0}=1/(4\pi\times 9\times 10^{9})\) in SI units). For an axisymmetric EM field, it is enough to consider the plane \(\phi=0\), thus \({\bf x}={\bf x}(\rho,z)\), and we have \[C_{j}^{(q)}=C_{j}^{(q)}(\rho,z),\qquad u_{j}=u_{j}(\rho,z).
\tag{25}\] Using in that case the decomposition (**i**)-(**ii**), the expressions of three of the \(C_{j}^{(q)}\) coefficients follow directly from the expressions (14)-(16) of the corresponding components of the EM field [8]. Moreover, in the special subcase (18) considered here, the other components are given by (19), whence the three remaining \(C_{j}^{(q)}\) coefficients follow in the same way. Now note that, in the least-squares problem (21), which we use to determine the values \(S_{nj}\) allowing us to compute the EM field (14)-(16) and (19), no data on the intensity of the fields emitted by the point-like "stars" have been used so far. Hence, we may multiply the l.h.s. of (21) by some number \(\xi_{j}>0\), thus obtaining new values \(S_{nj}^{\prime}=\xi_{j}S_{nj}\) (\(n=0,...,N\)) as the solution of (21). Therefore, to adjust the model, we determine the numbers \(\xi_{j}\) (\(j=1,...,N_{\omega}\)) so that the values \(u_{j}({\bf x}_{\rm loc})\) of the SED for our local position \({\bf x}_{\rm loc}\) in the Galaxy and for the frequencies \(\omega_{j}\), as given by Eq. (23), coincide with the measured values, as determined from space missions. We take the measured local values \(f_{{\bf x}_{\rm loc}}(\lambda_{j})\) as plotted in Ref. [12] (see Appendix A), and we take \(\rho_{\rm loc}=8\) kpc and \(z_{\rm loc}=0.02\) kpc, see e.g. Ref. [19]. The model thus adjusted then allows us to make predictions: in particular, predictions of the spatial variation of the SED in the Galaxy. Such predictions may then be compared with those of the mainstream models of the ISRF, which are very different from the present model.

## 4 Results: maximum energy density

In the foregoing work [8], the adjustment just described was used in the framework of the "grouped fitting" (i.e. the least-squares problem (8)). A surprising result was found for the values of the maximum of the energy density \(u_{j}({\bf x})\) in the Galaxy -- thus, owing to the axial symmetry (25), for the values of \[u_{j\rm max}={\rm Max}\{u_{j}(\rho_{m},z_{p});\ m=1,...,N_{\rho},\ p=1,...,N_{z}\}, \tag{26}\] found for the different spatial grids investigated, all having \(\rho\) varying regularly from \(\rho_{0}=0\) to \(\rho_{\rm max}\simeq 10\,{\rm kpc}\) and \(z\) varying regularly from \(z_{0}=0\) or \(z_{0}=-z_{\rm max}\) to \(z_{\rm max}\leq 1\,{\rm kpc}\). 4 These maximum values, which are always found at \(\rho=0\), thus on the axis of symmetry, are extremely high at the lower wavelengths \(\lambda_{j}\), with \(u_{j\rm max}\simeq 10^{27}\,{\rm eV/cm}^{3}\). Moreover, the value of \(u_{j}(\rho=0,z)\) depends little on \(z\) in the domain investigated. These surprisingly high values occur in a larger or smaller range of wavelengths, depending on the settings of the calculation. Therefore, the question arises whether these extremely high values are a physical effect or a numerical artefact. However, the dependence on the settings is governed by the "amount of overfitting": less overfitting increases the range of the high values [8]. This makes it plausible that the high values might be a true prediction of the model. We will now try to check whether this is indeed the case.

Footnote 4: Precisely: \(\rho_{0}:=\rho_{m=1}\,;\,\rho_{\rm max}:=\rho_{m=N_{\rho}}\) with, in this subsection, \(\rho_{\rm max}=10\,{\rm kpc}\times\frac{N_{\rho}-1}{N_{\rho}}\,;\)\(z_{0}:=z_{p=1}\) and \(z_{\rm max}:=z_{p=N_{z}}\) with, in this paper, \(z_{\rm max}=1\,{\rm kpc}\) and \(z_{0}=-z_{\rm max}\).
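To make Sect. 3 concrete, here is a minimal, hedged sketch (toy units with \(c=1\); not the actual code of Refs. [6, 8], which requires extended precision) of the per-frequency fit (21)-(22) with the Simpson \(\frac{3}{8}\) weights (10)-(12). The adjustment step follows from Eq. (23): since \(u_{j}\) is quadratic in the \(S_{nj}\), matching a measured local value \(f_{{\bf x}_{\rm loc}}(\lambda_{j})\) amounts to rescaling by \(\xi_{j}=\sqrt{f_{{\bf x}_{\rm loc}}(\lambda_{j})/u_{j}^{\rm model}({\bf x}_{\rm loc})}\).

```python
import numpy as np
from scipy.special import j0

def simpson38_weights(N, K):
    """Coefficients a_n of Eqs. (10)-(12) (composite Simpson 3/8 rule on a
    regular discretization of [-K, K]); N must be a multiple of 3."""
    assert N % 3 == 0
    delta = 2.0 * K / N
    a = np.full(N + 1, 3.0 * (3.0 / 8.0) * delta)   # Eq. (12): default
    a[0] = a[N] = (3.0 / 8.0) * delta               # Eq. (10): endpoints
    a[np.arange(3, N, 3)] = 2.0 * (3.0 / 8.0) * delta   # Eq. (11)
    return a

def fit_one_frequency(stars, grid, K, N=12):
    """Solve the per-frequency least-squares problem (21) for the S_nj.
    `stars`: (i_max, 3) Cartesian positions; `grid`: (M, 2) points
    (rho, z) in the plane phi = 0."""
    k = -K + np.arange(N + 1) * (2.0 * K / N)       # Eq. (7)
    a = simpson38_weights(N, K)
    rho, z = grid[:, 0], grid[:, 1]
    # design matrix: g_nj(rho, z) of Eq. (22), one column per n
    G = a * j0(np.outer(rho, np.sqrt(K * K - k * k))) \
          * np.exp(1j * np.outer(z, k))
    # data: sum of the spherical waves of the stars, l.h.s. of Eq. (21)
    pts = np.stack([rho, np.zeros_like(rho), z], axis=1)
    r = np.linalg.norm(pts[:, None, :] - stars[None, :, :], axis=2)
    b = np.sum(np.exp(1j * K * r) / (K * r), axis=1)
    S, *_ = np.linalg.lstsq(G, b, rcond=None)
    return S

# toy usage; the xi_j rescaling would then multiply S by
# sqrt(f_measured / u_model) once u_j(x_loc) is computed from Eq. (23).
rng = np.random.default_rng(1)
stars = rng.normal(size=(50, 3))
rr, zz = np.meshgrid(np.linspace(0.1, 5.0, 10), np.linspace(-1.0, 1.0, 11))
S = fit_one_frequency(stars, np.stack([rr.ravel(), zz.ravel()], axis=1), K=2.0)
print(S.shape)   # (13,) complex spectrum values S_n
```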
### Robustness of the high values on the axis

In the present work, based on the separate fitting (which, we argued, is more appropriate), we investigated the question just raised rather systematically. Since the influence of the spatial grid was found to be weak in the foregoing work [8], only two grids were tried: an \((N_{\rho}=10,\rho_{0}=0)\times(N_{z}=21,z_{0}=-1\,{\rm kpc},z_{\rm max}=1\,{\rm kpc})\) grid (hereafter "rough grid"), and an \((N_{\rho}=20,\rho_{0}=0)\times(N_{z}=23,z_{0}=-1\,{\rm kpc},z_{\rm max}=1\,{\rm kpc})\) grid (hereafter "fine grid"). However, we investigated the influence of the fineness of the frequency mesh \((N_{\omega})\) and the influence of the discretization number \(N\) in considerable detail. [That integer \(N\) is used to compute the integrals over the wavenumber \(k\), e.g. the integral (1) approximated by \(\sum_{n=0}^{N}f_{nj}\,S_{nj}\), see Eq. (8).] The effect of choosing \(N_{\omega}=23\), \(N_{\omega}=46\), or \(N_{\omega}=76\) was studied simultaneously with the effect of choosing \(N=12\), or \(N=24,48,96,192,384\), and this was done for the two different grids. Figures 1 to 4 show these effects. The most salient result is that _the extremely high values of \(u_{j\rm max}\) are now found with all calculations and in the whole domain of \(\lambda\)_ -- except that on some curves, abrupt oscillations toward lower values of the energy density are present for some wavelengths. By looking at the set of these figures, it is manifest that such abrupt oscillations occur when an inappropriate choice of parameters is made: essentially, the discretization number \(N\) has to be large enough. (This is certainly expected, and this expectation is confirmed by the validation test in Ref. [6].) Indeed, for a given value of \(N_{\omega}\), those oscillations are largest for the lowest value of \(N\) in the trial \((N=12)\) and progressively disappear when \(N\) is increased. What counts as a "large enough" value of \(N\) is not strongly dependent on the fineness of the spatial grid (i.e., on whether the "rough" one or the "fine" one is used) or on that of the frequency mesh \((N_{\omega})\). However, when using the finest frequency mesh \((N_{\omega}=76)\) with the "rough" spatial grid (Fig. 3), increasing \(N\) does not allow us to eliminate the abrupt oscillations toward lower values: it even happens that increasing \(N\) from 192 to 384 actually deteriorates the \(u_{j\rm max}=f(\lambda_{j})\) curve. We interpret this as due to the fact that, when using a rougher spatial grid \(G^{\prime}\) for the fitting, fewer data are provided (the values taken on the grid \(G^{\prime}\) by the l.h.s. of Eq. (21)) to determine the unknowns \(S_{nj}\) on the r.h.s. of (21) -- while, of course, increasing \(N\) increases the number of unknowns and thus calls for more data. On the other hand, it is seen that (for the relevant values of \(N\), say \(N=192\) or \(N=384\), so that little or no oscillation is present) the levels of \(u_{j\rm max}\) depend quite little on \(N_{\omega}\), i.e. on the fineness of the frequency mesh: compare the bottom figures between Figs. 1, 2, and 3, and compare the three figures in Fig. 4. Also, the levels of \(u_{j\rm max}\) depend quite little on whether the rough or the fine spatial grid is being used (see e.g. Fig. 5).
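The parameter-to-data bookkeeping behind this interpretation can be spelled out in a few lines (a sketch, using the per-frequency counts \(N_{\rm para}=N+1\) and \(N_{\rm data}=N_{\rho}\times N_{z}\) of Sect. 3):

```python
# Overfitting ratio R = N_para / N_data for the settings tried above:
for name, n_rho, n_z in [("rough", 10, 21), ("fine", 20, 23)]:
    for N in (12, 24, 48, 96, 192, 384):
        R = (N + 1) / (n_rho * n_z)
        print(f"{name:5s} grid, N = {N:3d}:  R = {R:.2f}")
# On the rough grid (210 data), N = 384 gives R > 1: more unknowns than
# data, consistent with the degraded curves discussed above.
```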
We also checked that the results depend little on the pseudo-random "draw" of the set of point-like "stars": another draw of \(16\times 16\times 36\) triplets \((\rho,z,\phi)\) gives very similar curves \(u_{j\rm max}=f(\lambda_{j})\) (Fig. 6). In summary, we now find that, for the relevant values of \(N\), say \(N=192\) or \(N=384\), \(u_{j\rm max}\) decreases smoothly from \(\simeq 10^{27}\) to \(\simeq 10^{21}{\rm eV/cm}^{3}\) when \(\lambda_{j}\) varies in the domain considered, i.e., from \(\lambda\simeq 0.11\mu\)m to \(\simeq 830\mu\)m. We note moreover that, for the low values of \(\lambda_{j}\), the values of \(u_{j\rm max}\) calculated using the present "separate fitting" have the same (extremely high) magnitude as those calculated with the former "grouped fitting" [8]. These observations lead us to conclude that: (i) the extremely high values of \(u_{j\rm max}\) (in the whole domain of \(\lambda\) considered) are really what the "Maxwell model of the ISRF" predicts for this model of the Galaxy. (ii) Somewhat surprisingly, it is the _low_ values of \(u_{j\rm max}\) obtained for the higher values of \(\lambda\) when the "grouped fitting" was used [8] that were a numerical artefact.

### Decrease of the energy density away from the axis

Recall that the maxima of the \(u_{j}\)'s, which are extremely high, are always obtained for \(\rho=0\), i.e. on the axis of the (model of the) Galaxy, and that the energy density for \(\rho=0\) depends little on \(z\) in the domain investigated. The next questions are therefore: what is the extent around \(\rho=0\) of the domain of the very high values? Do such values imply that "too much EM energy" would be contained there? To answer these questions, we calculated the SED with successively lower values of \(\rho_{\rm max}\) (see Note 4), starting from its value for the calculations shown in Figs. 1 to 3, i.e., \(\rho_{\rm max}=9\,\)kpc, and decreasing it to \(1,10^{-1},...,10^{-14}\,\)kpc, using the "rough grid" parameters (see above), i.e., in particular, \(N_{\rho}=10\), and using the \(S_{nj}\)'s obtained with this rough grid with \(\rho_{\rm max}=9\,\)kpc -- so that, for \(\rho_{\rm max}\neq 9\,\)kpc, those calculations are not made on the fitting grid. We looked in detail at the values \(u_{j}(\rho_{m=2},z_{p=1})=u_{j}(\rho_{\rm max}/9,z=0)\). The main results are shown in Fig. 7: even for very small values of \(\rho\neq 0\), the values of \(u_{j}\) are much smaller than \(u_{j\,\rm max}\). That is, \(u_{j}(\rho,z)\) decreases very rapidly when \(\rho\) is increased from 0. Actually, we found for the example of the smallest wavelength, corresponding to \(j=1\), that, from \(\rho=1\,{\rm kpc}\) down to \(\,\rho=10^{-15}\,{\rm kpc}\), we have to a good approximation \(\,u_{1}(\rho,z=0)\simeq B/\rho\), with \[B=u_{1}(\rho=1\,{\rm kpc},z=0)\simeq 10^{-0.45}\,({\rm eV/cm}^{3}).{\rm kpc}. \tag{27}\] This behaviour does not hold down to \(\rho=0\), because for \(\rho\to 0\), \(\,u_{1}(\rho,z=0)\) tends towards \(u_{1}(0,0)<\infty\), so we may assume \(u_{1}(\rho,0)\lesssim B/\rho\). On the other hand, Fig. 7 shows that there is nothing special about \(j=1\): we have \(u_{j}\lesssim u_{1}\); moreover, for \(\rho\gtrsim 10^{-15}\,{\rm kpc}\), \(u_{j}\) depends only moderately on \(\lambda_{j}\). We observed in our calculations that, for \(\rho\leq 1\,{\rm kpc}\), \(u_{j}(\rho,z)\) depends quite little on \(z\) with \(|z|\leq z_{\rm max}=1\,{\rm kpc}\). (A simple log-log fit, sketched below, recovers this approximate \(1/\rho\) behaviour.)
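The power-law behaviour (27) can be checked mechanically by a linear fit in log-log space. The values below are a synthetic stand-in with the reported slope and amplitude (the actual \(u_{1}(\rho,0)\) values of Fig. 7 are not reproduced here), so this is only a sketch of the procedure:

```python
import numpy as np

rho = np.logspace(-15, 0, 16)      # kpc
u1 = 10**(-0.45) / rho             # stand-in for u_1(rho, z=0)

# fit log10(u) = slope * log10(rho) + log10(B)
slope, logB = np.polyfit(np.log10(rho), np.log10(u1), 1)
print(f"slope = {slope:+.2f} (a 1/rho law gives -1), "
      f"B = 10^({logB:.2f}) (eV/cm^3).kpc")
```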
Thus we may give the following approximation (which is likely an overestimate) of \(u_{j}\): for all \(j\), and for \(|z|\leq z_{\rm max}=1\,{\rm kpc}\), we have \[u_{j}(\rho,z)\simeq B/\rho\quad{\rm for}\quad 10^{-15}\,{\rm kpc}\leq\rho\leq 1\,{\rm kpc}, \tag{28}\] \[u_{j}(\rho,z)\simeq u_{j}(\rho)\lesssim B/\rho\ \ {\rm for}\ \ \rho\leq 10^{-15}\,{\rm kpc}, \tag{29}\] with \(u_{j}(\rho)\) a decreasing function of \(\rho\). According to Eq. (51) of the Appendix, this implies that, for \(\,|z|\leq z_{\rm max}=1\,{\rm kpc}\), we also have \[f_{{\bf x}(\rho,z)}(\lambda)\simeq B/\rho\quad{\rm for}\quad 10^{-15}\,{\rm kpc}\leq\rho\leq 1\,{\rm kpc}, \tag{30}\] \[f_{{\bf x}(\rho,z)}(\lambda)\lesssim B/\rho\quad{\rm for}\quad 0\leq\rho\leq 10^{-15}\,{\rm kpc}, \tag{31}\] independently of \(\lambda\) in the band \[\lambda^{(1)}:=0.1\mu{\rm m}\leq\lambda\leq\lambda^{(2)}:=830\mu{\rm m}. \tag{32}\] With this approximation, we can assess the total EM energy (53) contained in some disk \[D(\rho_{1}):\ (0\leq\rho\leq\rho_{1},\ 0\leq\phi\leq 2\pi,\ |z|\leq z_{\rm max}), \tag{33}\] with \(\rho_{1}\leq 1\,{\rm kpc}\), and in the wavelength band \([\lambda^{(1)},\lambda^{(2)}]\). This energy is bounded, owing to (30)-(31), by \[W_{1-2,\,D(\rho_{1})}\lesssim{\rm Log}\frac{\lambda^{(2)}}{\lambda^{(1)}}\times\int_{D(\rho_{1})}\ \frac{B}{\rho({\bf x})}{\rm d}^{3}{\bf x}={\rm Log}\frac{\lambda^{(2)}}{\lambda^{(1)}}\times\int_{D(\rho_{1})}\ \frac{B}{\rho}\rho\,{\rm d}\rho\,{\rm d}\phi\,{\rm d}z, \tag{34}\] i.e. \[W_{1-2,\,D(\rho_{1})}\lesssim{\rm Log}\frac{\lambda^{(2)}}{\lambda^{(1)}}\times B\,\rho_{1}\times 2\pi\times 2z_{\rm max}. \tag{35}\] But consider, instead of the disk \(D(\rho_{1})\), the ring \(R(\rho_{0},\rho_{1}):\;(\rho_{0}\leq\rho\leq\rho_{1},\;0\leq\phi\leq 2\pi,\;|z|\leq z_{\rm max})\), with \(\rho_{0}\geq 10^{-15}\,{\rm kpc}\) (thus a ring with a very narrow central hole). Using this time only (30), the same calculation gives \[W_{1-2,\,R(\rho_{0},\rho_{1})}\simeq{\rm Log}\frac{\lambda^{(2)}}{\lambda^{(1)}}\times B\,(\rho_{1}-\rho_{0})\times 2\pi\times 2z_{\rm max}. \tag{36}\] Taking \(\rho_{0}=10^{-15}\,{\rm kpc}\), the conjunction of (35) and (36) shows that the contribution of the domain with \(0\leq\rho\leq\rho_{0}\) is totally negligible, hence we may write \[W_{1-2,\,D(\rho_{1})}\simeq{\rm Log}\frac{\lambda^{(2)}}{\lambda^{(1)}}\times B\,\rho_{1}\times 2\pi\times 2z_{\rm max}. \tag{37}\] We can calculate the contribution \(\delta U\) that this gives to the average density of the EM energy in some disk \(\,D(\rho_{2})\) of the Galaxy, with \(\rho_{2}\geq\rho_{1}\), of volume \(V_{2}=\pi\rho_{2}^{2}z_{\rm max}\): \[\delta U:=\frac{W_{1-2,\,D(\rho_{1})}}{V_{2}}\simeq 4\,{\rm Log}\frac{\lambda^{(2)}}{\lambda^{(1)}}\times\frac{B\,\rho_{1}}{\rho_{2}^{2}}. \tag{38}\] (Note that we may leave \(B\) in (eV/cm\({}^{3}\)).kpc and \(\rho_{1}\) and \(\rho_{2}\) in kpc.) To give figures, let us first take \(\rho_{1}=\rho_{2}=1\,{\rm kpc}\), so that the corresponding value of \(\delta U\) is just the average volume energy density \(\langle U\rangle_{D(1\,{\rm kpc})}\) in the disk \(D(\rho_{1}=1\,{\rm kpc})\). Then (38) with (27) give us \[\langle U\rangle_{D(1\,{\rm kpc})}\simeq 51\,{\rm eV/cm}^{3}. \tag{39}\] Note that this value is _not_ very high. Another interesting application of Eq.
(38) is to assess the effect, on that average value in the same domain \(D(1\,{\rm kpc})\), of the domain of the "very high" values of the SED, say the domain for which \(u_{j}\geq 10^{6}\,{\rm eV/cm}^{3}\) -- i.e., from (27) and (28), \(\rho\leq\rho_{\rm vh}\), with \[\rho_{\rm vh}=10^{-6.45}\simeq 3.55\times 10^{-7}\,{\rm kpc}\simeq 1.1\times 10^{10}\,{\rm km}, \tag{40}\] which is almost twice the average Sun-Pluto distance, but still very small on a galactic scale. Taking to this effect \(\rho_{1}=\rho_{\rm vh}\) and \(\rho_{2}=1\,{\rm kpc}\) in Eq. (38), the numerical values (27), (32) and (40) give us \(\delta U\simeq 4.54\times 10^{-6}\,{\rm eV/cm}^{3}\). _In summary,_ the "very high" values of the SED are confined to the close neighborhood of the Galaxy's axis and contribute negligibly to the average energy density (39) in the disk \(D(1\,{\rm kpc})\).

## 5 Results: spatial variation of the SED & comparison with the literature

This model's prediction for the spatial variation of the SED in the Galaxy was investigated, using again the separate fitting and the adjustment of the local SED to the measured values (both described in Sect. 3). It is shown here using two different types of representation. First, we plotted the SED at four different points in the Galaxy, and we compared the results with those obtained by Popescu _et al._ [5], who used a radiation transfer model built by them. (Their model also assumes axisymmetry.) Figures 8-11 show this comparison, our model being used here with \(N=192\) and \(N_{\omega}=76\). (Other choices of parameters that we tried gave similar figures.) It can be seen that the predictions of the present model do not differ very significantly from those of the radiation transfer model of Ref. [5]. The main difference is that our calculations oscillate rather strongly as a function of the wavelength. The comparison of Figs. 8-11 here with Figs. 2-5 in Ref. [8] shows that the difference between the results of the two models is significantly smaller now than it was in our previous work, in which the calculations were based on the grouped fitting [8]: the difference in \(\log_{10}(u_{j})\) between the results of our model and Ref. [5] is here \(\lesssim 1\), whereas it went beyond 3 and even 4 in the previous calculations. However, the new calculations oscillate with the wavelength also at higher wavelengths, whereas, when the grouped fitting was used, there was virtually no oscillation for \(\lambda\gtrsim 10\,\mu\)m at the two positions at \(\rho=8\,\)kpc. (There were oscillations in the whole range of \(\lambda\) for the two positions at \(\rho=1\,\)kpc.) In order to check whether the calculations inside the spatial domain of small SED values could be "polluted" by the extremely high values on the galaxy's axis, we investigated the effect of doing the fitting on a "shifted" grid with \(\rho\geq 1\,\)kpc. This did not lead to fewer oscillations. The general reason for these oscillations may simply be that this model takes fully into account the wave nature of the EM field. Second, we plotted the radial and vertical profiles of the radiation fields at three wavelengths close to the ones considered in Fig. 7 of Popescu _et al._ [5] ("K, B, UV"). Figures 12 and 13 show these profiles as they are calculated at points \((\rho,z)\) belonging to the "logarithmic" grid on which the fitting was done for this calculation (see the legend).
Those profiles of the radiation fields, obtained with the present model on the fitting grid, are relatively similar to those predicted by the very different model of Ref. [5], both in the levels and in the rough shape of the profiles. The most important difference is seen in the vertical profiles of Fig. 13: according to the Maxwell model of the ISRF, the energy density level decreases more slowly when the altitude \(z\) increases -- or even, for the \(\lambda=2.29\mu\)m radiation at \(\rho=0.059\,\)kpc or the \(\lambda=0.113\mu\)m radiation at \(\rho=7.5\,\)kpc, the level of the SED does not decrease in the range considered for \(z\). A similar lack of decrease is found in the radial profiles of Fig. 12, for the \(\lambda=2.29\mu\)m radiation, either at \(z\simeq 0\) or at \(z=1.25\,\)kpc. Using that same fitting done on a logarithmic grid, we also calculated and plotted the radial and vertical profiles of the same radiations, but this time for regularly spaced values of \(\rho\) (respectively \(z\)), and in a larger range for \(\rho\) (respectively \(z\)): Figs. 14 and 15. The radial profiles of Figs. 12 and 14 are consistent, although, in contrast with Fig. 12, Fig. 14 plots the SED at points \((\rho,z)\) which were not involved in the fitting, and which moreover involve an extrapolation to a larger range of \(\rho\) as compared with the fitting. The vertical profiles of Fig. 15, which also correspond to points which were not involved in the fitting, and also involve an extrapolation to a larger range of \(z\) as compared with the fitting, show an oscillating behaviour without any tendency to decrease at large \(z\).

## 6 Asymptotic behaviour at large \(\rho\) and at large \(z\)

To help understand the behaviours just noted, we study in this section the asymptotic behaviour of the expressions of the components of the EM field and of the SED, as they are given by the Maxwell model of the radiation field. The expressions (14)-(16) that are implemented in the numerical model are deduced from the exact integral expressions of the EM field for a given angular frequency \(\omega\), after the summation over the frequencies, and after the discretization (7) is done. Hence, we begin with the exact integral expressions of the EM field for a given angular frequency. These expressions, which are valid for any totally propagating, axisymmetric, time-harmonic EM field, are (Eqs. (13)-(15) in Ref. [6]): \[B_{\phi\,\omega\,S}={\cal R}e\left[e^{-{\rm i}\omega t}\int_{-K}^{+K}\sqrt{K^{2}-k^{2}}\,J_{1}\left(\rho\sqrt{K^{2}-k^{2}}\right)\,S(k)\,e^{{\rm i}kz}{\rm d}k\right], \tag{41}\] \[E_{\rho\,\omega\,S}={\cal R}e\left[-{\rm i}\frac{c^{2}}{\omega}e^{-{\rm i}\omega t}\int_{-K}^{+K}\sqrt{K^{2}-k^{2}}\,J_{1}\left(\rho\sqrt{K^{2}-k^{2}}\right){\rm i}k\,S(k)\,e^{{\rm i}kz}{\rm d}k\right], \tag{42}\] \[E_{z\,\omega\,S}={\cal R}e\left[{\rm i}e^{-{\rm i}\omega t}\int_{-K}^{+K}J_{0}\left(\rho\sqrt{K^{2}-k^{2}}\right)\,\left(\omega-\frac{c^{2}}{\omega}\,k^{2}\right)\,S(k)\,e^{{\rm i}kz}{\rm d}k\right], \tag{43}\] where \(K:=\omega/c\) -- the other components being obtained by the duality (18) from the components (41)-(43), with, in the most general case, another spectrum function \(S^{\prime}(k)\). The dependence on \(\rho\) of the components (41)-(43) is determined by that of the Bessel functions \(J_{0}\) and \(J_{1}\), and by the form of the integrals which involve them.
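For reference, here is a hedged quadrature sketch of the exact components (41)-(43) (toy units with \(c=1\) and \(O(1)\) arguments; real galactic arguments, \(K\rho\sim 10^{25}\), would again require extended precision):

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import quad

def field_components(t, rho, z, omega, S, c=1.0):
    """Evaluate B_phi, E_rho, E_z from Eqs. (41)-(43) by quadrature,
    for a given spectrum function S(k) on [-K, K]."""
    K = omega / c
    def integral(f):
        re, _ = quad(lambda k: f(k).real, -K, K, limit=200)
        im, _ = quad(lambda k: f(k).imag, -K, K, limit=200)
        return re + 1j * im
    q = lambda k: np.sqrt(K * K - k * k)        # transverse wavenumber
    ph = lambda k: S(k) * np.exp(1j * k * z)    # spectrum times e^{ikz}
    ew = np.exp(-1j * omega * t)
    B_phi = np.real(ew * integral(lambda k: q(k) * j1(rho * q(k)) * ph(k)))
    E_rho = np.real(-1j * (c * c / omega) * ew
                    * integral(lambda k: q(k) * j1(rho * q(k)) * 1j * k * ph(k)))
    E_z = np.real(1j * ew
                  * integral(lambda k: j0(rho * q(k))
                             * (omega - (c * c / omega) * k * k) * ph(k)))
    return B_phi, E_rho, E_z

# toy flat spectrum:
print(field_components(0.0, 2.0, 0.5, 3.0, S=lambda k: 1.0))
```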
At large \(x\) we have the asymptotic expansion [20] \[J_{\alpha}(x)=\sqrt{\frac{2}{\pi x}}\cos\left(x-\frac{\alpha\pi}{2}-\frac{\pi}{4}\right)+O\left(x^{-\frac{3}{2}}\right). \tag{44}\] However, the argument of the Bessel functions in Eqs. (41)-(43) is \(x=\rho\sqrt{K^{2}-k^{2}}\). Hence, as \(\rho\to\infty\), \(x\) does not tend towards \(\infty\) uniformly, depending on the integration variable \(k\): we even have \(x\equiv 0\) independently of \(\rho\), for \(k=\pm K\). Therefore, it is not obvious whether the integrals (41)-(43) have an expansion at fixed \(z\) as \(\rho\to\infty\). As to the behaviour at fixed \(\rho\) and at large \(z\): up to the real part, and for a fixed value of \(\rho\), the components (41)-(43) are expressions of the form \(e^{-{\rm i}\omega t}I(z)\), with \[I(z)=\int_{a}^{b}f(k)e^{{\rm i}zg(k)}\,{\rm d}k, \tag{45}\] and where, specifically, \(\,a=-K,\,b=+K\), and the phase function is simply \(g(k)\equiv k\), which has no stationary point. (The regular function \(k\mapsto f(k)\) depends on the component being considered, and also on \(\rho\) as a parameter.) In that case, we have the asymptotic expansion [21] \[I(z)=\frac{f(K)}{{\rm i}z}{\rm e}^{{\rm i}zK}-\frac{f(-K)}{{\rm i}z}{\rm e}^{-{\rm i}zK}+O\left(\frac{1}{z^{2}}\right). \tag{46}\] So at large \(z\) and for a fixed value of \(\rho\), all components of any totally propagating, axisymmetric, time-harmonic EM field are of order \(\frac{1}{z}\) (unless the coefficient of \(\frac{1}{z}\) in this expansion is zero, which is true only in particular cases -- the relevant component is then of higher order in \(\frac{1}{z}\)). This applies to part (**i**) of the decomposition (**i**)-(**ii**), which is given by Eqs. (41)-(43), but also to part (**ii**), since it is obtained from (41)-(43) by applying the EM duality (18) (with, in the most general case, a different spectrum function \(S^{\prime}(k)\)). Hence _the SED (23) is of order \(\frac{1}{z^{2}}\) at large \(z\),_ for any fixed value of \(\rho\) -- when the \(C_{j}^{(q)}({\bf x})\) coefficients correspond to the exact expressions (41)-(43). [The explicit expression of the coefficient of \(\frac{1}{z^{2}}\), depending on \(\rho\), \(K\), \(S(K)\), \(S(-K)\) (and, in the most general case, on the values \(S^{\prime}(K)\), \(S^{\prime}(-K)\) of the spectrum function \(S^{\prime}\) corresponding to part (**ii**) of the decomposition (**i**)-(**ii**)), might easily be obtained from (23), (41)-(43), and (46).] The foregoing result applies to a general spectrum function \(S(k)\) (and \(S^{\prime}(k)\)). By summation over the frequency index \(j\), it extends to an EM field having a finite set of frequencies. Let us now investigate the asymptotic behaviour of the EM field and the SED, still in the totally propagating case with axial symmetry, but now after the summation over the frequencies and the discretization (7). After the discretization, each of the \(C_{j}^{(q)}\) coefficients in the expansions (24) of the components of the EM field has the form [8]: \[C_{j}^{(q)}=C_{j}^{(q)}(\rho,z)=\sum_{n=0}^{N}R^{\prime}\,_{n}^{(q)}\,J_{\alpha}\left(\rho\frac{\omega_{j}}{\omega_{0}}R_{n}\right)G_{nj}(z)\qquad(\alpha=0\mbox{ or }\alpha=1), \tag{47}\] where the \(R^{\prime}\,_{n}^{(q)}>0\) (except for \(R^{\prime}\,_{0}^{(q)}\) and \(R^{\prime}\,_{N}^{(q)}\), both of which turn out to be zero) are constant numbers, and where \(G_{nj}(z)=\exp\left({\rm i}\omega_{j}\,t\right)F_{nj}(t,z)\) is just the function \(F_{nj}\) in Eq.
(17) hereabove, deprived of its periodic time dependence (and is thus a periodic function of \(z\)). Together with (44), Eq. (47) shows that, at a given value of \(z\), we have \(C_{j}^{(q)}=O(1/\sqrt{\rho})\) as \(\rho\to\infty\). The SED for an EM field having a finite set of frequencies is given by Eq. (23). For any given frequency \((j)\), \(u_{j}\) is a quadratic form of the \(C_{j}^{(q)}\) coefficients, hence \[u_{j}(\rho,z)=O\left(\frac{1}{\rho}\right)\quad(\rho\to\infty). \tag{48}\] This is compatible with the curves shown in Fig. 14. Passing to the behaviour at large \(z\): in Eq. (47), the dependence on \(z\) is entirely contained in the functions \(G_{nj}(z)\) which, we noted, are periodic. Hence the coefficients \(C_{j}^{(q)}(\rho,z)\), each of which involves a linear combination of these functions (with coefficients that depend on \(\rho\)), are _almost-periodic functions of \(z\)_ [22], and the same is true for the components (24) of the EM field. Moreover, for any given value of \(\rho\), each \(u_{j}\) in Eq. (23) is hence the sum of the square moduli of periodic complex functions of \(z\). Therefore [22], _the SED is an almost-periodic function of \(z\), too._ This result allows us to understand the lack of a decrease with \(z\) observed in the vertical profiles of Fig. 15, which involve an extrapolation to a larger range of \(z\) as compared with the domain used for the fitting: an almost-periodic function \(f\) does not tend towards zero at infinity, unless \(f\equiv 0\). 5 As to Figs. 12 and 13, they involve no extrapolation, thus the relevant coefficients result from the fitting done on the very domain to which the curves belong. Hence the asymptotic behaviour of \(u_{j}\) (whether at large \(z\) or at large \(\rho\)) is not relevant to them.

Footnote 5: This results from the most common definition of an almost-periodic function \(f\) [22]: the existence, for any \(\epsilon>0\), of a relatively dense set of \(\epsilon\) almost-periods. I.e., for any \(\epsilon>0\), there exists a length \(l_{\epsilon}\) such that, for any \(x\in\mathbb{R}\), there is at least one number \(T\in[x,x+l_{\epsilon}[\) such that for any \(t\in\mathbb{R}\), \(\ |f(t+T)-f(t)|\leq\epsilon\). If \(f\) is not identically zero, let \(a\in\mathbb{R}\) be such that \(f(a)=\alpha\neq 0\). Taking \(\epsilon=\frac{\alpha}{2}\) in the definition above, we thus have \(|f(a+T)-f(a)|\leq\frac{\alpha}{2}\), hence \(|f(a+T)|\geq\frac{\alpha}{2}\). Since \(x\) can be taken arbitrarily large and since \(T\geq x\), this proves that \(f\) does not tend towards zero at infinity.

## 7 Discussion and conclusion

In this paper, we developed an improved numerical scheme to adjust the Maxwell model of the ISRF in a galaxy, which was proposed in a foregoing work [6]. Namely, at the stage of fitting the radiations emitted by the many different point-like "stars" which make up the model galaxy, we now consider each time-harmonic component separately, which is more precise. As a bonus, this allows us to eliminate the time variable at this stage, Eq. (21) -- thus reducing the computing time. We used that "separate fitting" procedure, first, to check whether the extremely high values of the spectral energy density (SED), which were predicted by this model on the axis of our Galaxy with the former "grouped fitting" [8], are a physical prediction or a numerical artefact. A rather detailed investigation led us to conclude that these extremely high values are indeed what the model predicts -- see Sect. 4.1.
However, we also find that the SED decreases very rapidly when one departs from the galaxy's axis, see Fig. 7. Moreover, the average energy density of the EM field in, for example, a disk of radius \(1\,{\rm kpc}\) and thickness \(2\,{\rm kpc}\), is not very high, Eq. (39). The extremely high values of the SED on the axis of our Galaxy (and likely also in many other galaxies) are a new and surprising prediction for the ISRF. Recall that our model is adjusted so that the SED predicted for our local position in the Galaxy coincides with the SED determined from space missions, and thus is fully compatible with what we see of the ISRF from where we are. The prediction of the present model may be interpreted as a kind of self-focusing of the EM field in an axisymmetric galaxy. On the other hand, as we mentioned in the Introduction, the existing (radiation-transfer) models for the ISRF do not consider the EM _field_ with its six components coupled through the Maxwell equations. These models consider paths of photons or rays and do not take into account the nature of the EM radiation as a field over space and time, subject to specific PDEs. So a self-focusing cannot be seen with those models. It is difficult to assess the degree to which this prediction depends on the specific assumptions of the model, in particular the axial symmetry. If this prediction is at least partly confirmed, it will have important implications for the study of galaxies. Second, we studied the spatial variation of the SED predicted by our model with the new procedure, and compared it with the predictions of a recent radiation transfer model [5]. The difference between the results of the two models is much smaller now than it was [8] with the older procedure. However, the SED predicted by our model still oscillates as a function of the wavelength (or the frequency), also with the new, "separate fitting" procedure, although the different frequencies are then fully uncoupled. We also plotted the radial and vertical profiles of the radiation fields at three wavelengths. We confirm the slower decrease at increasing altitude \(z\), as compared with the radiation transfer model of Ref. [5], indicated by the previous work [8]. Actually, when the vertical profiles of the radiation fields are calculated and plotted in a domain that involves an extrapolation to a (three times) larger domain of \(z\), a slightly oscillating behaviour without a decrease at large \(z\) is observed. This is explained by our study of the asymptotic behaviour of the analytical expressions of the EM field and the corresponding SED: we show that the SED calculated by the implemented model, which involves a discretization of the wave number, is an almost-periodic function of \(z\) -- although the exact SED obtained from the integral expressions (41)-(43) is of order \(1/z^{2}\) at large \(z\). Thus, extrapolation in the domain of \(z\) should be used sparingly with the current numerical implementation based on a discretization of the wave number.

## Appendix A: Discrete vs.
continuous descriptions of the spectral energy density

The SED, \(\,u\,\) or rather \(\,u_{\bf x}\,\) (it depends on the spatial position \({\bf x}\)), is normally a continuous density with respect to the wavelength or the frequency: the time-averaged volume energy density of the EM field at some point \({\bf x}\) and in some wavelength band [\(\lambda^{(1)}\), \(\lambda^{(2)}\)] is given by \[\overline{U_{1\,2}}({\bf x}):=\frac{\overline{\delta W_{1\,2}}}{\delta V}({\bf x})=\int_{\lambda^{(1)}}^{\lambda^{(2)}}u_{\bf x}(\lambda)\,{\rm d}\lambda. \tag{49}\] However, in many instances, including the present work, one is led to consider a discrete spectrum, thus a finite set of frequencies, \((\omega_{j})\) (\(j=1,...,N_{\omega}\)), hence a finite set of wavelengths. This also leads to a discrete energy density, Eq. (23). This raises the question of how to relate these discrete and continuous descriptions of the SED to each other. To answer this question, we note first that the \(u_{j}\)'s in Eq. (23) are indeed volume energy densities, whereas \(u_{\bf x}\) in Eq. (49) has physical dimension \([U]/[L]\); i.e., it is \[f_{\bf x}(\lambda):=\lambda u_{\bf x}(\lambda) \tag{50}\] which is a volume energy density. And it is indeed \(f_{\bf x}\) that is considered by Popescu _et al._ [5], when plotting the SEDs at different places in the Galaxy, or when plotting the radial or axial profiles of the radiation field at some selected wavelengths. As is more apparent with the "separate fitting" used now (see Sect. 3), the discrete set of frequencies \(\omega_{j}\), considered in the Maxwell model of the ISRF, represents just a finite sampling of the actual continuous distribution. The link between the two descriptions is hence given simply by the following relation: \[u_{j}({\bf x})=f_{\bf x}(\lambda_{j})=\lambda_{j}u_{\bf x}(\lambda_{j}). \tag{51}\] Consider a bounded spatial domain \(D\). The total EM energy contained in the domain \(D\) and in the wavelength band \([\lambda^{(1)},\,\lambda^{(2)}]\) is given, according to Eq. (49), by \[W_{1-2,\,D}:=\int_{D}\overline{\frac{\delta W_{1\,2}}{\delta V}}\,{\rm d}^{3}{\bf x}=\int_{D}\left(\int_{\lambda^{(1)}}^{\lambda^{(2)}}u_{\bf x}(\lambda)\,{\rm d}\lambda\right)\,{\rm d}^{3}{\bf x}, \tag{52}\] i.e., using (50): \[W_{1-2,\,D}=\int_{D}\left(\int_{\lambda^{(1)}}^{\lambda^{(2)}}f_{\bf x}(\lambda)\,{\rm d}\left({\rm Log}\lambda\right)\right)\,{\rm d}^{3}{\bf x}. \tag{53}\] If we are using a model considering a fine-enough finite set of wavelengths \((\lambda_{j}),\ j=1,...,N_{\omega}\), with \(\lambda_{1}=\lambda^{(1)}\) and \(\lambda_{N_{\omega}}=\lambda^{(2)}\), we may use (51) to estimate the integral over \(\lambda\) in Eq. (53) as a finite sum, e.g. a Riemann sum: \[\int_{\lambda^{(1)}}^{\lambda^{(2)}}f_{\bf x}(\lambda)\,{\rm d}\left({\rm Log}\lambda\right)\simeq\sum_{j=1}^{N_{\omega}-1}f_{\bf x}(\lambda_{j})({\rm Log}\lambda_{j+1}-{\rm Log}\lambda_{j})\simeq\sum_{j=1}^{N_{\omega}-1}u_{j}({\bf x})({\rm Log}\lambda_{j+1}-{\rm Log}\lambda_{j}) \tag{54}\] or a better approximation (trapezoidal, Simpson, ...).
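A minimal sketch of this estimate, Eq. (54), follows (assuming, consistently with the numerical checks of Sect. 4, that Log denotes the natural logarithm):

```python
import numpy as np

def band_energy_density(lams, u_vals):
    """Riemann-sum estimate, Eq. (54), of the integral of f_x(lambda)
    d(Log lambda) from a discrete SED sample u_j = f_x(lambda_j)."""
    lams, u_vals = np.asarray(lams), np.asarray(u_vals)
    return np.sum(u_vals[:-1] * np.diff(np.log(lams)))

# toy usage: a flat SED of 1 eV/cm^3 over [0.1, 830] microns should give
# approximately Log(830/0.1) = ln(8300) ~ 9.02.
lams = np.logspace(np.log10(0.1), np.log10(830.0), 77)
print(band_energy_density(lams, np.ones_like(lams)))
```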